
set up validation and benchmarking testbench #219

Open
lukasheinrich opened this issue Aug 28, 2018 · 2 comments
lukasheinrich commented Aug 28, 2018

Description

Is your feature request related to a problem? Please describe.

Currently, testing / comparing the outputs of pyhf and roohf (I'll coin that term) is hard and only allows manually (or even visually) comparing outputs.

Describe the solution you'd like

A testbench suite that, for a given input, can:

  1. generate pyhf JSON
  2. generate roohf XML+ROOT (via the XML-writing stub from #218) and then a workspace via hist2workspace
  3. run limit setting for both
  4. compare the results and print a report

something like this, which tests both formats (XML/JSON) and both implementations:

                  .___ JSON __ json2xml __ hist2workspace __ ROOT CLs
                  |___ JSON __ pyhf CLs
source of truth --|
                  |___ XML + ROOT __ hist2workspace __ ROOT CLs
                  |___ XML + ROOT __ xml2json __ pyhf CLs
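The four paths in the diagram could be encoded as data for a test driver to iterate over. A minimal sketch; the step names here are placeholders for illustration, and only hist2workspace, xml2json, and json2xml correspond to tools mentioned in this issue:

```python
# Sketch of the four validation paths as driver data. Step names
# ('root_cls', 'pyhf_cls', ...) are illustrative placeholders.
VALIDATION_PATHS = {
    'json->xml->root': ['json2xml', 'hist2workspace', 'root_cls'],
    'json->pyhf': ['pyhf_cls'],
    'xml->root': ['hist2workspace', 'root_cls'],
    'xml->json->pyhf': ['xml2json', 'pyhf_cls'],
}

def final_backend(path):
    """Return which implementation produces the CLs value for a path."""
    return 'pyhf' if VALIDATION_PATHS[path][-1] == 'pyhf_cls' else 'ROOT'
```

Each pair of paths ending in the same backend should then agree exactly, while pyhf-vs-ROOT pairs are compared within a tolerance.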

Relevant Issues and Pull Requests

@matthewfeickert matthewfeickert added the feat/enhancement New feature or request label Aug 28, 2018
@lukasheinrich commented
As part of #231 I wrote this bit that might be useful:

def makespec(nchans, nsamps, nbins, nsysts):
    """Generate a synthetic pyhf spec with nchans channels, each holding
    one signal sample plus nsamps background samples of nbins bins, and
    nsysts systematics (one modifier of each type) per background."""
    channels = []
    for cc in range(nchans):
        backgrounds = []
        for ss in range(nsamps):
            mods = []
            for nn in range(nsysts):
                # one modifier of each supported type per systematic
                mods.append(
                    {'name': 'syst_{}_{}_{}'.format(cc, ss, nn), 'type': 'shapesys', 'data': [7.] * nbins}
                )
                mods.append(
                    {'name': 'h_syst_{}_{}_{}'.format(cc, ss, nn), 'type': 'histosys', 'data': {'hi_data': [51.] * nbins, 'lo_data': [49.] * nbins}}
                )
                mods.append(
                    {'name': 'n_syst_{}_{}_{}'.format(cc, ss, nn), 'type': 'normsys', 'data': {'hi': 0.95, 'lo': 1.05}}
                )
                mods.append(
                    {'name': 'sf_{}_{}_{}'.format(cc, ss, nn), 'type': 'shapefactor', 'data': None}
                )
            backgrounds.append(
                {'name': 'background_{}_{}'.format(cc, ss), 'data': [50.0] * nbins, 'modifiers': mods}
            )
        c = {
            'name': 'channel_{}'.format(cc),
            'samples': [
                {'name': 'signal', 'data': [5.0] * nbins, 'modifiers': [{'name': 'mu', 'type': 'normfactor', 'data': None}]},
            ] + backgrounds
        }
        channels.append(c)
    spec = {'channels': channels}
    return spec
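For the "compare the results" step, a cheap structural check on a generated spec can catch wiring mistakes before any fitting. A sketch; the helper `count_modifiers` is mine, not part of pyhf, and the inline spec is a hand-written miniature in the same shape makespec produces:

```python
from collections import Counter

def count_modifiers(spec):
    """Tally modifier types across all samples of a pyhf-style spec."""
    counts = Counter()
    for channel in spec['channels']:
        for sample in channel['samples']:
            for mod in sample['modifiers']:
                counts[mod['type']] += 1
    return dict(counts)

# Minimal hand-written spec: one channel, a signal sample with a
# normfactor, and one background sample with a shapesys.
spec = {
    'channels': [
        {
            'name': 'channel_0',
            'samples': [
                {'name': 'signal', 'data': [5.0, 5.0],
                 'modifiers': [{'name': 'mu', 'type': 'normfactor', 'data': None}]},
                {'name': 'background_0_0', 'data': [50.0, 50.0],
                 'modifiers': [{'name': 'syst_0_0_0', 'type': 'shapesys', 'data': [7.0, 7.0]}]},
            ],
        }
    ]
}
```

For a spec built by makespec, the expected tally per type is nchans * nsamps * nsysts, plus nchans normfactors for the signal.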

lukasheinrich commented Sep 8, 2018

The first step for this is to have a script:

pyhf-validate(?) run-roohf /path/to/xml --resultfile results_roohf.json
pyhf-validate(?) run-pyhf /path/to/xml --tensorlib numpy --resultfile results_pyhf.json

the former should do the equivalent of

hist2workspace /path/to/xml
python run_cls_singlepoint.py /path/to/model.root

while the latter should implement

pyhf xml2json /path/to/xml | pyhf cls (?) 

such that we can then do

pyhf-validate diff --precision 0.01 results_roohf.json results_pyhf.json

which should return appropriate exit codes?
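A minimal sketch of what that `diff` step could look like, assuming the result files hold flat JSON objects mapping quantity names to floats (the file layout is my assumption, not something defined in this issue):

```python
import math

def diff_results(a, b, precision=0.01):
    """Compare two {name: value} result dicts within a relative
    tolerance; return the list of (name, value_a, value_b) mismatches.

    The flat {name: float} result layout is an assumption for
    illustration.
    """
    mismatches = []
    for key in sorted(set(a) | set(b)):
        if key not in a or key not in b:
            # quantity present in only one result file
            mismatches.append((key, a.get(key), b.get(key)))
        elif not math.isclose(a[key], b[key], rel_tol=precision):
            mismatches.append((key, a[key], b[key]))
    return mismatches

def exit_code(mismatches):
    """Nonzero exit status when any quantity disagrees."""
    return 1 if mismatches else 0
```

`exit_code` then gives the "appropriate exit codes", so the diff can gate a CI job directly.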

@lukasheinrich lukasheinrich changed the title set up validation testbench set up validation and benchmarking testbench Sep 9, 2018