feat: Add backend option to pyhf CLI #534
Conversation
This still has some work to do on it, as TensorFlow is now complaining that

```
File "/home/mcf/Code/GitHub/pyhf/src/pyhf/utils.py", line 162, in pvals_from_teststat
    if sqrtqmu_v < sqrtqmuA_v:
File "/home/mcf/venvs/pyhf-dev/lib/python3.6/site-packages/tensorflow/python/framework/ops.py", line 690, in __bool__
    raise TypeError("Using a `tf.Tensor` as a Python `bool` is not allowed. "
TypeError: Using a `tf.Tensor` as a Python `bool` is not allowed. Use `if t is not None:` instead of `if t:` to test if a tensor is defined, and use TensorFlow ops such as tf.cond to execute subgraphs conditioned on the value of a tensor.
```

though I'm not sure why it hasn't picked this up before. To fix that requires replacing it with something along the lines of

```python
if not qtilde:  # qmu
    nullval = sqrtqmu_v
    altval = -(sqrtqmuA_v - sqrtqmu_v)
else:  # qtilde

    def _sqrtqmu_v_case():
        nullval = sqrtqmu_v
        altval = -(sqrtqmuA_v - sqrtqmu_v)
        return nullval, altval

    def _sqrtqmuA_v_case():
        qmu = tensorlib.power(sqrtqmu_v, 2)
        qmu_A = tensorlib.power(sqrtqmuA_v, 2)
        nullval = (qmu + qmu_A) / (2 * sqrtqmuA_v)
        altval = (qmu - qmu_A) / (2 * sqrtqmuA_v)
        return nullval, altval

    import tensorflow as tf

    nullval, altval = tf.cond(
        tf.math.less(sqrtqmu_v[0], sqrtqmuA_v[0]), _sqrtqmu_v_case, _sqrtqmuA_v_case
    )
```

but to make that backend agnostic is going to take some time (and I need some sleep). TODO:
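One way to make that case split backend agnostic is to keep the control flow in Python and operate on plain scalars, as the review discussion below suggests doing in NumPy. A minimal sketch, with a hypothetical helper name and not the pyhf implementation:

```python
import numpy as np

def pvalue_args(sqrtqmu_v, sqrtqmuA_v, qtilde=False):
    """Hypothetical helper: pick the null/alt test-statistic values.

    Mirrors the case split sketched above, but on plain NumPy scalars,
    so no graph-mode conditionals (tf.cond) are needed.
    """
    if not qtilde or sqrtqmu_v < sqrtqmuA_v:  # qmu, or qtilde below the Asimov value
        nullval = sqrtqmu_v
        altval = -(sqrtqmuA_v - sqrtqmu_v)
    else:  # qtilde at or above the Asimov value
        qmu = np.power(sqrtqmu_v, 2)
        qmu_A = np.power(sqrtqmuA_v, 2)
        nullval = (qmu + qmu_A) / (2 * sqrtqmuA_v)
        altval = (qmu - qmu_A) / (2 * sqrtqmuA_v)
    return nullval, altval

print(pvalue_args(1.0, 2.0, qtilde=True))  # (1.0, -1.0)
print(pvalue_args(3.0, 2.0, qtilde=True))  # (3.25, 1.25)
```

The trade-off is that the test statistics must be materialized as concrete numbers before the branch, which is exactly the situation the reviewer describes below.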
As I've answered the only FAQ posted at the moment, I should also add another before this goes in.
(force-pushed from ab450ac to 81b79c8)
I'm generally ok with these changes, but I don't think we should be supporting conditionals -- especially as the only place this is being used is for single-valued numbers, which should be done in numpy only (and we're going to remove that code in the future).
Can you elaborate on why?
So you just want to have one line in … ?
Why in numpy? Shouldn't this be backend independent?
I've been defocused for the last few months. What is getting removed, and why?
Given @kratsg's suggestion, this PR is going to change to promote the backend option to be a …
Does this need to stay as a draft?
I think this would come after #583. Right now I don't feel too confident in the non-numpy backends to successfully do the fits, tbh.
Similar to #535: I would vote to reopen this, because we have a path to make the non-numpy backends fit robustly and reproduce the sbottom results.
(force-pushed from c206a7e to ead8688)
Codecov Report

```diff
@@            Coverage Diff             @@
##           master     #534      +/-   ##
==========================================
+ Coverage   94.99%   95.01%   +0.02%
==========================================
  Files          47       47
  Lines        2678     2689      +11
  Branches      369      370       +1
==========================================
+ Hits         2544     2555      +11
  Misses         88       88
  Partials      46       46
```

Continue to review full report at Codecov.
@matthewfeickert do you still want to add this?
Sorry I missed this. Yes, I think this should still go in and not get closed, though we need to decide whether to accept it as is (with the modifications necessary for merge) or do PR #541 first instead.
I would do this now and not wait for the TF2 migration.
You and I have the same thought :)
(force-pushed from 76f6253 to 0b09b50)
tensorlib is not updating, so use get_backend so that the optimizer is set for the proper backend if a new optimizer is set
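The commit message above describes a common Python pitfall: binding a module-level name once at import time means later backend switches are not seen, whereas a getter always reads the current value. A minimal self-contained sketch with hypothetical names, not pyhf's code:

```python
import types

# Stand-in for a module holding the current backend as a mutable global.
state = types.SimpleNamespace(tensorlib="numpy")

def get_backend():
    # Always reads the *current* value of the global.
    return state.tensorlib

tensorlib = state.tensorlib     # bound once; will not see later updates
state.tensorlib = "tensorflow"  # backend switched after the binding

print(tensorlib)      # numpy  (stale snapshot)
print(get_backend())  # tensorflow (current)
```

This is why calling a getter at use time, rather than caching the backend object, keeps the optimizer paired with the backend that is actually active.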
(force-pushed from 0b09b50 to 5b733e7)
Description

Add ability to change the computational backend used by pyhf from the command line with a --backend flag.

Example:
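The example itself was lost in extraction. As an illustration only, a `--backend` flag like the one this PR adds can be sketched with stdlib argparse; pyhf's actual CLI is built differently, and the option name aside, all names and the backend list here are assumptions:

```python
import argparse

parser = argparse.ArgumentParser(prog="pyhf")
parser.add_argument(
    "--backend",
    choices=["numpy", "tensorflow", "pytorch", "mxnet"],  # assumed backend list
    default="numpy",
    help="The tensor backend used for the computation.",
)

# Simulate `pyhf --backend tensorflow` by passing argv explicitly.
args = parser.parse_args(["--backend", "tensorflow"])
print(args.backend)  # tensorflow
```

Restricting the flag to a fixed `choices` list gives an immediate usage error for an unsupported backend instead of a failure deep inside the calculation.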
Checklist Before Requesting Reviewer
Before Merging
For the PR Assignees: