fix autoreparam because dims are no longer static #363
Conversation
Force-pushed from 7bbb7d8 to 28a670b (Compare)
except NotConstantValueError:
    raise ValueError("Size should be static for autoreparametrization.")
_, size, *_ = node.inputs
eval_size = size.eval()
How is this better? Can you do constant_fold([rv_shape], raise_if_not_constant=False)?
I remember hitting missing-input errors, but I was doing a plain eval.
Hmm, the problem is you do need constant values with the shared variable.
OK, I reverted this due to this uncertainty. I only need this shape once, so its computation can be addressed better later. I saw that in one place you did a manual function compile with linker="py" and no optimizations; that would be suitable here as well, but I do not want to duplicate code.
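For context, a minimal sketch of that kind of bare-bones compile in plain PyTensor (the Mode settings are standard PyTensor options; the rv here is just a stand-in, not this repo's helper):

```python
import pytensor
from pytensor.compile.mode import Mode
from pytensor.tensor.random import normal

rv = normal(0.0, 1.0, size=(3,))  # stand-in for the model's random variable

# Python linker, no graph rewrites: slow to execute but cheap to compile,
# which is fine when the shape only needs to be evaluated once.
shape_fn = pytensor.function([], rv.shape, mode=Mode(linker="py", optimizer=None))
print(shape_fn())  # -> array([3])
```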
You can do eval(mode="FAST_COMPILE") if you want it to be fast. It still does some rewrites, but not many, and it avoids C compilation.
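Roughly this, in plain PyTensor, just to illustrate the kwarg:

```python
import pytensor.tensor as pt

size = pt.stack([pt.as_tensor_variable(3), pt.as_tensor_variable(2)])

# eval() forwards extra kwargs to pytensor.function, so this compiles with
# FAST_COMPILE: minimal rewrites, Python linker, no C compilation.
print(size.eval(mode="FAST_COMPILE"))  # -> [3 2]
```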
[rv_shape] = constant_fold([rv.shape])
except NotConstantValueError:
    raise ValueError("Size should be static for autoreparametrization.")
[rv_shape] = constant_fold([rv.shape], raise_if_not_constant=False)
Ignore my idea, this won't work for you. When it can't constant-fold, it returns the best graph it can, but you still need constants for np.zeros.
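That is, roughly (a sketch, not the PR code):

```python
import numpy as np
import pytensor.tensor as pt

print(np.zeros((3,)))        # fine: constant folding succeeded, these are real ints

n = pt.iscalar("n")          # what you are left with when folding fails
# np.zeros((n,))             # fails: numpy cannot build an array from a symbolic dim
```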
raise ValueError("Size should be static for autoreparametrization.")
_, size, *_ = node.inputs
eval_size = size.eval(mode="FAST_COMPILE")
if eval_size is not None:
eval_size will never be None
Wait, why are you evaling size instead of rv.shape? rv.shape.eval() is the one you should use.
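A rough illustration of the difference in plain PyTensor (outside the model machinery):

```python
import pytensor.tensor as pt
from pytensor.tensor.random import normal

rv = normal(pt.zeros((4,)), 1.0)   # no explicit size given
_, size, *_ = rv.owner.inputs      # RandomVariable inputs are [rng, size, ...]

print(size.eval(mode="FAST_COMPILE"))      # empty / unset: size was never given
print(rv.shape.eval(mode="FAST_COMPILE"))  # [4]: the shape implied by the parameters
```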
I was afraid I'd get a dependence on the original RV; ah, I do not seem to.
If you use eval you can only get a numerical output
Do you need the shape of the transformed or the original variable? If it is the transformed one, you could request model.initial_point before starting the dispatch process.
Alternatively, we should be able to use shape inference, but that may need infer_shape implemented for the ModelVariable. Then the helper infer_static_shape, possibly followed by an eval, should do the job and not complain about value variables, because those shouldn't be in the graph after infer_shape.
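A rough sketch of what the infer_static_shape route looks like in plain PyTensor (assuming the helper lives in pytensor.tensor.basic and is fed a sequence of dim expressions; whether it works for ModelVariable graphs is exactly the open question above):

```python
import pytensor.tensor as pt
from pytensor.tensor.basic import infer_static_shape

n = pt.as_tensor_variable(2) * 3   # a symbolic dim that constant-folds

# Returns the (possibly simplified) shape graph plus whatever is known statically.
shape_graph, static_shape = infer_static_shape((n, 4))
print(static_shape)  # both dims fold to constants here, roughly (6, 4)
```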
Can you point me to which test failed when you tried rv.shape.eval()?
Do you need shape of transformed or original? If it is the transformed, you could request model.initial_point before starting the dispatch process?
This looks like an interesting solution
This looks like an interesting solution

model.initial_point() seems to give a point in the sampling space; I need the untransformed one.
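For example (a hedged sketch of why that is):

```python
import pymc as pm

with pm.Model() as m:
    pm.HalfNormal("sigma")

# initial_point() is keyed by the *value* variables, i.e. the transformed
# (sampling-space) parametrization, not the original RVs.
print(list(m.initial_point()))  # -> ['sigma_log__']
```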
Looks good to me, just a useless check.
This was a tricky one!
Oh, still failing? :(
Yeah, I still get this input error
@ricardoV94 should I revert back to the initial solution? Without the fix, autoreparam can't be used on any model with dims.
Can you dprint the graph to see why the value variables are there?
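(dprint being PyTensor's graph printer, e.g.:)

```python
import pytensor
import pytensor.tensor as pt

x = pt.vector("x")
y = (x + 1).sum()

# Prints the computational graph, which shows every input the expression
# depends on (and would expose any stray value variables).
pytensor.dprint(y)
```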
This is how I do the replacement:
This reverts commit a9f582c.
I reverted to the solution which works but is not ideal. I'll open an issue to address the future-proof refactoring after we resolve this bug.
Can you add an xfail test with the case we know to be broken now?
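Something along these lines, presumably (the import path and the exact reason string are guesses, not the test that was actually added):

```python
import pymc as pm
import pytest

from pymc_experimental.model.transforms.autoreparam import vip_reparametrize


@pytest.mark.xfail(reason="autoreparam fails when the model uses (mutable) dims")
def test_vip_reparam_with_dims():
    with pm.Model(coords={"obs": range(3)}) as model:
        mu = pm.Normal("mu")
        pm.Normal("x", mu, 1, dims="obs")

    # Currently fails because the RV shape graph is not static / still
    # references value variables; remove the xfail once the refactor lands.
    vip_reparametrize(model, ["x"])
```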
@@ -174,15 +176,18 @@ def vip_reparam_node(
) -> Tuple[ModelDeterministic, ModelNamed]:
    if not isinstance(node.op, RandomVariable | SymbolicRandomVariable):
        raise TypeError("Op should be RandomVariable type")
Do you want to raise NotImplementedError for op.ndim_supp > 0? Not sure whether for those you would want one lambda per shape or per size; mentioning it because shape and size will always differ for multivariate RVs.
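(i.e., roughly this kind of guard, purely hypothetical:)

```python
from pytensor.graph.basic import Apply


def _raise_if_multivariate(node: Apply) -> None:
    # Hypothetical guard for vip_reparam_node, not the PR's actual code.
    if node.op.ndim_supp > 0:
        raise NotImplementedError(
            "Auto-reparametrization is only implemented for univariate RVs (ndim_supp == 0)."
        )
```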
It will not pass the dispatch anyway, so it is not needed.
ndim_supp > 0 will never be supported, I think, but I'm not sure; maybe it will, e.g. Dirichlet. There are some reparameterizations via Gamma: https://stats.stackexchange.com/questions/548620/reparameterization-trick-for-the-dirichlet-distribution
one lambda per shape or size

Per size for sure, 100%. I believe that if I work with a multivariate RV, the only way will be to have one lambda per size (one per independent draw).
Co-authored-by: Ricardo Vieira <[email protected]>
Can you please elaborate on this? If size is None, the lambda should be scalar, no? PS: I wish GitHub had threads.