setting uncertainties #7
So I don't think it's overfitting so much as just the fact that you are averaging over the noise in the individual realizations. Finally, in many cases it's not the populations themselves that are of interest, but some observable that is a weighted average of populations. In that case, the most relevant thing to do is to compare that observable, computed for each MCMC sample, against the experimental values.
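For concreteness, here is a minimal sketch of that last point: computing a population-weighted observable for every MCMC sample and reporting its posterior mean and spread. The array names and the placeholder data are hypothetical (this is not the BELT API); it only assumes you have one population vector per MCMC sample and per-state values of the observable.

```python
import numpy as np

# Hypothetical inputs (placeholder data, not taken from the BELT API):
#   population_samples : (n_mcmc_samples, n_states) population vectors,
#                        one row per MCMC sample
#   state_predictions  : (n_states, n_observables) values of each observable
#                        computed within each conformational state
rng = np.random.default_rng(0)
population_samples = rng.dirichlet(np.ones(5), size=1000)
state_predictions = rng.normal(size=(5, 3))

# Each MCMC sample gives one prediction of each observable:
# a population-weighted average over states.
observable_samples = population_samples @ state_predictions  # (n_samples, n_observables)

# Posterior mean and spread of each predicted observable,
# to be compared against the experimental values.
mean_prediction = observable_samples.mean(axis=0)
posterior_std = observable_samples.std(axis=0)
print("predicted observables:", mean_prediction)
print("posterior std:        ", posterior_std)
```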
Thanks for your answer. In an attempt to shorten my question, I oversimplified. Indeed, the attached figure shows exactly what you suggest: those values are the observables for the different MCMC samples, together with the experimental values.
I have some confusion with how `uncertainties` work. I would expect that the values obtained from

```python
p = belt_model.accumulate_populations()
```

have uncertainties close to the ones that I have set up. However, the RMS difference with the `measurements` is much smaller.

I guess the reason is that the error of each individual realization is close to the `uncertainties`. If I instead look at each MCMC sample individually, then each of the `p_vals` has RMS differences close to `uncertainties`. But then, as `p` is an average of these values, the standard error of this mean is much smaller.

Although I understand this behaviour, I still think that the uncertainties should affect `p`, as we are otherwise overfitting. In other words, gentler reweightings could result in `p`'s that are still within the experimental error. With that in mind, I tried playing with `regularization_strength`, without success.

To clarify this, here is an example. In blue you see some realizations of `p_vals`, in red the `measurements`, and in green the average of `p_vals`.
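The statistics described above can be reproduced with a toy example. The sketch below uses purely synthetic data (not the BELT API): if each realization scatters around the measurements with the stated uncertainty sigma, the per-sample RMS deviation stays near sigma, while the RMS deviation of the average shrinks roughly like sigma / sqrt(N).

```python
import numpy as np

# Toy illustration with synthetic data: per-sample scatter ~sigma,
# but the average over samples tracks the measurements much more closely.
rng = np.random.default_rng(1)
n_observables, n_samples, sigma = 50, 2000, 0.2

measurements = rng.normal(size=n_observables)
# Analogue of p_vals: one prediction per MCMC realization, scattered around
# the measurements with standard deviation sigma.
p_vals = measurements + sigma * rng.normal(size=(n_samples, n_observables))
# Analogue of the averaged p compared against the measurements.
p = p_vals.mean(axis=0)

rms_per_sample = np.sqrt(((p_vals - measurements) ** 2).mean())
rms_of_mean = np.sqrt(((p - measurements) ** 2).mean())
print(f"per-sample RMS deviation: {rms_per_sample:.3f}  (~sigma = {sigma})")
print(f"RMS deviation of mean:    {rms_of_mean:.4f}  (~sigma/sqrt(N) = {sigma/np.sqrt(n_samples):.4f})")
```

This matches the figure: the blue realizations deviate from the red measurements by roughly the set uncertainty, while the green average lies much closer.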