Bugs in destriper framework #310
An update on point 3: indeed, I would rather modify the function computing the errors to return a single array that is then used for the preconditioner, which lets us save some memory. This means that I would also change the way this quantity is saved in `save_destriper_results`.
Now, the errors on baselines are not looped over detectors when saved, since they do not depend on them (they are summed over detectors). The same is done when loading a `DestriperResult` from file. See also #310 for more details.
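As an illustration of that layout, here is a minimal, self-contained sketch (the shapes and the `ERRORS` column name are assumptions made for this example, not necessarily what litebird_sim uses):

```python
import numpy as np
from astropy.io import fits

# Assumed shapes: the baselines keep a per-detector axis, while the
# errors, being summed over detectors, reduce to a single 1-D array.
cur_baseline = np.random.randn(2, 5).astype(np.float32)  # (n_detectors, n_baselines)
cur_error = np.random.rand(5).astype(np.float32)         # (n_baselines,)

baseline_hdu = fits.BinTableHDU.from_columns(
    # One column per detector for the baselines...
    [
        fits.Column(
            name=f"BSL{det_idx:05d}",
            array=cur_baseline[det_idx, :],
            format="1E",
            unit="K",
        )
        for det_idx in range(cur_baseline.shape[0])
    ]
    # ...but a single, detector-independent column for the errors.
    + [fits.Column(name="ERRORS", array=cur_error, format="1E", unit="K")]
)
```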
Fix bug happening when no destriping is performed, but report is requested
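For reference, a minimal sketch of what such a guard might look like (an assumption on my part, not the actual diff in #309):

```python
# Sketch: only build the convergence plot when the destriper actually ran.
samples_per_baseline = None  # the configuration that used to crash

if samples_per_baseline is not None:
    pass  # build the convergence plot and append it to the report
else:
    pass  # skip the plot; the rest of the report is still generated
```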
Hi @ggalloni, thanks a lot for the tests and the fixes!
Is the error happening in this part of the code?

```python
for cur_baseline, cur_error, cur_lengths in zip(
    results.baselines, results.baseline_errors, results.baseline_lengths
):
    baseline_hdu = fits.BinTableHDU.from_columns(
        [
            fits.Column(
                name=f"BSL{det_idx:05d}",
                array=cur_baseline[det_idx, :],
                format="1E",
                unit="K",
            )
            for det_idx in range(cur_baseline.shape[0])
        ]
    )
    # Etc.
```
Apparently, yes. Looking at the code producing the errors, this actually makes sense, given that the noise is summed over detectors.
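To make the failure mode concrete, here is a standalone sketch with made-up shapes (not the actual litebird_sim arrays) showing why slicing the errors by detector raises an `IndexError` once they are summed over detectors:

```python
import numpy as np

cur_baseline = np.zeros((2, 5))  # (n_detectors, n_baselines): 2-D
cur_error = np.zeros(5)          # summed over detectors: 1-D, (n_baselines,)

for det_idx in range(cur_baseline.shape[0]):
    _ = cur_baseline[det_idx, :]  # fine: this array has a detector axis
    # _ = cur_error[det_idx, :]   # IndexError: too many indices for a 1-D array
```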
Mmm, I would like to see your code. I ran the test, which uses these expected values:

```python
desired_results.baselines = [
    np.array([[-0.06836026, -0.23471543, -0.29916509, 0.3306597, 0.27158108]]),
]
desired_results.baseline_errors = [
    np.array([[2.15951183, 1.82564974, 1.79309935, 2.24839557, 2.21996142]]),
]
```

So it seems that sometimes the errors follow one memory layout (as in the test above) and sometimes another (as in your code). Could you please share the code where you see this behavior? It might be that some part of the destriper messes up the layout of the errors without this being caught by the test.
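If the two layouts really do coexist, one defensive option (a sketch only, using a hypothetical helper rather than anything in litebird_sim) would be to normalize the shape before slicing:

```python
import numpy as np

def as_2d_errors(cur_error):
    # np.atleast_2d turns a 1-D (n_baselines,) array into (1, n_baselines),
    # so cur_error[det_idx, :] is valid in both layouts.
    return np.atleast_2d(cur_error)

print(as_2d_errors(np.zeros(5)).shape)       # (1, 5)
print(as_2d_errors(np.zeros((2, 5))).shape)  # (2, 5), left unchanged
```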
How many detectors are there in that test? I guess just one? If so, the code works as-is. This came up just because I am using a setup very similar to that test. Still, I will build a minimal code example to reproduce this.
Yes, indeed: using that test as a template and saving the results at the end breaks. Here is a script to reproduce it: on master it breaks, while on the branch of #309 it works.
That's terrific, thank you very much! I have added a new test using your script in #309, so we'll never miss this issue again.
I am working on the data splits for the destriper (#309) and I encountered a few bugs:

1. The code breaks when `samples_per_baseline=None` (so the destriping procedure is not performed) while asking `append_to_report=True`, since it tries to build the convergence plot.
2. `save_destriper_results` fails: within `_save_baselines`, when it tries to save the errors, `cur_error[idx_det, :]` generates an index error.
3. I am not familiar with the algebra here, but from the code I think that the first dimension of `cur_error` should be the number of observations and not the number of detectors. Is that so? If so, substituting the index with `idx` (which runs over observations) is probably sufficient (see the sketch below).

I already addressed point 1 in #309, and probably I will do the same for 2, but the third probably needs some deeper thinking. What do you think, @ziotom78 @paganol?
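To make the tentative proposal in point 3 concrete, here is a hedged, self-contained sketch (the shapes, the `Results` stand-in, and the `ERR` column name are illustrative; whether this indexing is actually right is exactly the open question):

```python
import numpy as np
from astropy.io import fits

# Stand-in for DestriperResult, with assumed shapes:
class Results:
    baseline_errors = [np.random.rand(3, 5).astype(np.float32)]  # first axis: observations?
    baseline_lengths = [np.full(5, 100)]

results = Results()

# The proposed substitution: slice the errors with the observation
# index `idx` instead of the detector index `idx_det`.
for idx, (cur_error, cur_lengths) in enumerate(
    zip(results.baseline_errors, results.baseline_lengths)
):
    error_hdu = fits.BinTableHDU.from_columns(
        [
            fits.Column(
                name=f"ERR{idx:05d}",
                array=cur_error[idx, :],  # was cur_error[idx_det, :]
                format="1E",
                unit="K",
            )
        ]
    )
```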