Fix Issue 1348 (Negative Values in FOV plot when no comparison star is used; Negative pixel scale in FOV plot) #1349
Release EXOTIC 4.2.3 Hotfix: Fix TypeError for noisy or drifting background imaging
…EXOTIC does not use a comparison star.
That's a great find. What I'm also curious about is why the scaled pixel values are negative here too. They should all be positive.
@tamimfatahi you are right! I didn't notice that. I was so focused on the negative aperture that I didn't notice the pixel scales had negative values too. I will take a look at it later tonight.
@tamimfatahi about the negative pixel values in the FOV plots: it seems these values are present in the image data itself, not introduced by the visualization pipeline. The image statistics show predominantly negative values (mean value: -1797.61). I guess the negative values likely come from the data calibration process.
@ivenzor Thank you for the investigation. I'll accept this PR as this relates specifically to the aperture. However, if you can look into why that's occurring as well, it would be much appreciated.
@ivenzor @tamimfatahi Is there anything else to be done here before it's ready for merge? Specifically, I am asking about the comment on why negative coordinates show up in the FOV plots.
I think @ivenzor would need to look into why this is occurring. I can't really tell where in the process it's happening. @ivenzor You mentioned that the raw image is negative. Are you saying that pre-EXOTIC, the images are negative? Or EXOTIC is not calibrating the images properly?
@tamimfatahi @jpl-jengelke The input images before exotic have positive values as expected:
I think the negative values are introduced in the calibration steps. I need to do more testing and think this through more carefully, but if we're performing differential photometry between the target star and other stars, aren't we primarily interested in the relative flux differences rather than the actual values themselves? @rzellem We could merge the negative FOV fix, and I'll take a closer look at the negative pixel values and open a new issue for them.
@ivenzor - for the transit measurements, we typically only care about differential measurements, but for the absolute flux calibrations (for stellar variability monitoring), we will likely care about the actual values themselves (although technically these are also differential measurements).

Re: negative values from background subtraction - this physically shouldn't happen at all. Do you know in what cases this happens? In other words, are the background values negative because a previous calibration step turns them negative, or are we using a mean where we should be using a median?
Hi @rzellem, I took a look at the images in the data-upload-aavso channel:
To be honest, I hadn’t noticed either issue, even though they’ve appeared since mid-October. There may have been negative apertures and pixel scales before that date, but since they weren’t plotted, there’s no easy way to verify. Regarding the negative aperture, I became aware of it two days ago when Anthony pointed out a negative aperture value in a plot by Cledison. The fix in this PR was intended to address that case, which is straightforward and, as far as I know, only affects the plotted value. As for the negative pixel scale, I wasn’t aware of it until yesterday when Tamim pointed it out while reviewing the PR for the negative aperture issue. I haven’t had the chance to do much testing on this yet, but I noticed that the image passed to the plot function contains many negative values, resulting in the negative scale. I checked the original FITS.gz images, and they have normal positive values. My guess is that something is happening during the calibration or background subtraction process. I’ll do more testing later tonight when I’m home.
@tamimfatahi @rzellem I think I now know why we sometimes get negative values. It is indeed in the calibration section of the code, but it is not EXOTIC's fault, at least not entirely; it comes from the dark frame input data:

[Statistics were posted for dark frame 1, dark frame 2, the general (combined) dark, the raw first image, and the first image after calibration.]

The all-white, saturated dark frame is skewing the median of the general dark, so we are overcorrecting the science frames and ending up with negative values. This only happens if we get a dark file which is not actually a dark file, which is why this issue appears only occasionally. EXOTIC needs to validate that the dark files are indeed "dark". I will work on it and add it to this PR for your consideration.
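The overcorrection described above can be reproduced with a minimal NumPy sketch (this is illustrative only, not EXOTIC's actual calibration code; frame sizes and ADU levels are made up):

```python
import numpy as np

# One good dark frame and one fully saturated "white" frame (hypothetical values).
good_dark = np.full((4, 4), 100.0)       # typical dark level, ~100 ADU
bad_dark = np.full((4, 4), 65535.0)      # saturated frame that is not a real dark

# Median-combining the darks lets the saturated frame inflate the general dark.
general_dark = np.median([good_dark, bad_dark], axis=0)  # 32817.5 ADU everywhere

# Subtracting this inflated dark overcorrects the science frame into negatives.
science = np.full((4, 4), 5000.0)
calibrated = science - general_dark

print(calibrated.min())  # -27817.5
```

With only good darks in the stack, the general dark stays near 100 ADU and the calibrated frame remains positive.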
…dark frame calculation. This caused negative values in the images after being calibrated.
@tamimfatahi @rzellem The PR now includes fixes for the original negative aperture value in issue #1348, as well as a fix for the issue with negative values in the science frames caused by overcorrection from a faulty dark frame. EXOTIC now identifies and ignores potentially faulty dark frames using the following logic:
This needs @rzellem's input to okay this, but I do have an alternative method: what if you instead took the median of the image and checked whether any pixel is above 3 sigma? That could be an alternative way of handling it. The problem with this is that a single pixel can discard the dark frame due to something like a cosmic ray. I'll let Rob ponder either method (or a combination of both!).
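The 3-sigma idea above might be sketched roughly as follows (a hypothetical illustration, not EXOTIC code; the function name and thresholds are made up). Note that it catches isolated bright pixels but not a uniformly saturated frame, since such a frame has zero scatter:

```python
import numpy as np

def has_outlier_pixels(frame, n_sigma=3.0):
    """Flag a dark frame if any pixel sits more than n_sigma above the median."""
    median, sigma = np.median(frame), np.std(frame)
    return bool(np.any(frame > median + n_sigma * sigma))

cosmic_hit = np.full((8, 8), 100.0)
cosmic_hit[0, 0] = 10000.0               # single bright pixel (cosmic-ray-like)
saturated = np.full((8, 8), 65535.0)     # uniformly saturated frame

print(has_outlier_pixels(cosmic_hit))    # True: the hot pixel stands out
print(has_outlier_pixels(saturated))     # False: zero scatter, check misses it
```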
@tamimfatahi, I’m afraid I don’t fully understand your suggestion. What I'm attempting is to discard a bad dark frame so it isn’t included in the 'general dark' used to calibrate the science frames. A bad dark frame in this context is a frame that isn’t dark at all; in my case, it was a pure white frame, completely saturated. What I tried to do is use the maximum pixel value of the dark file as a quick proxy for the saturation value (any hot pixel in the dark frame will yield this value). Then I compare the median value of the entire dark frame to this saturation value. If the median of the dark file is greater than 50% of the saturation value, I assume the dark file is not valid (it’s too bright). In my case, the bad dark frame’s median was 100% of the saturation value (a completely white frame). The good dark frames I compared it against had medians of about 10% of the saturation value, which is why I chose a 50% cutoff.
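The check described above might look roughly like this (a sketch of the idea, not the exact PR code; the function name and test values are hypothetical):

```python
import numpy as np

def is_valid_dark(frame, cutoff=0.5):
    """Reject a dark frame whose median exceeds 50% of its own max pixel,
    using that max (e.g. a hot pixel) as a quick saturation proxy."""
    saturation_proxy = frame.max()
    return bool(np.median(frame) <= cutoff * saturation_proxy)

good = np.full((8, 8), 1000.0)
good[0, 0] = 65535.0                  # one hot pixel supplies the proxy
bad = np.full((8, 8), 65535.0)        # fully saturated "white" frame

print(is_valid_dark(good))  # True: median is ~1.5% of the proxy
print(is_valid_dark(bad))   # False: median equals the proxy
```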
Ah, it's completely saturated. Then the method I mentioned wouldn't work, since it's mainly for cosmic rays. How about using something like the data type's theoretical maximum instead? I'm a little wary of depending on cosmic rays to determine the maximum pixel brightness of an image. What happens if there are no cosmic rays present in the dark frame? If the maximum pixel value in the dark frame is 300 ADU and the median is 290 ADU, yet the data type permits a maximum value of 65,535 ADU (since it's 16-bit), a perfectly valid frame would be rejected by the 50% check. If my analysis is incorrect, please let me know.
Hi @tamimfatahi, thanks for your feedback. My approach does not assume cosmic rays but rather the presence of hot pixels or any bright pixel in the dark frame. I'm assuming every dark frame has at least one of these, which yields the saturation value I need (at least every camera I have used has several of them). About your suggestion: I think that would give us the theoretical max value of the dtype, but I don't know how the instrument uses the dynamic range. Would the theoretical max value be reached if the dark file is somewhat saturated? That's why I chose the value of a hot pixel as a saturation proxy. You have much more experience in this; if the theoretical value is reached in practice, we can go that route. I will do some testing on this later tonight.
I believe you're right on that. I just checked the MObs CCDs and they go up to 4095 ADU, yet are stored in a 16-bit format. If you feel confident that there will always be saturated pixels in these exposures, then that's a good way to go about it!
Two improvements: instead of taking the max in each dark frame separately, we can check all the dark files first to get the overall max and then follow the same logic, using that max as the proxy for saturation and comparing the mean of each dark frame against it as the criterion for rejecting the frame. The other improvement is to use not 50% as the cutoff, but 80%, to be even more sure the dark frame is not valid:
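The two improvements might be sketched like this (illustrative only, not the exact PR code; the function name and frame values are hypothetical):

```python
import numpy as np

def filter_darks(dark_frames, cutoff=0.8):
    """Keep only dark frames whose mean is below 80% of the overall max
    pixel value across ALL dark frames (used as a saturation proxy)."""
    saturation_proxy = max(frame.max() for frame in dark_frames)
    return [f for f in dark_frames if f.mean() <= cutoff * saturation_proxy]

darks = [np.full((8, 8), 1000.0),
         np.full((8, 8), 1100.0),
         np.full((8, 8), 65535.0)]   # last frame is fully saturated

kept = filter_darks(darks)
print(len(kept))  # 2: the saturated frame is rejected
```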
Example using a bad dark frame:
I was going to suggest this morning to look at one of the light exposures and grab the max pixel value, as there should always be at least one saturated pixel there. Looks like you found a different approach to help ensure there is a saturated pixel.
What's the TL;DR - is this ready to be approved?
We need your expertise here @rzellem:
Sounds like @ivenzor took care of #1, but I would do a check by looking at the max pixel value according to the bit size of each pixel. That won't necessarily tell you if saturation is occurring (e.g., for low gains), but it will give you a good idea.

For 2: well, you could potentially have a camera with terrible dark noise; if that's the case, then you could trip your trigger and remove valid data. What about doing a neighbor check, by comparing a dark to the other darks? That will tell you if one (or maybe a few) of them is bad. But a user could confuse darks with bias or flats... and I'm not sure there's a really good way to detect that. One potential way would be to see if the FITS header has info about the dark current for the camera, but I doubt that all cameras list this in the header. So I think doing a "how does this one image compare to the others of the same cal type" test is probably the safest thing to do.
@rzellem, it seems all methods have some drawbacks. For example, besides what you already mentioned, if we use the "comparison within the same cal type" method, we won't be able to discard invalid dark frames if there's only one dark file or if all the available dark frames are saturated or too bright. But I agree that this is probably the safest method to use.
@tamimfatahi I have updated the PR; it now uses similarity among dark frames to exclude bad ones. Can you take a look?
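A similarity-based exclusion along these lines might look roughly as follows (a sketch of the general idea, not the code actually merged in the PR; the function name, threshold, and values are hypothetical). As noted above, this approach cannot help when only one dark frame exists or when all frames are equally bad:

```python
import numpy as np

def reject_dissimilar_darks(dark_frames, max_deviation=2.0):
    """Keep only dark frames whose median is close to the median of all
    the dark-frame medians (a neighbor-similarity check)."""
    medians = np.array([np.median(f) for f in dark_frames])
    reference = np.median(medians)
    keep = np.abs(medians - reference) <= max_deviation * reference
    return [f for f, k in zip(dark_frames, keep) if k]

darks = [np.full((8, 8), 100.0),
         np.full((8, 8), 110.0),
         np.full((8, 8), 65535.0)]   # outlier: saturated frame

kept = reject_dissimilar_darks(darks)
print(len(kept))  # 2: the dissimilar (saturated) frame is excluded
```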
This PR fixes issue #1348
The plot_fov function was modified to fix the negative values in the FOV plots when no comparison star is used.