Player One SDK 3.6.1 gain issue #887
Comments
I played around a little - this is in fact a bug introduced with the update to SDK 3.6.1 and it's easily reproducible. Reproduction:
You can swap in the older (3.6.0) SDK by relinking the library (without changing the driver version) and the anomalous behavior goes away. I connected a debugger and confirmed that the INDI driver (v1.13) is calling the SDK to set the gain value after restoring the saved config. In the case of SDK 3.6.0 this has the expected effect. However, in the case of SDK 3.6.1, this first call doesn't change the camera gain (which is initialized to 0). Subsequent calls do, though. N.B. the driver doesn't make any calls to the SDK unless you change the gain to a different value (see here); simply setting the gain to the current property value is a no-op.
Root cause identified. The new SDK exposes capabilities to enable auto mode for gain and the 3 white balance channels. Existing driver code, which is now triggered by these new capabilities, treats the associated numeric values as floats, when in fact all the existing auto-able config parameters (gain, WB_R, WB_G, WB_B) are integer-valued (see here). When the auto gain switch property is updated at connection time (after the numeric property has already been processed), the associated gain value is sent as well. The type error results in a garbage value being sent to the camera, which is interpreted as a 0. I'll open a PR with the extremely simple fix, but if future revisions of the SDK ever introduce auto capabilities for float-type values, new logic will be required.
@ijessen
Thank you for the PR, @ijessen. Can you recheck?
@ijessen Thank you for fixing the issue.
@PlayerOneAstronomy no problem - the bug wasn't in the SDK. I will say, though, that I would have expected the library call (

@hiro3110i at least for my Poseidon-C, the issue is only triggered by the update to SDK 3.6.1. The issue doesn't exist when I downgrade the SDK to 3.6.0 (while leaving the driver version at 1.13). The new SDK exposes auto gain for the Poseidon-C, which in turn triggers the driver code path with the bug.

@knro looks good. I built and installed the driver from master and the issue is resolved. One question from me - since I'm by no means a C++ expert - why doesn't the compiler balk at
Describe the bug
Full gain setting does not have the expected effect of producing .003 e-/ADU.
driver: indi_playerone_ccd v1.13
library: libPlayerOneCamera.so.3.6.1
Imaging results suggest the actual gain is something like 190 or less (no less than .125 e-/ADU).
It's unclear to me whether the issue is related to driver, SDK, or camera hardware. My instinct says SDK, but I figured I'd open the issue here since I know both @PlayerOneAstronomy and @hiro3110i may be able to chime in. Update: Cause is for sure the update to SDK 3.6.1.
To Reproduce
see updated reproduction steps in later comment
Exact steps to reproduce the behavior:
1. Pull latest, build, and install both library and driver
2. Run driver, launch and connect Ekos
3. Set gain to 550
4. Do not touch offset (left at the default, out-of-the-box value of 20)
5. Enable LRN mode (not sure if necessary, but that was the mode I was shooting in)
6. Confirm "auto-gain" is disabled
7. Image in 16-bit RAW mode
Expected behavior
Images should reflect .003 e-/ADU by exhibiting a minimum step size between differently-illuminated pixels of 333 ADU (with expected variance due to fixed pattern and read noise). Instead, examination of the resulting images puts a ceiling on the minimum step size of only 8 ADU, corresponding to a minimum of .125 e-/ADU.
Screenshots
The following screenshot shows statistics of a small selection of pixels within the vignetted corner of a 30" image. This image was shot at 550 gain and 20 offset. Testing shows that each offset unit corresponds to an 8 ADU increase in bias, therefore the min value of each pixel in this scenario (absent noise) is 160 (this is confirmed by darks). Note the mean and median values in this selection are 168, min 161, and max 175. This would make sense for pixels that are being illuminated by a small number of photons, where the e-/ADU value is anywhere above .125. An e-/ADU of .125 would correspond to most pixels in this selection being illuminated by a single photon, some being illuminated by two, and some by none.
The next screenshot shows statistics of a small selection of pixels at the heart of a bright star (located below the statistics window) from the same image as above. The max pixel value within this star is 702. If the gain of 550 were in fact applied, that would correspond to maaaaybe 2 photons received from this bright star in 30". There also would be no explanation for the appearance of gradients and other signal (see the nearby galaxies?) at brightnesses between the heart of this star and the background.
FITS header confirms the gain setting of 550
Finally, the raw FITS file is attached.
M_86_Light_427.zip