Tone-mapping should be stable when mDCv and cLLi are static #319
Just to be clear, I think "...completely define the tone mapping algorithm..." might just mean the decoder's choice of tone mapping is now set, right? I.e. we aren't defining what tone mapping algorithm the decoder should use, just that it should be consistent with itself. (The intention being that the decoder doesn't change which tone mapping algorithm it uses based on the image, for example... just the metadata.)
Does "SHALL completely define" contradict "Ancillary chunks may be ignored by a decoder."?
It shouldn't. But those are still ancillary chunks.
If a decoder sees that mDCv and cLLi are present but ignores them and does tone mapping based on the image contents instead, has it violated the spec?
I think it should be strongly discouraged at the very least.
Yes, the idea is that the HDR metadata, if present, should set all internal parameters of the algorithm, no matter what they are.
It is a little tricky because the HDR metadata in cLLi is itself dependent on the image contents. I could imagine a scenario where a new tone mapping algorithm is developed and the mDCv and cLLi chunks are not sufficient. So even though they are provided in an image, the decoder might ignore them. However, that theoretical future decoder will run into the problem that spawned this issue: Two images next to each other or in a sequence like a flipbook might have a jarring, unintentional jump if they use different tone mapping algorithms. The goal is to make the decoder consistent with itself. It can use the new tone mapping algorithm both times. That's fine. Said another way: So long as the decoder ignores both chunks consistently and always applies its tone mapping algorithm, that is fine.
I think it should be a little bit more stringent: it should be possible for the author to strongly hint that multiple images should be tone-mapped ignoring their contents.
It's a good thing the PNG spec defines some terms, so that we have the option of using them consistently. A PNG datastream consists of a static image and, optionally, a frame-based sequence (which may or may not include the static image as the first frame).
I just noticed that the casual reader might conclude that a static (non-animated) PNG has no frames and thus that they can't use cLLi. The spec is actually clear on that, because MaxCLL is defined on "the entire playback sequence", but I think we could be a bit more explicit that MaxCLL can indeed be defined on a static PNG.
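To make the "static PNG is a one-frame sequence" reading concrete, here is an illustrative sketch (not spec text, and simplified: real MaxCLL/MaxFALL are computed from per-pixel light levels as defined in CTA-861) of the two cLLi quantities over a playback sequence:

```python
# Illustrative sketch: MaxCLL / MaxFALL over "the entire playback sequence".
# A static PNG is simply the one-frame case, which is why cLLi remains
# meaningful for non-animated images. Pixel values here stand in for
# per-pixel light levels in cd/m^2 (a simplification of the real definition).

def max_cll_fall(frames: list[list[float]]) -> tuple[float, float]:
    """frames: per-frame lists of pixel light levels in cd/m^2.
    MaxCLL  = brightest pixel in any frame.
    MaxFALL = highest frame-average light level."""
    max_cll = max(max(frame) for frame in frames)
    max_fall = max(sum(frame) / len(frame) for frame in frames)
    return max_cll, max_fall

# Static image == playback sequence of exactly one frame:
cll, fall = max_cll_fall([[100.0, 800.0, 50.0]])
```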
Do we need to define what happens if a user creates an mDCv chunk that differs from the reference monitor specified in the colour-space specification? For example, sRGB specifies an 80 nit monitor, Adobe RGB a 160 nit monitor, BT.709 and BT.2020 both have 100 nits defined in BT.2035, and HLG has a variable monitor brightness with a corrective gamma adjustment. How do you actually use the mDCv chunk to maintain subjective appearance?
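One way to picture the mismatch (a hypothetical sketch, not anything the PNG spec defines): treat the reference-display luminances above as linear-light anchors and compute a gain between them. The function name and the idea of a single scalar gain are illustrative; a real mapping would also need gamut mapping and an appearance model.

```python
# Hypothetical sketch of mapping between colour spaces whose specifications
# assume different reference-display luminances. All names are illustrative.

# Reference display luminance (cd/m^2) implied by each colour-space spec,
# as cited in the comment above.
REFERENCE_NITS = {
    "sRGB": 80.0,       # IEC 61966-2-1
    "AdobeRGB": 160.0,
    "BT.709": 100.0,    # per BT.2035
    "BT.2020": 100.0,   # per BT.2035
}

def relative_gain(source_space: str, target_space: str) -> float:
    """Linear-light gain that maps diffuse white of the source space's
    reference display onto the target space's reference display.
    This alone does NOT preserve subjective appearance; it only shows
    how far apart the specified monitors are."""
    return REFERENCE_NITS[source_space] / REFERENCE_NITS[target_space]
```

For example, `relative_gain("AdobeRGB", "sRGB")` is 2.0: the Adobe RGB reference monitor is twice as bright as the sRGB one, which is exactly the kind of gap an mDCv value can silently disagree with.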
Then the chunks should be critical.
True, although the PNG spec has always interpreted critical to mean "no image of any sort can be displayed without it", not "needed to display the image correctly". Which is why we have only four critical chunks: IHDR, PLTE, IDAT, and IEND. Notably, chunks which are needed to make the image display correctly, like gAMA and iCCP, are ancillary.
+1 |
I can't see ChrisL's original comment here on GitHub; however, my point was not that the chunks should be critical (they shouldn't) but that unless they are critical all decoders are completely free to ignore them; it's just a QoI issue. At the very least, the use of the word SHALL in the OP comment creates a massive conflict in the specification: the chunks are ancillary, but the decoder "shall" use them to control tone mapping. In effect it's making them backdoor critical chunks. Hence my comment.
We have plenty of existing cases where an ancillary chunk includes normative wording (shall, must, should). In general, this seems to mean "you can get some sort of image without this, so it is ancillary" and also "if you use this chunk (readers) or create this chunk (writers) then ...". There are several examples of this throughout the spec.
I just realized that a display system with an ambient light detector (e.g. an HDR TV) whose tone mapping responds to the current HDR headroom would not comply with that requirement. Also, in some cases it is the display, not the decoder, which does the tone mapping, and the display does not know what PNG chunks were in the image (or even that it was a PNG image).
Chris, honestly, what on earth does this have to do with PNG? Like, dude, what on earth does tone mapping have to do with PNG?
PNGs that are HDR can be displayed on an SDR display (and often will be), so tone mapping is important. This occurs in macOS EDR right now, so you can place HDR and SDR images in different application windows.
John, honestly, you must have heard of HDR? You may have missed that PNG now supports HDR images as well as SDR ones. Here is an explainer
You may find that even without an ambient light detector, it doesn't comply. TV manufacturers want their products to look different in a showroom, to allow the consumer to choose based on look, and most will need to follow regional power-saving regulations. There are a number of initiatives to get a more standardised look. As well as HDR-to-SDR tone mapping, PQ HDR systems require HDR-to-HDR tone mapping to adjust a display-referred signal to a lower-capability monitor, which is where the metadata added to PNG originated. There was some discussion in the W3C Color on the Web group on a minimum viable tone mapping, and we presented a relatively simple technique for HDR-to-SDR mapping complete with ambient light adaptation: https://bbc.github.io/w3c-tone-mapping-demo/ This could obviously be built upon; for example, a better gamut reduction algorithm could be included, which I know is something that @svgeesus has been investigating.
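In the spirit of the "minimum viable tone-mapping" discussed above, here is a minimal HDR-to-SDR sketch. This is NOT the BBC demo algorithm; it is an extended-Reinhard curve whose only free parameter comes from cLLi metadata, so two images carrying the same metadata are always mapped identically. The 203 cd/m² SDR reference white (per BT.2408) is an assumption of this sketch.

```python
# Minimal, deterministic HDR-to-SDR tone-mapping sketch (extended Reinhard).
# Its only content-dependent parameter is MaxCLL from the cLLi chunk, so the
# mapping is stable across images that share the same metadata.

def tonemap_reinhard_extended(l: float, max_cll: float,
                              sdr_white: float = 203.0) -> float:
    """Map an absolute HDR luminance l (cd/m^2) to a relative SDR value
    in [0, 1]. max_cll is taken from cLLi; sdr_white is an assumed SDR
    reference white (203 cd/m^2 per BT.2408 is this sketch's assumption)."""
    x = l / sdr_white                # normalise to SDR diffuse white
    x_max = max_cll / sdr_white      # brightest value the content can reach
    # Extended Reinhard: near-identity for small x, reaches exactly 1.0
    # when l == max_cll, so peak content maps to SDR peak white.
    return x * (1.0 + x / (x_max * x_max)) / (1.0 + x)
```

Because the curve is fully determined by `max_cll`, a decoder using it satisfies the stability property this issue asks for: identical metadata implies identical rendering.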
If present, mDCv and cLLi SHALL completely define the tone mapping algorithm used by the decoder when rendering the image to a display.
mDCv and cLLi SHOULD be set. This is particularly important when drawing a temporal sequence of images. If mDCv and cLLi are not set, the tone mapping algorithm can vary over the sequence, resulting in temporal artifacts.
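As a sketch of what "completely define" could mean in practice, a decoder might seed its tone mapper solely from the chunk payload and never from pixel statistics. The field layout below (two big-endian uint32 values in units of 0.0001 cd/m²) follows the PNG third-edition draft of cLLi; verify it against the final spec before relying on it.

```python
# Sketch: deriving tone-mapping parameters solely from cLLi metadata,
# as the proposed wording requires. Layout assumption: two big-endian
# uint32 fields (MaxCLL, MaxFALL) in units of 0.0001 cd/m^2.
import struct

def parse_clli(chunk_data: bytes) -> tuple[float, float]:
    """Return (MaxCLL, MaxFALL) in cd/m^2 from a cLLi chunk payload."""
    max_cll_raw, max_fall_raw = struct.unpack(">II", chunk_data)
    return max_cll_raw / 10000.0, max_fall_raw / 10000.0

# A decoder that configures its (fixed) tone curve from these values alone,
# never from image statistics, renders identical metadata identically, which
# is the stability property this issue asks for.
max_cll, max_fall = parse_clli(struct.pack(">II", 10_000_000, 4_000_000))
print(max_cll, max_fall)  # 1000.0 400.0
```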