Feature/negative offset fix #31
Conversation
Thanks for the PR! I admit I find some of these changes confusing, particularly as they don't seem to be analogous to functionality in the OpenEXR C++ API.
src/frame_buffer.rs
Outdated
```rust
}

/// Return the offset of the first display window pixel with reference to the first data window pixel
pub fn origin_offset<T: Sized>(&self) -> isize {
```
Is there a good reason for this to be public?
It can be used as an index to the byte of the "first pixel" at 0,0 in the frame_buffer pixel data by the client app.
However, you are right: as it is now, it is of little use. In fact, now that I think of it, this should actually be split into two functions: a public one which returns the offset to the pixel (in pixels) and a private one which returns the byte offset.
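A minimal sketch of what that split might look like. The field names (`origin`, `dimensions`) and the struct itself are assumptions for illustration, not the actual rust-openexr `FrameBuffer` internals: the public method reports the offset in pixels, and a private helper converts it to bytes for the raw FFI pointer math.

```rust
// Hypothetical sketch only; the struct and field names are assumptions,
// not the real rust-openexr FrameBuffer.
use std::mem::size_of;

struct FrameBuffer {
    origin: (i32, i32),     // taken from data_window.min
    dimensions: (u32, u32), // width, height of the data window
}

impl FrameBuffer {
    /// Public: offset of the pixel at absolute (0, 0), measured in pixels.
    pub fn origin_offset(&self) -> isize {
        let (min_x, min_y) = self.origin;
        (-min_y) as isize * self.dimensions.0 as isize + (-min_x) as isize
    }

    /// Private: the same offset measured in bytes, for the FFI call.
    fn origin_byte_offset<T: Sized>(&self) -> isize {
        self.origin_offset() * size_of::<T>() as isize
    }
}

fn main() {
    // A data window starting at (-2, -3), 10 pixels wide:
    let fb = FrameBuffer { origin: (-2, -3), dimensions: (10, 10) };
    println!("pixel offset: {}", fb.origin_offset()); // 3 * 10 + 2 = 32
    println!("byte offset:  {}", fb.origin_byte_offset::<f32>()); // 32 * 4 = 128
}
```

Keeping the byte version private means the FFI layer is the only place that ever deals in raw byte arithmetic.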
A public one which returns the offset to the pixel (in pixels)
What would an external caller do with this?
Just added an example in negative_window.rs
```rust
let origin_offset = {
    let mut fb = FrameBufferMut::new_with_origin(x, y, width, height);
    println!("Loading buffer as {}x{}", width, height);
    fb.insert_channels(&[("R", 0.0), ("G", 0.0), ("B", 0.0)], &mut pixel_data);
    exr_file.read_pixels(&mut fb).unwrap();
    fb.origin_offset() as usize
};
// Check the pixel value at (0,0); "==" is unreliable for floats, so compare
// against an epsilon instead.
assert!(f32::abs(pixel_data[origin_offset].0.to_f32() - 0.5f32) < 0.0001f32);
```
```diff
@@ -67,6 +74,18 @@ impl<'a> FrameBuffer<'a> {
         self.dimensions
     }

+    /// Return the origin of the frame buffer.
+    pub fn origin(&self) -> (i32, i32) {
```
It's not clear which origin this is, what it's relative to, or why the framebuffer needs to be aware of it. Upstream does not seem to have a similar member or accessor.
This origin is needed to calculate the offset to the data pointer in insert_raw. Values of the origin are taken from the data_window.min header field.
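As a sketch of that calculation (my wording, assuming the usual row-major scanline layout; `min_x`/`min_y` stand in for `data_window.min`): the pixel at absolute (0, 0) sits `-min_y` full rows plus `-min_x` pixels past the start of the buffer, and that is the pointer adjustment insert_raw would apply.

```rust
// Sketch of the offset derivation; assumes a row-major buffer whose first
// element is the pixel at data_window.min.
fn offset_to_zero_zero(min_x: i32, min_y: i32, width: usize) -> isize {
    (-min_y) as isize * width as isize + (-min_x) as isize
}

fn main() {
    // A data window starting at (-16, -8), 32 pixels wide, 16 high:
    let data = vec![0.0f32; 32 * 16];
    let off = offset_to_zero_zero(-16, -8, 32); // 8 * 32 + 16 = 272
    println!("pixel (0,0) is {} pixels into the buffer", off);

    // The FFI call would then receive the base pointer advanced by `off`
    // (in-bounds here, since 272 < 32 * 16).
    let ptr = unsafe { data.as_ptr().offset(off) };
    println!("adjusted pointer: {:p} (base {:p})", ptr, data.as_ptr());

    // With origin (0, 0) the offset collapses to 0, the common case.
    assert_eq!(offset_to_zero_zero(0, 0, 32), 0);
}
```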
The documentation, and indeed naming, should make it clear what this value is.
Sounds good, what do you suggest? data_window_origin?
As an alternative, the origin field (or the derived offset) may be added as a parameter to all the insert_channel* calls. Would that be more acceptable to you? I still prefer the option of it being a member of the framebuffer struct, as it is easier to validate and less prone to errors.
I don't necessarily object to the approach taken here, I just haven't fully grasped how it works yet. I'm totally on board with papering over ill-conceived upstream APIs when it's feasible to do so and doesn't make things even more confusing elsewhere; we do after all already do similar-looking things with the dimensions field.
```diff
 unsafe {
     self.insert_raw(
         name,
         T::pixel_type(),
-        data.as_ptr() as *const c_char,
+        (data.as_ptr() as *const c_char).offset(origin_offset),
```
Can you explain the logic behind this offset, and the similar one below?
From the behaviour observed, I believe that the underlying API expects the data passed here, in insert_raw, to be pointing at the pixel mapped to (0,0) in the data_window rather than the "first" pixel in the framebuffer.

In short: it works. As for why, I have no idea, though I've seen this pattern in other APIs. I can only guess that the reason there is no offset field in the native FrameBuffer API itself is that the client is expected to do the math and pass the correct pointer. If that is the case, I find it unergonomic; having the lib calculate it from the (x, y) offset is much clearer and more practical.
the pixel mapped to (0,0) in the data_window rather than the "first" pixel in the framebuffer.
I'm not sure what this means. Are you talking about the pixel at coordinates (0, 0) relative to the origin of the data window, or of the display window? What is the "first" pixel?
Yes, it's the pixel at coordinates (0,0) relative to the origin of the data window, which in my case are negative. In this case, this pixel has a positive offset in the framebuffer. It would have offset 0 if the origin of the data window were (0,0), as it is in many common cases. There is strong evidence that the insert_raw call requires a pointer to this specific pixel as its third parameter.

Also, to clarify, the "first pixel" in the frame buffer is the one mapped to coordinates (data_window.min.x, data_window.min.y). This pixel, being at the origin, by definition has offset 0 in the framebuffer.

I haven't checked what happens if the coordinates of the origin of the data window are positive, but that is out of the scope of this bug fix. It is likely to work, but I haven't verified all the edge cases for UB. I'm going to make another test case for positive coordinates of the data window origin, but the negative origin case has higher priority: I already have a lot of images with negative origin that I am currently unable to load, and none with positive > 0 coordinates.

I believe the display_window to be inconsequential in this context, but I may stand corrected.
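To make the two pixels concrete, here is a small self-contained check (a sketch of the layout under discussion, not the crate's API): fill a buffer in scanline order with each pixel's absolute coordinates, then confirm that index 0 holds data_window.min while the computed origin offset lands exactly on absolute (0, 0).

```rust
// Build a row-major buffer where each element records its own absolute
// (x, y) coordinates, starting at data_window.min. Dimensions are assumed
// for the example.
fn coord_buffer(min_x: i32, min_y: i32, width: usize, height: usize) -> Vec<(i32, i32)> {
    let mut buf = Vec::with_capacity(width * height);
    for y in 0..height as i32 {
        for x in 0..width as i32 {
            buf.push((min_x + x, min_y + y));
        }
    }
    buf
}

fn main() {
    // An 8x4 data window with a negative origin at (-3, -2).
    let (min_x, min_y, width, height) = (-3i32, -2i32, 8usize, 4usize);
    let buf = coord_buffer(min_x, min_y, width, height);

    // The "first pixel" (offset 0) is the one at data_window.min...
    assert_eq!(buf[0], (min_x, min_y));

    // ...while absolute (0, 0) sits at a positive offset in the buffer.
    let origin_offset = (-min_y) as usize * width + (-min_x) as usize; // 2*8 + 3 = 19
    assert_eq!(buf[origin_offset], (0, 0));
    println!("(0,0) found at offset {}", origin_offset);
}
```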
I think this makes sense, but I'll need to squint at the upstream documentation a bit more to be certain. Would like to hear @cessen's thoughts too.
@Ralith sounds very sensible - just let me know if you need further changes in the branch.
Sorry for being late to the party, here! First of all, @norru thank you SO MUCH for tracking this down, and for putting together this PR. You have been such an enormous help!

Second: skimming the changes (I haven't examined them super closely yet), it all seems to make sense to me. However, I think it should be possible to keep all of the origin stuff out of the API, and just calculate it automatically on-the-fly for the relevant FFI calls based on the data window. If I'm not wrong about that, then I would much rather do things that way. There isn't any reason to make people think about and coordinate this themselves if we don't absolutely have to.
Hi @cessen, many thanks! I had a look at doing the calculation on the fly before doing the [...]

If you can point me at a general direction of that I could look deeper, or we could just keep this fix as it is for now, and revisit the architecture at a later date (say, by allowing the framebuffer wrapper to do an introspection on the [...]

On another note, the next feature I'd need to work on would be the Deep image read/write for a project I'm working on (in fact the same project that required this fix).
Hi @norru, You could be right about it not being possible. But I'd like to make double-sure before accepting the PR. Do you mind if I put this off until the weekend, and investigate it myself? If I can't figure it out by the end of the weekend then I'll merge, and worst-case we make another release later if/when I figure it out.

Re: deep images, [...]
Sounds reasonable :) As soon as I free up a slot (plenty of moonlighting these days!) I'll try to cobble together a proof of concept for writing and reading deep data, possibly in the next couple of weeks. I cannot commit to a deadline, so if anyone needs it badly, please feel free to override!
Okay, so I did some investigation, and we definitely can handle it without forcing the user to specify an origin in the framebuffer. Essentially, we just make a new framebuffer and populate it with the offset versions of all of its slices, and pass that new buffer to the C++ library. Doing this will require exposing some additional APIs from the C++ library (e.g. to iterate over the slices of a framebuffer), but it's totally doable. (Alternatively, we can store pure Rust structs for the slices, and only create the C++ framebuffer just before passing it.)

Having said that, I'm now not so sure that's actually the best way to go. In the name of mirroring the C++ API as closely as reasonably possible, the origin APIs you've created are a pretty good safe approximation of how the underlying C++ API actually works. Moreover, I'm now thinking it would also be good to see how the deep image support affects some of the APIs before making a decision on this. Tiled image support would also probably be good to add before committing to how the framebuffer API works.

So I think we should merge this PR with its current API since it's pretty transparent about how the C++ API works, and then we'll see how the deep image API ends up looking and possibly the tiled image API. Then we can come back to this and make a better API decision based on those experiences. Does that sound good?

Having said that, I would like to see a test or two for the positive origin coordinates case. @norru Would you mind tackling that before I merge? I'm happy to tackle improving the documentation after the merge.
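A rough sketch of that alternative, for discussion. The `SliceDesc` struct and all names here are assumptions, not actual rust-openexr or OpenEXR C++ types: slice descriptions stay in pure Rust, and offset copies are produced on the fly only at the moment the C++ framebuffer is built, so the caller never thinks about the origin.

```rust
// Hypothetical pure-Rust slice description; not a real crate type.
// Offsets are kept in elements (isize) so both negative and positive
// data-window origins are handled.
#[derive(Clone, Copy, Debug)]
struct SliceDesc {
    base: isize,     // element index of the slice's first pixel
    x_stride: isize, // elements per pixel step
    y_stride: isize, // elements per scanline step
}

/// Return copies of the slices with their bases moved to the pixel
/// mapped to absolute (0, 0), derived on the fly from data_window.min.
fn offset_for_ffi(slices: &[SliceDesc], min_x: i32, min_y: i32) -> Vec<SliceDesc> {
    slices
        .iter()
        .map(|s| SliceDesc {
            base: s.base
                + (-min_y as isize) * s.y_stride
                + (-min_x as isize) * s.x_stride,
            ..*s
        })
        .collect()
}

fn main() {
    // One channel, 32 pixels per scanline, data window starting at (-16, -8):
    let slices = vec![SliceDesc { base: 0, x_stride: 1, y_stride: 32 }];
    let adjusted = offset_for_ffi(&slices, -16, -8);
    println!("adjusted base: {}", adjusted[0].base); // 8 * 32 + 16 = 272
}
```

The cost is one small allocation and a pass over the slices per read/write call, which is the "hidden overhead" discussed below: it scales with the number of channels, not with image resolution.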
Hi @cessen, sounds about right! Thanks for further investigating. From the sound of it, the alternative to exposing the [...]

I will add the test for the positive offset later tonight :)

EDIT: talking about the deep images, let's take that discussion to another ticket. My first step will be to do some familiarization and a PoC to understand how the C++ API works, then I will be able to sketch up a wrapping Rust API.
Hi @cessen, I have added the positive window test and it looks fine.
Well, I guess for us as the library implementers it wouldn't be very ergonomic. ;-) But IMO it would be a more ergonomic API to expose (it would just be the same API as we had before your changes). The downsides are that it wouldn't match how C++ OpenEXR's APIs work, and would have a small amount of hidden overhead. So I think a wait-and-see approach is best here.

Anyway, this looks good to me! And yeah, please open a ticket for the deep image stuff. :-) Thanks so much! I super appreciate your help on getting this tracked down and fixed.
Thanks for the merge! I am a bit worried about the hidden overhead, especially if there is no way to bypass it and if it turns out not to be that small, e.g. for images with lots of scanlines. Anyway, there is no need to rush to worry right now :)
The overhead would be pretty small. It doesn't grow with image resolution, only with the number of channels. We'd need to test, of course, but I would be really surprised if you didn't need a massive number of channels (approaching an appreciable fraction of the number of pixels in the image) before the overhead stopped being dwarfed by everything else. Because of that, I'd actually be more concerned about the impact on small images. In any case, we can examine that once we have more experience with the other parts of OpenEXR's APIs.
Fixes #30