Reading only part of a progressive JPEG 2000 #1
The Second Life protocol is my main use-case for jpeg2k. Right now I have only used jpeg2k with the Bevy engine. Its asset decoding support doesn't provide a way to ask for low-res textures, so it wasn't a big priority to expose the full OpenJpeg interface. There is some support right now for lower-res decoding (to get a smaller texture when the full resolution isn't needed). I haven't tried OpenJpeg with only part of the image file. When loading the image from a file, OpenJpeg can seek so that it only reads what it needs. So if a lower resolution or a small decode area is asked for in the decode parameters, then it will only read what it needs. I don't think the OpenJpeg streams were designed for doing partial reads over the network. You can see the current decode parameters in the example: I haven't looked into how the Viewer decides how many bytes (HTTP byte range requests) to request. The J2K header might provide some of this info, but the first HTTP request would most likely always ask for the same number of bytes. A "smart" asset server could possibly store an index/metadata extracted from the J2K image and return the J2K header + first resolution level back to the viewer. The full J2K spec also has JPIP. I wasn't able to find many examples of how to use OpenJpeg when making this crate, so this first release was just about getting it to work. I am interested in feedback on what API to expose (wrapping all unsafe access to openjpeg-sys).
I can read, say, 2K bytes, and then ask for a decode. That's what Second Life viewers do. They ask the HTTP server for a part of the file. Can I get the info that tells me what resolutions are available and how much of the file they need? Or simply say "here's a vector of bytes, give me the best resolution in there."
It seems that doing progressive decoding is not as easy as I thought. Progressive download/decoding should work like:
I have just done some testing with partially downloaded J2C (JPEG 2000 codestream, which is the format SL uses), even when using j2c data captured from traffic between the SL Viewer and asset server. From what I have seen, the SL viewer always requests the first 600 bytes of each texture before requesting more. One texture was downloaded in chunks: 600, 5,545, 18,433. I haven't confirmed whether the Viewer was progressively displaying that texture. So far I haven't been able to find any details on how to do progressive decoding with OpenJpeg.
Right. I have all the priority queue stuff running in Rust. But I download the whole image, convert it to a PNG, and reduce it in size to simulate reading part of the JPEG 2000 file. The SL viewers use Kakadu if built by Linden Lab or Firestorm. If you build it yourself, which I used to do, it uses OpenJPEG, unless you buy a Kakadu license. Here's a discussion of the current build procedure. So it does seem to work. I've built it myself in the past, but don't have the current build environment set up.
The SL viewer does the same calls. For now you can use this crate to decode the textures directly to the resolution that you need (LOD style). One improvement to the API would be to allow getting the image size before doing the decode step.
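As background for the LOD-style decoding mentioned above, here is a small sketch of how a JPEG 2000 "reduce" level maps to the decoded size: each discarded resolution level halves the dimensions, rounding up. This is my own illustration of the resolution-level math, not the jpeg2k API; `reduced_size` is a hypothetical helper.

```rust
// Sketch: how a JPEG 2000 "reduce" decode parameter maps to output size.
// Each discarded resolution level halves the image dimensions, with
// ceiling division (per the JPEG 2000 resolution-level definition).
fn reduced_size(full_w: u32, full_h: u32, reduce: u32) -> (u32, u32) {
    let d = 1u32 << reduce; // 2^reduce, one halving per discarded level
    ((full_w + d - 1) / d, (full_h + d - 1) / d)
}

fn main() {
    // A 1024x768 texture decoded with reduce = 2 comes out 256x192.
    assert_eq!(reduced_size(1024, 768, 2), (256, 192));
    // Odd dimensions round up: 1025 wide at reduce = 1 gives 513.
    assert_eq!(reduced_size(1025, 768, 1), (513, 384));
}
```

Note that `reduce` cannot exceed the number of resolution levels the image was encoded with.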
Also there is another open source Jpeg2000 library, Grok (v9.5.0), and the crate grokj2k-sys. Its performance is close to Kakadu. Originally I was going to support both OpenJpeg and Grok, but failed to get Grok to decode the image (I only got the image header info; the component data was NULL).
Until partial decoding is fixed in OpenJpeg, you will still need to download the full image. OK for now. Would you please file an issue with OpenJPEG to get them to fix that? Thanks. I tried Grok. It won't cross-compile from Linux to Windows, or didn't in an earlier version; there was a dependency problem. I need to revisit that. From the issues list, there are a lot of problems with incorrect decoding, but they are getting fixed. I'd suggest revisiting that in a few months. I think that's a good long-term direction; Grok support in your package would be useful. There is a Grok interface for Rust, "grokj2k-sys", but it's Affero GPL 3.0 licensed, which is very restrictive for a library shim, especially since Grok itself is only 2-clause BSD licensed. If you link grokj2k-sys, your whole program becomes Affero GPL 3.0. Keep at it, please. Multiple people need a JPEG 2000 decoder for Rust that Just Works. Thanks.
When I add Grok support it would be behind a feature flag. Too bad about the AGPL license. It wouldn't be too hard to make a new sys crate for the Grok library.
I'll be trying your package soon. Just got past a big problem in Rend3.
One short-term option would be to backport the openjpeg-sys crate to the 1.5 release. Not sure if there are any security issues with that older release. A feature flag can be used to select the older release.
Luckily someone had already started to fix decoding of partial downloads in OpenJpeg. Their PR was out-of-date and had some outstanding cleanup. I updated/fixed that PR and submitted a new one: uclouvain/openjpeg#1407 For the time being, I am going to fork openjpeg-sys to use my branch.
Grok itself is AGPL: https://github.com/GrokImageCompression/grok/blob/master/LICENSE
hmm. Its license is complex, since some of it is covered by
It is originally a fork of OpenJPEG, which is 2-clause BSD licensed, but Grok-specific changes (and there are tons; it is close to a rewrite) are AGPL-only, so for all practical purposes its use is governed by the AGPL.
Based on the license, it seems that supporting Grok in this crate will not happen. Safer to just fork this repo and create a different crate for Grok later (might not happen). I should have a new release soon with support for decoding partial j2c streams.
FYI, I pushed the code for version 0.6.0 that has partial decode support. Right now I can't publish it while it uses a git dependency. For now you can use: to get the new version.
This code might be useful to you for converting the j2k image components into a
Oh, right. I saw "2-clause BSD" there, but that's from before their fork. Grok is a commercial product; there's a paid version. The free version seems to be restricted to avoid it being used much. Oh well.
That's useful. I wonder what code is generated for
If the Rust compiler can figure out that that reduces to a memcpy, I'd be really impressed.
I have found that it is best to use iterators for code like this. The Rust compiler can reason better about the bounds and avoid generating bounds checks inside the loop.
I don't think there is any way to use a memcpy for that, since the r, g, b, a components need to be interleaved for the texture. I might have found a faster way. Since components -> pixels is going to be common code, I will add helper functions. https://godbolt.org/ supports many different languages too.
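A minimal sketch of the components-to-pixels interleave using iterators, as discussed above. The function name `interleave_rgba` is my own illustration, not the jpeg2k helper API; with the zipped iterators, the compiler can prove the indices are in bounds and skip per-element bounds checks.

```rust
// Sketch of interleaving separate r, g, b, a component planes (as
// OpenJPEG returns them) into packed RGBA pixels, using iterators so
// the compiler can elide per-element bounds checks inside the loop.
fn interleave_rgba(r: &[u8], g: &[u8], b: &[u8], a: &[u8]) -> Vec<u8> {
    let mut out = Vec::with_capacity(r.len() * 4);
    for (((&r, &g), &b), &a) in r.iter().zip(g).zip(b).zip(a) {
        out.extend_from_slice(&[r, g, b, a]);
    }
    out
}

fn main() {
    let (r, g, b, a) = ([1u8, 5], [2u8, 6], [3u8, 7], [4u8, 8]);
    // Two pixels: (1,2,3,4) then (5,6,7,8).
    assert_eq!(interleave_rgba(&r, &g, &b, &a), vec![1, 2, 3, 4, 5, 6, 7, 8]);
}
```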
@John-Nagle You can see the new Using
Finally got back to this. I'm trying to get partial decoding to work. All the right stuff seems to be implemented at the jpeg2k, openjpeg-sys, and OpenJpeg levels. But they don't play well together. The "strict-mode" feature has to be turned on; otherwise jpeg2k silently ignores turning off strict mode in parameters. So I use, in Cargo.toml,
and got the compile error:
So I tried just compiling jpeg2k standalone, getting the latest version with git, then:
This got the same compile error, just compiling jpeg2k by itself. It looks like jpeg2k is at version 0.6.1 in the repository but at 0.6.2 in crates.io. Something is out of sync. What's puzzling is that it should still work. jpeg2k pulls in openjpeg-sys = { version = "1.0", default-features = false } although it really needs 1.0.7 for strict mode to work. However, when I check Cargo.lock, I see
so the latest version was used anyway. Looking inside openjpeg-sys, it turns out that opj_decoder_set_strict_mode was added to openjpeg-sys in 2022, and it is in version 1.0.7. It's in there. See https://github.com/kornelski/openjpeg-sys/blob/main/src/ffi.rs#L1092 So this ought to compile. It doesn't.
Ah, I see what's wrong. Crates.io and Github are out of sync for openjpeg-sys. Filed an issue over at openjpeg-sys. Don't know if it will do any good.
Sorry I forgot to push 0.6.2 here. It was a small bug fix for 4 component images (RGBA).
openjpeg-sys = { git = "https://github.com/kornelski/openjpeg-sys.git", default-features = false, branch = "main" } I have also been working on a c2rust port of
openjpeg-sys was just updated on crates.io to 1.0.8, adding support for opj_decoder_set_strict_mode. "cargo update" fetched that, and now your package works with "strict mode" off. Here's an example of a decoded picture: And this is what happens when you truncate the data from 650678 bytes to 65736 bytes: So, progressive mode works now! If I truncate the data too much, I get "Error: Null pointer from openjpeg-sys".
I published jpeg2k version 0.6.3 with the minimum version set to 1.0.8 for openjpeg-sys. Maybe if the image is encoded with more resolution levels it will decode with fewer bytes. I wonder what size the SecondLife client uses when requesting the first chunk of images.
I'm not sure. The viewer source code is on Github now, with better search tools than the old Bitbucket system. Here's my code for that, not yet in use. My current plan is to request 4K bytes the first time. That will get me an image at least 32 pixels on the longest side. If I need more, I'll make a second request to the asset servers. By that time, I'll know from the first request how big the image is. In my own viewer, I want about one texel per screen pixel. So, no matter how big the image is, I will only request what I need. My current fetcher is running the OpenJPEG command line program in a subprocess, launching it once for each image. I like the Rust port idea. The main problem with the OpenJPEG C code is its long history of buffer overflows and CERT advisories. Rust will help, but only if it's safe Rust. What you get out of c2rust looks like C in Rust syntax, with explicit pointer manipulation. You have a big job ahead cleaning that up. I appreciate that you're tackling it.
I had a packet dump of an SL client (Singularity 1.8.9.8338) and OpenSimulator. Looks like the client requests a byte range. I think the official client uses a commercial J2K library; the open source builds seem to use OpenJpeg.
The main reason I went the c2rust route is that Openjpeg has a large number of test cases, and the generated Rust code compiled and worked. The biggest issue I had with the generated code is that c2rust expands C macros, but I have replaced those with Rust macros and removed the duplicate code. It will be a long and slow process. I do small refactors and rerun the tests, which helps ensure the refactors don't add bugs that would be hard to find later. Once the core code has been ported to safe Rust, I plan to split out the C Openjpeg interface and make it a wrapper around the safe Rust core.
Another short-term solution is to compile the Openjpeg code to WASM with a simple API (pass raw j2k bytes in, get a simple Image object with header and image data out). Not sure if threads are supported when targeting wasm, but if processing many images (SL-style clients), using a thread pool to process multiple images in parallel would work. Someone else did that to safely use Openjpeg in a service: #2
Is there anything else I should do now? Or do you have enough information to work on the problem?
Not a crash, but valgrind did spit out this error:
Reading uninitialized memory might be why it randomly crashes. |
t2.c, line 1150, looks suspicious.
I'm not sure what's going on in that code, but it seems to me that there should be some kind of check there for going off the end of the input data. Since I'm submitting truncated files to the decoder, that's a likely cause of trouble.
I am adding caching (dumping to files) of the data passed in for decoding, which might help with getting a test-case.
Finally got
It seems related to the reduction level and a partial j2k image.
Thank you. That makes it clear that it's not in your code or my code. What do we do now?
Those uninitialized reads could cause other random issues depending on what value was in memory before. With caching of the fetched asset data I got crashes at the same spot:
Here is the change I used to cache the asset files: https://github.com/Neopallium/jpeg2000-decoder/tree/cache_assets
This may be the same problem as uclouvain/openjpeg#1427
To generate that test-case:
Edit: Here is the command to trigger the uninitialized reads:
Note that the file is 44237 bytes long (HTTP range requests are inclusive on both ends).
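Since that inclusive-range off-by-one is easy to get wrong, here is a tiny sketch of the arithmetic (a hypothetical helper, not from either crate): a request for the first n bytes must end at offset n - 1.

```rust
// HTTP byte ranges are inclusive on both ends, so "bytes=0-44236"
// transfers 44237 bytes. Build the Range header value for "the first
// n bytes" without the classic off-by-one.
fn range_for_first_n_bytes(n: u64) -> String {
    assert!(n > 0, "range must cover at least one byte");
    format!("bytes=0-{}", n - 1)
}

fn main() {
    // The 44237-byte test file above corresponds to bytes 0..=44236.
    assert_eq!(range_for_first_n_bytes(44237), "bytes=0-44236");
    // The SL viewer's first 600-byte chunk would be bytes 0..=599.
    assert_eq!(range_for_first_n_bytes(600), "bytes=0-599");
}
```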
I think that is the same as the issue that requires us to disable thread support. Edit: Maybe not, since the file size is much smaller and doesn't crash or show any errors from valgrind. This last test-case seems to be related to partial images and reduction level.
I am thinking of sandboxing
Converting OpenJPEG to compile to WASM would take some effort. WASM is not a supported target for OpenJPEG. You previously mentioned that you were converting OpenJPEG to Rust. That seems more likely to be useful. As I understand it, JPEG 2000 files have this structure. One way to approach this might be to write, in Rust, the parts which parse the stream into its components. Each subsection would be passed to the next level down as a slice. When a slice of "body data" is extracted, that would go to the existing decoder code. The body data decoding code could be the existing C code from OpenJPEG, or it could be that decoder code translated with c2rust. Someone else already did most of that top-down work. See https://github.com/iszak/jpeg2000 But they never did the low-level code that does the actual decoding of the compressed image.
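A rough sketch of that top-down parsing idea: walk the top-level J2K codestream markers and hand each segment off as a borrowed slice. This is a simplified illustration only — it handles SOC (0xFF4F, no payload), SOD (0xFF93, rest is tile data), and generic length-prefixed segments, where the big-endian u16 length counts the length field plus the payload; a real parser must know the full marker set and tile-part structure.

```rust
// Split a J2K codestream into (marker, payload-slice) pairs, stopping
// cleanly on truncated input instead of reading past the end.
fn split_markers(data: &[u8]) -> Vec<(u16, &[u8])> {
    let mut segs = Vec::new();
    let mut pos = 0;
    while pos + 2 <= data.len() {
        let marker = u16::from_be_bytes([data[pos], data[pos + 1]]);
        pos += 2;
        match marker {
            0xFF4F => segs.push((marker, &data[pos..pos])), // SOC: no payload
            0xFF93 => {                                     // SOD: rest is tile data
                segs.push((marker, &data[pos..]));
                break;
            }
            _ => {
                if pos + 2 > data.len() { break; }
                let len = u16::from_be_bytes([data[pos], data[pos + 1]]) as usize;
                if len < 2 || pos + len > data.len() { break; } // truncated stream
                segs.push((marker, &data[pos + 2..pos + len]));
                pos += len;
            }
        }
    }
    segs
}

fn main() {
    // SOC, then a fake SIZ (0xFF51) segment with a 4-byte payload (len = 2 + 4).
    let stream = [0xFFu8, 0x4F, 0xFF, 0x51, 0x00, 0x06, 1, 2, 3, 4];
    let segs = split_markers(&stream);
    assert_eq!(segs[0].0, 0xFF4F);
    assert_eq!(segs[1], (0xFF51, &[1u8, 2, 3, 4][..]));
}
```

Each payload slice could then be passed down to a lower-level decoder, C or c2rust-translated, exactly as the paragraph above describes.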
I have created a sandbox wrapper for the jpeg2k decoding: The Sandboxed decoder object can be shared across threads. Right now each image decode request will instantiate the wasm module (just allocates memory/stack space). It is possible to improve that by making each instance re-usable, but for this first release I just wanted to get it working.
That is a long-term ongoing project. The sandbox wrapper is just a useful short-term solution; it should provide better performance than spawning a process while still protecting the application from crashes.
I remember looking at that project when I first started looking for a Jpeg2000 decoder in Rust. Most of the rewrite work that I have already done is on some of the low-level parts (t1, mqc, dwt), since it was easier to just refactor the internal unsafe code. There is still a lot more cleanup to do, but if I keep working on those low-level parts, they might be reusable in jpeg2000. Once those sub-modules become usable, they can be split out into crates that both projects can use. Next time I decide to work on the port, I will check that repo, see what parts might be usable there, and focus my effort there.
Thanks very much. I've been running it on the same set of 510 files, with the same file truncation to get a 128x128 file, with no failures. Works in debug, release, and cross-compiled from Linux to x86_64-pc-windows-gnu. The sandbox wrapper is a good working solution for now. 510 images decoded down to 128x128 in 29 seconds. If I ask for too much "reduction", I still get "Null pointer from openjpeg-sys". At that point, is the instance of the decoder corrupted? Or can I use it again?
It can be used again.
Thanks. I'm impressed by how fast this is. The JPEG decoder does a lot of low-level bit-pushing yet still runs at acceptable speed.
Benchmarking - decoding 510 images. OpenJPEG, running in C:
Running in sandbox:
So, 2.6x slower when sandboxed. But, if I rerun the C version a few times:
Nice to see some benchmarks of the sandbox. I should be able to lower the overhead of the sandbox by making the instances reusable (but recreating them if decoding fails, to clear out any corruption from the instance's memory). The difference in speed might not be as noticeable if the images are processed in parallel. Should be easy to parallelize the benchmark. I just got my Rust port of openjpeg to compile correctly for 32-bit targets. If you want to try it:
Thanks. I intend to process images in parallel. My real graphics program has multiple threads downloading and decompressing, until all CPUs are busy. This runs at a lower priority than rendering, etc., so I can use all available compute power. The program you are seeing is the test fixture only. I will take a look at the all-Rust form.
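A stdlib-only sketch of that fan-out, using `std::thread::scope` (Rust 1.63+) with a placeholder `decode` function. Both names are my own illustration; a real version would call into jpeg2k per image and would likely use a proper thread pool or a crate like rayon rather than one thread per chunk.

```rust
use std::thread;

// Placeholder for a real jpeg2k decode call; it just reports input length.
fn decode(bytes: &[u8]) -> usize {
    bytes.len()
}

// Split the image list into chunks and decode each chunk on its own
// scoped thread, writing results into disjoint slices of the output.
fn parallel_decode(images: &[Vec<u8>], workers: usize) -> Vec<usize> {
    let per = images.len().div_ceil(workers.max(1)).max(1);
    let mut results = vec![0usize; images.len()];
    thread::scope(|s| {
        for (chunk_in, chunk_out) in images.chunks(per).zip(results.chunks_mut(per)) {
            s.spawn(move || {
                for (img, out) in chunk_in.iter().zip(chunk_out) {
                    *out = decode(img);
                }
            });
        }
    });
    results
}

fn main() {
    let images = vec![vec![0u8; 3], vec![0u8; 5], vec![0u8; 7]];
    assert_eq!(parallel_decode(&images, 2), vec![3, 5, 7]);
}
```

Scoped threads let the workers borrow the input and output slices directly, with no `Arc` or channels needed.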
Well, the c2rust version is compatible with the C version. It has the same bug.
Tried
on the c2rust version.
This compares to 12 seconds for the C version and 29 seconds for the WASM version. One time I got:
c2rust correctly reproduced the invalid memory reference in the C code! If you want to try this, it's branch "c2rusttest" in the same project. (Current branches: "main": uses C version. "sandbox": uses WASM version. "c2rusttest": uses c2rust version.)
Yup, I don't expect it to fix those bugs yet. But it is nice to be able to switch between the two.
Thanks, that will be useful for comparing performance.
Progress! Fixed in OpenJPEG (probably). I built OpenJPEG from source and tried the single-file test case with the truncated file. Valgrind was happy. But I haven't tried the full 510-file set yet, because that tester is in Rust. Fix at OpenJPEG level: uclouvain/openjpeg#1459 Version update requested to openjpeg-sys: https://github.com/kornelski/openjpeg-sys/issues/10 Once they do that, you can update, and, with luck, it will work.
@John-Nagle I just added support for reading just the image header. This might be useful for you to grab the image details (width, height, # components) before calculating a reduction level. In the sandbox crate I made the same available, and benchmarked read_header. Reading just the header is fast even with the sandbox:
5,000 header reads in less than a second. That was with One thing I noticed is that OpenJpeg's thread support doesn't work well with
Oh, nice. Reading the header by itself is very useful. The info I need is the size of the original image and the info needed to calculate bytes per pixel (number of components and precision) so I can calculate the reduction factor before decoding. So now I can just:
How many bytes are needed to just get that header info? Does it vary with the amount of metadata? Would I ever need more than 1K? I have threading disabled at the decoder level. I have multiple threads doing download-decode-update-cache on multiple images in parallel. These images are mostly too small for within-image threading to help much, if at all. Parallel OpenJPEG is for zooming in and out on huge multi-gigabyte images.
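The header-then-decode flow described above needs a way to turn the full-resolution size from the header into a reduce level. A hypothetical sketch (`reduction_for_target` is my own name; the result must still be clamped to the number of resolution levels actually encoded in the stream):

```rust
// Pick a "reduce" level from the header's full-resolution size so the
// decoded image is at most `target` pixels on its longest side (roughly
// one texel per screen pixel). Each level halves dimensions, rounding up.
fn reduction_for_target(full_w: u32, full_h: u32, target: u32) -> u32 {
    let mut longest = full_w.max(full_h);
    let mut reduce = 0;
    while longest > target {
        longest = (longest + 1) / 2; // one resolution level discarded
        reduce += 1;
    }
    reduce
}

fn main() {
    // 1024 -> 512 -> 256 -> 128: three levels discarded.
    assert_eq!(reduction_for_target(1024, 1024, 128), 3);
    // Already small enough: no reduction.
    assert_eq!(reduction_for_target(100, 100, 128), 0);
}
```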
New version passed the 510-file test. New version also passed Valgrind:
It looks like some details of the components are also decoded from the header. I see
You can use j2k_detect_format() to check if the bytes are JP2 file format or J2K codestream format. For JP2, more bytes might be needed depending on how much extra metadata is included in the file. For the J2K codestream format, I was able to decode the header with just 123 bytes:
Part of that looks to include the name of the software that encoded the image. Openjpeg re-encode of test image (127 bytes):
It might be possible to improve header decoding from a partial stream, since it seems that the "comment" field used by encoders is the last part, at least for the J2K codestream format. The JP2 file format most likely needs more bytes. With HTTP request pipelining (or HTTP/3?) support on the server, it should be possible to batch small requests (200-300 bytes) to decode the headers. I might look into adding a simple pure Rust header decoder to
Ah, good. Can I read the component info from the header? The last time I looked, the API let me get num_components but I couldn't access the component headers themselves. This is for file size estimation; I need to know how much space each pixel consumes. Anyway, now I have a usable decoder. Thanks.
Let me know if you find a file that is missing the component headers. When only reading the header (or a short partial stream), the component pixel data will be null, but the other fields seem to be initialized correctly. I added component info to
OpenJPEG can give me the header info with the image size, and then I can ask for a fraction of the resolution without reading the entire stream. But I don't think you expose that functionality. Is there some way to do that?
Use case is wanting a low-rez version from the asset server for Open Simulator or Second Life. Often, many assets are only read at low-rez because they are for distant objects. So the network connection only reads part of the data and then is closed.
(Current code is running OpenJPEG in a subprocess, and reading too much. Looks like this: https://player.vimeo.com/video/640175119)