
Memory efficient image storage #482

Open
astrsh opened this issue Nov 18, 2024 · 6 comments
Labels
  • Platform: All - Issue/PR is applicable to all platforms.
  • Status: Accepted - Issue/PR is accepted.
  • Status: Backlog - Issue/PR is not currently being worked on.
  • Type: Performance - Issue/PR involves performance issues/improvements.

Comments

@astrsh
Member

astrsh commented Nov 18, 2024

Currently the library-image addon makes use of a BufferedImage wrapper to manage images. This works fine for relatively small images, but has issues for very large images that may be very small on disk yet get fully decompressed when stored in memory via BufferedImage.

Support post by Discord user @ patrick_vip: https://discord.com/channels/715448651786485780/1307781629221273700 - Using STITCHED_BITMAP with a cumulative on-disk image size of approximately 10 MB and total dimensions of approximately 42k × 53k, in-memory consumption rises to more than 6 GB, resulting in OOM, most likely due to BufferedImage storing every individual pixel. Consumption according to thread dump analysis:
[screenshot: thread dump analysis of memory consumption]
There is opportunity for more efficient in-memory storage of these particular images, which have very large dimensions but little colour variance on a pixel-by-pixel basis. One notable optimization could be storing pixel data in a quadtree, such that large regions of the same colour (common in these kinds of images) can be stored efficiently. Such optimizations would be inefficient for gradient-noise-like images (e.g. heightmaps), so this would likely be a user-specified optimization. Other ideas include vector support (SVGs should currently work, but will still get loaded into a BufferedImage).
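For illustration, a minimal sketch of what such a colour quadtree could look like (hypothetical code, not part of the addon; it subdivides all the way to single-pixel leaves, whereas leaves could instead be small 2D arrays):

```java
import java.awt.image.BufferedImage;

// Hypothetical sketch, not part of library-image: a colour quadtree where any
// uniform region collapses to a single leaf holding one packed ARGB int.
final class ColorQuadTree {
    private sealed interface Node permits Leaf, Branch {}
    private record Leaf(int argb) implements Node {}
    private record Branch(Node nw, Node ne, Node sw, Node se) implements Node {}

    private final Node root;
    private final int size; // power-of-two square bounding the image

    ColorQuadTree(BufferedImage img) {
        int s = 1;
        while (s < Math.max(img.getWidth(), img.getHeight())) s <<= 1;
        this.size = s;
        this.root = build(img, 0, 0, s);
    }

    int argbAt(int x, int y) {
        Node n = root;
        int s = size;
        while (n instanceof Branch b) {
            s >>= 1; // descend into the child quadrant containing (x, y)
            n = x < s ? (y < s ? b.nw() : b.sw()) : (y < s ? b.ne() : b.se());
            if (x >= s) x -= s;
            if (y >= s) y -= s;
        }
        return ((Leaf) n).argb();
    }

    private static Node build(BufferedImage img, int x0, int y0, int s) {
        int first = sample(img, x0, y0);
        boolean uniform = true;
        for (int y = y0; uniform && y < y0 + s; y++)
            for (int x = x0; x < x0 + s; x++)
                if (sample(img, x, y) != first) { uniform = false; break; }
        if (s == 1 || uniform) return new Leaf(first);
        int h = s >> 1;
        return new Branch(build(img, x0, y0, h),     build(img, x0 + h, y0, h),
                          build(img, x0, y0 + h, h), build(img, x0 + h, y0 + h, h));
    }

    // clamp out-of-bounds samples so the power-of-two padding stays uniform
    private static int sample(BufferedImage img, int x, int y) {
        return img.getRGB(Math.min(x, img.getWidth() - 1), Math.min(y, img.getHeight() - 1));
    }
}
```

Lookups cost one pointer chase per tree level rather than a single array index, which is the trade-off raised below.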

@astrsh astrsh added Status: Accepted Issue/PR is accepted. Platform: All Issue/PR is applicable to all platforms. Type: Performance Issue/PR involves performance issues/improvements. Status: Backlog Issue/PR is not currently being worked on. labels Nov 18, 2024
@solonovamax
Member

solonovamax commented Nov 18, 2024

we could look at making our own custom implementation for images.

the problem is, if we want to load an entire 8-bit RGBA image into an array, then it fundamentally requires a certain amount of memory.
for a 42k × 53k image, the minimum would be about 8.3 GB. if we reduced it to plain RGB and dropped the alpha channel, it's 6.2 GB (reflective of the memory usage experienced). ($\left(42000\ \text{pixels} \times 53000\ \text{pixels}\right) \times \frac{3\ \text{channels}}{\text{pixel}} \times \frac{8\ \text{bits}}{\text{channel}} \times \frac{\text{byte}}{8\ \text{bits}} \times \frac{\text{kilobyte}}{1024\ \text{bytes}} \times \frac{\text{megabyte}}{1024\ \text{kilobytes}} \times \frac{\text{gigabyte}}{1024\ \text{megabytes}} \approx 6.2\ \text{GB}$)

the thing is, arrays are just so much damn faster than, say, a quad tree. using a quad tree will almost certainly impact performance significantly.
the same can be said of just storing the encoded png (or any other encoded form) in memory and then decoding it on each lookup.

an alternative would be to pre-compute everything that uses the image, serialize that to disk, and, so long as the hash of the image never changes, continue to use that.
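A minimal sketch of that approach (hypothetical `PrecomputeCache`, not existing addon code; derived data is keyed on the SHA-256 of the source image, so it's rebuilt only when the image changes):

```java
import java.nio.file.Files;
import java.nio.file.Path;
import java.security.MessageDigest;
import java.util.HexFormat;
import java.util.function.Function;

// Hypothetical sketch: derived data is keyed on the SHA-256 of the source
// image and reused from disk until the image changes.
final class PrecomputeCache {
    private final Path cacheDir;

    PrecomputeCache(Path cacheDir) {
        this.cacheDir = cacheDir;
    }

    byte[] getOrCompute(Path image, Function<byte[], byte[]> compute) throws Exception {
        byte[] source = Files.readAllBytes(image);
        String key = HexFormat.of().formatHex(MessageDigest.getInstance("SHA-256").digest(source));
        Path cached = cacheDir.resolve(key + ".bin");
        if (Files.exists(cached)) return Files.readAllBytes(cached); // hash unchanged: reuse
        byte[] derived = compute.apply(source); // the expensive precomputation
        Files.createDirectories(cacheDir);
        Files.write(cached, derived);
        return derived;
    }
}
```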

@solonovamax
Member

solonovamax commented Nov 18, 2024

why did github cook my $\LaTeX$
nvm, it only cooked it on mobile (even in a web browser)

@astrsh
Member Author

astrsh commented Nov 18, 2024

Additional overhead would likely be negligible relative to the procedural alternatives. The granularity of quadtrees can be tailored such that leaves are larger 2D arrays (say $8^2$) rather than individual pixels, which should cut down the max depth a good amount. I think some additional CPU overhead would be much preferable to memory consumption in the gigabytes where applicable. I'd like to make large-scale image-based generation feasible, given that most people utilizing image-library facilities will most likely be using images with side lengths measured in tens of thousands of blocks anyway.

Reduced memory would be preferable to reduced processing overhead as well, since maps are likely to be pregenerated and/or mostly generated anyway, so a lot of decompressed image memory will never be used after a certain point in a world's lifecycle. These things would also be addressed by the available load-on-use cache options in combination with stitched bitmap support.

@solonovamax
Member

solonovamax commented Nov 18, 2024

> Additional overhead would likely be negligible relative to the procedural alternatives. […] Reduced memory would be preferable to reduced processing overhead […]

perhaps something with RLE might be better instead

for RLE, basically just:

  • consider all horizontally adjacent pixels of the same colour as one run. identify the starting index & the colour.
  • split the image into rows. index into an array using the row.
  • split each row into equal-width sections. index into an array using the column to find the appropriate section. (the section size could be either unique to each row, or common across all rows. if it's unique, that would be because it's computed dynamically and you wish to optimize the number of elements within the inner array, for example limiting it to a max of 8 to avoid long binary searches, because a binary search is O(log n) whereas indexing into the array is O(1), as the index can be directly computed.)
  • within each section, there is an array of Pair<Integer, Color>, where the integer is the starting column index (the ending index need not be specified). binary search this array to find the correct element. alternatively: the colour can be represented by an int, so pack the index & the colour into a single long and use an array of longs. use the lower bits as the index or something. (see the sketch after this list.)
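A minimal sketch of that packed-long layout (hypothetical code; it puts the start column in the high bits so that natural long ordering matches column ordering, which is what lets Arrays.binarySearch find the right run):

```java
import java.awt.image.BufferedImage;
import java.util.Arrays;

// Hypothetical sketch of the packed-long variant: one sorted long[] of runs per
// fixed-width section per row. Each run is (startColumn << 32) | argb; keeping
// the start column in the high bits means natural long ordering sorts by column.
final class RleImage {
    private final long[][][] runs; // [row][section][run]
    private final int sectionWidth;

    RleImage(BufferedImage img, int sectionWidth) {
        this.sectionWidth = sectionWidth;
        int w = img.getWidth(), h = img.getHeight();
        int sections = (w + sectionWidth - 1) / sectionWidth;
        runs = new long[h][sections][];
        for (int y = 0; y < h; y++) {
            for (int s = 0; s < sections; s++) {
                int from = s * sectionWidth, to = Math.min(from + sectionWidth, w);
                long[] buf = new long[to - from];
                int n = 0, last = 0;
                for (int x = from; x < to; x++) {
                    int c = img.getRGB(x, y);
                    if (n == 0 || c != last) { // start a new run on colour change
                        buf[n++] = ((long) x << 32) | (c & 0xFFFFFFFFL);
                        last = c;
                    }
                }
                runs[y][s] = Arrays.copyOf(buf, n);
            }
        }
    }

    int argbAt(int x, int y) {
        long[] section = runs[y][x / sectionWidth];
        // find the run with the greatest start column <= x; the colour bits in
        // the key are all ones, so any run starting at x compares <= the key
        int i = Arrays.binarySearch(section, ((long) x << 32) | 0xFFFFFFFFL);
        if (i < 0) i = -i - 2; // insertion point minus one = preceding run
        return (int) section[i];
    }
}
```

Worst case (no repeated colours) this costs 8 bytes per pixel instead of 3 or 4, so it only pays off when runs are long, consistent with the trade-offs above.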

@solonovamax
Member

solonovamax commented Nov 18, 2024

also, rather than "load on use", what could possibly be done is that we take the image & split it up into tiles of, say, 512x512, then serialize all the tiles to disk.

then, we just have a cache that loads the tiles from disk & caches them, evicting them from the cache if they haven't been used for, say, 30 minutes. (or alternatively we could just use a WeakReference, allowing them to be discarded by the jvm whenever it feels like it. maybe a combination of both? so it's a strong reference initially, then after a certain time it's downgraded to a weak reference until it's accessed again. that combination is sketched below.)

this way it doesn't require users to manually split their images into tiles & instead does it automatically.
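One way that "combination of both" could look (hypothetical code; it uses SoftReference rather than WeakReference so idle tiles survive until the heap is actually under pressure, and loadFromDisk stands in for whatever deserializes one 512x512 tile):

```java
import java.lang.ref.SoftReference;
import java.util.HashMap;
import java.util.Map;
import java.util.function.Function;

// Hypothetical sketch of the two-tier cache: tiles stay strongly referenced
// while hot, get downgraded to SoftReferences once idle (which the GC may then
// reclaim under memory pressure), and are reloaded from disk on a full miss.
final class TileCache {
    record Key(int tileX, int tileY) {}
    private record Hot(int[] pixels, long lastAccess) {}

    private static final long IDLE_NANOS = 30L * 60 * 1_000_000_000L; // 30 minutes

    private final Map<Key, Hot> hot = new HashMap<>();
    private final Map<Key, SoftReference<int[]>> cold = new HashMap<>();
    private final Function<Key, int[]> loadFromDisk; // deserializes one 512x512 tile

    TileCache(Function<Key, int[]> loadFromDisk) {
        this.loadFromDisk = loadFromDisk;
    }

    synchronized int[] tile(Key key) {
        long now = System.nanoTime();
        // downgrade anything that has sat idle past the cutoff
        hot.entrySet().removeIf(e -> {
            if (now - e.getValue().lastAccess() <= IDLE_NANOS) return false;
            cold.put(e.getKey(), new SoftReference<>(e.getValue().pixels()));
            return true;
        });
        Hot entry = hot.get(key);
        int[] pixels = entry != null ? entry.pixels() : null;
        if (pixels == null) {
            SoftReference<int[]> soft = cold.remove(key);
            pixels = soft != null ? soft.get() : null; // null if the GC reclaimed it
        }
        if (pixels == null) pixels = loadFromDisk.apply(key); // full miss: hit the disk
        hot.put(key, new Hot(pixels, now)); // (re)promote to the strong tier
        return pixels;
    }
}
```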

@astrsh
Copy link
Member Author

astrsh commented Nov 18, 2024

That's roughly already supported: https://github.com/PolyhedralDev/Terra/blob/master/common/addons/library-image/src/main/java/com/dfsek/terra/addons/image/config/image/ImageCache.java
The issue with auto tiling is that BufferedImage has a size limit. I haven't quite figured out the exact limit, but there's a hard upper limit based on the maximum array size, so images covering a large enough area will need to be manually tiled by users anyway.
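For reference, a quick illustration of that ceiling, assuming the common case where an int-packed BufferedImage is backed by a single int[] holding one element per pixel (and Java arrays are indexed by int):

```java
public class SizeLimit {
    public static void main(String[] args) {
        // the 42k × 53k report above already exceeds Integer.MAX_VALUE pixels
        long pixels = 42_000L * 53_000L;                // 2,226,000,000 pixels
        System.out.println(pixels > Integer.MAX_VALUE); // true: too big for one int[] raster
    }
}
```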
