Is there a way to reduce the size of output pdf files? #42
I tested this OCR tool on some PDFs I downloaded from Academia.edu and the results were great. However, there's a problem: it increased the file size by A LOT (e.g., an 11.8 MB file turned into a 107 MB PDF).
I was hoping to use this tool to create searchable and conveniently highlightable PDFs from scans of physical books I have, but scanned files are normally huge on their own. When I ran zotero-ocr on one of my scans (257 MB), I ended up with a file that's over 2 GB in size (it won't even open). :(
Is there something I can do to decrease the file sizes?
(I use Zotero 6.0.9 on Windows and have installed the latest version of zotero-ocr.)
Comments
I second this request.
pdftoppm is producing PNGs. If it could be switched over to JPEG, perhaps as an option, that would shrink the resulting PDF.
Having just tested the plugin for the first time, I really feel this needs to be prioritised. Here are my file sizes for comparison: the original file is 59.1 MiB; the OCR'ed result, mentioned below, came out at 438.7 MiB.
It seems that not only the file format but also the resolution, and possibly the colour space, of the image files could use some tweaking. I doubt that most scanned or otherwise rasterised PDFs come in as high as 300 dpi, so exporting PNGs at that resolution will definitely increase the file size.
Assuming that these files are only used to generate the OCR information (i.e., colour elements from the original PDF will remain intact in the OCR'ed file), a compromise could be to export the page images as greyscale, which would shrink the file size by half and might also reduce image noise. Exporting pages as JPGs can also contribute to smaller file sizes: if I save the same greyscale image as PNG and as JPG (90% quality), the latter is only a third of the PNG file size. But lowering the JPG quality might also impact the readability of the text.
Issue #23 suggests making the image resolution configurable by the user, which could be really helpful in reducing the interim image file sizes, but at the same time it makes the process more fidgety, as I know I would end up trying different resolutions to balance OCR quality against file size... It may be necessary instead to post-process the generated PDF; I can't tell from the Poppler documentation whether it is any help with file compression.
I resorted to an online PDF service, which reduced the 438.7 MiB PDF to 46.9 MiB with the OCR intact, but it would be nice to save the bandwidth and process the file locally, especially since I have close to a hundred PDFs in my Zotero library that need the OCR treatment.
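To make the trade-offs above concrete, here is a rough sketch of how one could export greyscale JPEGs at a reduced resolution with pdftoppm by hand, outside the plugin. It assumes poppler-utils is installed; the file names, 150 dpi, and 90% quality are illustrative assumptions, not the plugin's actual settings.

```python
# Illustrative greyscale JPEG export with poppler-utils' pdftoppm.
import subprocess

subprocess.run(
    [
        "pdftoppm",
        "-r", "150",               # lower than the 300 dpi discussed above
        "-gray",                   # greyscale roughly halves the image data
        "-jpeg",                   # JPEG output instead of PNG
        "-jpegopt", "quality=90",  # trade quality against size
        "scan.pdf",                # hypothetical input PDF
        "page",                    # output prefix -> page-1.jpg, page-2.jpg, ...
    ],
    check=True,
)
```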
I'm using https://github.com/ocrmypdf/OCRmyPDF as a manual workaround. Maybe it will be sufficient for your case.
Back in June, when I last tried this, I also resorted to using OCRmyPDF after trying zotero-ocr.
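For anyone taking this route, a minimal sketch of the workaround (ocrmypdf must be on PATH; the file names are placeholders, and picking `--optimize 3`, OCRmyPDF's most aggressive built-in size optimization, is an assumption about settings; level 3 may need optional helpers such as pngquant installed):

```python
# Sketch of the OCRmyPDF workaround mentioned above; names are placeholders.
import subprocess

subprocess.run(
    ["ocrmypdf", "--optimize", "3", "scan.pdf", "scan-ocr.pdf"],
    check=True,
)
```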
Yeah, the size can become quite large. Tesseract itself creates the PDF from the input we give it. Tesseract would also run on JPG images, but the quality of the OCR output also depends on the input images and their colors.
Reducing the resolution, as in pull request #41, would reduce the size a lot. Using JPEG 2000 files with lossy compression would allow really small PDF files. Ideally, that should be implemented in Tesseract.
I think we can reasonably use JPEG as the pdftoppm output and Tesseract input, preferably as a user-selectable option. A quick and dirty test looks promising.
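Something like the following sketch could reproduce that quick test by hand: pdftoppm emits JPEG pages, then Tesseract assembles a searchable PDF from them. The 85% quality, output prefix, and file names are assumptions, not the plugin's actual values.

```python
# Hand-run sketch of the JPEG-based pdftoppm -> Tesseract pipeline.
import glob
import subprocess

subprocess.run(
    ["pdftoppm", "-r", "300", "-jpeg", "-jpegopt", "quality=85",
     "input.pdf", "page"],
    check=True,
)

# pdftoppm zero-pads page numbers, so a lexicographic sort keeps page order.
pages = sorted(glob.glob("page-*.jpg"))
with open("pages.txt", "w") as f:
    f.write("\n".join(pages) + "\n")

# Tesseract accepts a text file listing one image per line; the "pdf"
# config writes ocr_output.pdf with an invisible text layer.
subprocess.run(["tesseract", "pages.txt", "ocr_output", "pdf"], check=True)
```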
We produce nearly all of our OCR from JPEG. I'd use JPEG instead of PNG if this works and is easy to implement. Too many options confuse users, so I'd not add a new one for switching between JPEG and PNG. A future solution could eliminate pdftoppm and instead extract the images that are already part of the original PDF; this would avoid any format conversion and automatically get the images in the best available quality.
I would definitely prefer getting rid of pdftoppm (as per the ongoing discussion in #80), but I still don't have a clue how to implement that, so I'm looking for a pragmatic solution, which could of course be replaced by something better eventually. It's a simple change, as far as I can see.
The tests on my local code yesterday were quite successful. Playing around with the JPEG options of pdftoppm, I have found pretty good default values that reduce the image size by a factor of 2 to 4 compared to PNG, without visible loss of quality at 300 dpi. In my samples (two PDFs: a 268-page black & white book scan and a 4-page color commercial pamphlet), the size of the output PDFs is now roughly the same as the original, sometimes a bit smaller. I'm counting this as a success :-) I still need to double-check a few things before I push the commit. @stweil, would you like to review the code before we create a new release?
Thank you for your work on this issue. If you create a pull request, I can review the changes there, and they will automatically be added to the release notes. I'd make the new release once the new code is merged.