Avoid reading ahead and then seeking back #1189
Conversation
Force-pushed from 7c9019b to da98643 (compare)
@rhuijben thanks a lot for both PRs. I'll review them shortly.
These are some basic cleanups for further parser improvements. I'm wondering how stable you want to keep the tokenizer API. We can remove that initial byte and the ReadsNextByte flag we pass everywhere to simplify things; the peeked byte can handle that case. If we extend the peek to an optionally larger buffer (which can be done without changing the tokenizer API) there are many other optimizations possible, including using optimized searches instead of per-byte walks. That could reduce parsing time in the test suite by at least ten to twenty percent. I have some code locally that optimizes the number tokenizer this way (the highest CPU user in many tests).
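The "optimized searches instead of per-byte walks" idea mentioned above could be sketched like this. This is a hypothetical helper, not PdfPig code: it assumes the tokenizer can see a flat byte span (e.g. a look-ahead buffer) and uses .NET 8's `IndexOfAnyExceptInRange`, which lets the runtime use vectorized comparisons internally:

```csharp
using System;

static class NumberScan
{
    // Per-byte walk, roughly what the tokenizers do today (sketch).
    public static int ScanDigitsPerByte(ReadOnlySpan<byte> input, int start)
    {
        int i = start;
        while (i < input.Length && input[i] >= (byte)'0' && input[i] <= (byte)'9')
        {
            i++;
        }
        return i - start; // number of digits consumed
    }

    // Search over a peek buffer: let the runtime find the first
    // non-digit instead of testing one byte at a time (.NET 8+).
    public static int ScanDigitsVectorized(ReadOnlySpan<byte> input, int start)
    {
        var span = input.Slice(start);
        int end = span.IndexOfAnyExceptInRange((byte)'0', (byte)'9');
        return end < 0 ? span.Length : end;
    }
}
```

Both return the same count; the second form is what a larger peek buffer would make possible without changing the tokenizer API.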
@rhuijben I'm always happy to add performance improvements (I think I'll add some benchmarking soon). @EliotJones we might want to release 0.12 soon? (Before refactoring the tokenizer API.)
If you do performance tests, be careful about which source you use (read: test both). Most tests load the whole file into RAM before processing, while you want something more like stream processing when doing server batches or really huge files. Currently loading everything is still necessary for good read throughput, but that is because the stream bytes don't do proper, efficient buffering, and we seek back far too often. I think I should be able to improve this quite a bit without touching any of the higher layers. I'm just not sure how many third-party tokenizers use the current extension API and want it to remain stable. We can also wrap the old API with a newer API if necessary, to keep the public API stable; I'm just not sure about the current guarantees.
Reviewed code:

```csharp
bytes.Seek(fetchFrom);
```

```csharp
Span<byte> byteBuffer = new byte[bytes.Length - fetchFrom]; // TODO: Maybe use PoolArray?
```
byteBuffer max size is 1024 bytes, right? I believe this would be a good use of stackalloc, unless that's not possible.
Could be. I'm not sure about proper limits for stackalloc. (Usually I would consider that for a few hundred bytes at most. But PDF processing is not a generic library design that should be able to run on very limited machines.)
With the next patch set this would be in the look-ahead buffer of the input bytes and there would be no copying at all.
See my comment in the PR and https://learn.microsoft.com/en-us/dotnet/csharp/language-reference/operators/stackalloc
@rhuijben I've submitted a first comment, have a look when you can. Also, do you mind rebasing this branch onto
…by using seek. Optimize the FirstPassParser to just fetch a final chunk before doing things char-by-char backwards.
Force-pushed from da98643 to dd50546 (compare)
[Just rebased. No changes yet]
@rhuijben thanks a lot for that. I've started the review, but can you explain why many
[I worked on quite a few parsers over the years.] The current parser model requires each parser, and more importantly each parser's users, to handle the first and last byte of what is parsed differently from all the others. E.g. before using the position/offset you have to know what the last parser did. The input already hands you the next byte via the peek function, so there is zero reason to make things hard on the caller; we can just fix the parsers to do the right thing. So before looking at improving the parsers individually, and improving the input bytes to provide a bit more info, I fixed this assumption in this PR so that things at least work the same for all parsers. If the parser API can be changed I would even recommend removing this flag completely. It makes it much too easy to introduce subtle offset and off-by-one bugs in corner cases. That is exactly what this PR is all about. The parser changes in this PR can be reviewed one by one.
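The compensation described above can be made concrete with a minimal sketch. The interfaces here are stand-ins reduced to the two members the example needs, not PdfPig's real `IInputBytes`/`ITokenizer` definitions:

```csharp
// Stand-ins for PdfPig's IInputBytes / ITokenizer, reduced to the two
// members this example needs (not the real interfaces).
interface IInputBytes { long CurrentOffset { get; } }
interface ITokenizer { bool ReadsNextByte { get; } }

sealed class FixedBytes : IInputBytes { public long CurrentOffset { get; init; } }
sealed class FlagTokenizer : ITokenizer { public bool ReadsNextByte { get; init; } }

static class OffsetExample
{
    // Old model: before trusting the offset, every caller must know
    // whether the tokenizer that just ran consumed one byte past its
    // token. Peek-based tokenizers always stop on their own last byte,
    // so this correction (and the flag itself) can disappear.
    public static long TokenEnd(IInputBytes bytes, ITokenizer tokenizer)
        => tokenizer.ReadsNextByte ? bytes.CurrentOffset - 1 : bytes.CurrentOffset;
}
```

Every call site that forgets this branch is a latent off-by-one bug, which is the point of making all tokenizers behave the same way.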
@rhuijben thanks a lot for the clarity here. I think it's fine to change the parser API, I'd just like to first release an official version (this does not mean you need to wait to do PRs). @EliotJones if no objection, I'll do that early next month. Regarding stackalloc, I do not believe an official limit can exist, but this is what MS does:

```csharp
int length = 1000;
Span<byte> buffer = length <= 1024 ? stackalloc byte[length] : new byte[length];
```

If you are sure the array will be 1024 bytes at max, then stackalloc should be fine. Let me know if you want to push this change, I'm happy to merge the PR as it is.
@rhuijben I finally managed to run some benchmarks.

Array: code available at https://github.com/BobLd/UglyToad.PdfPig.Benchmarks/tree/array

Memory stream: code available at https://github.com/BobLd/UglyToad.PdfPig.Benchmarks/tree/memory-stream

Given these numbers, I'm unsure if this is worth the added complexity. Do you have an opinion?
The real benefit is not in the current PR, but in further improvements later on. Moving to the other model allows extending the reader with a bigger peek buffer without overhead, and that part will make the real difference. Not seeking back will also allow wrapping smarter readers over other readers, so we don't have to read some parts (such as image streams) multiple times. This will make things more performant and use a lot less memory (copying). I have some of this in a branch in my GitHub fork if you want to look at the direction I'm thinking. (This branch is not PR-ready. It changes most tests to do things from a stream instead of an in-memory copy, to allow seeing the differences earlier during my tests. But it shows some progress.)
@rhuijben Thanks again for the clarity - much appreciated. I'll merge as-is and look into the work in your fork. Thanks again
@BobLd I will see what parts of that branch are ready for new PRs, and I'll see if I can use your new tests to decide what to 'attack' first. The VS2026 test versions expose more profiling features without needing the very expensive versions of VS, which makes finding the hot code paths easy.
@rhuijben thanks for that. Side note: the full tests that run upon merging now fail; please have a look if you can. Thanks!
I will see where that happens. From the GitHub test output it looks like a Type1 font parsing issue that isn't caught by the .NET test run.
I added a
But it is probably interesting to see where the PDF processing fails.
@rhuijben yes, it's more complicated to download the documents than I expected. The website is here: https://digitalcorpora.org/corpora/file-corpora/cc-main-2021-31-pdf-untruncated/ Reading the documentation there, the document you are looking for should be in
I'll have a 2nd look tonight
OK, I can download the file now. Thanks for mangling the URL into something readable... My local tests got different URLs, all ending in an Amazon 404.
```diff
index aac323f3..6d42ca5e 100644
--- a/src/UglyToad.PdfPig.Fonts/Type1/Parser/Type1Tokenizer.cs
+++ b/src/UglyToad.PdfPig.Fonts/Type1/Parser/Type1Tokenizer.cs
@@ -72,8 +72,10 @@
 case '/':
 {
     bytes.MoveNext();
-    TryReadLiteral(out var name);
-    Debug.Assert(name != null);
+
+    if (!TryReadLiteral(out var name))
+        name = ""; // Should not happen, but does
+
     return new Type1Token(name, Type1Token.TokenType.Literal);
 }
 case '<':
```

Restores the old behavior of just creating an empty name for no data and with that fixes the test. (Not sure if it is really the behavior we want.)
This builds upon PR #1188 (= the first commit of this PR).
Cleans up all built-in tokenizers to no longer read one byte too many, but instead use Peek() on the input bytes.
This allows simplifying logic in quite a few places where other parsers had to compensate for a byte being read early, or where parsers/tokenizers had to seek back to make things work.
Also makes ReadsNextByte a constant virtual method instead of a hidden read-only variable in each parser.
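The "use Peek() instead of reading one byte too many" pattern could look roughly like this. `ByteWalker` is a simplified stand-in, not the real tokenizer API; the point is that the cursor stops exactly on the token's last byte, so no caller ever needs to seek back:

```csharp
// Simplified stand-in for the input-bytes cursor (not PdfPig's API).
sealed class ByteWalker
{
    private readonly byte[] data;
    private int pos = -1; // before the first byte, like an enumerator

    public ByteWalker(byte[] data) => this.data = data;

    public bool MoveNext()
    {
        if (pos + 1 >= data.Length) return false;
        pos++;
        return true;
    }

    public byte Current => data[pos];
    public int CurrentOffset => pos;
    public int Peek() => pos + 1 < data.Length ? data[pos + 1] : -1;

    // Read a run of digits starting at Current, peeking before every
    // advance: after this returns, CurrentOffset still points at the
    // last digit and the terminator has not been consumed.
    public string ReadNumber()
    {
        int start = pos;
        while (Peek() >= '0' && Peek() <= '9')
        {
            MoveNext();
        }
        return System.Text.Encoding.ASCII.GetString(data, start, pos - start + 1);
    }
}
```

Because the terminator is still unread, the next tokenizer starts from a known position regardless of which tokenizer ran before it.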