
Benchmark against pragmatic segmenter #2

Open
Immortalin opened this issue Feb 16, 2019 · 4 comments
Labels
enhancement New feature or request help wanted Extra attention is needed

Comments

@Immortalin

No description provided.

@fnl fnl added the enhancement New feature or request label Feb 16, 2019
@fnl
Owner

fnl commented Feb 16, 2019

Good point. When I developed the first version, segtok, there were no good benchmark datasets for sentence segmentation with sufficient coverage of the tricky cases this library can handle. All I found were examples of trivial sentence segmentation problems that virtually any statistical tagger can do well on, too. But if someone has a pointer to a really tough test set with things like author abbreviations, enumerations, typos, mathematical and scientific content, and/or social-domain text (which may abuse sentence-terminal markers), that would be worth adding. Otherwise, the 50+ test cases I have collected as examples of such problems are my current "benchmark": I haven't found a single other library that can handle all of them.

@fnl
Owner

fnl commented Feb 16, 2019

The above being said, what I currently have neither the interest in nor the time for is comparing my library manually against another, case by case. So if someone wants to fulfill the specific request Immortalin made here (or you yourself?), please feel free to make that comparison. I am sure either library will have its particular strengths.

That said, for an unbiased comparison, what would matter more is an impartial sentence segmentation dataset that covers the trickier cases we find in the wild.
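Such a case-by-case comparison could be automated with a small harness that scores any segmenter (a function from text to a list of sentences) against gold splits. The sketch below is purely illustrative: `naive_split` is a hypothetical regex baseline, and the test cases are made-up stand-ins for the tricky abbreviation cases discussed above, not drawn from any published benchmark; one would plug in syntok, pySBD, or the Pragmatic Segmenter in place of the baseline.

```python
import re

# Gold-standard pairs of (text, expected sentence list). Illustrative
# examples of the "hard" cases mentioned in this thread (author initials,
# abbreviations before capitals), not from any real dataset.
CASES = [
    ("Smith, J. B. wrote it. It is good.",
     ["Smith, J. B. wrote it.", "It is good."]),
    ("It costs $5. That is cheap.",
     ["It costs $5.", "That is cheap."]),
]

def score(segment, cases):
    """Count the cases where `segment` reproduces the gold split exactly."""
    return sum(1 for text, gold in cases if segment(text) == gold)

def naive_split(text):
    """Hypothetical baseline: split after ./!/? followed by whitespace
    and an uppercase letter."""
    return re.split(r"(?<=[.!?])\s+(?=[A-Z])", text)

# The baseline gets the currency case right but wrongly splits after "J."
# in the author-initials case, so it scores 1 of 2.
print(score(naive_split, CASES))  # -> 1
```

Any real segmenter exposing a sentence list (e.g. pySBD's `Segmenter.segment`, or syntok's `segmenter.process` with its token output joined back into strings) could be dropped in as the `segment` argument and scored the same way.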

@fnl fnl added the help wanted Extra attention is needed label Feb 16, 2019
@fnl
Owner

fnl commented Nov 11, 2019

Another interesting tool to compare/benchmark against: https://github.com/nipunsadvilkar/pySBD

Note that pySBD is supposedly based on the Pragmatic Segmenter.

@reepush

reepush commented Mar 21, 2020

For my use case, syntok works just perfectly. Thanks @fnl for this project!
