
Proposal: use more tests #156

Closed
jerch opened this issue Jul 3, 2016 · 8 comments

Comments

@jerch
Member

jerch commented Jul 3, 2016

The emulator is already a fairly complex beast and has almost no test cases. Do you have any plans for testing xterm.js?
I feel a bit uneasy playing with the code and making PRs with virtually no chance of spotting regressions. For example, #145 and #147 are candidates for fixing one problem while introducing a bigger one at some random place.

@Tyriar
Member

Tyriar commented Jul 4, 2016

The current plan is to add tests as we go; mocha was only just added, so the more tests you can contribute in PRs, the better.

@jerch
Member Author

jerch commented Jul 4, 2016

I have some xterm output-comparison tests and can try to apply them. They are high level, so it will be tricky to spot the faulty piece of code with just those.
At the low level I have complete tests for this parser, but xterm.js does the parsing differently. Maybe I can adapt some of them.
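The high-level output-comparison approach described here can be sketched roughly as follows. This is an illustrative sketch, not code from the xterm.js test suite: the function name, data, and serialization format are all hypothetical.

```javascript
// Sketch of a high-level output-comparison check: the emulator's screen is
// serialized to an array of lines and diffed against a fixture captured from
// real xterm. A mismatch report points at a whole screen line, not at the
// code that produced it -- which is why such tests only give a coarse signal.
function firstMismatch(actualLines, expectedLines) {
  const max = Math.max(actualLines.length, expectedLines.length);
  for (let i = 0; i < max; i++) {
    if (actualLines[i] !== expectedLines[i]) {
      return { line: i, actual: actualLines[i], expected: expectedLines[i] };
    }
  }
  return null; // screens match
}

// Example: a faked emulator screen vs. the expected xterm capture.
const expected = ['hello', 'world', ''];
const actual   = ['hello', 'wurld', ''];
console.log(firstMismatch(actual, expected));
```

Such a check can only say *that* a screen differs and *where on the screen*, not which handler in the emulator is at fault.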

@jerch
Member Author

jerch commented Jul 5, 2016

Added the xterm tests --> jerch@36930e9
10 of 44 basic tests are passing, this is a beginning :) The tests are just a high-level indicator of xterm conformance (comparable to vttest) and not usable for tracking a problem down to a single line of code. That would need tests with higher granularity (like testing every single function and escape sequence separately).
Note: I took most of the xterm test cases from https://github.com/MarkLodato/vt100-parser/tree/master/test and adjusted them slightly.

@parisk
Contributor

parisk commented Jul 5, 2016

That's great @jerch, and it's definitely a very good start. Even with only 10 out of 44 vttest-style tests passing at first, we will at least know if a PR breaks any of these (and then it will be much easier to debug the code).

I am really looking forward to seeing these tests incorporated in xterm.js. Please let me know how I can help you out with this.

@jerch
Member Author

jerch commented Jul 5, 2016

Had a closer look at the current escape sequence parsing. My test cases can't be applied to it; they are not compatible state- and transition-wise. The parsing doesn't fit Paul's reference parser model, so I'm not sure whether the parser always does the right thing (Paul claims to have the most complete model). Up for discussion, since it would need a really big refactoring to get it in line...
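"Paul's reference parser model" refers to Paul Williams' well-known state machine for DEC ANSI-compatible terminals, where every byte drives a transition between named states (GROUND, ESCAPE, CSI ENTRY, and so on) with defined actions. A heavily simplified sketch of just the ground/escape/CSI path, enough to show the state-and-transition structure being discussed, might look like this (the state names follow the reference model; everything else is illustrative):

```javascript
// Minimal excerpt of a VT parser in the style of Paul Williams' reference
// state machine: only GROUND, ESCAPE and a single CSI-param state are
// modeled, and only numeric parameters. Real parsers handle many more
// states (OSC, DCS, intermediates, error recovery, etc.).
const GROUND = 0, ESCAPE = 1, CSI_PARAM = 2;

function parse(input) {
  let state = GROUND;
  let params = [];
  let current = '';
  const actions = []; // collected print/dispatch actions

  for (const ch of input) {
    switch (state) {
      case GROUND:
        if (ch === '\x1b') state = ESCAPE;
        else actions.push({ type: 'print', ch });
        break;
      case ESCAPE:
        if (ch === '[') { state = CSI_PARAM; params = []; current = ''; }
        else { actions.push({ type: 'esc', ch }); state = GROUND; }
        break;
      case CSI_PARAM:
        if (ch >= '0' && ch <= '9') current += ch;
        else if (ch === ';') { params.push(Number(current || '0')); current = ''; }
        else { // any other byte is treated as the final byte: dispatch
          params.push(Number(current || '0'));
          actions.push({ type: 'csi', final: ch, params });
          state = GROUND;
        }
        break;
    }
  }
  return actions;
}

// '\x1b[2;5H' should dispatch a CUP (cursor position) with params [2, 5].
console.log(parse('ab\x1b[2;5H'));
```

Because every input byte maps to exactly one transition per state, test cases written against such a model assert on states and transitions, which is why they cannot simply be dropped onto a parser structured differently.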

@jerch
Member Author

jerch commented Jul 5, 2016

@parisk Well, the xterm tests would have to be tested and fixed one by one. You can select a specific test case by altering the for loop in the test file and switch on the noisy output to compare the results.
At this point comes the hard work: to find the faulty handling you would have to go down the callstack, and that is a big problem with the parsing done in one big write call (the callstack is just, uhm, write). Therefore I'd suggest writing smaller test cases for single escape sequences once an xterm test case fails, to narrow it down somewhat.
No clue whether this procedure will work at all, since the parser seems to have flaws itself and might need to be fixed first (see my note above).

@jerch jerch mentioned this issue Jul 5, 2016
@parisk
Contributor

parisk commented Jul 11, 2016

@jerch I agree that the best way to go is writing smaller test cases. It sounds more manageable and helpful. Also, doing this gradually will let us deal with only a small portion of failures at a time.

Let's get back to this after merging #161 and releasing 1.0.0 (this week).

@Tyriar
Member

Tyriar commented Sep 17, 2016

I don't think there's any benefit in keeping this open anymore since it's just a general "add more tests" issue; we've been steadily adding tests as we touch things and through PRs, and will continue to do so. If there is a particular area in need of tests, a specific issue can be filed for it. Thanks for helping push this along, @jerch.

@Tyriar Tyriar closed this as completed Sep 17, 2016