Fixes #158 Fixed the issue where no_loop() was drawing another extra frame when used in draw() #162
Conversation
I haven't dug into how this looked before #158, but checking the caller seems complex -- and it potentially special-cases anything else that might call draw, like redraw (and setup running multiple times under some circumstances). no_loop can also be called as part of events, etc. In Processing and p5.js, loop() and noLoop() are simple: they change a boolean state, and only after draw() returns does the control loop check that state to decide whether another frame should run.
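For illustration, a minimal sketch of that flag-based model (this is not p5py's actual code; the names looping and run_one_iteration are made up for the example):

```python
# A minimal sketch of the Processing / p5.js model described above:
# noLoop()/loop() only flip a flag, and the flag is consulted once,
# after draw() has returned.

looping = True  # the only thing loop()/no_loop() touch

def no_loop():
    global looping
    looping = False  # nothing is drawn or scheduled here

def loop():
    global looping
    looping = True

def run_one_iteration(draw):
    draw()           # render the current frame
    return looping   # only now decide whether to schedule another frame
```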
Yep, the fix is complex; I didn't consider the events. Take a look at the no_loop() function in userspace.py in p5/sketch and then see the control loop in base.py in p5/sketch, if you want a place to start with the problem. The PR I gave for #158 will mess up using no_loop() in setup(), so better to revert it and reopen #158.
@jeremydouglass, I think this condition should be
Right. Let me update that list with the 0-1 fall-through.
@tushar5526, this PR does fix the issue. But honestly, I think the whole looping mechanism in p5py is unnecessarily complicated, and there is another bug which I found recently: #163. Like @jeremydouglass mentioned:
Whenever no_loop() is called, it should only change a boolean state. My suggestion would be to have another variable (say looping) that the control loop checks after rendering each frame:

    if self.looping:
        ...  # render the frame
        if self.no_loop:
            self.looping = False

This would also help us in resolving #163.
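To make the suggestion concrete, here is a hedged sketch of how such flags could drive the main loop. All names (Sketch, looping, no_loop, redraw_pending, draw_loop) are assumptions made for this example, not p5py's real attributes:

```python
# Illustrative flag-driven control loop, not p5py's actual structure.

class Sketch:
    def __init__(self, setup, draw):
        self.setup = setup
        self.draw = draw
        self.looping = True          # keep scheduling frames while True
        self.no_loop = False         # set by the user's no_loop() call
        self.redraw_pending = False  # set by the user's redraw() call
        self.frame_count = 0

    def draw_loop(self):
        # Render exactly one frame, then update the flags afterwards.
        self.draw()
        self.frame_count += 1
        if self.no_loop:
            self.looping = False
            self.no_loop = False

    def run(self, max_iterations=100):
        self.setup()
        # The first frame always runs, even if no_loop() was called in setup().
        self.draw_loop()
        for _ in range(max_iterations):
            if self.looping or self.redraw_pending:
                self.redraw_pending = False
                self.draw_loop()
```

With this shape, no_loop() in draw() never schedules an extra frame, no_loop() in setup() still yields exactly one frame, and redraw() only has to set redraw_pending.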
Ahh! That would be better.
Not closed, just discarded the previous PR.
https://join.slack.com/t/p5py/shared_invite/zt-e8mxhses-b2Vqc~zcU_kC5vCnJqrFxA
Tests for the internal APIs are already added, but there are no tests for sketches themselves. We can use Visual Reference Testing (comparing output images) for testing the public APIs.
@parsoyaarihant
This is what we did with Processing.R: e2e tests that we would generate from our documentation -- so all documentation sketches were tests. That approach worked really, really well: processing-r/Processing.R#10. It can sometimes require a bit more continuous integration setup, as in some cases you can't run CI on a normal "headless" machine if you need certain kinds of graphic output. Also, visual reference testing doesn't work on interactive sketches. There are ways, but it is good to start with the easy things first -- static output.
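For anyone curious what such a generator could look like in a Python project, here is a rough, hypothetical sketch. The helper functions run_sketch, load_image, and images_match, and the paths, are all made up for illustration; the real Processing.R generator scripts are in the repository linked above:

```python
# Hypothetical docs-to-tests generator: for every reference example that
# ships with an expected image, emit one test file that runs the sketch
# and compares its output against that image.
from pathlib import Path
from jinja2 import Template

TEST_TEMPLATE = Template("""\
# Auto-generated from {{ example }} -- do not edit by hand.
def test_{{ name }}():
    output = run_sketch("{{ example }}")           # hypothetical helper
    expected = load_image("{{ expected_image }}")  # hypothetical helper
    assert images_match(output, expected, threshold=0.01)
""")

def generate_tests(reference_dir="docs/examples", out_dir="tests/e2e"):
    out = Path(out_dir)
    out.mkdir(parents=True, exist_ok=True)
    for example in Path(reference_dir).glob("*.py"):
        image = example.with_suffix(".png")
        if not image.exists():
            continue  # only examples with a reference picture become tests
        test_code = TEST_TEMPLATE.render(
            example=example.as_posix(),
            name=example.stem,
            expected_image=image.as_posix(),
        )
        (out / f"test_{example.stem}.py").write_text(test_code)
```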
Hey @jeremydouglass, can you elaborate more on how visual testing is done in Processing.R?
Also, could you explain how you are comparing the two images here?
@tushar5526, apologies for the delayed reply.
Sure. The idea: take any sketch in the reference that has a picture, run it, and check the output against the picture. It should (fuzzy) match -- the pixel values should be almost the same. If it does, it passes. If not, then either the engine changed (which may need many new pictures) or the sketch reference code was updated and broke / changed its output (which needs that picture updated), etc.

The test suite is set up for a headless run after the continuous integration build stage, e.g. Travis. Here is our Travis setup: https://github.com/processing-r/Processing.R/blob/master/.travis.yml

The process is a bit odd. We use a generator script -- a python script with a jinja template -- to automatically generate one .java test file for each example found in the reference. The goal is to automatically put the code from the reference into a sketch, run it, then compare its output to the reference image. Here is an example test sketch that the script generates by taking the reference example and the image and wrapping them:

The autogenerated tests all extend E2eTestBase.java, which is where that image test code is called. e2e is "end to end" testing -- so we go from the provided example code to the image and check the actual output in order to pass CI. The base class always includes ImageUtils, which is the actual image test that you asked about:

The image test code itself is a fuzzy equality between two images based on pixels (which is appropriate to the source domain, a pixel-based rendering language). It compares the pixel values of the two images, but allows for minor variations in e.g. antialiasing. It takes the R, G, B channels of each pixel, measures the difference between the two images, and then divides the total difference by the number of pixels. The "THRESHOLD" is currently 0.01, or 1% difference. This means that the test would not detect, for example, a change smaller than that 1% threshold.

I planned this feature along with @gaocegege and it was all implemented by him during his Google Summer of Code project. He did a fantastic job.
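A hedged Python re-implementation of the comparison described above (the real code is ImageUtils in Processing.R, written in Java; the exact normalisation below is an assumption made for this sketch):

```python
# Fuzzy image equality: average per-pixel R, G, B difference over the
# whole image, compared against a small threshold, so antialiasing-level
# noise still passes.
import numpy as np
from PIL import Image

THRESHOLD = 0.01  # 1% average difference allowed

def images_match(path_a, path_b, threshold=THRESHOLD):
    a = np.asarray(Image.open(path_a).convert("RGB"), dtype=np.float64)
    b = np.asarray(Image.open(path_b).convert("RGB"), dtype=np.float64)
    if a.shape != b.shape:
        return False  # different dimensions can never match
    # Mean absolute difference across all pixels and channels, scaled to 0..1.
    diff = np.abs(a - b).mean() / 255.0
    return diff <= threshold
```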
Now no_loop() checks from where it is being called: it draws another frame if it is called from setup(), and does not draw another frame when called in draw().
Closes #158 properly
My earlier PR causes a problem if we call no_loop() in the setup() method. Things are coded in a way that draws another frame when we use no_loop(), so if there is a no_loop() present in draw(), the first draw() is called, then the no_loop() is encountered; but no_loop() is set up so that it schedules exactly one more frame, hence we get the bug mentioned in this issue.
no_loop() was made to draw another frame because, if no_loop() is present in setup(), then setup() is called first, which calls no_loop(); no_loop() then does not let the draw() function get called in the next iteration (hence nothing would be drawn after that). That's why the developer added a check so that no_loop() draws one more frame, so that if no_loop() is present in setup() things still work properly.
A possible fix for this bug and the problem I mentioned: if no_loop() was called from setup(), then we call draw() in one more iteration, and if it was called in draw(), then we don't call another draw_loop() in the next iteration -- or, better, just let this bug be there!
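As an illustration only (not the actual patch in this PR), one way to express "no_loop() behaves differently depending on whether it was called from setup() or draw()" without inspecting the call stack is to have the runner record which user function is currently executing. LoopState, in_setup, and pending_frames are made-up names for this sketch:

```python
# Illustrative sketch, not the code in this PR.

class LoopState:
    def __init__(self):
        self.looping = True
        self.in_setup = False    # the runner sets this around setup()
        self.pending_frames = 0  # frames still owed to the user

    def no_loop(self):
        self.looping = False
        # From setup() the first frame has not been drawn yet, so one more
        # frame is owed; from draw() the current frame already counts.
        self.pending_frames = 1 if self.in_setup else 0


def run(state, setup, draw, max_frames=100):
    state.in_setup = True
    setup()
    state.in_setup = False
    frames = 0
    while (state.looping or state.pending_frames > 0) and frames < max_frames:
        if state.pending_frames > 0:
            state.pending_frames -= 1
        draw()
        frames += 1
```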
I have tested this approach against the current bugs, plus with the redraw() function, and checked that frame_count stays correct, but you may want to test it again in some other way (a sketch of such tests follows the list below):
Things to test
- no_loop() in draw()
- redraw()
- checking whether frame_count is correct after each frame is drawn
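One possible way to pin those checks down as unit tests, using a tiny stand-in runner rather than p5py's real event loop (everything below is a made-up illustration; real tests would have to drive p5py itself):

```python
# MiniSketch is a stand-in just for these tests, not p5py's API.

class MiniSketch:
    def __init__(self, setup, draw):
        self.setup, self.draw = setup, draw
        self.looping = True
        self.redraw_pending = False
        self.frame_count = 0

    def no_loop(self):
        self.looping = False

    def redraw(self):
        self.redraw_pending = True

    def _frame(self):
        self.draw()
        self.frame_count += 1

    def run(self, iterations=10):
        self.setup()
        self._frame()  # the first frame always runs
        for _ in range(iterations):
            if self.looping or self.redraw_pending:
                self.redraw_pending = False
                self._frame()


def test_no_loop_in_draw_draws_exactly_one_frame():
    s = MiniSketch(setup=lambda: None, draw=lambda: s.no_loop())
    s.run()
    assert s.frame_count == 1


def test_no_loop_in_setup_still_draws_exactly_one_frame():
    s = MiniSketch(setup=lambda: s.no_loop(), draw=lambda: None)
    s.run()
    assert s.frame_count == 1


def test_redraw_draws_exactly_one_extra_frame():
    def draw():
        if s.frame_count == 0:  # request one extra frame from the first frame
            s.redraw()
    s = MiniSketch(setup=lambda: s.no_loop(), draw=draw)
    s.run()
    assert s.frame_count == 2
```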