
Fixes #158 Fixed the issue where no_loop() was drawing another extra … #162

Closed

Conversation

tushar5526
Member

…frame when used in draw()

no_loop() now checks where it is being called from: it draws one extra frame when called from setup(), and does not draw an extra frame when called from draw().

Closes #158 properly

My earlier PR causes a problem when no_loop() is called in setup(). Things are coded in a way that draws one extra frame whenever we use no_loop(), so if no_loop() is present in draw(), draw() is called once, no_loop() is encountered, and then exactly one more frame is drawn anyway. That is the bug mentioned in this issue.

no_loop() was made to draw one extra frame because, if no_loop() is present in setup(), setup() runs first and calls no_loop(), which prevents draw() from being called in the next iteration (so nothing would be drawn at all). That's why the developer added a check so that no_loop() draws one more frame, making no_loop() in setup() work properly.

A possible fix for this bug, and for the problem I mentioned, is: if no_loop() was called from setup(), call draw() once more in the next iteration; if it was called from draw(), don't call draw_loop() again in the next iteration. Or we could just let this bug be!
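
For illustration, a caller check like that could look something like this (a minimal sketch with an assumed extra_frame flag, not the exact code in this PR):

import inspect

looping = True
extra_frame = False  # assumed flag: draw one more frame before stopping

def no_loop():
    global looping, extra_frame
    # inspect.stack()[1] is the frame record of whoever called no_loop()
    caller = inspect.stack()[1].function
    looping = False
    # called from setup(): allow one extra draw() so something is rendered;
    # called from draw(): the current frame is already on screen, so stop
    extra_frame = (caller == "setup")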

I have tested this approach against the current bugs, and also with the redraw() function, checking that frame_count is correct after each frame is drawn, but you may want to test it again in some other way:

Things to test

  • no_loop() in setup()
  • no_loop() in draw()
  • redraw()
  • checking that frame_count is correct after each frame is drawn

@jeremydouglass
Contributor

jeremydouglass commented May 1, 2020

I haven't dug into how this looked before #158, but checking the caller seems complex -- and it potentially special-cases anything else that might call draw, like redraw (and setup running multiple times under some circumstances). no_loop can also be called as part of events, etc.

In Processing and p5.js, loop() and noLoop() are simple. They change a boolean state, looping, between true and false.

Only after draw() returns does control check whether looping is true or false before calling draw() again. That's it, no other rules.

The result is that:

  1. setup(): no_loop()
    ... draws frame 0, stops
  2. setup(): no_loop() draw():
    ... draws frame 0, draws frame 1, stops
  3. draw(): no_loop()
    ... draws frame 1, stops
  4. draw(): if framecount==3: no_loop()
    ... draws frame 1-3, stops
  5. setup() ... draw(): if framecount==3: no_loop()
    ... draws frame 0-3, stops
  6. etc etc.
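
A minimal sketch of that rule in Python (illustrative pseudocode, not p5py's actual control loop in base.py):

looping = True   # the single boolean state that loop()/no_loop() toggle
frame_count = 0

def no_loop():
    global looping
    looping = False

def run_sketch(setup, draw):
    global frame_count
    setup()              # renders frame 0
    while True:
        frame_count += 1
        draw()           # draw() always runs at least once after setup()
        if not looping:  # the state is checked only AFTER draw() returns
            break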

@tushar5526
Member Author

tushar5526 commented May 1, 2020

Yep, the fix is complex; I didn't consider the events.

Take a look at the no_loop() function in userspace.py in p5/sketch, and then at the control loop in base.py in p5/sketch, if you want a place to start on the problem.

My PR for #158 will break no_loop() in setup(), so it's better to revert it and reopen #158.
@parsoyaarihant

@arihantparsoya
Member

draw(): no_loop()
... draws frame 0, stops

@jeremydouglass , I think this case should be draws frame 1, stops. Frame 0 is the frame rendered by setup(); the first frame rendered by draw() is indexed 1.

@jeremydouglass
Contributor

Right. Let me update that list with the 0-1 fall-through.

@arihantparsoya
Member

arihantparsoya commented May 3, 2020

@tushar5526 , this PR does fix the issue. But honestly, I think the whole looping mechanism in p5py is unnecessarily complicated, and there is another bug which I found recently: #163. Like @jeremydouglass mentioned:

In Processing and p5.js, loop() and noLoop() are simple. They change a boolean state, looping, between true and false.

Whenever no_loop() is called, that frame should still be rendered, and p5.sketch.looping should be set to false only AFTER that frame is rendered. The same should happen in redraw(): self.looping is set to true, and then, AFTER that frame is rendered, we reset it to false. Then we can get rid of all these conditional statements.

My suggestion would be to have another variable (say p5.sketch.no_loop) which works along with p5.sketch.looping to switch its value after the frame is rendered. When no_loop() is called, we don't set p5.sketch.looping = False; instead we set p5.sketch.no_loop = True. Then, after the frame is rendered, we can do something like this:

if self.looping:
    ...  # render the frame

if self.no_loop:
    self.looping = False  # takes effect only after the frame above is rendered
    self.no_loop = False  # reset so that a later loop() call works normally

This would also help us in resolving #163.
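
A minimal sketch of how the flags could fit together (the Sketch class and function bodies below are assumptions for illustration, not p5py's actual internals):

class Sketch:
    def __init__(self):
        self.looping = True   # p5.sketch.looping
        self.no_loop = False  # the proposed p5.sketch.no_loop flag

    def step(self):  # one iteration of the control loop
        if self.looping:
            self.render_frame()
        if self.no_loop:
            self.looping = False  # takes effect only after the frame above
            self.no_loop = False

    def render_frame(self):
        pass  # draw() would be invoked here

sketch = Sketch()

def no_loop():
    # user-facing no_loop(): raise the flag instead of stopping immediately
    sketch.no_loop = True

def redraw():
    # user-facing redraw(): render exactly one more frame, then stop again
    sketch.looping = True
    sketch.no_loop = True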

@tushar5526
Member Author

Ahh! That would be better.
One suggestion: should we make a Discord or Slack for dev questions about p5py? The core code doesn't have any documentation, so it's quite difficult to change things in it.
Plus, p5.js has a bot which tests new pull requests on various platforms, probably using Docker. Do we have any bot or anything else for testing?

@tushar5526 tushar5526 closed this May 4, 2020
@tushar5526
Member Author

Not closed, just discarded the previous PR.

@arihantparsoya
Member

One suggestion: should we make a Discord or Slack for dev questions about p5py? The core code doesn't have any documentation, so it's quite difficult to change things in it.

https://join.slack.com/t/p5py/shared_invite/zt-e8mxhses-b2Vqc~zcU_kC5vCnJqrFxA

@arihantparsoya
Member

Plus, p5.js has a bot which tests new pull requests on various platforms, probably using Docker. Do we have any bot or anything else for testing?

Tests for the internal APIs are already added, but there are no tests for sketches themselves. We can use Visual Reference Testing (comparing output images) for testing the public APIs.

@jeremydouglass
Contributor

jeremydouglass commented May 7, 2020

@parsoyaarihant

We can use Visual Reference Testing (comparing output images) for testing the public APIs.

This is what we did with Processing.R: e2e tests that we generated from our documentation -- so all documentation sketches were tests. That approach worked really, really well.

processing-r/Processing.R#10
https://github.com/processing-r/Processing.R/blob/master/hack/generate-e2e-test.py

It can sometimes require a bit more continuous integration setup, as in some cases you can't run CI on a normal "headless" machine if you need certain kinds of graphic output. Also, visual reference testing doesn't work on interactive sketches. There are ways, but it is good to start with the easy things first -- static output.

@tushar5526
Member Author

Hey @jeremydouglass, can you elaborate more on how visual testing is done in Processing.R?
From what I can understand from the tests, there is a URL containing the expected output image, and you generate the tests on a headless machine, then compare the produced image with the expected image.
Can you explain more about it?
Thanks!

@tushar5526
Member Author

Also, could you explain how you are comparing the two images here?
https://github.com/processing-r/Processing.R/blob/master/src/test/e2e/util/ImageUtils.java#L25

@jeremydouglass
Contributor

jeremydouglass commented May 12, 2021

@tushar5526 apologies for delayed reply.

Hey @jeremydouglass can you elaborate more about how visual testing is done in Processing for R.

Sure. The idea: take any sketch in the reference that has a picture, run it, and check the output against the picture. It should (fuzzy) match -- the pixel values should be almost the same. If it does, it passes. If not, then either the engine changed (may need many new pictures) or the sketch reference code was updated and broke / changed its output (need to update that picture) etc.

The test suite is set up for headless run after the continuous integration build stage, e.g. Travis. Here is our Travis setup:

https://github.com/processing-r/Processing.R/blob/master/.travis.yml

The process is a bit odd. We use a generator script -- a python script with a jinja template...

https://github.com/processing-r/Processing.R/blob/master/hack/generate-e2e-test.py

... to automatically generate one .java test file for each example found in the reference. The goal is to automatically put the code from the reference into a sketch, run it, then compare its output to the reference image. Here is an example test sketch that the script generates by taking the reference example and the image and wrapping them:

https://github.com/processing-r/Processing.R/blob/master/src/test/e2e/core/function/Ellipse1Test.java
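
At its core the generator is just template filling; a toy version might look like this (the template, file extensions, and paths are invented for illustration, and are not the actual generate-e2e-test.py):

import json
from pathlib import Path
from jinja2 import Template

# Invented template: the real generator wraps reference examples in much
# richer Java test classes extending E2eTestBase.
TEMPLATE = Template("""\
public class {{ name }}Test extends E2eTestBase {
    String sketchSource = {{ sketch_json }};
    String referenceImage = "{{ image_name }}";
}
""")

def generate_tests(reference_dir, out_dir):
    for example in Path(reference_dir).glob("*.rpde"):
        image = example.with_suffix(".png")
        if not image.exists():
            continue  # only examples with a reference picture become tests
        name = example.stem.capitalize()
        java = TEMPLATE.render(
            name=name,
            sketch_json=json.dumps(example.read_text()),
            image_name=image.name,
        )
        Path(out_dir, f"{name}Test.java").write_text(java)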

The autogenerated tests all extend E2eTestBase.java, which is where that image test code is called. e2e is "end to end" testing -- so we go from the provided example code to the image and check the actual output in order to pass CI.

https://github.com/processing-r/Processing.R/blob/master/src/test/e2e/core/E2eTestBase.java

The base class always includes ImageUtils, which is the actual image test that you asked about:

https://github.com/processing-r/Processing.R/blob/master/src/test/e2e/util/ImageUtils.java

The image test code itself is a fuzzy equality check between two images, based on pixels (which is appropriate to the source domain, a pixel-based rendering language). It compares the pixel values of the two images, but allows for minor variations in e.g. antialiasing. It takes the R, G, B channels of each pixel, measures the difference between the two images, and then divides the total difference by the number of pixels. The THRESHOLD is currently 0.01, or a 1% difference.

https://github.com/processing-r/Processing.R/blob/master/src/test/e2e/core/E2eTestBase.java#L14

This means that the test would not detect, for example, a single point() being in the wrong place, but it will catch most incorrect lines, shapes, gradients, etc., while still allowing for a small bit of antialiasing or jitter.
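
In Python terms, the comparison amounts to something like the following (a rough numpy/Pillow sketch of the idea, not a translation of the Java code linked above):

import numpy as np
from PIL import Image

THRESHOLD = 0.01  # allow up to a 1% average per-channel difference

def images_match(path_a, path_b, threshold=THRESHOLD):
    a = np.asarray(Image.open(path_a).convert("RGB"), dtype=np.float64)
    b = np.asarray(Image.open(path_b).convert("RGB"), dtype=np.float64)
    if a.shape != b.shape:
        return False  # images of different dimensions can never match
    # total absolute R, G, B difference, normalized by pixel count and range
    diff = np.abs(a - b).sum() / (a.size * 255)
    return diff <= threshold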

I planned this feature along with @gaocegege and it was all implemented by him during his Google Summer of Code project. He did a fantastic job.
