
Conversation

@zicklag (Contributor) commented Sep 17, 2020

(Will, hopefully) fix issue #3353

PR checklist:

  • make succeeds (on *nix)
  • make reftests succeeds
  • tested examples with the following backends:

@kvark (Member) left a comment

Thank you for trying these ideas out!
Fortunately, I found a few places that would explain why it doesn't work yet :)

@zicklag (Contributor, Author) left a comment

Sweet. I'll come back and fix those probably later tonight. Thanks!

@zicklag (Contributor, Author) commented Sep 18, 2020

I fixed the issues you pointed out and I'm still getting the jagged edges. 😕

[screenshot showing the jagged edges]

Any other ideas? 🙂


// Wait for rendering to finish
unsafe {
    let wait_result = gl.client_wait_sync(
@kvark (Member) commented:

this actually needs to be wait_sync, not client_wait_sync
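For context on the distinction: `client_wait_sync` blocks the calling CPU thread until the fence signals, while `wait_sync` tells the GL server to wait and returns immediately on the client. A minimal sketch of the intended pattern against glow's API (`gl` and `fence` are placeholder names, and a live GL context is required, so this is not runnable standalone):

```rust
// Sketch only: assumes a live glow::Context (`gl`) with a wait_sync
// binding available; all of glow's GL entry points are unsafe.
unsafe {
    // Insert a fence after the rendering commands on the main context.
    let fence = gl
        .fence_sync(glow::SYNC_GPU_COMMANDS_COMPLETE, 0)
        .expect("failed to create fence");
    gl.flush(); // push the fence to the GL server

    // On the surface context: wait_sync makes the *server* wait on the
    // fence and returns immediately, so the CPU never blocks.
    gl.wait_sync(fence, 0, u64::MAX /* GL_TIMEOUT_IGNORED */);

    // client_wait_sync, by contrast, would stall the calling thread:
    // let _status = gl.client_wait_sync(fence, glow::SYNC_FLUSH_COMMANDS_BIT, timeout_ns);
}
```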

@zicklag (Contributor, Author) commented Sep 18, 2020

Interesting, glow doesn't actually have a wait_sync. 😲

https://docs.rs/glow/0.6.0/glow/?search=wait_sync

Should I open an issue for glow on that? Maybe that is just a simple PR to glow I could make?

@kvark (Member) commented:

Yes, I recommend implementing it quickly in a fork, making a PR (it's a non-breaking change), and using your fork for the dependency here in the meantime.

@zicklag (Contributor, Author) commented:

OK, I updated glow with the extra binding, submitted a PR, and updated this branch with the changes, but there's still no change in rendering.

@kvark (Member) commented:

looks like this still needs to be done?

bors bot added a commit to grovesNL/glow that referenced this pull request Sep 19, 2020
127: Add glWaitSync Binding r=grovesNL a=zicklag

Needed this for `gfx-backend-gl`: gfx-rs/gfx#3375 (comment).

Co-authored-by: Zicklag <[email protected]>
@kvark (Member) commented Sep 19, 2020

What if, after we render to the renderbuffer on the main context (in submit), we bind it for reading to a framebuffer, and actually issue a blit to some other FBO? Just to replicate the same scenario that we have on surface context (in present). This would ensure that the image goes through the necessary memory barriers and layout changes under the hood. Of course we don't want to ship that, but it would confirm the theory that there is a workaround, at least.

@zicklag (Contributor, Author) commented Sep 19, 2020

What if, after we render to the renderbuffer on the main context (in submit), we bind it for reading to a framebuffer, and actually issue a blit to some other FBO?

I'll try that next, but I just found something interesting. I just made a new commit that gets rid of context sharing completely and I'm still getting the same rendering artifacts.

Also, I'm trying to figure out why we need separate contexts for the surface and the root context. Is that because you cannot use the same context across different threads? Since we can't share anything across extra threads right now anyway, because of Starc and such, maybe we shouldn't be sharing contexts at all?

Not that it is helping in this case it seems.


Edit: I'm not actually sure how to try what you suggested in submit because the framebuffer I would need to bind as the read FBO would be the swapchain FBO which we don't have access to in the submit function. Does the test I already did by getting rid of context sharing cover that test?

@kvark (Member) commented Sep 19, 2020

I just made a new commit that gets rid of context sharing completely and I'm still getting the same rendering artifacts.

That's an interesting experiment! We should double-check if the sharing is actually requested at the low level (by debugging through context creation).

I'm not actually sure how to try what you suggested in submit because the framebuffer I would need to bind as the read FBO would be the swapchain FBO which we don't have access to in the submit function. Does the test I already did by getting rid of context sharing cover that test?

Let's do it in present() then. To clarify: this operation is purely artificial and makes no practical sense; it's just to convince the driver to transition the renderbuffer into a "readable" state. So, in present(), before switching the current context, you could create an FBO, attach the renderbuffer to it, and bind the FBO for reading. Then create another temporary RBO and FBO, attach them, and bind that FBO for drawing. Then do a blit.

After that, we can proceed as usual by switching the contexts and doing the real blit.
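The artificial blit described here could be sketched against glow's API roughly as follows. This is a sketch under assumptions, not the PR's actual code: `gl` is a live glow::Context, `renderbuffer` is the RBO that was rendered to, and `width`/`height` are its dimensions; it needs a real GL context, so it is not runnable standalone.

```rust
// Sketch only: `gl`, `renderbuffer`, `width`, and `height` are assumed.
unsafe {
    // Bind the real renderbuffer to a temporary FBO for reading.
    let read_fbo = gl.create_framebuffer().expect("create read FBO");
    gl.bind_framebuffer(glow::READ_FRAMEBUFFER, Some(read_fbo));
    gl.framebuffer_renderbuffer(
        glow::READ_FRAMEBUFFER,
        glow::COLOR_ATTACHMENT0,
        glow::RENDERBUFFER,
        Some(renderbuffer),
    );

    // Create a throwaway RBO + FBO as the draw target.
    let tmp_rbo = gl.create_renderbuffer().expect("create temp RBO");
    gl.bind_renderbuffer(glow::RENDERBUFFER, Some(tmp_rbo));
    gl.renderbuffer_storage(glow::RENDERBUFFER, glow::RGBA8, width, height);
    let draw_fbo = gl.create_framebuffer().expect("create draw FBO");
    gl.bind_framebuffer(glow::DRAW_FRAMEBUFFER, Some(draw_fbo));
    gl.framebuffer_renderbuffer(
        glow::DRAW_FRAMEBUFFER,
        glow::COLOR_ATTACHMENT0,
        glow::RENDERBUFFER,
        Some(tmp_rbo),
    );

    // The artificial blit: its only purpose is to coax the driver into
    // transitioning the renderbuffer into a readable state.
    gl.blit_framebuffer(
        0, 0, width, height,
        0, 0, width, height,
        glow::COLOR_BUFFER_BIT,
        glow::NEAREST,
    );

    // Clean up the temporaries before the real present path runs.
    gl.delete_framebuffer(read_fbo);
    gl.delete_framebuffer(draw_fbo);
    gl.delete_renderbuffer(tmp_rbo);
}
```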

@kvark (Member) commented Sep 23, 2020

@zicklag have you tried to make further progress here?

@zicklag (Contributor, Author) commented Sep 24, 2020

Haven't had the chance to try anything else yet.

@kvark (Member) commented Nov 1, 2020

What platform are you on? Can you reproduce the rendering issue on the quad example?

@kvark (Member) commented Nov 1, 2020

I tested this PR without the last commit on both macOS and Linux with a modified quad example and got no issues...

@zicklag (Contributor, Author) commented Nov 1, 2020

I'm on Linux, but the quad example did work before this PR: only the WGPU Cube example actually exhibits the problem so far.

For now, because of other priorities, this is on the back burner for me, probably for a while depending on how things go, especially with Bevy getting its own OpenGL renderer backend, which would have been one of my larger use cases. I'd still love for this to work! I just probably won't have time to spend on it for a while.

@kvark (Member) commented Nov 1, 2020

I think the difference with the cube example is that the latter has a second pass, where it draws edges only and enables blending. So this is what's confusing the driver in our case, and it would be trivial to replicate in gfx's quad example.

Especially with Bevy getting an OpenGL renderer backend which would have been one of my larger use-cases

I'd expect them to happily throw it out if gfx-rs had a working GL backend.
There is also Veloren, which is essentially blocked for wgpu migration on our GL backend.

@zicklag (Contributor, Author) commented Nov 1, 2020

I think the difference with the cube example is that the latter has a second pass, where it draws edges only and enables blending. So this is what's confusing the driver in our case, and it would be trivial to replicate in gfx's quad example.

Hmm, that would be good to do.

I'd expect them to happily throw it out if gfx-rs had a working GL backend.
There is also Veloren, which is essentially blocked for wgpu migration on our GL backend.

Ahgh, I know. You're going to convince me to keep trying this when I have time. LOL 😃


OK, well, I'll try to get back to this eventually, then. Right now all my time is going into getting out a 0.2 release of my Arsenal game engine (which is built on Bevy). GL support is definitely something we really want for Arsenal, so this will probably take priority at some point, but the first goal is to get Arsenal working at all.

@kvark (Member) commented Nov 1, 2020

Sorry, didn't mean to try convincing you. Looks like you got enough on your plate at the moment :) Just wanted to argue/elaborate on why GL backend would still be a strategic goal for us, even if Bevy doesn't use it.

@zicklag (Contributor, Author) commented Nov 1, 2020

Yeah, no problem, man. :D I really do want to get this figured out. We'll figure it out sometime. 👍 It would be amazing to support all of the graphics APIs through one interface.

@kvark (Member) commented Nov 2, 2020

Bummer. I added wireframe rendering to the quad example, and it still doesn't exhibit the issue.

@kvark (Member) commented Nov 2, 2020

Do you see artifacts in the quad example with https://github.com/kvark/gfx/tree/gl-test ?
I tried running wgpu's examples on GL with the fixes of your PRs (except for the last commit in this PR, which is temporary), and it's all fine on Linux. Going to do the same on macOS now...
Would be good to resurrect and land this, at least on master. Are you interested in doing this? Otherwise, I can pick it up.

@zicklag (Contributor, Author) commented Nov 2, 2020

Do you see artifacts in the quad example with https://github.com/kvark/gfx/tree/gl-test ?

Just ran it and it looks fine.

I tried running wgpu's examples on GL with the fixes of your PRs (except for the last commit in this PR, which is temporary), and it's all fine on Linux. Going to do the same on macOS now...

Oh, interesting. I'll try it again.

Would be good to resurrect and land this, at least on master. Are you interested to do this? Otherwise, I can pick it up.

I should be able to get it rebased. I'll see if I can do that now; I've got ~20 mins. It shouldn't be too hard.

@zicklag zicklag force-pushed the gl-final-showdown branch 3 times, most recently from 8845f60 to 98c6da9 on November 2, 2020 04:04

// Wait for rendering to finish
unsafe {
    let wait_result = gl.client_wait_sync(
@kvark (Member) commented:

looks like this still needs to be done?

@kvark (Member) commented Nov 2, 2020

Unable to test the wgpu's cube example on macOS, getting

thread 'main' panicked at 'Error InvalidOperation executing command: CopyBufferToBuffer(2, 1, BufferCopy { src: 0, dst: 0, size: 580 })', /Users/dmalyshau/Code/gfx/src/backend/gl/src/queue.rs:1184:13
note: run with RUST_BACKTRACE=1 environment variable to display a backtrace

This is unrelated to your PR but unfortunate. Will look into it, not a blocker.

@kvark (Member) left a comment

Alright, looks like everything is in place!
bors r+

@zicklag (Contributor, Author) commented Nov 2, 2020

looks like this still needs to be done?

Sorry, it won't let me reply to our comment inline. What still needs to be done?

@kvark kvark changed the title [ Failed ( so far ) ] Attempt to Fix Rendering Artifacts Enable GL backend via Surfman Nov 2, 2020
bors bot added a commit that referenced this pull request Nov 2, 2020
3375: [ Failed ( so far ) ] Attempt to Fix Rendering Artifacts r=kvark a=zicklag

( Will,  hopefully ) fix issue #3353 

PR checklist:
- [ ] `make` succeeds (on *nix)
- [ ] `make reftests` succeeds
- [ ] tested examples with the following backends:


Co-authored-by: Zicklag <[email protected]>
@kvark (Member) commented Nov 2, 2020

looks like this still needs to be done?

Sorry, it won't let me reply to our comment inline. What still needs to be done?

It was using client_wait_sync instead of wait_sync, but it looks like you fixed this now 👍

@kvark (Member) commented Nov 2, 2020

Could you make a similar PR towards hal-0.6 branch?
I'd be comfortable publishing GL backend with this. Remember that we didn't publish it since hal-0.6 got out. Even if it's partially broken, still more useful than no backend at all.

bors bot commented Nov 2, 2020

Build failed:

@zicklag (Contributor, Author) commented Nov 2, 2020

It was using client_wait_sync instead of wait_sync, but it looks like you fixed this now +1

Ah, cool.

Alright, looks like everything is in place!

Sweet! 😃

I haven't tested the WGPU examples yet, but I'll try those out in a sec (if there are no major conflicts to resolve). If those don't work for me, it might just be that my driver's goofy. It's running the Mesa driver, but I've got a hybrid GPU setup on my machine, so I can choose to run it with the GPU, which I didn't realize earlier, so I'm not sure if that is the reason for the weirdness. I'll have to try both.

Could you make a similar PR towards hal-0.6 branch?
I'd be comfortable publishing GL backend with this. Remember that we didn't publish it since hal-0.6 got out. Even if it's partially broken, still more useful than no backend at all.

Sounds good. I'll do that. 👍

@zicklag (Contributor, Author) commented Nov 2, 2020

Oh yeah, I forgot that the extra framebuffer was only added for surfman. I'll have to gate the code off for the Windows version and do that differently, I guess.

@kvark (Member) commented Nov 2, 2020

But we don't need Surfman (and the GL backend) on Windows at all, since it's not able to support WGL in the long run anyway.

@zicklag (Contributor, Author) commented Nov 2, 2020

Ah, then there's no reason to pollute the code with extra #[cfg] checks. What do we want to do here? Just disable CI for GL on Windows?

@kvark (Member) commented Nov 2, 2020

Yes, rip out GL on Windows completely :)

@zicklag (Contributor, Author) commented Nov 2, 2020

There, that should make CI pass now.

@kvark (Member) commented Nov 2, 2020

I was worried that the GL error I'm seeing on macOS is really about something broken with the presentation, and just manifesting itself on the next command. But it turns out to be a genuine error of buffer copying. Not sure what's going on there, but we can proceed.
bors r+

@zicklag (Contributor, Author) commented Nov 2, 2020

Just about out of time now, I'll try to do the hal-0.6 PR and a test of the WGPU examples tomorrow! 🙂

@kvark (Member) commented Nov 2, 2020

Thank you! I'll do some more cleanup of the GL backend in master in the meantime.

bors bot commented Nov 2, 2020

Build succeeded:

@bors bors bot merged commit 26ee948 into gfx-rs:master Nov 2, 2020
@zicklag zicklag deleted the gl-final-showdown branch November 6, 2020 15:25