
Render to a layer underneath all UI #20483

Closed
ajlende opened this issue Feb 26, 2020 · 5 comments
Labels
Needs Technical Feedback · [Type] Enhancement

Comments


ajlende commented Feb 26, 2020

Is your feature request related to a problem? Please describe.

In Automattic/block-experiments#19 I'm experimenting with WebGL effects.

Browsers limit the number of active WebGL contexts (roughly 4–32, depending on the device and browser), so to allow more blocks than that I'm drawing all of the blocks on a single canvas. In the editor, that canvas needs to appear underneath the editor UI and text.

Simply adding the div underneath the #editor div doesn't work: if a theme sets a background color, that color is applied to the .editor-styles-wrapper div, which sits inside the #editor div and is therefore painted on top of the canvas. Also, with the addition of the device preview (#19082), the .block-editor-editor-skeleton__content div now has a background color set as well.

Describe the solution you'd like

Create a layer underneath the UI, but above the backgrounds applied by the theme and device preview.

It may be possible for blocks to render into this layer using portals into a component (codesandbox demo). It looks like slot-fill works for rendering the canvas from the block.
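A rough sketch of the slot-fill idea, assuming a made-up "BlockBackground" slot name (createSlotFill itself is the existing @wordpress/components API):

```js
import { createSlotFill } from '@wordpress/components';

// Hypothetical slot name; only createSlotFill is an existing API.
const { Fill: BlockBackgroundFill, Slot: BlockBackgroundSlot } =
	createSlotFill( 'BlockBackground' );

// Rendered once, above the theme background but below the editor UI.
export function BlockBackgroundLayer() {
	return (
		<div className="block-background-layer">
			<BlockBackgroundSlot />
		</div>
	);
}

// A block's edit component would fill the slot with its canvas markup.
export function MyBlockBackground( { children } ) {
	return <BlockBackgroundFill>{ children }</BlockBackgroundFill>;
}
```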

However, there's still the question of how to define just one canvas that all blocks would have access to in order to get the WebGL context. Also, you probably still only want one requestAnimationFrame call for your render loop, so you'd need to either access this React-rendered div from outside of React or have some sort of singleton useEffect for all of them.
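To make the "singleton" idea concrete, here's a rough sketch of a module-level registry that shares one context and one requestAnimationFrame loop; the function name and the way the canvas is obtained are just placeholders:

```js
// Hypothetical shared renderer registry: one WebGL context, one RAF loop.
const renderers = new Set();
let canvas = null;
let gl = null;
let rafId = null;

export function registerRenderer( renderer ) {
	if ( ! canvas ) {
		// In practice this would be the canvas rendered into the background layer.
		canvas = document.createElement( 'canvas' );
		gl = canvas.getContext( 'webgl' );
	}
	renderers.add( renderer );

	if ( rafId === null ) {
		const loop = ( time ) => {
			// Every registered block draws into the same context each frame.
			renderers.forEach( ( render ) => render( gl, time ) );
			rafId = requestAnimationFrame( loop );
		};
		rafId = requestAnimationFrame( loop );
	}

	// Unsubscribe function, e.g. for a useEffect cleanup.
	return () => {
		renderers.delete( renderer );
		if ( renderers.size === 0 && rafId !== null ) {
			cancelAnimationFrame( rafId );
			rafId = null;
		}
	};
}
```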

I made a branch showing a rough, but working, idea of how this would work if the code lived in Gutenberg. All the code can live in a single React component, and that component could be added as a new option in registerBlockType that gets rendered into the background slot.
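Roughly, the registration could look like the sketch below; the `background` option is the branch's proposal, not an existing Gutenberg API, and the components are placeholders:

```js
import { registerBlockType } from '@wordpress/blocks';

// Placeholder components for illustration only.
const Edit = () => <div>Motion background block</div>;
const Save = () => null;
const MotionBackgroundCanvas = () => (
	<canvas className="motion-background-canvas" />
);

registerBlockType( 'my-plugin/motion-background', {
	title: 'Motion Background',
	category: 'design',
	edit: Edit,
	save: Save,
	// Hypothetical option: a component Gutenberg would render into the
	// shared background slot underneath the UI.
	background: MotionBackgroundCanvas,
} );
```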

Interested in other thoughts on ways to handle this. I haven't worked much in Gutenberg core yet, so there may be another approach that makes more sense.

Describe alternatives you've considered

I wrote a hack that sets the WebGL clear color to the theme's background color, but the device preview breaks that now, since another background-color is being set.

Another option might be to specifically add a canvas in the background for WebGL rendering. However, doing so would require everyone using it to handle their own scissor/viewport calls so they don't interfere with other blocks. If blocks do have to overlap (for example, if my block were used together with another plugin that provides a full-page background), then the order of draw calls would matter for layering them properly. And you'd still have the requestAnimationFrame problem.
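For reference, the per-block bookkeeping on a shared canvas would look roughly like this; `getBlockRect` (mapping a block's DOM rect into canvas pixels, including the bottom-left origin flip) is assumed:

```js
// Restrict clearing and drawing to one block's rectangle on the shared canvas.
function drawBlock( gl, blockElement, drawScene ) {
	const { x, y, width, height } = getBlockRect( gl.canvas, blockElement );

	gl.enable( gl.SCISSOR_TEST );
	gl.scissor( x, y, width, height );
	gl.viewport( x, y, width, height );

	gl.clearColor( 0, 0, 0, 0 );
	gl.clear( gl.COLOR_BUFFER_BIT );
	drawScene( gl );
}
```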


ajlende commented Mar 4, 2020

I have a proof of concept for adding a background option to registerBlockType. Check out the gutenberg branch and block-experiments branch. The render positioning and cursor calculations are a little off, but it runs and the general idea is there.

talldan added the [Type] Enhancement and Needs Technical Feedback labels on Mar 5, 2020

talldan commented Mar 5, 2020

@ajlende The backgrounds are very cool. However, I do feel this is a very specific use case for adding such a slot, one that is quite unlikely to ever arise again. The issue is that the project's maintainers would then have to maintain this slot forever, making sure it continues to work, even though there are so few usages.

If the backgrounds were static, I would propose rendering them to an in-memory/off-screen canvas as an alternative, that is, a canvas that isn't mounted to the DOM at all. The pixel data could then be copied from that unmounted canvas to an image element, to a background style attribute as a data URI, or to a standard 2D canvas.

There's a similar approach described here:
https://devbutze.blogspot.com/2014/02/html5-canvas-offscreen-rendering.html

The trouble is that for a moving image this would probably not be very performant. I thought I'd mention it in case it leads to some ideas.
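Something like this rough sketch, where `drawBackground`, `blockElement`, and `visibleCanvas` are placeholders:

```js
// Render WebGL to an unmounted canvas, then copy the result out.
function renderStaticBackground( blockElement, visibleCanvas ) {
	const glCanvas = document.createElement( 'canvas' ); // never added to the DOM
	glCanvas.width = 1024;
	glCanvas.height = 512;

	// preserveDrawingBuffer keeps the pixels readable after the draw calls.
	const gl = glCanvas.getContext( 'webgl', { preserveDrawingBuffer: true } );

	drawBackground( gl ); // placeholder: whatever WebGL drawing the block does

	// Option 1: use the result as a background-image via a data URI.
	blockElement.style.backgroundImage = `url(${ glCanvas.toDataURL( 'image/png' ) })`;

	// Option 2: copy the pixels onto a visible 2D canvas instead.
	visibleCanvas.getContext( '2d' ).drawImage( glCanvas, 0, 0 );
}
```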

One other thing I'd like to mention: you should consider not animating these backgrounds when the user has prefers-reduced-motion set:
https://developer.mozilla.org/en-US/docs/Web/CSS/@media/prefers-reduced-motion
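For example, something roughly like this, where the two called functions are placeholders:

```js
// Skip the render loop entirely when the user prefers reduced motion.
const prefersReducedMotion = window.matchMedia(
	'(prefers-reduced-motion: reduce)'
).matches;

if ( prefersReducedMotion ) {
	renderSingleStaticFrame( gl ); // placeholder: draw one frame and stop
} else {
	startAnimationLoop( gl ); // placeholder: the requestAnimationFrame loop
}
```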


ajlende commented Mar 5, 2020

@talldan Thanks for taking a look. 🙂

Support for prefers-reduced-motion is something that will definitely have to be added once this gets past its experimental stage. I would also want to enable failIfMajorPerformanceCaveat and provide a fallback for devices without hardware acceleration.
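Something along these lines, with `showStaticFallback` as a placeholder:

```js
// failIfMajorPerformanceCaveat makes getContext() return null when WebGL
// would fall back to software rendering, so a fallback can be shown instead.
const gl = canvas.getContext( 'webgl', { failIfMajorPerformanceCaveat: true } );

if ( ! gl ) {
	showStaticFallback( canvas ); // placeholder: e.g. a static image or plain color
}
```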

Still, I think that being able to create interactive and dynamic blocks like this could be a killer feature for Gutenberg. WebGL Fundamentals describes this technique as a way to draw a 3D product list, which would be useful for anyone selling 3D assets or even a 3D printing service. But there are plenty of other really cool experiences this feature could enable too. Another site-building tool, cargo.site, has a number of WebGL effects that can be applied to a backdrop, but they're rather limited since they only apply to a single image/video in the background. This feature would let us go beyond what they can do and have as many effects as we'd like per page.

The same motion-background could be implemented in a limited capacity by setting a maximum number of blocks and creating a canvas for each one; however, the performance would be worse, and when viewing all posts you might go past the WebGL context limit causing later blocks not to render.

Something that I just considered, and haven't had time to try yet, is to create a container block that manages the canvas, with inner blocks inside it to mark the areas to render. You'd still run into the max-contexts issue in the "all posts" view, and I don't think it would be as nice of a user experience, but it might work.
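Roughly what I mean, with made-up block names (InnerBlocks itself is the existing @wordpress/block-editor API):

```js
import { registerBlockType } from '@wordpress/blocks';
import { InnerBlocks } from '@wordpress/block-editor';

// Hypothetical container block: owns the single canvas, and nested blocks
// mark the regions that get rendered into it.
registerBlockType( 'my-plugin/motion-background-container', {
	title: 'Motion Background Container',
	category: 'design',
	edit: () => (
		<div className="motion-background-container">
			<canvas className="motion-background-container__canvas" />
			<InnerBlocks allowedBlocks={ [ 'my-plugin/motion-background-area' ] } />
		</div>
	),
	save: () => (
		<div className="motion-background-container">
			<InnerBlocks.Content />
		</div>
	),
} );
```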


ajlende commented Mar 5, 2020

Yeah, with a little CSS and updated measurement calculations the container block would probably work, but it feels really clunky to have to add the container first and then not see anything different until you add the block inside. From the user's perspective it feels like an unnecessary and confusing step.

The super buggy code is pushed up to automattic/block-experiments on try/motion-background-inner-blocks.


ajlende commented Jul 1, 2020

> The same motion-background could be implemented in a limited capacity by setting a maximum number of blocks and creating a canvas for each one; however, the performance would be worse, and when viewing all posts you might go past the WebGL context limit causing later blocks not to render.

This is the solution that I landed on. It's less than ideal, but was deemed acceptable enough for the motion background (now the waves block). Will re-open if I come across another situation where having this layer would be more useful.

ajlende closed this as completed on Jul 1, 2020