
Support configuring Surfaces with Devices that don't share the same underlying WebGL2 context #2343

Open
Tracked by #3674
vsaase opened this issue Jan 2, 2022 · 7 comments
Labels: api: gles (Issues with GLES or WebGL), help required (We need community help to make this happen), type: enhancement (New feature or request)

Comments

vsaase commented Jan 2, 2022

Description
When creating multiple canvases on the web and multiple corresponding surfaces, all rendering is done in the canvas associated with the last surface created.

Repro steps
Here is a modified hello-triangle main.rs file; run it with cargo run-wasm --example hello-triangle --features webgl

use std::borrow::Cow;
use winit::{
    event::{Event, WindowEvent},
    event_loop::{ControlFlow, EventLoop},
    window::Window,
};

async fn run(event_loop: EventLoop<()>, windows: [Window; 2]) {
    let size = windows[0].inner_size();
    let instance = wgpu::Instance::new(wgpu::Backends::all());
    let surfaces: Vec<_> = unsafe {
        windows
            .iter()
            .map(|window| instance.create_surface(&window))
            .collect()
    };
    let adapter = instance
        .request_adapter(&wgpu::RequestAdapterOptions {
            power_preference: wgpu::PowerPreference::default(),
            force_fallback_adapter: false,
            // Request an adapter which can render to our surface
            compatible_surface: Some(&surfaces[0]),
        })
        .await
        .expect("Failed to find an appropriate adapter");

    // Create the logical device and command queue
    let (device, queue) = adapter
        .request_device(
            &wgpu::DeviceDescriptor {
                label: None,
                features: wgpu::Features::empty(),
                // Make sure we use the texture resolution limits from the adapter, so we can support images the size of the swapchain.
                limits: wgpu::Limits::downlevel_webgl2_defaults()
                    .using_resolution(adapter.limits()),
            },
            None,
        )
        .await
        .expect("Failed to create device");

    // Load the shaders from disk
    let shader = device.create_shader_module(&wgpu::ShaderModuleDescriptor {
        label: None,
        source: wgpu::ShaderSource::Wgsl(Cow::Borrowed(include_str!("shader.wgsl"))),
    });

    let pipeline_layout = device.create_pipeline_layout(&wgpu::PipelineLayoutDescriptor {
        label: None,
        bind_group_layouts: &[],
        push_constant_ranges: &[],
    });

    let swapchain_format = surfaces[0].get_preferred_format(&adapter).unwrap();

    let render_pipeline = device.create_render_pipeline(&wgpu::RenderPipelineDescriptor {
        label: None,
        layout: Some(&pipeline_layout),
        vertex: wgpu::VertexState {
            module: &shader,
            entry_point: "vs_main",
            buffers: &[],
        },
        fragment: Some(wgpu::FragmentState {
            module: &shader,
            entry_point: "fs_main",
            targets: &[swapchain_format.into()],
        }),
        primitive: wgpu::PrimitiveState::default(),
        depth_stencil: None,
        multisample: wgpu::MultisampleState::default(),
        multiview: None,
    });

    let mut config = wgpu::SurfaceConfiguration {
        usage: wgpu::TextureUsages::RENDER_ATTACHMENT,
        format: swapchain_format,
        width: size.width,
        height: size.height,
        present_mode: wgpu::PresentMode::Mailbox,
    };
    for surface in surfaces.iter() {
        surface.configure(&device, &config);
    }

    event_loop.run(move |event, _, control_flow| {
        // Have the closure take ownership of the resources.
        // `event_loop.run` never returns, therefore we must do this to ensure
        // the resources are properly cleaned up.
        let _ = (&instance, &adapter, &shader, &pipeline_layout);

        *control_flow = ControlFlow::Wait;
        match event {
            Event::WindowEvent {
                event: WindowEvent::Resized(size),
                ..
            } => {
                // Reconfigure the surface with the new size
                config.width = size.width;
                config.height = size.height;
                for surface in surfaces.iter() {
                    surface.configure(&device, &config);
                }
            }
            Event::RedrawRequested(_) => {
                for surface in surfaces.iter() {
                    let frame = surface
                        .get_current_texture()
                        .expect("Failed to acquire next swap chain texture");
                    let view = frame
                        .texture
                        .create_view(&wgpu::TextureViewDescriptor::default());
                    let mut encoder = device
                        .create_command_encoder(&wgpu::CommandEncoderDescriptor { label: None });
                    {
                        let mut rpass = encoder.begin_render_pass(&wgpu::RenderPassDescriptor {
                            label: None,
                            color_attachments: &[wgpu::RenderPassColorAttachment {
                                view: &view,
                                resolve_target: None,
                                ops: wgpu::Operations {
                                    load: wgpu::LoadOp::Clear(wgpu::Color::GREEN),
                                    store: true,
                                },
                            }],
                            depth_stencil_attachment: None,
                        });
                        rpass.set_pipeline(&render_pipeline);
                        rpass.draw(0..3, 0..1);
                    }

                    queue.submit(Some(encoder.finish()));
                    frame.present();
                }
            }
            Event::WindowEvent {
                event: WindowEvent::CloseRequested,
                ..
            } => *control_flow = ControlFlow::Exit,
            _ => {}
        }
    });
}

fn main() {
    let event_loop = EventLoop::new();
    let windows = [
        winit::window::Window::new(&event_loop).unwrap(),
        winit::window::Window::new(&event_loop).unwrap(),
    ];
    #[cfg(not(target_arch = "wasm32"))]
    {
        env_logger::init();
        // Temporarily avoid srgb formats for the swapchain on the web
        pollster::block_on(run(event_loop, windows));
    }
    #[cfg(target_arch = "wasm32")]
    {
        std::panic::set_hook(Box::new(console_error_panic_hook::hook));
        console_log::init().expect("could not initialize logger");
        use winit::platform::web::WindowExtWebSys;
        // On wasm, append the canvas to the document body
        for window in windows.iter() {
            web_sys::window()
                .and_then(|win| win.document())
                .and_then(|doc| doc.body())
                .and_then(|body| {
                    body.append_child(&web_sys::Element::from(window.canvas()))
                        .ok()
                })
                .expect("couldn't append canvas to document body");
        }

        wasm_bindgen_futures::spawn_local(run(event_loop, windows));
    }
}

Expected vs observed behavior
I only see the triangle in the second canvas, but was expecting a triangle in both. On native, this produces two windows, each rendered correctly.
I confirmed that both canvases are on the page with data-raw-handle attributes corresponding to the raw window handles produced by winit. There is also no error when creating the surfaces, indicating that both canvases were found. Only the canvas associated with the last surface created is rendered to; the other surfaces render to the canvas of the last surface.

Platform
current master version of wgpu, WSL Ubuntu Linux

@cwfitzgerald cwfitzgerald added help required We need community help to make this happen. type: bug Something isn't working labels Jan 3, 2022

vsaase commented Feb 2, 2022

I think I have it tracked down to this:

pub struct Instance {
    canvas: Mutex<Option<web_sys::HtmlCanvasElement>>,
}

The WebGL backend's Instance holds only one canvas, but shouldn't it hold a vector of canvases?
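For illustration only, the suggested shape might look like the sketch below (hypothetical, not the actual wgpu-hal code; std::sync::Mutex stands in for whatever lock type the backend uses). As the later comments point out, each canvas still owns its own WebGL2 context, so tracking more canvases by itself would not let them share GL resources.

use std::sync::Mutex;
use web_sys::HtmlCanvasElement;

// Hypothetical: remember every canvas a surface was created from, instead of
// a single slot that each create_surface call overwrites.
pub struct Instance {
    canvases: Mutex<Vec<HtmlCanvasElement>>,
}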


kvark commented Feb 2, 2022

Ok, yeah, I don't know at the moment how we'd support multiple canvases on WebGL. Some investigation is needed. Maybe the Instance should own some kind of offscreen canvas, and presenting to the other canvases would then do image copies on the GPU... Not sure.
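To make that idea concrete, a minimal sketch of the "render into one hidden canvas that owns the WebGL2 context, then copy the image to each visible canvas" approach could look like the following. This is illustrative only, not wgpu internals; it assumes web_sys with the HtmlCanvasElement and CanvasRenderingContext2d features enabled, and the name present_to is made up.

use wasm_bindgen::JsCast;
use web_sys::{CanvasRenderingContext2d, HtmlCanvasElement};

// Copy the current contents of the single canvas that owns the WebGL2 context
// into a visible target canvas. The browser performs the copy, usually on the GPU.
fn present_to(master_canvas: &HtmlCanvasElement, target: &HtmlCanvasElement) {
    let ctx = target
        .get_context("2d")
        .unwrap()
        .unwrap()
        .dyn_into::<CanvasRenderingContext2d>()
        .unwrap();
    ctx.draw_image_with_html_canvas_element(master_canvas, 0.0, 0.0)
        .unwrap();
}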


jinleili commented Feb 7, 2022

@kvark Each instance simply holds only one canvas to provide the GL context, just like a JavaScript WebGL program. So it makes sense that multiple canvases need multiple instances.

*self.canvas.lock() = Some(canvas.clone());

  const ctx0 = canvas0.getContext("webgl");
  const ctx1 = canvas1.getContext("webgl");
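In wgpu terms, that one-Instance-per-canvas idea could look roughly like the sketch below (untested, written against the same wgpu 0.12-era API as the repro above and meant to live inside the same async run function; the per_window name is made up). Note that the resulting devices cannot share resources with each other.

let mut per_window = Vec::new();
for window in windows.iter() {
    // One Instance per canvas, and therefore one WebGL2 context per canvas.
    let instance = wgpu::Instance::new(wgpu::Backends::all());
    let surface = unsafe { instance.create_surface(&window) };
    let adapter = instance
        .request_adapter(&wgpu::RequestAdapterOptions {
            power_preference: wgpu::PowerPreference::default(),
            force_fallback_adapter: false,
            compatible_surface: Some(&surface),
        })
        .await
        .expect("Failed to find an appropriate adapter");
    let (device, queue) = adapter
        .request_device(
            &wgpu::DeviceDescriptor {
                label: None,
                features: wgpu::Features::empty(),
                limits: wgpu::Limits::downlevel_webgl2_defaults(),
            },
            None,
        )
        .await
        .expect("Failed to create device");
    per_window.push((instance, surface, adapter, device, queue));
}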


vsaase commented Feb 7, 2022

My use case would need shared resources between canvases.

@teoxoy teoxoy added the api: gles Issues with GLES or WebGL label Feb 28, 2023
@cwfitzgerald commented:

What we could do is store an array of canvases and present a device for each canvas we have. The user would need to make sure they pick the correct device, but that's what surface compatibility is for anyway.
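From the application's side, that proposal amounts to keeping each surface paired with the device and queue that were requested against it (via compatible_surface) and always using the pair together, roughly as in this hypothetical sketch (same wgpu version as the repro above; CanvasTarget and frame are made-up names):

// Keep each surface together with the device/queue that were created for it,
// and never mix pairs across canvases.
struct CanvasTarget {
    surface: wgpu::Surface,
    device: wgpu::Device,
    queue: wgpu::Queue,
}

fn frame(targets: &[CanvasTarget]) {
    for target in targets {
        let frame = target
            .surface
            .get_current_texture()
            .expect("Failed to acquire next swap chain texture");
        // ... encode the render pass and submit it on target.device / target.queue ...
        frame.present();
    }
}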

@ecoricemon commented:

I ran into this problem too when I was testing a basic wgpu example (v0.18).
wgpu will render to the last surface created before the adapter was requested, even if that surface has since been dropped.

For instance, wgpu will draw onto surface_b in the pseudocode below.

let surface_a = instance.create_surface(canvas_a);
let surface_b = instance.create_surface(canvas_b);
let adapter = instance.request_adapter();
drop(surface_a);
drop(surface_b);
let surface_c = instance.create_surface(canvas_c);
...
// Intended to render onto surface_c
let texture = surface_c.get_current_texture();
let view = texture.create_view();
// But wgpu actually renders onto surface_b
render to view

The full source is here: example

So, I guess wgpu could prevent this confusing behavior by blocking configuration of a mismatched surface, like so:

wgpu/wgpu-hal/src/gles/web.rs : 298

impl crate::Surface<super::Api> for Surface {
    unsafe fn configure(
        &mut self,
        device: &super::Device,
        config: &crate::SurfaceConfiguration,
    ) -> Result<(), crate::SurfaceError> {
        // Block configuration if the user attempts to configure a surface whose
        // canvas belongs to a different WebGL2 context than the device's.
        // (device.shared.context would need to expose its raw WebGL2 context
        // for this check to work.)
        let adapter_webgl2_context = &device.shared.context.webgl2_context;
        if &self.webgl2_context != adapter_webgl2_context {
            return Err(crate::SurfaceError::Other("Some error msg"));
        }

        ...
    }
}


teoxoy commented Jul 3, 2024

#5901 adds validation to prevent this from silently happening.

@teoxoy teoxoy added type: enhancement New feature or request and removed type: bug Something isn't working labels Jul 3, 2024
@teoxoy teoxoy changed the title Using multiple canvas surfaces renders only to last one on web Support configuring Surfaces with Devices that don't share the same underlying WebGL2 context Jul 3, 2024