Using renderlet with rive‐renderer
We've been building renderlet for the last six months, using WebAssembly to make graphics programming easier. However, there was a big problem: we only supported 3D graphics, not 2D vector graphics. Until now.
Interactive content continues to change how we engage with technology. While early video games were written in languages like C and used fixed-function graphics cards on desktop computers and consoles, the internet made rich interactive content available to everyone.
Adobe Flash made this kind of content accessible across the web, and the introduction of smartphones put interactive applications into practically everybody's pocket. Now, social media and the coming wave of AI-generated graphics and content will continue to raise the bar for the technology to render graphics.
Software with interactive 2D & 3D graphics has two pieces: data and code. Artists have tools to make data: textures, vectors, and 3D models, for example. But building the software to use that data hasn't really changed since the 1990s: programmers write code against low-level APIs that drive GPUs to render it. Artists and developers repeatedly go back and forth across different tools and workflows, ultimately writing logic that isn't portable across platforms or reusable across applications.
We're building renderlet to solve this problem. Graphics data and code can be developed together in the same environment, packaged together into a WebAssembly module called a renderlet, and rendered onto any canvas. With WebAssembly, we compile graphics code to portable bytecode that allows it to safely run on any processor and GPU.
Our WebAssembly renderer, wander, exposes cross-platform 3D graphics APIs, and we have experimentally integrated rive-renderer to add GPU-accelerated 2D vector graphics support, giving developers the power of something like Adobe Flash inside any 3D application:
Read on to learn more about how we did this!
rive-renderer is a GPU-based renderer for 2D vector graphics. This is similar to tools like Skia (the rendering engine behind Google Chrome), but it takes advantage of more modern GPU architectures. Using tools including compute shaders and pixel local storage, rive-renderer can provide high-performance, hardware-accelerated 2D graphics with an SVG-like interface.

rive-renderer is now the default rendering engine for Rive, a popular 2D graphics design tool. During GDC last week, the Rive team open-sourced rive-renderer.
As-is, wander does not provide any 2D vector capabilities. However, the technology enabling rasterization of 2D vectors is built on top of 3D graphics hardware APIs.
Can we use something like rive-renderer with wander to add 2D vector capabilities out of the box? Integrating this kind of native code by hand with an existing rendering system is challenging, but we can let wander take care of the integration and expose the APIs through a higher level of abstraction. This opens up new use cases, like vector-based texture generation, 2D overlays for a UI, or even colors and gradients for 2D surfaces.
Can this be done? Absolutely! It works now on wander v1 even without support for custom shaders, and with wander v2 the integration will be even cleaner.
Let's start from the path_fiddle example provided in rive-renderer's repo:
v1 of wander can render any data produced by a renderlet using a shader attached by the host application. This is because of an architectural challenge: although wander exposes GPU functions through a cross-platform API for vertex/index buffers, textures, and some pipeline state, building cross-platform shaders programmatically is very complex.

The GPU is exposed to WebAssembly by communicating over a lightweight binary wire format between the Wasm guest and the host. This enables wander to efficiently upload data to the GPU irrespective of the graphics API.
We evaluated adding a new backend to rive-renderer that writes to this wire format; however, that would still require generating compute shaders at runtime. Therefore, we opted to run rive-runtime as part of wander itself, outside of Wasm, giving rive-renderer direct access to its existing GPU backends. The Wasm guest then writes the actual rive-renderer commands to the wire format, instead of the raw GPU commands produced by the backends:
```cpp
// Serialize the recorded 2D commands into the wire format:
// for each command, its type and its paths; for each path,
// a per-path header field followed by its points.
BinaryOutStream ms;
for (const auto& command : m_command_list)
{
    ms << std::get<0>(command);         // command type
    ms << std::get<1>(command).size();  // number of paths
    for (const auto& path : std::get<1>(command))
    {
        ms << std::get<0>(path);        // per-path header field
        ms << std::get<1>(path).size(); // number of points
        for (const auto& point : std::get<1>(path))
        {
            ms.Write(&point.x, sizeof(float));
            ms.Write(&point.y, sizeof(float));
        }
    }
}
```
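To make the wire format concrete, here's a minimal, self-contained sketch of a round trip through a byte stream shaped like the one above. The `ByteStream` type, the command/path layout, and the use of explicit 64-bit counts are illustrative assumptions, not wander's actual format:

```cpp
#include <cstdint>
#include <cstring>
#include <utility>
#include <vector>

// Toy byte stream standing in for wander's stream classes.
struct ByteStream {
    std::vector<uint8_t> buf;
    size_t pos = 0;
    void write(const void* p, size_t n) {
        const auto* b = static_cast<const uint8_t*>(p);
        buf.insert(buf.end(), b, b + n);
    }
    void read(void* p, size_t n) {
        std::memcpy(p, buf.data() + pos, n);
        pos += n;
    }
};

struct Point { float x, y; };
using Path    = std::pair<uint32_t, std::vector<Point>>; // (header field, points)
using Command = std::pair<uint32_t, std::vector<Path>>;  // (command type, paths)

// Guest side: flatten commands into bytes, mirroring the loop in the post.
void encode(ByteStream& ms, const std::vector<Command>& commands) {
    uint64_t n = commands.size();
    ms.write(&n, sizeof n);
    for (const auto& command : commands) {
        ms.write(&command.first, sizeof command.first);
        uint64_t np = command.second.size();
        ms.write(&np, sizeof np);
        for (const auto& path : command.second) {
            ms.write(&path.first, sizeof path.first);
            uint64_t npts = path.second.size();
            ms.write(&npts, sizeof npts);
            for (const auto& pt : path.second) {
                ms.write(&pt.x, sizeof(float));
                ms.write(&pt.y, sizeof(float));
            }
        }
    }
}

// Host side: rebuild the command list. In wander this step would instead
// replay the commands against rive-renderer's GPU backend.
std::vector<Command> decode(ByteStream& ms) {
    uint64_t n = 0;
    ms.read(&n, sizeof n);
    std::vector<Command> commands(n);
    for (auto& command : commands) {
        ms.read(&command.first, sizeof command.first);
        uint64_t np = 0;
        ms.read(&np, sizeof np);
        command.second.resize(np);
        for (auto& path : command.second) {
            ms.read(&path.first, sizeof path.first);
            uint64_t npts = 0;
            ms.read(&npts, sizeof npts);
            path.second.resize(npts);
            for (auto& pt : path.second) {
                ms.read(&pt.x, sizeof(float));
                ms.read(&pt.y, sizeof(float));
            }
        }
    }
    return commands;
}
```

The key property is that only high-level 2D commands cross the Wasm boundary; the host keeps full control of how they're executed on the GPU.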
In v2, the entire rive-runtime can become a renderlet, making it even easier to distribute.
v2 of wander exposes WebGPU directly to WebAssembly using wasi-gfx, solving the shader problem. This makes two approaches possible, each leading to a cleaner architecture:
- Adding a wander platform backend as described above, so the entire rive-runtime library and all dependencies are compiled to WebAssembly, and the platform backend lightly wraps the WebGPU APIs.
- Using the existing browser-based Wasm/WebGPU build of rive-renderer directly, and linking renderlet modules to it via Emscripten. This may require polyfills through WASI for browser functions, and possibly replacing some interface code with the browser to let rive-renderer run as more of a plugin.
As v2 becomes a reality, we’ll experiment with which approach gives the best performance and requires the least upstream changes.
What does this integration look like? After adding support to the renderlet compiler to take 2D vector expressions that generate code to our wire format, and adding code to wander to use the rive-renderer backend, we're able to easily embed this in the example app.
We'll implement this quickly using the Windows DirectX 11 example, as wander doesn't fully support Metal yet, and rive-renderer requires OpenGL 4.6, which macOS doesn't support.
Here's a quick demo of the same vectors in rive-renderer's path_fiddle example being used to generate a texture on the GPU, which we then apply to the roof of the 3D building in our example:
Now let's make this dynamic. With renderlet, we provide a high-level graphics specification out of the box, including procedural geometry functions. Let's start by parameterizing the texture to take in width, height, and time from the app (with some sane defaults):
```yaml
attr:
  - name: "width"
    value: "1024.0"
  - name: "height"
    value: "768.0"
  - name: "time"
    value: "0.0"
```
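These attributes form the renderlet's parameter contract with the host: each frame, the app supplies values in declaration order (width, height, time), falling back to the defaults otherwise. A toy sketch of that contract, using a stand-in runtime that just records pushed values (the real wander `PushParam` differs):

```cpp
#include <vector>

// Stand-in for wander's runtime: records parameters in push order.
// Illustrative only; not wander's actual API surface.
struct MockRuntime {
    std::vector<float> params;
    void PushParam(int /*renderlet_id*/, float value) {
        params.push_back(value);
    }
};

// Push one frame's parameters, matching the attr declaration order.
void push_frame_params(MockRuntime& rt, int id, float w, float h, float t) {
    rt.PushParam(id, w); // width
    rt.PushParam(id, h); // height
    rt.PushParam(id, t); // time
}
```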
Next, let's split the texture along the x and y dimensions, so the vector only renders in the top-left quadrant:
```yaml
rule:
  - name: "Start"
    op:
      - size:
          x:
            value: "width"
            type: "absolute"
          y:
            value: "height"
            type: "absolute"
      - split:
          axis: "x"
          sizes:
            - value: "0.5"
              name: "Width"
              type: "absolute"
            - value: "0.5"
              name: "null"
              type: "absolute"
```

```yaml
rule:
  - name: "Width"
    op:
      - split:
          axis: "y"
          sizes:
            - value: "0.5"
              name: "Geometry"
              type: "absolute"
            - value: "0.5"
              name: "null"
              type: "absolute"
```
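The two splits compose: the x split keeps the left half, and the y split of that region keeps the top half, leaving the `Geometry` rule scoped to the top-left quadrant. Assuming the 0.5 values act as fractions of the parent scope, the resulting region can be computed like this (a sketch of the semantics, not renderlet's implementation):

```cpp
// A region of the texture, in pixels.
struct Rect { float x, y, w, h; };

// Keep the first child of a split along each axis.
Rect split_x(Rect r, float frac) { return {r.x, r.y, r.w * frac, r.h}; }
Rect split_y(Rect r, float frac) { return {r.x, r.y, r.w, r.h * frac}; }
```

For a 1024x768 texture, the `Geometry` scope comes out as the 512x384 rectangle anchored at the origin.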
Finally, let's give it some animated vectors to render. We'll draw an ellipse where we modulate the width at a rate of `cos(time / 2)` and the height at `cos(time / 4)`. We'll draw it twice, once as a line and once as a fill:
```yaml
rule:
  - name: "Geometry"
    op:
      - ellipse:
          width: "scope.x * cos(time / 2.0)"
          height: "scope.y * cos(time / 4.0)"
      - paint:
          style: "fill"
          color: "0xFFFFFFFF"
      - paint:
          style: "stroke"
          color: "0x8000FFFF"
          thickness: "70.0"
          join: "miter"
          cap: "butt"
```
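Each frame, the ellipse expressions are re-evaluated against the current scope and the time pushed by the host, so the two axes oscillate at different rates. A minimal sketch of that evaluation, assuming `scope` is the quadrant size produced by the splits above (illustrative, not renderlet's expression evaluator):

```cpp
#include <cmath>

struct Ellipse { float width, height; };

// Evaluate the two expressions from the Geometry rule
// for a given scope and time.
Ellipse eval_ellipse(float scope_x, float scope_y, float time) {
    return { scope_x * std::cos(time / 2.0f),
             scope_y * std::cos(time / 4.0f) };
}
```

At `time == 0` both cosines are 1, so the ellipse exactly fills its scope; as time advances, the width completes a full oscillation twice as fast as the height.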
Here's the final result in our example app:
We've built an entire procedural 2D texturing system that interfaces with wander's 3D system by utilizing rive-renderer's 2D vector support out of the box. Instead of having to write lots of platform-specific code, we can express design intent with high-level expressions and integrate with our app in only a few lines of code:
```cpp
// `tree`, `then`, `bitmap`, and `stride` come from the app's
// existing 3D render loop.
auto now = std::chrono::system_clock::now();
auto f_secs = std::chrono::duration_cast<std::chrono::duration<float>>(now - then);

// Push the renderlet's parameters in attr order: width, height, time.
runtime->PushParam(renderlet_id_vector, static_cast<float>(bitmap.width));
runtime->PushParam(renderlet_id_vector, static_cast<float>(bitmap.height));
runtime->PushParam(renderlet_id_vector, f_secs.count());

// Re-render the vector renderlet and fetch its render tree.
tree_id_vector = runtime->Render(renderlet_id_vector, tree_id_vector);
auto tree_vector = runtime->GetRenderTree(tree_id_vector);

for (auto i = 0; i < tree->Length(); ++i)
{
    auto node = tree->NodeAt(i);
    // Set the procedural texture, then draw the 3D geometry.
    tree_vector->NodeAt(0)->RenderVector(runtime, 0, bitmap.width, bitmap.height);
    node->RenderFixedStride(runtime, stride);
}
```
WebGL and macOS support for the rive-renderer integration are coming soon, as are more features like color gradients.

Try out the DirectX 11 rive-renderer integration example here in wander (contributions welcome!).
The renderlet compiler is currently in closed preview; please contact us for more information.
We're just getting started making the tools to build the next wave of interactive applications. We're very excited to see how open-source technologies like rive-renderer can bring cutting-edge graphics technology to more apps.