Releases: gre/gl-react
v2.2.1
v2.1.1
Experimental support for Animated objects
These versions have been released with experimental support for Animated objects on uniforms.
A simple example is this HelloGL component:
```js
const shaders = GL.Shaders.create({
  helloGL: {
    frag: `
precision highp float;
varying vec2 uv;
uniform float value;
void main () {
  gl_FragColor = vec4(uv.x, uv.y, value, 1.0);
}
`
  }
});

class HelloGL extends React.Component {
  constructor (props) {
    super(props);
    this.state = {
      value: new Animated.Value(0)
    };
  }
  componentDidMount () {
    const loop = () => Animated.sequence([
      Animated.timing(this.state.value, { toValue: 1, duration: 1000 }),
      Animated.timing(this.state.value, { toValue: 0, duration: 1000 })
    ]).start(loop);
    loop();
  }
  render () {
    const { width, height } = this.props;
    const { value } = this.state;
    return <Surface width={width} height={height}>
      <GL.Node
        shader={shaders.helloGL}
        uniforms={{ value }}
      />
    </Surface>;
  }
}
```
Notice how this no longer requires looping over render(). This gives even better performance for animations: it removes the need for complex diffing and instead directly builds the data object and calls setNativeProps. Animated is also very smooth; here is a profiling comparison:
- Without Animated: 5.7ms
- With Animated: 4ms
This should also work in gl-react-dom as long as you implement a local web version of Animated; we will eventually provide one.
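A local web stand-in for Animated only needs a small surface area for this use-case. Here is a minimal, hypothetical sketch (the class and method names mirror React Native's Animated.Value, but this is not gl-react code and leaves the timing/sequence drivers out):

```javascript
// A minimal, hypothetical web stand-in for React Native's Animated.Value.
// Only the parts needed to read animated uniforms are sketched:
// getValue(), setValue(), and listener registration.
class AnimatedValue {
  constructor (initialValue) {
    this._value = initialValue;
    this._listeners = new Map();
    this._nextId = 1;
  }
  getValue () {
    return this._value;
  }
  setValue (value) {
    this._value = value;
    // Notify subscribers (gl-react would redraw the Surface here)
    this._listeners.forEach(listener => listener({ value }));
  }
  addListener (listener) {
    const id = String(this._nextId++);
    this._listeners.set(id, listener);
    return id;
  }
  removeListener (id) {
    this._listeners.delete(id);
  }
}

// Usage: drive a uniform value without going through React render()
const value = new AnimatedValue(0);
const seen = [];
const id = value.addListener(({ value }) => seen.push(value));
value.setValue(0.5);
value.setValue(1);
value.removeListener(id);
```

A driver like Animated.timing would then just call setValue on each animation frame.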
Internal refactoring to simplify the duck-typing logic of uniform values
v2.1.0
These release notes apply to gl-react, gl-react-dom, and gl-react-native.
Bugfix race conditions and improve performance
Since the last release notes, performance has been improved and some important bugs (like race conditions) have been fixed and released via a few patch versions. This will not be detailed here, but you can find out more by browsing the commits if you are interested.
"real" react dependency
Hooray! Since react-native 0.18 depends on react, gl-react can now depend on react too. Now things are truly universal (did I already say that last time? XD)
This is a good time to deprecate our old hacky way of injecting react / react-native. With this release, you no longer have to import / require() "gl-react/react" or "gl-react/react-native".
We have also deprecated GL.React; please import React directly from "react".
captureFrame(config)
captureFrame() now accepts an optional config object as parameter.
Note: we have also removed the previously deprecated callback syntax. You must use the Promise returned by captureFrame().
With this config object you can choose among different file types (PNG, JPG, WEBM), different quality levels (from 0% to 100%, trading file size for fidelity) and different export formats (to base64, to Blob, save to file).
For more info, read the documentation.
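To illustrate the shape of such a config object, here is a hypothetical normalizer. The key names (type, quality, format), their defaults, and the 0..1 quality range are assumptions for illustration; see the documentation for the real contract:

```javascript
// Hypothetical sketch of how a captureFrame(config) call might normalize
// its options. Key names and defaults are assumptions, not gl-react's API.
const CAPTURE_DEFAULTS = { type: "png", quality: 1, format: "base64" };

function normalizeCaptureConfig (config) {
  const { type, quality, format } = Object.assign({}, CAPTURE_DEFAULTS, config);
  if (["png", "jpg", "webm"].indexOf(type) === -1)
    throw new Error("Unsupported capture type: " + type);
  if (typeof quality !== "number" || quality < 0 || quality > 1)
    throw new Error("quality must be a number between 0 and 1");
  if (["base64", "blob", "file"].indexOf(format) === -1)
    throw new Error("Unsupported capture format: " + format);
  return { type, quality, format };
}
```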
Usage in gl-react-dom-static-container
You can also see captureFrame() in action in the gl-react-dom-static-container demo.
gl-react-dom-static-container is a new module for gl-react-dom that you might need if you want to display more than ~15 simultaneous <Surface> elements on the same page. WebGL limits how many contexts can run at the same time, so that library snapshots the canvas with captureFrame() and caches it while it is not "active". gl-react-dom has also recently been improved to implement a pool of canvases, re-using WebGL contexts across <Surface> instances.
Inline Shader support!
Before this release, the only way to create shaders was GL.Shaders.create, which statically defines shaders and returns ids that can be used in <GL.Node shader={...}>.
You can now also pass the shader object inline via the shader prop:

```js
<GL.Node shader={{ frag: "...fragmentShaderCode..." }} />
```
onShaderCompile prop
You can also optionally hook into the shader compilation state with a new onShaderCompile prop. It receives two parameters, error and result. error is a string containing the raw GL error message; if it is null, the compilation was successful and result is filled with { uniforms }, where uniforms is an object describing the shader uniform types (retrieved after compiling the shader). This gets called after render(). If you don't provide an onShaderCompile callback, the default behavior is used: an error message is logged to the console when compilation fails.
Example
```js
<GL.Node
  shader={{
    frag: `
precision highp float;
varying vec2 uv;
uniform float value;
void main () {
  gl_FragColor = vec4(uv.x, uv.y, value, 1.0);
}`
  }}
  uniforms={{ value: 0.5 }}
  onShaderCompile={(newError, result) => console.log("Shader Compile Response", newError, result)}
/>
```
It's pretty basic, but it already enables things like a GLSL editor:
As shown in this GIF, gl-react reconciles shaders to minimize the add & remove events. Shaders also get factorized across Surfaces: if you define the same inline shader twice, a single shader is created and used, and it is removed only once all references are unmounted. Finally, the remove event is debounced, so if you toggle a Surface on and off fast enough, the algorithm won't create a new shader but will re-use the previous one (even if it was momentarily unused).
Technically, we use reference counting (like in a garbage collector) to know whether a shader is still used, and the "remove" operation happens at an appropriate time: when the app is not active (0.5s debounce) or when there are many unused shaders (> 20 shaders, GC triggered with a 0.5s throttle).
These optimizations avoid overloading the CPU/GPU with shader compilation; this is especially important for the React Native implementation, which has the latency of the JS<>ObjC bridge.
To summarize, performance should be almost the same as using a static Shaders.create; it just depends on whether you want a temporary shader or not.
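The reference-counting idea can be sketched like this. This is an illustrative model, not gl-react's actual implementation; the debounce/throttle timers and GL program compilation are left out:

```javascript
// Illustrative sketch of reference-counted shader sharing: identical
// shader sources map to one compiled instance, and a shader is only
// collected once nothing references it anymore.
class ShaderRegistry {
  constructor () {
    this._bySource = new Map(); // shader source -> { id, refs }
    this._nextId = 1;
    this.compileCount = 0;
  }
  acquire (frag) {
    let entry = this._bySource.get(frag);
    if (!entry) {
      entry = { id: this._nextId++, refs: 0 };
      this._bySource.set(frag, entry);
      this.compileCount++; // a real impl would compile the GL program here
    }
    entry.refs++;
    return entry.id;
  }
  release (frag) {
    const entry = this._bySource.get(frag);
    if (entry && --entry.refs <= 0) {
      // a real impl debounces this removal (0.5s) to allow quick re-use
      this._bySource.delete(frag);
    }
  }
}

const registry = new ShaderRegistry();
const frag = "void main () { gl_FragColor = vec4(1.0); }";
const a = registry.acquire(frag);
const b = registry.acquire(frag); // same source: re-used, not recompiled
```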
GL.Shaders.create(spec, onAllCompiled)
By default, GL.Shaders.create will console.error each shader that fails to compile. If you want to override this and hook into the shader compilation status, you can provide a second parameter to create that will be called once with two parameters: errors and results. errors is null if all shaders compiled successfully, or an object describing the errors per shader. results is an object containing the shader type information per shader.
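That callback contract can be mocked as follows. The compile function below is a fake stand-in for the real GLSL compiler, so everything except the (errors, results) shape is hypothetical:

```javascript
// Mock illustration of the onAllCompiled callback contract: errors is null
// when everything compiles, otherwise an object keyed by shader name;
// results carries per-shader info for the shaders that succeeded.
function createShaders (spec, compile, onAllCompiled) {
  let errors = null;
  const results = {};
  Object.keys(spec).forEach(name => {
    const { error, uniforms } = compile(spec[name].frag);
    if (error) {
      if (!errors) errors = {};
      errors[name] = error;
    } else {
      results[name] = { uniforms };
    }
  });
  if (onAllCompiled) onAllCompiled(errors, results);
}

// Fake compiler: fails when the source does not define main()
const fakeCompile = frag =>
  frag.indexOf("void main") === -1
    ? { error: "no main() entry point" }
    : { error: null, uniforms: { value: "float" } };

let lastErrors = null;
let lastResults = null;
createShaders(
  { good: { frag: "void main () {}" }, bad: { frag: "nope" } },
  fakeCompile,
  (errors, results) => { lastErrors = errors; lastResults = results; }
);
```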
pixelRatio prop
A new prop pixelRatio has been introduced on both <Surface> and GL.Node.
It lets you override the devicePixelRatio (aka screen scale) to use. By default, window.devicePixelRatio (Web) or PixelRatio.get() (React Native) is used.
This can be useful when you don't want to follow the device scale but strictly want a given resolution (e.g. if you apply an effect over an image and want to keep the same dimensions).
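To make the effect of this prop concrete, here is a small sketch of what pixelRatio controls: the layout size stays the same while the drawing buffer is scaled by the ratio (the rounding behavior here is an assumption):

```javascript
// Sketch: pixelRatio scales the drawing buffer, not the layout size.
function drawingBufferSize (width, height, pixelRatio) {
  return {
    width: Math.round(width * pixelRatio),
    height: Math.round(height * pixelRatio)
  };
}

// Retina device default vs. forcing a 1:1 resolution for an image effect
const retina = drawingBufferSize(300, 200, 2); // { width: 600, height: 400 }
const exact = drawingBufferSize(300, 200, 1); // { width: 300, height: 200 }
```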
GL.Node inherits its pixelRatio value from its parent (in the effect tree).
GL.createComponent props
GL.createComponent takes as parameter a render function from props to a <GL.Node> element.
gl-react 1.1.0 introduced the fact that this props object is the union of the inherited { width, height, pixelRatio } and the user-defined props.
This way, a GL Component can always assume it receives width, height and pixelRatio, either provided by the user or inherited from the parent, and use them in its shader uniforms.
This removes the need to pass width and height explicitly when you need them, for instance in the gl-react-blur component.
This is also useful to provide a resolution parameter (a bunch of existing shaders you can find on the web use gl_FragCoord and a resolution uniform to resolve the coordinate).
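The props union described above can be sketched as a plain merge, with the common resolution-uniform pattern on top (the helper names here are illustrative, not gl-react internals):

```javascript
// Sketch: user-defined props are merged over the inherited
// { width, height, pixelRatio }, so a render function can always
// rely on all three being present.
function resolveProps (inherited, userProps) {
  return Object.assign({}, inherited, userProps);
}

const inheritedProps = { width: 300, height: 200, pixelRatio: 2 };
// resolution uniform pattern used by many shaders found on the web
const { width, height, pixelRatio } = resolveProps(inheritedProps, { factor: 1 });
const resolution = [width * pixelRatio, height * pixelRatio];
```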
Most of the time the user shouldn't have to worry about them anymore (except at the Surface top level):

```js
<Surface width={300} height={200}>
  <Blur factor={1}>
    <Negative>
      <video ... />
    </Negative>
  </Blur>
</Surface>
```
Improved uniform validation
Passing weird values (like NaN) as uniforms used to crash things (especially in the React Native implementation), so there are now more checks for these extreme cases.
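Here is a sketch of the kind of check this implies. This is an illustrative validator, not gl-react's actual code:

```javascript
// Reject values (like NaN or Infinity) that would silently break
// the GL side when sent as a float uniform.
function validateNumberUniform (name, value) {
  if (typeof value !== "number" || !isFinite(value)) {
    throw new Error(
      "Uniform '" + name + "' must be a finite number, got: " + value
    );
  }
  return value;
}
```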
2.0.x major release
The gl-react 2.0 version is an important API and library dependency refactoring.
- gl-react 2.0.x: the new API.
- gl-react-dom 2.0.x: for using gl-react with React DOM.
- gl-react-native 2.0.x: for using gl-react with React Native < 0.16.0.
- gl-react-native 2.1.x: for using gl-react with React Native >= 0.16.0.
There are some pretty big changes, so please read the following Changes and Migration Notes carefully (you can also see the issues of the milestone and the new documentation on Gitbook).
Universal GL Effects
gl-react 2.0 makes effects truly universal: it is now possible to publish GL effects on NPM that can be used for both the Web and Native (without needing a 2-entry-point workaround).
To prove this, check out this new example implementing a universal image effects app for the Web, iOS and Android. 5 primitive effects were also released on NPM: gl-react-blur, gl-react-negative, gl-react-contrast-saturation-brightness, gl-react-hue-rotate and gl-react-color-matrix, which you can use on Web or Native.
Changes and Migration Notes
HelloGL Example of the new API:
```js
const GL = require("gl-react");
const React = GL.React;
// ^ for all effects you want to share between iOS/Android/Web, please
// extract React from GL like this until react-native depends on react.

const shaders = GL.Shaders.create({
  helloGL: {
    frag: `
precision highp float;
varying vec2 uv;
uniform float blue;
void main () {
  gl_FragColor = vec4(uv.x, uv.y, blue, 1.0);
}`
  }
});

module.exports = GL.createComponent(
  ({ blue, ...rest }) =>
    <GL.Node
      {...rest}
      shader={shaders.helloGL}
      uniforms={{ blue }}
    />,
  { displayName: "HelloGL" });
```

```js
const { Surface } = require("gl-react-native" or "gl-react-dom");

<Surface width={511} height={341}>
  <HelloGL blue={0.5} />
</Surface>
```
dependency changes
In 1.x versions, there were 2 exposed libraries, gl-react and gl-react-native, and a hidden library gl-react-core to share common code.
In 2.x, we now have 3 libraries: gl-react, gl-react-native and gl-react-dom, which respectively work with react, react-native and react-dom.
To migrate from 1.x, you need to update your dependencies:
For using with react-dom:

```js
"gl-react": "2.0.x",
"gl-react-dom": "2.0.x",
// Naturally, you also need these:
"react": ">= 0.14.0",
"react-dom": ">= 0.14.0"
```
For using with react-native:

```js
"gl-react": "2.0.x",

// with react-native < 0.16.0:
"gl-react-native": "2.0.x",
"react-native": "< 0.16.0"

// OR with react-native >= 0.16.0:
"gl-react-native": "2.1.x",
"react-native": ">= 0.16.0"
```
The New API
The important change is that <GL.View> no longer exists. It has been split into two pieces: <GL.Node> and <GL.Surface>.
This simplifies the implementation and gives a much better separation of concerns: GL.Surface holds everything related to implementation-specific code (web <canvas/> vs iOS GLKView vs Android GLSurfaceView), while GL.Node is the DSL for defining your stack of effects.
GL.Node
- Props: { shader, uniforms?, width?, height?, preload? }
- Methods: none

GL.Surface
- Props: { width, height, children, opaque?, preload?, onLoad?, onProgress?, autoRedraw?, eventsThrough?, visibleContent? }
  - children must be a single GL.Node or GL Component.
- Methods: capture(cb)
The new lib API
- gl-react (formerly gl-react-core) exposes { Node, Shaders, Uniform, createComponent }.
- gl-react-dom and gl-react-native expose { Surface }.
react dependency workaround
Because react-native currently does not yet depend on react (an issue that the React Native team is busy trying to finish), we cannot have gl-react depend on react, even though it needs it. Here is the temporary workaround for this problem.
In the final app (not in a gl-react library; only the end user can do this), prepend one of the following imports to your JavaScript entry point:

```js
require("gl-react/react") // for a React context
// OR
require("gl-react/react-native") // for a React Native context
```

Make sure to do this BEFORE any other gl-react related imports.
Note: This will be deprecated as soon as React Native depends on React
React exposed by gl-react
The React instance is also exposed from gl-react in order to define universal effects:

```js
const React = GL.React;
```
Note: This will be deprecated as soon as React Native depends on React
v1.2.1
v1.2.0
These release notes apply to both gl-react@1.2.0 and gl-react-native@1.2.0.
Add GL.createComponent function
A new API has been added to create GL Components.
```js
Comp = GL.createComponent(
  props => glViewOrGlComponent,
  staticFields?
)
```
- GL.createComponent takes as first parameter a render function (from props to Virtual DOM). More precisely, that render function must return either a GL.View or another GL Component.
- It can also optionally take as second parameter a staticFields object to set on the React component (for instance defaultProps, propTypes, displayName).
- Finally, it returns a React Component (that you can use in JSX: <Comp />).

This function replaces the former GL.Component class to extend (which still works but is deprecated).
Usage Example
```js
const React = require("react");
const GL = require("gl-react");

const shaders = GL.Shaders.create({
  helloGL: {
    frag: `
precision highp float;
varying vec2 uv;
uniform float blue;
void main () {
  gl_FragColor = vec4(uv.x, uv.y, blue, 1.0);
}`
  }
});

module.exports = GL.createComponent(
  ({ blue, ...rest }) =>
    <GL.View
      {...rest}
      shader={shaders.helloGL}
      uniforms={{ blue }}
    />,
  { displayName: "HelloGL" });
```
```js
<HelloGL width={511} height={341} blue={0.5} />
```

renders:
Reasons for this change
We embrace the "pure function" style as introduced by React 0.14, but this is not the main reason for this change.
We call a "GL Component" a React Component that always renders either a GL.View or another GL Component. This constraint was not validated before, and you could have written a GL.Component that wasn't really one.
This new API emphasizes the real purpose of GL Components: we just need the render function to create one. Also, a GL Component should have a minimal lifecycle and no state, so it behaves consistently whether or not it is the root of the effects stack (if you are not familiar with how the effects stack works, we will write more about it later).
The previous way of creating GL Components, via inheritance of the GL.Component class, felt hacky, and you could easily forget to use GL.Component instead of React.Component. It also only worked with ES6 classes.
Finally, the main reason we moved to this API is to be able to add methods to GL Components consistently with the ones we introduce on GL.View. That way we automatically provide the captureFrame method to all GL Components created with GL.createComponent.
Currently there is only captureFrame (in gl-react only), which allows getting a snapshot of the GL canvas content.
Because the GL.createComponent API gives us a render function we can call ourselves, we can decorate GL.View (basically just using a ref) and expose these methods.
v1.1.0
These release notes apply to both gl-react@1.1.0 and gl-react-native@1.1.0.
autoRedraw, eventsThrough, visibleContent props
Here is a first (React Native) use-case solved by these 3 new props added on GL.View: you can now start making effects over interactive UIs!
This also allows us to simplify and optimize use-cases such as the VideoBlur example:
```js
<Blur eventsThrough visibleContent autoRedraw
      width={width} height={height} passes={blurPasses} factor={blur}>
  <HueRotate hue={hue}>
    <video autoPlay loop>
      <source type="video/mp4" src="video.mp4" />
    </video>
  </HueRotate>
</Blur>
```
VideoBlur is now more performant, and lets events like right click pass directly to the <video>.
autoRedraw prop (#11)
The autoRedraw prop improves the performance of effects applied on top of living content like videos, other canvases, or UI components (in the React Native case).
It removes the need for a render loop, saving the cost of the render() lifecycle and only requiring implementations to rasterize the content and perform a redraw.
eventsThrough prop (#13)
eventsThrough causes the GL view not to intercept events; instead, events get intercepted by whatever is underneath it.
One use-case that eventsThrough solves is making a transparent layer on top of something that is still interactive.
visibleContent prop (#17)
visibleContent makes the content visible under the GL view (by default, the content is visibility: hidden).
When combined with eventsThrough, it makes that content interactive. For instance, in the VideoBlur example, a right click actually happens on the <video> beneath, not on the <canvas/> used for the WebGL effects.
Support for ndarray textures + disableLinearInterpolation option (#14)
This feature has only been implemented in gl-react.
This feature, requested by @tjpotts, allows giving an ndarray() object as a possible value for a texture. You can programmatically generate a texture with ndarray() and send it to your shaders.
Refer to the ndarray documentation for more information. See also @mikolalysenko's presentation.
The second feature introduced is the ability to configure the interpolation mode on textures: you can set disableLinearInterpolation, which disables the default linear interpolation on textures. N.B.: this currently only works with ndarray, but it is planned to work with more formats.
Example:

```js
{ value: ndarray(...), opts: { disableLinearInterpolation: true } }
```
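To show what "programmatically generating a texture" means, here is a small sketch without the ndarray dependency: it builds the flat RGBA pixel data for a checkerboard, the same kind of buffer you would wrap in ndarray() to feed a texture (the helper name is illustrative):

```javascript
// Build a flat RGBA Uint8Array for a width x height checkerboard with
// square cells of `cell` pixels. Each pixel takes 4 bytes (R, G, B, A).
function checkerboard (width, height, cell) {
  const data = new Uint8Array(width * height * 4);
  for (let y = 0; y < height; y++) {
    for (let x = 0; x < width; x++) {
      const on = ((Math.floor(x / cell) + Math.floor(y / cell)) % 2) === 0;
      const i = (y * width + x) * 4;
      const v = on ? 255 : 0;
      data[i] = data[i + 1] = data[i + 2] = v; // grayscale RGB
      data[i + 3] = 255; // opaque alpha
    }
  }
  return data;
}

const tex = checkerboard(4, 4, 2);
```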
Add captureFrame(cb) method (#19)
This feature has only been implemented in gl-react.
This feature, requested by @tjpotts, allows you to capture the view rendering as a data64 image. It is implemented as a captureFrame(cb) method on GL.View.
Example:
```js
render () {
  return <GL.View ref="view" ... />;
}
capture () {
  this.refs.view.captureFrame(data64 => ...);
}
```
See also the improved version of Simple #1.
gl-react-core: better performance, more comprehensive code (#15)
The gl-react-core algorithm that implements 1.0.0's shared computation feature has been improved. This results in a significant performance improvement during the render() React lifecycle.
The internal format used to describe the different texture formats from gl-react-core to implementations is also more comprehensive. This allowed us to remove some branching code in the implementations.
Finally, gl-react-core has been refactored in a more readable and maintainable way. Some obscure parts, like the initial "GL.View discovery", are now more straightforward.
Bugfixes
v1.0.0
These release notes apply to both gl-react@1.0.0 and gl-react-native@1.0.0.
Shared computation
This is a huge performance and usability gain: we can now be as performant as pure GL code, but within the React immutable paradigm!
For use-cases that require passing the same uniform parameter multiple times, such as a multi-pass Blur, the content is now computed only once. For example, in a 4-pass blur where you provide some content to each blur pass, the content is rasterized just once. This also works for sub-GL.Views: if you pass a sub-tree of effects, that sub-tree is factorized and computed once (in a framebuffer), so it is never duplicated.
But how?!
This is implemented by comparing references in the Virtual DOM tree, as a VDOM object can be considered referentially transparent. If we find two identical Virtual DOM elements in your effects stack tree, we can share their computation in the final GL rendering.
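The reference-equality sharing can be sketched in a few lines. This is an illustrative model, not gl-react's actual algorithm:

```javascript
// Sketch of sharing by reference equality: if the same VDOM element object
// appears several times in the effects tree, it is rasterized once and the
// result (conceptually a framebuffer) is re-used everywhere.
function renderShared (elements, rasterize) {
  const cache = new Map(); // element reference -> rendered result
  return elements.map(el => {
    if (!cache.has(el)) cache.set(el, rasterize(el));
    return cache.get(el);
  });
}

let rasterizeCalls = 0;
const video = { type: "video", props: { src: "video.mp4" } };
// the same `video` reference is used by every blur pass
const shared = renderShared([video, video, video, video], el => {
  rasterizeCalls++;
  return { framebufferFor: el.type };
});
```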
Concrete example: VideoBlur
```js
<Blur width={width} height={height} passes={blurPasses} factor={blur}>
  <HueRotate hue={hue}>
    <video autoPlay loop>
      <source type="video/mp4" src="video.mp4" />
    </video>
  </HueRotate>
</Blur>
```
(Note that Blur results in blurPasses nested Blur1D GL.Views, with the children content passed to each one.)
- The video DOM element gets shared: only one video is created.
- The HueRotate value also gets factorized and computed once in a framebuffer.
The best part is that implementers don't have to worry about this factorization: just pass content wherever you need it, whatever that content is (another stack of effects, content to rasterize such as a canvas / video or RN View, or just a simple image URL), and it will be handled for you.
GL.Target[uniform] gets renamed to GL.Uniform[name]
Providing a uniforms object is identical to providing GL.Uniform children; the second option is just more elegant when composing GL.Views or rasterizing VDOM content.

```js
<GL.Uniform name="foo">...</GL.Uniform>
```
width/height of sub-GL.View gets inferred
When composing effects, only the root effect must define width and height; children inherit the parent's dimensions if you don't provide their own. Example: <Hue width={w} height={h}><Blur>...</Blur></Hue>.
Explicitly setting width and height is still possible and allows using different framebuffer sizes (if you want to bake an image at a different resolution, for instance).
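The inference rule above boils down to a simple fallback, sketched here (the helper name is illustrative):

```javascript
// A child effect without explicit dimensions inherits its parent's
// width/height; an explicit value wins (e.g. to bake into a different
// framebuffer resolution).
function resolveDimensions (parent, child) {
  return {
    width: child.width !== undefined ? child.width : parent.width,
    height: child.height !== undefined ? child.height : parent.height
  };
}

const root = { width: 300, height: 200 };
const inheritedDims = resolveDimensions(root, {}); // inherits 300x200
const explicitDims = resolveDimensions(root, { width: 64, height: 64 });
```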
Better image loading experience
There is a new preload prop that lets you prevent any GL rendering until ALL textures are loaded.
onLoad / onProgress
Following the preload feature, there are 2 callbacks: one called as the textures are being loaded, and one when they are all fully loaded.
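A mock of how such progress tracking could work: count resolved textures and report a ratio. The real callback payloads are not specified here, so the { progress, loaded, total } shape is an assumption for illustration:

```javascript
// Track texture loading: each time a texture resolves, report progress,
// and fire the load callback once everything is in.
function trackTextureLoading (total, onProgress, onLoad) {
  let loaded = 0;
  return function textureLoaded () {
    loaded++;
    onProgress({ progress: loaded / total, loaded, total });
    if (loaded === total) onLoad();
  };
}

const progressEvents = [];
let allLoaded = false;
const notify = trackTextureLoading(2,
  e => progressEvents.push(e.progress),
  () => { allLoaded = true; });
notify(); // first texture done
notify(); // second texture done
```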