[Enhancement] Incremental reactive state #488
Update with two-way data binding. Here is a simple Celsius to Fahrenheit converter:

#[atom]
fn celsius() -> f64 {
0.0
}
#[computed(inverse = "set_fahrenheit")]
fn fahrenheit() -> f64 {
link_state(celsius()) * 9.0/5.0 + 32.0
}
fn set_fahrenheit(val: f64) {
    celsius().update(|c| *c = (val - 32.0) * 5.0 / 9.0)
}

As you can see, we define a `celsius` atom and a `fahrenheit` computed state with an inverse setter. What's great about this is that we can compute derived views:

#[computed]
fn celsius_slider() -> Node<Msg> {
let celsius_reading = link_state(celsius());
input![
attrs![At::Type => "range", At::Min=> "0", At::Max=>"100", At::Value=>celsius_reading],
input_ev(Ev::Input, |value| celsius().update(|c| *c = value.parse::<f64>().unwrap() ))
]
}
#[computed]
fn fahrenheit_slider() -> Node<Msg> {
let fahrenheit_reading = link_state(fahrenheit());
input![
attrs![At::Type => "range", At::Min => "32", At::Max => "212", At::Value => fahrenheit_reading],
input_ev(Ev::Input, |value| fahrenheit().set(value.parse::<f64>().unwrap()))
    ]
}

Both of these sliders must, and always will, be in sync because they are simply computed state based on the underlying `celsius` atom. We can use these views anywhere in our app and always know that they will remain in sync:

div![celsius_slider()],
...
...
in a galaxy far far away...
...
...
div![fahrenheit_slider()],
Regarding the example above, I would write it this way:

#[derive(Default, Copy, Clone)]
pub struct Temperature {
celsius: f64,
}
impl Temperature {
pub fn as_celsius(&self) -> f64 {
self.celsius
}
pub fn as_fahrenheit(&self) -> f64 {
self.celsius * 9.0/5.0 + 32.0
}
pub fn set_from_celsius(&mut self, celsius: f64) {
self.celsius = celsius
}
pub fn set_from_fahrenheit(&mut self, fahrenheit: f64) {
self.celsius = (fahrenheit - 32.0) * 5.0/9.0
}
}
#[atom]
pub fn temperature() -> Temperature {
Temperature::default()
}
#[computed]
fn celsius_slider() -> Node<Msg> {
let temperature = link_state(temperature());
input![
attrs![At::Type => "range", At::Min=> "0", At::Max=>"100", At::Value=>temperature.as_celsius()],
input_ev(Ev::Input, |value| temperature().update(|t| t.set_from_celsius(value.parse().unwrap())))
]
}
#[computed]
fn fahrenheit_slider() -> Node<Msg> {
let temperature = link_state(temperature());
input![
attrs![At::Type => "range", At::Min => "32", At::Max => "212", At::Value => temperature.as_fahrenheit()],
input_ev(Ev::Input, |value| temperature().update(|t| t.set_from_fahrenheit(value.parse().unwrap())))
]
}

P.S. I think we've already discussed it in another place, but I'm not sure.
edit re: the comment above:
#[atom]
fn name() -> String {
"".to_string()
}
#[reaction]
fn name_length() -> usize {
observe(name()).len()
}
#[reaction]
fn name_view() -> Node<Msg> {
let name_length = observe(name_length());
div![
p![ "The name is " , name_length, " bytes long"],
input![
input_ev(Ev::Input, |inp|
name().set(inp.to_string())
)
]
]
}
Further, here is how one could use these atoms with an async fetch call. First, define an atom to hold the request (in this case a user id) and the loaded user:

#[derive(Deserialize, Debug, Clone)]
struct User{
id: u32,
name: String,
}
#[derive(Clone)]
enum Loadable<T> {
NotRequestedYet,
Loading,
Request(String),
Loaded(T),
Error(String),
}
#[atom]
fn loadable_user() -> Loadable<User> {
Loadable::NotRequestedYet
}

Next, define computed state to fire a fetch and update the user atom whenever the request id is set:

#[reaction]
fn username() -> Loadable<User>{
let app = observe(my_app());
let loading_user = observe(loadable_user());
if let Loadable::Request(user_id) = &loading_user {
loadable_user().update(|u| *u = Loadable::Loading);
spawn_local({
let user_id = user_id.clone();
async move {
let url = format!("https://jsonplaceholder.typicode.com/users/{}", user_id);
let response = fetch(&url).await.expect("HTTP request failed");
let user = response
.check_status() // ensure we've got 2xx status
.expect("status check failed")
.json::<User>()
.await
.expect("deserialization failed");
loadable_user().update(|u| *u = Loadable::Loaded(user));
app.unwrap().update(Msg::NoOp);
}
});
}
    loading_user
}

Finally, a computed view to display the user, loading status, or error:

#[reaction]
fn user_view() -> Node<Msg> {
match observe(username()){
Loadable::NotRequestedYet => {
div!["Not Requested A User Yet"]
},
Loadable::Loading => {
div!["Loading"]
},
Loadable::Request(_user_id) => {
div!["Loading",]
},
Loadable::Loaded(user) => {
div!["User is ", user.name]
}
Loadable::Error(err) => {
div!["There was an error with loading the user: ", err]
}
}
}

The `user_view()` reaction can then be used anywhere in the view:

div![
    user_view()
]
In terms of performance, I did a very rough rewrite of the hooks markdown tutorial page using this approach.

In debug mode the hooks markdown editor took 69.5ms on key down and 68.12ms on key up, start to finish (including webpage compositing), causing a performance warning in the Chrome profiler. In debug mode the atom-backed markdown editor took 42ms on key down and 23.44ms on key up, start to finish (including webpage compositing). The page itself contains a very large amount of markdown, in the form of the tutorial, that is processed with a markdown parser.

In the atom-backed version, because the markdown tutorial part and the editor part are in separate atoms, a key press in the editor does not cause the markdown in the tutorial to re-render. The only thing that gets updated is the markdown editor itself.

As you can see, using atom-backed state and reactively re-rendering only a part of the page resulted in an approximately 2x to 3x speedup in a rough-cut version. Further speedups would be possible with a proper rewrite using atoms, plus potential future integration with the diffing algorithm in order to diff only the parts of the virtual DOM you know have changed.
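To illustrate the split (a minimal sketch, not the actual rewrite; `tutorial_markdown`, `editor_text`, `tutorial_view`, and `editor_view` are hypothetical names using the `#[atom]`/`#[reaction]`/`observe` API from the earlier comments): keeping the rarely-changing tutorial and the editable text in separate atoms means a keystroke only invalidates the editor reaction.

```rust
#[atom]
fn tutorial_markdown() -> String {
    "…large, rarely-changing tutorial text…".to_string()
}

#[atom]
fn editor_text() -> String {
    String::new() // updated on every keystroke
}

#[reaction]
fn tutorial_view() -> Node<Msg> {
    // Re-runs only when tutorial_markdown() changes, so keystrokes never touch it.
    div![observe(tutorial_markdown())]
}

#[reaction]
fn editor_view() -> Node<Msg> {
    let text = observe(editor_text());
    div![
        input![
            attrs![At::Value => text.clone()],
            input_ev(Ev::Input, |t| editor_text().set(t)),
        ],
        p![text], // only this reaction re-renders on a key press
    ]
}
```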
Hi, I might have a similar problem to the one you are describing, but I am not really sure. My example is: I have a text input element and an expensive SVG. Whenever something is typed into the text input, it sends a message, which should update its state and cause a re-render.

The problem is that this causes a redraw of the expensive SVG as well. The way this works now (to my understanding) is that after the message is sent by the input component, the whole root view is re-rendered:

// parent
struct Model {
svg: svg::Model,
input: input::Model,
}
enum Msg {
SVG(svg::Msg),
Input(input::Msg),
}
fn update(msg: Msg, model: &mut Model, orders: &mut impl Orders<Msg>) {
match msg {
        Msg::SVG(svg_msg) => svg::update(svg_msg, &mut model.svg, &mut orders.proxy(Msg::SVG)),
        Msg::Input(input_msg) => input::update(input_msg, &mut model.input, &mut orders.proxy(Msg::Input)), // wrap messages
}
}
fn view(model: &Model) -> Node<Msg> {
div![
input::view(&model.input).map_msg(Msg::Input),
        svg::view(&model.svg).map_msg(Msg::SVG), // wrap messages
]
}
mod svg { ... }
mod input { ... }

Since redraws (of the root node) are triggered after receiving any message, all views are redrawn whenever any child sends a message, even when there wasn't any state change.

I am not sure how this compares to the problem you are trying to solve, but I feel like there is a shared problem: unnecessary redraws. However, I don't quite see how something like atoms is needed to fix this. They are a nice feature for reducing boilerplate code when you have dependent state, because updates can be triggered automatically, but the unnecessary redraws would still happen. I think what we need is a way to programmatically decide whether a node should be redrawn or not. @MartinKavik just implemented `Node::NoChange`, which could be used like this:

fn view(model: &Model) -> Node<Msg> {
div![
input::view(&model.input).map_msg(Msg::Input),
if model.redraw_svg {
            svg::view(&model.svg).map_msg(Msg::SVG)
        } else {
Node::NoChange
},
]
}

Here the Model requires a `redraw_svg` flag. Another option would be to store the rendered node in the Model and return it when nothing has changed. So a solution would be for Seed itself to decide which child views need to be re-rendered. The problem here is that Seed does not know about a component's children until they are already rendered. Currently, there is no way to tell Seed "I am a component and I have these children, call these methods to use them", which would be needed for Seed to intercept the child view calls.
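For reference, a minimal sketch (assuming the `redraw_svg: bool` field used in the view above) of how that flag might be maintained manually in `update` — this manual bookkeeping is exactly the burden being described:

```rust
fn update(msg: Msg, model: &mut Model, orders: &mut impl Orders<Msg>) {
    // Default to not redrawing the expensive SVG.
    model.redraw_svg = false;
    match msg {
        Msg::SVG(svg_msg) => {
            // Only SVG messages invalidate the SVG view.
            model.redraw_svg = true;
            svg::update(svg_msg, &mut model.svg, &mut orders.proxy(Msg::SVG));
        }
        Msg::Input(input_msg) => {
            input::update(input_msg, &mut model.input, &mut orders.proxy(Msg::Input));
        }
    }
}
```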
@mankinskin Then, views could be written like this:

#[view]
fn view(...) -> Node<Msg> {
sub_view(),
another_view()
}
#[view]
fn sub_view(...) -> Node<Msg> {
div![...]
}
#[view]
fn another_view(...) -> Vec<Node<Msg>> {
vec![]
}
So... there are many ways to improve the speed, however we have to create the foundation for them first. Hope it makes sense.
Hey @rebo and @MartinKavik, what is the status of this? I see we have https://github.com/seed-rs/styles_hooks updated recently, so I guess this is our official repo for it now, isn't it? 😄
Yeah, I've created it when…
Okay, then let's use this one 👍
Some additional experimentation has highlighted a useful pattern for state management and coordination. I want to collect some feedback on the following to help pin down the API.
Problem No1: Keeping state in sync is hard.
Seed has very straightforward, sensible and mostly efficient state management: a `Node<Ms>` tree view is created based on the `Mdl`, and messages are handled by `update`, which mutates the `Mdl` and causes a new `Node<Ms>` tree to be created via the view function.

That said, having a monolithic model to hold all state is not hugely expressive when it comes to ensuring state remains in sync, or when one piece of state depends on another.
For instance, consider rendering a filtered list of thousands of elements based on selected criteria. There are a number of existing ways to do this.
a) Filter in the view function:

let items = model.items.iter().filter(|item| matches_criteria(item, &model.criteria));

Simple... but the problem with this is that it has to run on every single update, regardless of whether the items in the model or the filter criteria have changed.

b) Manually update a cache of filtered items when either modifying the items themselves or changing the filter criteria. This is more efficient than (a), however it requires that the developer remembers to correctly update the cache in both scenarios. What happens if additional criteria are added? Or there are additional ways to add or remove items from the model in the update function? Or other state depends upon the filtered list? At each stage the developer has to carefully ensure that the cached filtered list and any subsequent state are correctly regenerated.
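As a concrete illustration of (b) — a minimal sketch in plain Seed style, where `Item`, `Criteria`, and `matches_criteria` are hypothetical — note how every mutation site has to remember to refresh the cache:

```rust
#[derive(Clone)]
struct Item { name: String }

struct Criteria { query: String }

fn matches_criteria(item: &Item, criteria: &Criteria) -> bool {
    item.name.contains(&criteria.query)
}

struct Model {
    items: Vec<Item>,
    criteria: Criteria,
    filtered_items: Vec<Item>, // manually maintained cache
}

enum Msg {
    AddItem(Item),
    SetCriteria(Criteria),
}

fn update(msg: Msg, model: &mut Model, _orders: &mut impl Orders<Msg>) {
    match msg {
        Msg::AddItem(item) => {
            model.items.push(item);
            refresh_cache(model); // easy to forget
        }
        Msg::SetCriteria(criteria) => {
            model.criteria = criteria;
            refresh_cache(model); // must be repeated for every mutation path
        }
    }
}

fn refresh_cache(model: &mut Model) {
    model.filtered_items = model
        .items
        .iter()
        .filter(|item| matches_criteria(item, &model.criteria))
        .cloned()
        .collect();
}
```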
What would be better is this:
c) The view function contains a UI snippet that depends on computed state called `filtered_list`. `filtered_list` is computed state that depends on the `list` state 'atom' and any number of `criteria` state atoms. These atoms are the source of truth and do not depend upon other state.

Then, when the `list` or `criteria` are mutated, the `filtered_list` and then the UI snippet are recalculated automatically. There is no possibility of showing invalid data because the UI snippet is generated precisely from changes to the `list` or `criteria`.

Problem No2: As long as Seed's state remains a monolithic Model, additional optimisations are hard to do.
(This problem is really not an issue at present but is more one for the future)
Seed's monolith is fine for small to medium-sized apps, however when scaling to large apps with potentially thousands of changing DOM elements this could block optimisation. The reason for this is that there is no way to determine which parts of the UI tree correspond directly to specific mutations in the `Mdl`.

For instance, consider two deep but separated leaf nodes on either side of the UI that both need to read `Model.color`. Maybe one is a background color setting in a preference pane and the other is the background of a highlighted element. Seed currently needs to reconstruct the entire view tree, which could mean parsing hundreds of node macros or views (each passing down a reference to Model) before finally allowing the two leaf nodes to access `Model.color`.

It might be better if both leaf nodes could be automatically updated without having to reconstruct the entire `Node<Ms>` tree from scratch. This in effect could simply be two mutations in the leaves of a very large tree, rather than reconstructing the entire tree every update frame.

Potential Solution
As outlined in (c), there is a potential solution if we can create a graph of state dependencies originating in 'state atoms' and terminating in UI elements. This way, specific UI elements only ever get updated if the specific state which they subscribe to changes.

How might this work in practice? The following is currently (working) proof-of-concept code.
We define atoms of state, in this case todos and filter criteria:
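(The original snippet is not reproduced here; the following is a minimal sketch in the `#[atom]` style used earlier in this thread, with hypothetical `Todo` and `FilterCriteria` types.)

```rust
#[derive(Clone)]
struct Todo {
    title: String,
    completed: bool,
}

#[derive(Clone, Copy, PartialEq)]
enum FilterCriteria {
    All,
    Active,
    Completed,
}

#[atom]
fn todos() -> Vec<Todo> {
    Vec::new()
}

#[atom]
fn filter_criteria() -> FilterCriteria {
    FilterCriteria::All
}
```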
We define a computed state, `filtered_todos`, which subscribes to `todos` and `filter_criteria`:
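(Again a sketch rather than the original code, reusing the hypothetical types above and the `#[reaction]`/`observe` API shown earlier:)

```rust
#[reaction]
fn filtered_todos() -> Vec<Todo> {
    let todos = observe(todos());
    let criteria = observe(filter_criteria());
    todos
        .into_iter()
        .filter(|todo| match criteria {
            FilterCriteria::All => true,
            FilterCriteria::Active => !todo.completed,
            FilterCriteria::Completed => todo.completed,
        })
        .collect()
}
```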
Also, we define a computed state which renders the UI based on the filtered todos:
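(A possible shape for that computed view, as a sketch:)

```rust
#[reaction]
fn todo_list_view() -> Node<Msg> {
    let filtered = observe(filtered_todos());
    ul![
        filtered
            .iter()
            .map(|todo| li![todo.title.clone()])
            .collect::<Vec<Node<Msg>>>()
    ]
}
```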
With the above setup, the computed UI will always, by definition, show the correct filtered state because it is automatically regenerated whenever the list state changes.
Additional benefits
An additional benefit of this approach is that implementing scoped undos is trivial, because state atoms can keep a memo log of their previous values. Further, one can do partial computation: for instance, a UI snippet could depend on computed state which fetches remote data. Whilst the data is fetching, the UI snippet could show a "loading..." status, and once fetched, the UI snippet would automatically update itself to show the loaded state.
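To make the memo-log idea concrete, here is a plain-Rust sketch of the concept (not the actual atom API, which isn't shown in this thread; `MemoAtom` is a hypothetical name):

```rust
/// Hypothetical illustration of an atom that remembers previous values,
/// which is what makes scoped undo cheap to implement.
struct MemoAtom<T: Clone> {
    current: T,
    history: Vec<T>,
}

impl<T: Clone> MemoAtom<T> {
    fn new(initial: T) -> Self {
        Self { current: initial, history: Vec::new() }
    }

    fn update(&mut self, f: impl FnOnce(&mut T)) {
        // Push the old value before mutating, so it can be restored later.
        self.history.push(self.current.clone());
        f(&mut self.current);
    }

    fn undo(&mut self) {
        if let Some(previous) = self.history.pop() {
            self.current = previous;
        }
    }
}
```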
Here is an example of automatic undo on the above list example:
https://recordit.co/Am5hlZE7OC
Good talk demonstrating these concepts in React:
https://www.youtube.com/watch?v=_ISAA_Jt9kI