[Enhancement] Incremental reactive state #488

Open
rebo opened this issue Jun 22, 2020 · 10 comments

Labels
enhancement New feature or request

Comments

@rebo
Collaborator

rebo commented Jun 22, 2020

Some additional experimentation has highlighted a useful pattern for state management and coordination. I want to collect some feedback on the following to help pin down the API.

Problem No1: Keeping state in sync is hard.

Seed has very straightforward, sensible and mostly efficient state management.

  1. A view function completely renders a Node<Ms> tree based on the Mdl.
  2. An event triggers an update which mutates the Mdl.
  3. This causes a complete new Node<Ms> tree to be created via the view function.
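
For reference, a minimal sketch of that cycle in plain Seed (the Model/Msg names here are illustrative, not taken from a real app):

use seed::{prelude::*, *};

struct Model {
    count: i32,
}

enum Msg {
    Increment,
}

// Step 2: an event triggers update, which mutates the Model.
fn update(msg: Msg, model: &mut Model, _orders: &mut impl Orders<Msg>) {
    match msg {
        Msg::Increment => model.count += 1,
    }
}

// Steps 1 and 3: the whole Node<Msg> tree is rebuilt from the Model on every update.
fn view(model: &Model) -> Node<Msg> {
    div![
        "Count: ",
        model.count.to_string(),
        button!["+", ev(Ev::Click, |_| Msg::Increment)],
    ]
}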

That said, having a monolithic model to hold all state is not hugely expressive when it comes to ensuring state remains in sync, or when one piece of state depends on another.

For instance, consider rendering a filtered list of thousands of elements based on a selected criterion. There are a number of existing ways to do this.

a) In the view function, let items = model.items.iter().filter_map( criteria_based_on model.criteria ). Simple... but the problem with this is that it has to run on every single update, regardless of whether the items in the model or the filter criteria have changed.
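
A sketch of what (a) looks like in practice; Item, Criteria and the matches method are illustrative names, not a real API:

fn view(model: &Model) -> Node<Msg> {
    // Recomputed on every single update, even when neither items nor criteria changed.
    let items: Vec<&Item> = model
        .items
        .iter()
        .filter(|item| model.criteria.matches(item))
        .collect();

    ul![items.iter().map(|item| li![item.description.clone()])]
}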

b) Manually update a cache of filtered items when either modifying the items themselves or changing the filter criteria. This is more efficient than (a), however it requires that the developer has remembered to correctly update the cache in both scenarios. What happens if additional criteria are added? Or there are additional ways to add or remove items from the model in the update function? Or another piece of state depends upon the filtered list? At each stage the developer has to carefully ensure that the cached filtered list and any subsequent state is correctly regenerated.
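
And a rough sketch of (b), where every mutation site has to remember to refresh the cache (again, all names are illustrative):

struct Model {
    items: Vec<Item>,
    criteria: Criteria,
    filtered_items: Vec<Item>, // cache, kept in sync by hand
}

impl Model {
    fn refresh_filtered_items(&mut self) {
        self.filtered_items = self
            .items
            .iter()
            .filter(|item| self.criteria.matches(item))
            .cloned()
            .collect();
    }
}

fn update(msg: Msg, model: &mut Model, _orders: &mut impl Orders<Msg>) {
    match msg {
        Msg::AddItem(item) => {
            model.items.push(item);
            model.refresh_filtered_items(); // easy to forget...
        }
        Msg::SetCriteria(criteria) => {
            model.criteria = criteria;
            model.refresh_filtered_items(); // ...at every mutation site
        }
    }
}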

What would be better is this:

c) The view function contains a UI snippet that depends on computed state called filtered_list.
filtered_list is computed state that depends on the list state 'atom' and any number of criteria state atoms.

These atoms are the source of truth and do not depend upon other state.

Then when the list or criteria are mutated, the filtered_list and then the UI snippet are recalculated automatically.

There is no possibility of showing invalid data because the UI snippet is precisely generated from changes to the list or criteria.

Problem No2: As long as Seed's state remains a monolithic Model, additional optimisations are hard to do.

(This problem is really not an issue at present but is more one for the future)

Seed's monolith is fine for small to medium sized apps, however when scaling to large apps with potentially thousands of changing DOM elements this could block optimisation. The reason for this is that there is no way to determine which parts of the UI tree correspond directly to specific mutations in the Mdl.

For instance, consider two deep but separated leaf nodes on either side of the UI that both need to read Model.color. Maybe one is a background color setting in a preference pane and the other is the background of a highlighted element. Seed currently needs to reconstruct the entire view tree, which could mean processing hundreds of node macros or views (each passing down a reference to Model) before finally allowing the two leaf nodes to access Model.color.

It might be better if both leaf nodes could be automatically updated without having to reconstruct the entire Node<Ms> tree from scratch. This in effect could simply be two mutations in the leaves of a very large tree, rather than reconstructing the entire tree every update frame.

Potential Solution

As outlined in (c), there is a potential solution if we can create a graph of state dependencies originating in 'state atoms' and terminating in UI elements. This way specific UI elements only ever get updated if the specific state they subscribe to changes.

How might this work in practice? The following is currently working proof-of-concept code.

We define atoms of state, in this case todos and filter criteria:

#[atom(undo)]
fn todos() -> Vec<Todo> {
    vec![]
}

#[atom]
fn filter_criteria() -> FilterStatus {
    FilterStatus::ShowAll
}

We define a computed state, filtered_todos, which subscribes to todos and filter_criteria:

#[computed]
fn filtered_todos() -> Vec<Todo> {
    let filter = link_state(filter_criteria());
    let todos = link_state(todos());
    match filter {
        FilterStatus::ShowAll => todos,
        FilterStatus::Complete => todos.into_iter().filter(|t| t.completed).collect(),
        FilterStatus::InComplete => todos.into_iter().filter(|t| !t.completed).collect(),
    }
}

We also define a computed state which renders the UI based on the filtered todos:

#[computed]
fn filtered_todos_list() -> Node<Msg> {
    let todos = link_state(filtered_todos());

    ul![
        todos.iter().map(|t| {
            let id = t.id;
            li![
                &t.description,
                button!["X", mouse_ev(Ev::Click, move |_| TodoItemCompleted(id))]
            ]
        })
    ]
}

With the above setup, the computed UI will, by definition, always show the correct filtered state because it is automatically regenerated whenever the list state changes.

Additional benefits

An additional benefit of this approach is that implementing scoped undos is trivial, because state atoms can keep a memo log of their previous values. Further, one can do partial computation: for instance, a UI snippet could depend on computed state which fetches remote data. Whilst the data is fetching, the UI snippet could show a "loading..." status, and once fetched the UI snippet would automatically update itself to show the loaded state.
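
A conceptual sketch of the memo-log idea behind an undoable atom (this is not the actual macro output, just an illustration of the bookkeeping the #[atom(undo)] attribute implies):

struct UndoAtom<T: Clone> {
    current: T,
    history: Vec<T>, // memo log of previous values
}

impl<T: Clone> UndoAtom<T> {
    fn update(&mut self, f: impl FnOnce(&mut T)) {
        self.history.push(self.current.clone()); // remember the old value first
        f(&mut self.current);
        // ...then notify any reactions/UI snippets that observe this atom
    }

    fn undo(&mut self) {
        if let Some(previous) = self.history.pop() {
            self.current = previous; // observers recompute from the restored value
        }
    }
}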

Here is an example of automatic undo on the above list example:

https://recordit.co/Am5hlZE7OC

A good talk demonstrating these concepts in React:

https://www.youtube.com/watch?v=_ISAA_Jt9kI

MartinKavik added the enhancement (New feature or request) label Jun 22, 2020
@rebo
Collaborator Author

rebo commented Jun 27, 2020


Update with two-way data binding; here is a simple Celsius to Fahrenheit converter:

#[atom]
fn celsius() -> f64 {
    0.0
}

#[computed(inverse = "set_fahrenheit")]
fn fahrenheit() -> f64 {
    link_state(celsius()) * 9.0 / 5.0 + 32.0
}

fn set_fahrenheit(val: f64) {
    celsius().update(|c| *c = (val - 32.0) * 5.0 / 9.0)
}

As you can see, we define a celsius atom, of which fahrenheit is a computed property. The reverse binding is achieved by setting the inverse option on the computed macro, which will set celsius accordingly. This is triggered when set() is used on the computed state.

What's great about this is that we can compute derived Node<Msg> views directly from both of these:

#[computed]
fn celsius_slider() -> Node<Msg> {
    let celsius_reading = link_state(celsius());
    input![
        attrs![At::Type => "range", At::Min => "0", At::Max => "100", At::Value => celsius_reading],
        input_ev(Ev::Input, |value| celsius().update(|c| *c = value.parse::<f64>().unwrap()))
    ]
}

#[computed]
fn fahrenheit_slider() -> Node<Msg> {
    let fahrenheit_reading = link_state(fahrenheit());
    input![
        attrs![At::Type => "range", At::Min => "32", At::Max => "212", At::Value => fahrenheit_reading],
        input_ev(Ev::Input, |value| fahrenheit().set(value.parse::<f64>().unwrap()))
    ]
}

Both of these sliders must and always will be in sync because they are simply computed state based on the underlying celsius atom.

We can use these views anywhere in our app and always know that they will remain in sync.

div![celsius_slider()],
...
...
in a galaxy far far away...
...
...
div![fahrenheit_slider()],

@MartinKavik
Member

MartinKavik commented Jun 27, 2020

Regarding the example with inverse - I don't know if I'm a fan of that API.

  • Users have to remember a new API - #[computed(inverse = "set_fahrenheit")]
  • The code is inconsistent - celsius().update(|c| *c = va... vs fahrenheit().set(value.p...
  • It seems to be an edge case. How do you add Kelvin? I think inverse doesn't make sense for more than two cases. Or, if inverse is associated just with the computed value generally - are there other examples where modifying computed values would be a good trade-off for less explicit atom changes?
  • It hides that celsius() and fahrenheit() are basically one value - I would expect them to be two atoms, not an atom and a computed value. It would be surprising for users/readers.
I would write it this way:
#[derive(Default, Copy, Clone)]
pub struct Temperature {
    celsius: f64,
}

impl Temperature {
    pub fn as_celsius(&self) -> f64 {
        self.celsius
    }
    pub fn as_fahrenheit(&self) -> f64 {
        self.celsius * 9.0 / 5.0 + 32.0
    }
    pub fn set_from_celsius(&mut self, celsius: f64) {
        self.celsius = celsius
    }
    pub fn set_from_fahrenheit(&mut self, fahrenheit: f64) {
        self.celsius = (fahrenheit - 32.0) * 5.0 / 9.0
    }
}

#[atom]
pub fn temperature() -> Temperature {
    Temperature::default()
}

#[computed]
fn celsius_slider() -> Node<Msg> {
    let temperature = link_state(temperature());
    input![
        attrs![At::Type => "range", At::Min => "0", At::Max => "100", At::Value => temperature.as_celsius()],
        input_ev(Ev::Input, |value| temperature().update(|t| t.set_from_celsius(value.parse().unwrap())))
    ]
}

#[computed]
fn fahrenheit_slider() -> Node<Msg> {
    let temperature = link_state(temperature());
    input![
        attrs![At::Type => "range", At::Min => "32", At::Max => "212", At::Value => temperature.as_fahrenheit()],
        input_ev(Ev::Input, |value| temperature().update(|t| t.set_from_fahrenheit(value.parse().unwrap())))
    ]
}
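
For what it's worth, the Kelvin case from the bullet above would then just be two more methods on Temperature, with no extra atoms or inverse wiring (a sketch, not part of the example above):

impl Temperature {
    pub fn as_kelvin(&self) -> f64 {
        self.celsius + 273.15
    }
    pub fn set_from_kelvin(&mut self, kelvin: f64) {
        self.celsius = kelvin - 273.15
    }
}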

P.S. I think we've already discussed it somewhere else, but I'm not sure - link_state should probably be renamed to link_atom (link_state only makes sense with #[state]).

@rebo
Collaborator Author

rebo commented Jun 27, 2020

edit re: the comment above:

  1. Inverse is weird; I'm going to drop that implementation for now and hold fire until the use case presents itself better. Recoil.js has it, however I'm not 100% clear on their use case either, so that's probably a good reason not to use it just yet.

  2. Naming: Looks like I'm going to go with atom, reaction and observe. An atom is a core piece of state that cannot be broken down; a reaction occurs when observe-ing an atom or another reaction.

#[atom]
fn name() -> String {
    "".to_string()
}

#[reaction]
fn name_length() -> usize {
    observe(name()).len()
}

#[reaction]
fn name_view() -> Node<Msg> {
    let name_length = observe(name_length());
    div![
        p![ "The name is " , name_length, " bytes long"],
        input![
            input_ev(Ev::Input, |inp| 
                name().set(inp.to_string())
            )
        ]
    ]
}

name_view() is then a cached UI tree snippet containing the div, paragraph and input elements that present the byte length of the input field. Not that one would ever really need to cache name_length; it would probably be better to calculate it directly in the view. However, if this bit of state were essential to the application and used throughout, then it might make sense to have it as a reaction here.

Further, here is how one could use these atoms with an async fetch call:

First, define an atom to hold the request (in this case a user id) and the loaded user:
#[derive(Deserialize, Debug, Clone)]
struct User {
    id: u32,
    name: String,
}

#[derive(Clone)]
enum Loadable<T> {
    NotRequestedYet,
    Loading,
    Request(String),
    Loaded(T),
    Error(String),
}

#[atom]
fn loadable_user() -> Loadable<User> {
    Loadable::NotRequestedYet
}
Next, define computed state to fire a fetch and update the user atom whenever the request id is set:
#[reaction]
fn username() -> Loadable<User> {
    let app = observe(my_app());
    let loading_user = observe(loadable_user());

    if let Loadable::Request(user_id) = &loading_user {
        loadable_user().update(|u| *u = Loadable::Loading);

        spawn_local({
            let user_id = user_id.clone();
            async move {
                let url = format!("https://jsonplaceholder.typicode.com/users/{}", user_id);

                let response = fetch(&url).await.expect("HTTP request failed");

                let user = response
                    .check_status() // ensure we've got 2xx status
                    .expect("status check failed")
                    .json::<User>()
                    .await
                    .expect("deserialization failed");

                loadable_user().update(|u| *u = Loadable::Loaded(user));
                app.unwrap().update(Msg::NoOp);
            }
        });
    }

    loading_user
}

Finally, a computed view to display the user, loading status, or error:
#[reaction]
fn user_view() -> Node<Msg> {
    match observe(username()){
        Loadable::NotRequestedYet => {
            div!["Not Requested A User Yet"]
        },
        Loadable::Loading => {
            div!["Loading"]
        },
        Loadable::Request(_user_id) => {
            div!["Loading",]
        },
        Loadable::Loaded(user) => {
            div!["User is ", user.name]
        }
        Loadable::Error(err) => {
            div!["There was an error with loading the user: ", err]
        }
    }
}

The user_view will then automatically always be in sync with whatever the loading status of the user is.

div![
    user_view()
]

@rebo
Collaborator Author

rebo commented Jun 29, 2020

It is also possible to have local state work with atoms in a very similar way to the current Seed Hooks. Therefore it is possible to use atom-backed state as re-usable components. All you need to do is use id: Local as an argument to a local atom/reaction.

For instance:

#[atom]
fn counter(id: Local) -> u32 {
    0
}

#[reaction]
fn count_ui(id: Local) -> Node<Msg> {
    let count = observe(counter(id));

    div![
        p!["Counter: ", count],
        fancy_button!["Increment",
            mouse_ev(Ev::Click, move |_| counter(id).update(|c| *c += 1))],
    ]
}

In this example both the atom and the reaction are local because they accept id: Local, which is provided at the call site by Local::new(). The UI snippet is then used like this:

div![
    count_ui(Local::new()),
    count_ui(Local::new()),
    count_ui(Local::new()),
    count_ui(Local::new()),
],

Each counter widget is unique with local state:


You can actually use any identifier as a key to unique state, so you don't have to explicitly name all individual atoms or reactions:

First, define an atom parameterised by an id:
#[derive(Clone)]
struct Widget {
    id: usize,
    count: u32,
}

#[atom]
fn widget(id: usize) -> Widget {
    Widget {
        count: 0,
        id,
    }
}
Next, a reaction that observes the entire collection:
#[reaction]
fn widget_collection() -> Vec<Widget> {
    // an observed collection of 10 widgets.. ids 1 to 10.
    (1..=10).map(|i| observe(widget(i)))
        .collect()
}
UI snippets can then be rendered in response to changes in the collection:
#[reaction]
fn widget_ui() -> Node<Msg> {
    // observe_with, so as not to needlessly clone the widgets Vec
    observe_with(widget_collection(), |widgets| {
        ul![
            widgets.iter().map(|w| {
                let widget_id = w.id;
                li![
                    p!["The widget id is ", w.id, " and the count is ", w.count],
                    fancy_button!["Inc", mouse_ev(Ev::Click, move |_| widget(widget_id).update(|wi| wi.count += 1))]
                ]
            })
        ]
    })
}
The interesting thing is that the UI snippet will re-render and remain in sync on a change to the widget from anywhere in the app:
fn view() -> Node<Msg> {
    div![
        widget_ui(),
        fancy_button!["Increment widget 8 specifically",
            mouse_ev(Ev::Click, move |_| widget(8).update(|w| w.count += 1))],
    ]
}

@rebo
Collaborator Author

rebo commented Jun 29, 2020

In terms of performance, I did a very rough rewrite of the hooks markdown tutorial page using this approach.

In debug mode the hooks markdown editor took 69.5ms on key down and 68.12ms on key up, start to finish (including webpage compositing), causing a performance warning in the Chrome profiler.


In debug mode the atom-backed markdown editor took 42ms on key down and 23.44ms on key up, start to finish (including webpage compositing).


The page itself contains a very large amount of markdown, in the form of the tutorial, that is processed with md! on every keypress (up or down) in the hooks version. This is because the entire page is being re-rendered, including the md! processing. This cost would also be similar under standard Seed because the entire UI tree gets re-rendered every update cycle.

In the atom-backed version, because the markdown tutorial part and the editor part are in separate atoms, a keypress in the editor does not cause the markdown in the tutorial to re-render. The only thing that gets updated is the markdown editor itself, as sketched below.
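
A rough sketch of that split (the atom/reaction names and the tutorial.md path are assumptions; the real rewrite differs):

#[atom]
fn editor_text() -> String {
    "".to_string()
}

// Recomputed only when editor_text changes.
#[reaction]
fn editor_view() -> Node<Msg> {
    let text = observe(editor_text());
    div![
        input![
            attrs![At::Value => text.clone()],
            input_ev(Ev::Input, |new_text| editor_text().set(new_text)),
        ],
        md![&text], // only the editor's own markdown is reprocessed per keypress
    ]
}

// The large tutorial markdown lives behind its own reaction, so keypresses in
// the editor never cause this md! call to run again.
#[reaction]
fn tutorial_view() -> Node<Msg> {
    div![md![include_str!("tutorial.md")]]
}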

As you can see, using atom-backed state and reactively re-rendering only part of the page resulted in an approx. 2x to 3x speedup in a rough-cut version. Further speedups would be possible with a proper rewrite using atoms, plus potential future integration with the diffing algorithm in order to diff only the parts of the virtual DOM you know have changed.

@mankinskin

Hi, I might have a similar problem to the one you are describing, but I am not really sure. In my example, I have a text input element and an expensive SVG. Whenever something is typed into the text input, it sends a message, which should update its state and cause a rerender.

parent
     ├── svg
     └── input * (event)

The problem is that this causes a redraw of parent, which in turn draws both svg and input, even though the message is only sent to input. Since a mutation can only occur upon receiving a message in Seed (unless a component depends on static state), Seed should not redraw components that don't receive a message.

The way this works now (to my understanding) is that after the message is sent by the input node, the Seed runtime sends a message to the root update function, from which it is delegated to its children. The message is wrapped as determined by Node::map_msg or Orders::proxy, which usually looks like this:

// parent
struct Model {
    svg: svg::Model,
    input: input::Model,
}
enum Msg {
    SVG(svg::Msg),
    Input(input::Msg),
}
fn update(msg: Msg, model: &mut Model, orders: &mut impl Orders<Msg>) {
    match msg {
        Msg::SVG(svg_msg) => svg::update(svg_msg, &mut model.svg, &mut orders.proxy(Msg::SVG)),
        Msg::Input(input_msg) => input::update(input_msg, &mut model.input, &mut orders.proxy(Msg::Input)), // wrap messages
    }
}
fn view(model: &Model) -> Node<Msg> {
    div![
        input::view(&model.input).map_msg(Msg::Input),
        svg::view(&model.svg).map_msg(Msg::SVG), // wrap messages
    ]
}
mod svg { ... }
mod input { ... }

Since redraws (of the root node) are triggered after receiving any message, all views are redrawn whenever any child sends a message, even when there was no state change at all.

I am not sure how this compares to the problem you are trying to solve, but I feel like there is a shared problem: unnecessary redraws. However, I don't quite see how something like atoms is needed to fix this problem. They are a nice feature for reducing boilerplate code when you have dependent state, because updates can be triggered automatically. However, the unnecessary redraws would still happen.


I think what we need is a way to programmatically decide if a node should be redrawn or not. @MartinKavik just implemented Node::NoChange, which can be used by the user to skip a node update, but this requires a lot of boilerplate when it should be the default:

fn view(model: &Model) -> Node<Msg> {
    div![
        input::view(&model.input).map_msg(Msg::Input),
        if model.redraw_svg {
            svg::view(&model.svg).map_msg(Msg::SVG)
        } else {
            Node::NoChange
        },
    ]
}

Here the Model requires a redraw_svg: bool field and has to manually manage it in the update function.

Another option would be to store the redraw variable in the component itself, so it can decide whether it should redraw upon receiving a message. But then the redraw variable must be reset to false after every draw, otherwise the component would keep redrawing until it is messaged again. The advantage of having this logic in the parent is that it is recalculated for every message/potential redraw. So basically the component would have to set model.redraw = false after the view method runs, but view can only access the model immutably.

So a solution to this would be for update to return a bool value, which is stored in the virtual DOM and determines whether its view should be called or replaced by Node::NoChange. This value should be false by default. Then parent::update could return true and delegate a message to input::update, which also returns true. In the rendering step Seed would use these values to determine whether it should render the root node or any of its children.
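
A sketch of the shape this could take, building on the parent example above (purely illustrative; this is not Seed's current API):

// Hypothetical: update reports whether its state actually changed.
fn update(msg: Msg, model: &mut Model, orders: &mut impl Orders<Msg>) -> bool {
    match msg {
        // Each child update also returns a bool, so the change flag bubbles up.
        Msg::SVG(svg_msg) => svg::update(svg_msg, &mut model.svg, &mut orders.proxy(Msg::SVG)),
        Msg::Input(input_msg) => input::update(input_msg, &mut model.input, &mut orders.proxy(Msg::Input)),
    }
}

// The runtime would then call view only for subtrees whose update returned
// true, and substitute Node::NoChange everywhere else.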

The problem here is that Seed does not know about a component's children until they are already rendered. Currently, there is no way to tell Seed "I am a component and I have these children; call these methods to use them", which would be needed for Seed to intercept a view call. I will try to implement something like this in my components crate.

@MartinKavik
Member

MartinKavik commented Sep 25, 2020

@mankinskin
We plan to focus on performance once #525 and #537 are resolved and ideally Seed Hooks are integrated into Seed.

Then, view functions would look like:

#[view]
fn view(...) -> Node<Msg> {
    div![
        sub_view(),
        another_view(),
    ]
}

#[view]
fn sub_view(...) -> Node<Msg> {
    div![...]
}

#[view]
fn another_view(...) -> Vec<Node<Msg>> {
    vec![]
}

The #[view] macro effectively turns view functions into components that can have their own local state. It will allow us to do many interesting things:

  • We can define local state directly for each view function to create "real" components with their own Models. It will allow us to create proper Seed component libraries because it would eliminate the boilerplate caused by TEA components.

  • We can leverage Node::NoChange to improve render performance by eliminating unnecessary rerenders. We can explicitly or even implicitly save the view function's arguments and compare their values with the arguments passed during the next render. We can even read the arguments in their binary form and hash them so we don't have to implement Hash for them. (A rough sketch of this idea follows after this list.)

  • It would play nicely with reactive state (as described in this issue) to eliminate time-consuming operations in view functions.
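
A rough sketch of the argument-comparison idea for a single view function (hand-written here; the real macro would generate something per call site, and might hash raw bytes instead of using PartialEq):

use std::cell::RefCell;

fn sub_view(title: &str, count: u32) -> Node<Msg> {
    // A real implementation would key this cache per component instance,
    // not per function, and would also cache the previously rendered Node.
    thread_local! {
        static PREV_ARGS: RefCell<Option<(String, u32)>> = RefCell::new(None);
    }

    let unchanged = PREV_ARGS.with(|prev| {
        let mut prev = prev.borrow_mut();
        let current = (title.to_owned(), count);
        if prev.as_ref() == Some(&current) {
            true
        } else {
            *prev = Some(current);
            false
        }
    });

    if unchanged {
        // Tell the diffing algorithm this subtree is untouched.
        return Node::NoChange;
    }

    div![title, ": ", count.to_string()]
}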

So... there are many ways to improve the speed, however we have to create the foundation for them first. Hope it makes sense.

@arn-the-long-beard
Member

Hey @rebo and @MartinKavik

What is the status of seed_hooks?

I see we have https://github.com/seed-rs/styles_hooks updated recently, so I guess this is our official repo for it now, isn't it? 😄

@MartinKavik
Member

What is the status of seed_hooks?

I see we have https://github.com/seed-rs/styles_hooks updated recently, so I guess this is our official repo for it now, isn't it? 😄

Yeah, I created it when rebo was too busy to work on it himself, to unify our efforts into one repo and to simplify development.
I use it in an app for my client, so I want to and have to maintain it.

@arn-the-long-beard
Member

Okay, then let's use this one 👍
