
Adding a "do not re-track" watcher option #197

Open
brandon942 opened this issue Aug 4, 2020 · 20 comments

@brandon942

brandon942 commented Aug 4, 2020

Here's the issue I have with watchers.

  • Dependency tracking and dependency-list re-creation happen on every single triggering of the watch function, even in watchers with an unchanging set of reactive properties (e.g. functions with 1 branch path, or watchers of 1 reactive property). Watchers need a "fixed" or "do not re-track" option, because in half of all cases they do unnecessary work.
  • Watchers are hidden. The watch() function might as well return the watcher instead of a wrapper function around watcher.teardown(). In one use case I am changing the list of properties accessed in the watcher function and then re-running the watcher via watcher.run() to re-track. Such use cases are rare, but they do exist.
@posva
Member

posva commented Aug 4, 2020

Give https://v3.vuejs.org/api/refs-api.html#customref a look. It's not really clear what's wrong without a boiled-down example of what you are trying to do. You might want to provide one.

@brandon942
Author

@posva It's certainly useful. But it's about defining a reactive ref that can trigger itself whenever it wants (as opposed to only on assignment). It's not about watchers that subscribe to those trigger events.

@jods4

jods4 commented Aug 5, 2020

dependency tracking and dependency list re-creation on every single triggering of the watch function even in watchers with an unchanging set of reactive properties (eg. functions with 1 branch path or watchers of 1 reactive property). Watchers need a "fixed" or "do not re-track" option because in half of all cases they do unnecessary work

There is room for optimization in the current implementation. Most effects have constant dependencies, and the cost of tracking them can be brought down. I've been meaning to tackle this for quite some time, but I'm overwhelmed by work.

Beyond that, I think a benchmark would be required to demonstrate that tracking costs significantly trump the actual work before introducing a "constant dependencies" flag, because it is a dangerous one. It can lead to situations that are not easy to debug:

  • It can lead to missed dependencies, so things don't update as expected anymore.
  • It could keep alive dependencies that should be garbage collected. EDIT: not really.

watchers are hidden. The watch() function might as well return the watcher instead of a wrapper function around watcher.teardown(). In one use case I am changing a list of properties accessed in the watcher function and then I re run the watcher to re-track watcher.run(). Such use cases are rare but they do exist.

I don't quite understand this one... If you make the list of properties reactive, wouldn't your watcher re-run automatically any time you change it?

@brandon942
Author

Make the list of properties reactive and your watcher would re-run automatically

You're right, I could make the list reactive, but it's part of a plugin and I don't feel like adding overhead when I can just use the watcher directly. Watchers are stored on the instance as _watchers, so there are still ways for plugins to obtain them. As mentioned, it's a rare use case.

It can lead to missed dependencies, so things don't update as expected anymore.

If it can, then we're dealing with a variable set of dependencies. If the dependencies of a watcher function are not a fixed set, then you don't declare them as such. For example, the watcher knows for a fact that a string name of a data property (or getter, or prop) without a "." is a fixed dependency. In all other cases the watcher can be told by the user.
There are no issues with GC.

I've intended to tackle this for quite some time but I'm overwhelmed by work

An efficient implementation basically amounts to 1 flag and 1 single check in watcher.get(). But feel free to ask for help.
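A hypothetical sketch of that idea in plain JS (the Watcher class, the noRetrack flag name, and the commented-out pushTarget/popTarget steps are illustrative only, not the actual vuejs/vue source):

```javascript
// Hypothetical, heavily simplified watcher illustrating a "do not re-track"
// flag: dependencies are collected on the first run only, then frozen.
class Watcher {
  constructor(getter, { noRetrack = false } = {}) {
    this.getter = getter;
    this.noRetrack = noRetrack;
    this.tracked = false; // has a tracking pass happened yet?
  }
  get() {
    // The "1 flag and 1 check": skip dependency collection after the first run.
    const shouldTrack = !this.noRetrack || !this.tracked;
    if (shouldTrack) {
      // pushTarget(this); // real impl: start collecting deps here
      this.tracked = true;
    }
    const value = this.getter();
    // if (shouldTrack) { popTarget(); this.cleanupDeps(); }
    return value;
  }
}

let evaluations = 0;
const w = new Watcher(() => ++evaluations, { noRetrack: true });
w.get(); // first run: tracks dependencies
w.get(); // later runs: evaluation only, no re-tracking
console.log(evaluations); // 2
```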

@jods4

jods4 commented Aug 5, 2020

but it's part of a plugin and I don't feel like adding any overhead if I can just use the watcher directly [...] As mentioned, it's a rare use case.

It's not a rare use case then, it's hacking around the built-in way to fit arbitrary constraints.

As a plugin author, if you don't want to use reactivity in the intended way, then it's up to you to create alternative designs. And Vue is flexible enough to provide you with some alternatives.
For example, I would suggest that you use a signal pattern (to be clear: a single dependency that you manually trigger whenever you need to run stuff again, e.g. whenever an API that changes the list of properties is called). Having a single dependency is very efficient.

The signal pattern pretty much amounts to manual change tracking, which is more or less what you want to do by manipulating watchers directly or triggering runs manually.
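A minimal sketch of that signal pattern in plain JS (no Vue here; createSignal, subscribe, and bump are made-up names for illustration. In Vue you would read a counter ref inside the effect and increment it to trigger):

```javascript
// A "signal": one dependency that subscribers re-run on whenever it is bumped.
function createSignal() {
  const subscribers = new Set();
  let version = 0;
  return {
    // Registering an effect stands in for reading the signal inside an effect.
    subscribe(effect) { subscribers.add(effect); },
    // Bumping the signal triggers every subscriber, like a manual `trigger`.
    bump() { version++; subscribers.forEach(fn => fn(version)); },
  };
}

// Usage: the plugin bumps the signal whenever its property list changes,
// instead of making the list itself deeply reactive.
const listChanged = createSignal();
let runs = 0;
listChanged.subscribe(() => { runs++; });
listChanged.bump(); // the plugin API mutated the property list
listChanged.bump();
console.log(runs); // 2
```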

If it can, then we're dealing with a variable set of dependencies. If the dependencies of a watcher function are not a fixed set, then you don't declare them as such.

You have to account for the fact that people sometimes make mistakes, that some projects are very large, have many people working on them, and evolve for years.

Consider that the flag can be set correctly, but years later another dev introduces a new ref in the codebase and assumes everything will update properly, only to be surprised when it doesn't.

Vue effects automatically watch anything reactive.
Having a flag that says "freeze dependencies" is possible, but it's also a subtle API because you tread into manual dependency management. The flag can be set erroneously, and that can lead to surprising bugs. It's definitely not a trivial, intuitive API.

An efficient implementation basically amounts to 1 flag and 1 single check in watcher.get(). But feel free to ask for help.

Well, if you beat me to a PR, go for it. I'm not having much time on my hands right now.

1 flag and 1 single check is not "efficient", it's minimal and doesn't support the current feature set.

I'm curious, would you share some real use cases that exhibit a bottleneck in reactivity tracking?
If you have a benchmark that would be helpful, as I could measure how much the changes I have in mind improve the performance.

@brandon942
Author

If you have a benchmark that would be helpful

sure, bench, my diff

@jods4

jods4 commented Aug 6, 2020

@brandon942
That's about as micro as micro-benchmarking gets.
You're comparing doing change tracking against doing nothing in a tight loop (in the 2nd case there is neither dependency tracking nor a callback).

And I would say that it shows change tracking comparing rather nicely to a tight loop, as it adds roughly the cost of an empty loop on my machine.
In fact, based on your benchmark tracking a single dependency adds 0.0003ms on my machine per watch evaluation.
What's your use case where this is a problem? Do you have a real (non-micro) benchmark where this shows up?

That said, I believe a more comprehensive benchmark with more than a single dependency would be interesting (e.g. going through a reactive array of 300 elements).

More remarks:

I tried adding a side-effect to your benchmark, just to see if everything was working correctly. Surprisingly, only the last watch seems to work, check this:
https://codesandbox.io/s/quizzical-dewdney-ipfg6
All watches are supposed to increment variable y but only the last one actually does.

Your assumption that watching a field without a dot can be treated as fixed is flawed if said field is a getter property (or maybe not? I'm not sure how the Options API's $watch works; I'm only using the new Composition API).

@RobbinBaauw

RobbinBaauw commented Aug 6, 2020

I would like to add that this seems to be Vue 2: his fork is forked from vuejs/vue.

@brandon942
Author

brandon942 commented Aug 6, 2020

@jods4 The 2nd argument of $watch() is the callback function. It is called each time the return value of the 1st (watched) function changes. Since the 1st function always returns undefined, the callback is never invoked, which is why we can pass null. The Composition API's watchEffect is basically only the first function, so you put your effect there.

The loop in the benchmark has to be tight because we want to measure the tracking time difference. We're tracking only 1 dependency there, so the difference shouldn't be too great. If we track more dependencies, the time difference naturally increases in absolute terms, but not in relative terms, because the getter functions of all the reactive properties that are accessed add even more time, which reduces the proportion of what we want to measure in the overall measured time.

This is purely an efficiency optimization that, where used, makes watchers about 2-3 times faster, which is not bad.
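A toy stand-in for $watch that shows why null works as the callback (hypothetical and heavily simplified; the real watcher also does dependency tracking, which is omitted here):

```javascript
// Toy watcher: runs the getter, and invokes the callback only when the
// getter's return value changes (like $watch's 2nd argument).
function watch(getter, callback) {
  let oldValue = getter();  // initial run, where tracking would happen
  return function rerun() { // stand-in for a dependency triggering the watcher
    const newValue = getter();
    if (newValue !== oldValue && callback) callback(newValue, oldValue);
    oldValue = newValue;
  };
}

let sideEffects = 0;
// The effect lives in the getter itself; it always returns undefined,
// so no callback would ever fire, and we can safely pass null.
const rerun = watch(() => { sideEffects++; }, null);
rerun();
rerun();
console.log(sideEffects); // 3 (one initial run + two re-runs)
```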

Your assumption that watching a field without dot can be fixed is flawed if said field is a getter property

Yes you're right about that, the data object can have getters so we can't assume it to be a single dependency.

@RobbinBaauw I don't even know where to find the Vue3 source, but I'm guessing the core is pretty much the same.

@RobbinBaauw

@brandon942 Vue3 can be found at the vuejs/vue-next repo, but everything has been completely rewritten. The core is very much not the same.

@brandon942
Author

@RobbinBaauw I've taken a look. Overall, Vue3 seems to be slower than Vue2, and my no-retrack implementation doesn't save as much in comparison. I kind of expected this after seeing how proxy-heavy it is. Even a Vue component instance itself is a proxy, which bumps up property access times.
My bigger issue is with refs, though. They should be as lightweight as possible. I have proposed that they be simple functions. Access times are 100x faster, believe it or not, and they allocate way less memory (heap size: 95mb for a million refs vs 255mb). And if you want, you can still plug in a property with Object.defineProperty(ref, "v", {get: ref, set: ref}) at any time and write ref.v++. You can call it whatever you like, without being forced to type .value each time, which is a neat thing in JavaScript.
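A sketch of what such a function-based ref could look like (illustrative only: fnRef is a made-up name, and the track/trigger calls that would make it actually reactive are omitted):

```javascript
// A ref as a plain function: call with no args to read, with one arg to write.
function fnRef(initial) {
  let value = initial; // stays private in the closure
  const ref = function (newValue) {
    if (arguments.length === 0) return value; // ref()  -> read  (track here)
    value = newValue;                         // ref(x) -> write (trigger here)
    return value;
  };
  // Optional property-style access, as suggested above:
  Object.defineProperty(ref, "v", { get: ref, set: ref });
  return ref;
}

const count = fnRef(0);
count(5);             // write via call
count.v++;            // read + write via the plugged-in property
console.log(count()); // 6
```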

@RobbinBaauw

I tested your ref idea, and it is indeed a lot faster. I implemented it in Vue itself (RobbinBaauw/vue-next@b66e6f1), and made a small benchmark. It shows that this method is quite a bit faster (5x for me) and still allows for backwards compatibility with the .value. Note it's only a small, simple test which does not represent a specific real-world scenario. I would also like to note that constructing these patched refs is quite a bit slower, which can be seen when you create the refs in the loop. @jods4 what do you think?

About the component proxy, I also implemented a small working POC of this a while ago (RobbinBaauw/vue-next@373decb). The downside of removing the component proxy is that it's used heavily when rendering and actually getting rid of it properly would be pretty hard (I think). I have not done any performance tests with this yet. Note that this also contains code which was used to test something which was eventually implemented in vuejs/core#1682.

I also do believe that the watchers with constant dependencies can be faster and I made a small benchmark to test whether a certain implementation is faster (https://github.com/RobbinBaauw/watch-benchmark).

@jods4

jods4 commented Aug 10, 2020

@brandon942
This is micro-benchmarking to the extreme: empty trigger and track, just plain function calls that return unused values...
Micro-benchmark results are often wildly influenced by the code around them. One example: something very small may get inlined by the JIT, but then not inlined when put into its real (larger) context.
Also, proper benchmarking should include several runs and a standard deviation, to be able to assess whether the results mean anything, especially with very short code.

So, I put back the real trigger/track in the implementation, I did both reads and writes and I plugged all this into a proper benchmarking lib:
https://jsbench.me/gnkdnq9icw/4

Outcome is (using Chromium 86 here):

  • class implementation is fastest;
  • function calls are ~12% slower;
  • object based implementations perform ~93% slower.

@yyx990803 Is it worth considering a class implementation of refs? It's really not a big difference code-wise, but it might be 10x faster than the current implementation.
I did not measure memory, but given there's a single object with a single property (value and isRef are on the prototype) and no closure capture, it should be better than the current implementation.

It might be worth checking the results in other browsers (although Chromium is dominant on desktop).
I was surprised by how much faster classes are than plain objects, so if this benchmark is not flawed (counting on you guys to check whether I did something stupid), maybe it's worth trying classes instead of object literals in other places: component instances, proxy handlers, etc.

@brandon942
As far as style goes, I've used Knockout for years. It worked in IE6 using functions (exactly like yours, down to the implementation). And it was a pain: it's a lot of noise in the code, and there's nothing intuitive about calling a function to set a value.
So much so that when properties finally became available in IE8, a plugin was quickly released to hide the function calls behind properties, and it was much more usable.

The beauty of Vue is that if you want to go this way you could do it by building your own ref from trigger and track that are exported from @vue/reactivity; but I wouldn't dare make it the default option.

Letting people choose their own abbreviation (like adding a v property to a function) is IMHO a false good idea. There's a lot of value in having the whole community use the same core APIs and conventions. It's bad if, when helping someone on Discord or StackOverflow, you need to start by wondering whether the code should call functions or access a v or val or value or wert property.

@jods4

jods4 commented Aug 10, 2020

I would also like to note that constructing these patched refs is quite a bit slower, which can be seen when you create the refs in the loop. @jods4 what do you think?

That is an interesting point, given the big focus on startup time in Vue 3. When Evan says proxies are faster, he means at construction (they're slower at access, AFAIK), and the charts he shows depict an application starting up.

Good thing that this doesn't seem to be the fastest option anyway: on my computer, the benchmark I posted in my previous comment is faster with class, which should be even faster at construction time as well.

@RobbinBaauw

I've tested your bench with FF79. Besides the unavailability of private fields, I got the following results:

  • Vue (plain object) was fastest
  • Function call was 35% slower
  • Class was 50% slower

Note that the Vue (plain object) was faster on FF than it was on Chrome.

However, I think putting these functions on the prototype does make a lot of sense. I can imagine that the JIT has to optimize every single ref now, whereas the class version only requires a single function to be optimized. The benchmarks do not test this (I will do that in a minute), but this may also be a factor to consider (it could make this a good option even in FF).

@jods4

jods4 commented Aug 10, 2020

Strange, I get very different results on my machine w/ FF 79.

  • Class is fastest with less variance
  • defineProperty is almost as fast as class, with more variance
  • Pure function 3.5% slower
  • Function property 6.33% slower
  • Vue ref (plain object) 18.4% slower

Private fields are not supported but in Chrome they were slower than plain class anyway (+ they don't transpile well so not a great option).

Note to anyone willing to try on their machine: use this version for FF, with private fields removed (you'd get a type error with previous version)
https://jsbench.me/gnkdnq9icw/5

@brandon942
Author

@RobbinBaauw @jods4 Yeah, I was wondering about that. Classes don't seem to be used a lot. Class instance creation is way faster, and their memory footprint is also a lot smaller because of the prototyping. The only drawback in the ref example is that you can't choose your property name; as you know, I really dislike the verbosity of ".value". One could, however, add a 2-character alias almost without a memory penalty.

Another difference is that it exposes the ref's value, since it is no longer stored in a closure scope. So there are 2 ways to access the value: ref.val non-reactively and ref.value reactively. Maybe it's a feature. Or maybe that's a reason why closure scopes that can hold private data are preferred.

Value access via a function ref() performs about equally. The statement ref.value++ appears to be optimized and does perform a bit faster than calling a function twice. What I take from the benchmarks is that Chrome's V8 optimizes classes better.

you could do it by building your own ref from trigger and track that are exported from @vue/reactivity

That'd need to be supported by the framework first. Those are internal exports. You wouldn't know, without looking into the source code, that for something to be seen as a ref it needs to have __v_isRef.

@jods4

jods4 commented Aug 10, 2020

The only drawback in the ref example is you can't choose your property name. As you know, I really dislike the verbosity of ".value".

I wouldn't encourage it but you can add v to the prototype if you really want to.

Another difference is it exposes the ref value since it is no longer stored in the scope.

That's JS life, nothing new here. There are plenty of Vue internals that you can grab if you inspect the real shape of objects.
Obviously that field would be called something like _v, and like every undocumented feature that starts with an underscore, you'd be ill-advised to depend on it.

@jods4

jods4 commented Aug 12, 2020

@RobbinBaauw and I have benchmarked some more and it's really interesting.
On top of the nice improvement of ref, Robbin created a benchmark that shows that using a class instance as the reactive handler makes reactive objects much faster. Benchmark:
https://jsbench.me/hikdrnzgsh
On my machine: 25% faster in Chrome, 43% faster in Firefox!!

I have a theory that explains this.

Classes are nothing special; they are just syntactic sugar for setting up prototypes.
Here's a new version of the ref benchmark. For the 5th case I used a prototype (through Object.create), and it has exactly the same speed as the class, which should be expected because, again, class is just nice syntax for objects that have prototypes.
https://jsbench.me/gnkdnq9icw/6
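To illustrate the equivalence (illustrative names, not the benchmark code itself): both versions keep get on a shared prototype rather than on each instance.

```javascript
// Class syntax...
class RefClass {
  constructor(v) { this._v = v; }
  get() { return this._v; }
}

// ...versus the same shape built manually with Object.create:
const refProto = {
  get() { return this._v; },
};
function makeRef(v) {
  const r = Object.create(refProto); // same layout as `new RefClass(v)`
  r._v = v;
  return r;
}

const a = new RefClass(1);
const b = makeRef(1);
// In both cases `get` lives on the prototype, not on the instance:
console.log(Object.prototype.hasOwnProperty.call(a, "get")); // false
console.log(Object.prototype.hasOwnProperty.call(b, "get")); // false
console.log(a.get() === b.get()); // true
```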

Here's my theory:
You all know about hidden classes and monomorphism in JS JIT engines.
Blocks of code are speculatively optimized for constant object shapes, and there's only one quick guard at the beginning of each block that verifies that the objects indeed have the expected shapes (it bails out if they don't).

The key observation here is that an object hidden class says what its prototype is, what fields it has, even the type of those fields... but not their actual value.

If your object is an instance with a function (or getter) get(), such as this one:

```js
{
  get() { return _v }
}
```

Then the JIT will optimize for objects that have a get field, and maybe even assume that its value is of type function.
But unless the object is frozen, it can't assume the field's value is a constant, so the JIT'ed code still has to read get on the object and perform a dynamic call.
A dynamic call prevents many optimizations, starting with inlining, which is key to even more optimizations in the context of the inlined code.

On the other hand, if you put get() in the prototype:

```js
{
  __proto__: {
    get() { return _v }
  }
}
```

Then the JIT will optimize for objects that have this specific prototype and no own fields.
And this is a key difference, because now the JIT can assume that calls to instance.get() are calls to one specific, known prototype method (since there is no get field in the hidden class).
That makes it a static call that can even be inlined and aggressively optimized.
Hence the perf boost, especially for small functions.

Conclusion: in monomorphic code, it's better to put constant functions into prototypes rather than in instances.

An added benefit is that prototypes reduce memory usage. Every object instance has a prototype.
If you have N instances, putting a function in a field on each instance results in O(N) memory usage.
Putting the function on a (singleton) prototype results in O(1) memory usage.

@RobbinBaauw

RobbinBaauw commented Aug 13, 2020

Thanks for the explanation! I would like to add that this benchmark for reactive is wrong: it only passes a few of the tests. Subsequent benchmarks with proxy handlers as classes that do pass all the tests don't show better performance.

I will try some other techniques to find optimizations in the proxy handlers but I'm not sure if there are any, especially since the functions are quite heavy themselves and proxies are notoriously hard to optimize.
