Potential bug found during reactive library benchmark stress tests #3926
I think it doesn't hang, it's just slow. I limited the number of iterations to measure the rate: at that pace, 7000 iterations × 5 repetitions would take ~2.62h to complete.
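For a sense of scale, that estimate implies a huge per-iteration cost. A quick back-of-the-envelope check (assuming all 5 repetitions run the full 7000 iterations):

```typescript
// Rough per-iteration cost implied by the ~2.62h estimate.
const iterations = 7000;
const repetitions = 5;
const totalHours = 2.62;

const totalSeconds = totalHours * 3600; // 9432 s
const perIterationMs =
  (totalSeconds / (iterations * repetitions)) * 1000;

// Roughly 270 ms per iteration, enormous for what should be a
// handful of signal writes and computed reads.
console.log(perIterationMs.toFixed(1)); // ~269.5
```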
The reason why it's so slow is that unobserved computeds are suspended: when no observer depends on a computed, its value is not cached, so every read re-evaluates it from scratch. You either have to make sure that the accessed leaf computeds stay observed (some effect depends on them) between individual reads, or the reads must be part of the same batch (transaction/action). The latter has a similar effect, because computeds also stay "hot" for the duration of a batch, just in case they would be accessed multiple times during the batch, which is exactly the situation here.

E.g. if you change it like this, you will immediately see a drastic improvement:

```ts
export function runGraph(
  graph: Graph,
  iterations: number,
  readFraction: number,
  framework: ReactiveFramework
): number {
  const rand = pseudoRandom();
  const { sources, layers } = graph;
  const leaves = layers[layers.length - 1];
  const skipCount = Math.round(leaves.length * (1 - readFraction));
  const readLeaves = removeElems(leaves, skipCount, rand);
  let sum = 0;
  framework.withBatch(() => {
    let start = Date.now();
    for (let i = 0; i < iterations; i++) {
      if (i % 100 === 0) {
        console.log("iteration: " + i, "delta: " + (Date.now() - start));
        start = Date.now();
      }
      const sourceDex = i % sources.length;
      sources[sourceDex].write(i + sourceDex);
      for (const leaf of readLeaves) {
        leaf.read();
      }
    }
    sum = readLeaves.reduce((total, leaf) => leaf.read() + total, 0);
  });
  return sum;
}
```

Also you can change the adapter like so:

```ts
import { computed, observable, autorun, runInAction } from "mobx";
import { ReactiveFramework } from "../util/reactiveFramework";

export const mobxFramework: ReactiveFramework = {
  name: "MobX",
  signal(initial) {
    const s = observable.box(initial, { deep: false });
    return {
      read: () => s.get(),
      write: (x) => s.set(x), // <-- no action here
    };
  },
  computed: (fn) => {
    const read = computed(fn);
    return {
      read: () => read.get(),
    };
  },
  effect: (fn) => autorun(fn),
  withBatch: (fn) => runInAction(fn), // <-- more realistic
  withBuild: (fn) => fn(),
};
```

It still fails on an assertion, which I did not dig into.
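The caching behavior described above can be illustrated with a toy model (a simplified sketch, not MobX's actual implementation; all names here are hypothetical): a computed caches its value only while at least one observer is subscribed, and an unobserved ("suspended") read falls back to recomputing every time.

```typescript
// Toy model of "suspended" computeds: the cached value is only kept
// while someone observes the computed. Hypothetical names, not MobX
// internals.
class ToyComputed<T> {
  private cache: T | undefined;
  private observers = 0;
  public recomputeCount = 0;

  constructor(private readonly fn: () => T) {}

  read(): T {
    if (this.observers > 0) {
      // Observed: compute once, then serve from cache.
      if (this.cache === undefined) {
        this.cache = this.fn();
        this.recomputeCount++;
      }
      return this.cache;
    }
    // Unobserved ("suspended"): recompute on every read.
    this.recomputeCount++;
    return this.fn();
  }

  observe(): () => void {
    this.observers++;
    return () => {
      this.observers--;
      if (this.observers === 0) this.cache = undefined; // suspend again
    };
  }
}

const c = new ToyComputed(() => 40 + 2);

// Unobserved: every read recomputes.
c.read();
c.read();
console.log(c.recomputeCount); // 2

// Observed: repeated reads are served from cache.
const dispose = c.observe();
c.read();
c.read();
c.read();
console.log(c.recomputeCount); // 3
dispose();
```

In the benchmark's hot loop, every unbatched, unobserved leaf read behaves like the first branch here, re-walking the entire computed chain each time.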
You should consider creating a new graph for these runs, otherwise they may affect each other.
Thanks so much for the prompt help here, @urugator 🙏 I can confirm that applying your suggested changes results in the test suite running with times much more in line with what I'm seeing from other frameworks. https://github.com/transitive-bullshit/js-reactivity-benchmark/tree/feature/minimal-issues-repro-mobx 🎉 🎉 🎉 I wonder, though, if this makes sense as the expected behavior?
Would you be able to explain the reasoning behind this design decision in MobX? It seems like it'd be pretty core and difficult to change; I just want to better understand the reasoning as I continue my deep-dive into the various tradeoffs made by different reactive libs across the TS landscape. This may be related to pmndrs/valtio#949 and/or vuejs/core#11928. Note that at least a dozen other similar reactive libs tested in the benchmark don't seem to have this perf footgun (see the non-minimal base benchmark branch for examples).
Agreed. In general, I think there's a lot we can and should do to improve isolation of the benchmark runs. Currently, invoking the GC in between runs isn't a great solution, but I've only been working on this fork of https://github.com/milomg/js-reactivity-benchmark for a few days so far.
I've updated the benchmark for now with a middle-ground that batches the writes and the reads separately:

```ts
export function runGraph(
  graph: Graph,
  iterations: number,
  readFraction: number,
  framework: ReactiveFramework
): number {
  const rand = pseudoRandom();
  const { sources, layers } = graph;
  const leaves = layers[layers.length - 1];
  const skipCount = Math.round(leaves.length * (1 - readFraction));
  const readLeaves = removeElems(leaves, skipCount, rand);
  const start = Date.now();
  let sum = 0;

  for (let i = 0; i < iterations; i++) {
    // Useful for debugging edge cases for some frameworks that experience
    // dramatic slowdowns for certain test configurations. These are generally
    // due to `computed` values not being cached efficiently; as the number
    // of layers increases, the uncached `computed` values are re-evaluated in
    // an `O(n^2)` manner, where `n` is the number of layers.
    if (i % 100 === 0) {
      console.log("iteration:", i, "delta:", Date.now() - start);
    }

    framework.withBatch(() => {
      const sourceDex = i % sources.length;
      sources[sourceDex].write(i + sourceDex);
    });

    framework.withBatch(() => {
      for (const leaf of readLeaves) {
        leaf.read();
      }
    });
  }

  framework.withBatch(() => {
    sum = readLeaves.reduce((total, leaf) => leaf.read() + total, 0);
  });

  return sum;
}
```

Alternatively, if we kept the original approach of wrapping everything in one batch: does MobX have a similar way to create a tracked scope which would ensure that all sub-computations and effects are both cached and disposed of properly when that scope expires? Maybe we just want to wrap the entire content of the benchmark in a tracked scope?
Every real world pattern containing observables must also contain observers. Observer in this case means a side effect that should run whenever state changes.

A batch is a series of individual mutations that looks like a single mutation to a consumer; any "subsequent read" is not a consumer. Batching has no effect on subsequent reads, because they're, well, "subsequent": they can never see the intermediate mutations that happened before. So if you talk about batching, you again need observers.

```ts
framework.withBatch(() => {
  const sourceDex = i % sources.length;
  sources[sourceDex].write(i + sourceDex);
});
```

Here the batch is completely useless, because it wraps only a single mutation and there are no observers whose notifications could be coalesced. There could however be implementations where batch has an effect here.
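The point that batches only matter to observers can be sketched with a minimal notifier (a hypothetical toy, not MobX's implementation): writes inside a batch are coalesced into a single observer notification, while a plain read after the batch sees the same final value either way.

```typescript
// Minimal batching sketch: observers are notified once per batch
// instead of once per write. Subsequent reads are unaffected either
// way, since they always see the final value. Hypothetical names.
class ToySignal<T> {
  private listeners: Array<() => void> = [];
  private static batchDepth = 0;
  private static pending = new Set<() => void>();

  constructor(private value: T) {}

  read(): T {
    return this.value;
  }

  write(next: T): void {
    this.value = next;
    for (const l of this.listeners) {
      if (ToySignal.batchDepth > 0) ToySignal.pending.add(l);
      else l();
    }
  }

  subscribe(listener: () => void): void {
    this.listeners.push(listener);
  }

  static batch(fn: () => void): void {
    ToySignal.batchDepth++;
    try {
      fn();
    } finally {
      ToySignal.batchDepth--;
      if (ToySignal.batchDepth === 0) {
        const pending = [...ToySignal.pending];
        ToySignal.pending.clear();
        for (const l of pending) l();
      }
    }
  }
}

const s = new ToySignal(0);
let notifications = 0;
s.subscribe(() => notifications++);

// Three writes in one batch: the observer fires once, not three times.
ToySignal.batch(() => {
  s.write(1);
  s.write(2);
  s.write(3);
});
console.log(notifications); // 1
console.log(s.read());      // 3

// A batch around a single write changes nothing for a later read.
ToySignal.batch(() => s.write(4));
console.log(s.read()); // 4
```

With no observer subscribed, the batch in the last line would be pure overhead, which is exactly the situation in the benchmark snippet above.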
Closing as answered.
I created a comparison of the most popular TS reactivity libs, which of course includes MobX.
During my deep-dive, I wanted to benchmark all of them and created a fork of @milomg's excellent js-reactivity-benchmark, which includes dozens of benchmarks and stress tests.
The problem is that MobX fails to complete some of the dynamic benchmark tests in the suite.
Intended outcome: MobX should be able to run these benchmark tests just like all of the other reactive libs.
Actual outcome: MobX runs most of the benchmark tests fine, but consistently hangs on several of the larger dynamic tests.
I've been unable to track down the source of the issue, whether it's due to a rare bug in MobX or to a bug in the MobX adapter used by the benchmark.
How to reproduce the issue:

1. Check out the `feature/minimal-issues-repro-mobx` branch
2. `pnpm install`
3. `pnpm test` to make sure everything's working locally
4. `pnpm bench` to reproduce the issue

The mobx benchmark tests work fine for the first two test cases but then hang on the third (slightly larger) test case. Note that these test cases create a rectangular grid of shallow signals, where each layer of the grid contains `computed` properties depending on some number of the previous row's signals as a stress test. I've pared down this branch as much as possible to try and make it easier to debug.
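The shape of that grid can be sketched with plain closures standing in for signals and computeds (an illustrative toy with hypothetical names, using a small 4×3 grid rather than the benchmark's real parameters):

```typescript
// Toy sketch of the benchmark's graph shape: a row of sources, then
// layers of computed nodes, each depending on nodes in the row above.
// Plain closures stand in for real reactive signals/computeds.
type Readable<T> = { read: () => T };

function buildGrid(width: number, layerCount: number) {
  const sources = Array.from({ length: width }, (_, i) => {
    let value = i;
    return {
      read: () => value,
      write: (x: number) => { value = x; },
    };
  });

  const layers: Readable<number>[][] = [];
  let prev: Readable<number>[] = sources;
  for (let l = 0; l < layerCount; l++) {
    const row = prev; // capture for the closures below
    const layer = row.map((_, i) => ({
      // Each node sums two nodes from the row above (wrapping around).
      read: () => row[i].read() + row[(i + 1) % width].read(),
    }));
    layers.push(layer);
    prev = layer;
  }
  return { sources, layers };
}

const { sources, layers } = buildGrid(4, 3);
const leaves = layers[layers.length - 1];

// With no caching anywhere, each leaf read re-walks the whole depth,
// so repeated leaf reads redo all of the upstream work.
console.log(leaves.map((leaf) => leaf.read())); // [ 12, 16, 12, 8 ]
```

This also shows why uncached computeds blow up as layers are added: each leaf read fans out through every layer above it, so the work per read grows with depth instead of being amortized by a cache.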
Note that in the full benchmark branch, all of the other reactivity libs are able to pass these benchmark tests.
Versions