Async #30
For additional color here about bubble-reactivity:
We discussed this today and agreed to not include it in the initial thing which we will all prototype. There are too many open questions, and it feels too early to try to answer them all. Same goes for forking/transactions, which are similarly important.
I know it has already been decided to exclude this from an initial version of the proposal, but I think it would be immensely valuable to have something forward-compatible with an async mechanism, because:
While there is a case to be made that app developers can solve this by manually implementing "loading" signals/states or carefully interleaving any async setup work, it can be much more straightforward to wait for the code/data to be fetched. In many cases, the user would expect to wait a brief while for an operation to finish, and it's still possible to add loading states to an async signal network. I maintain a library named
After spending a lot of time looking at options, I ended up writing a small mechanism that propagates signals across a DAG: https://github.com/cubing/cubing.js/blob/cdfab5cb35c3741ef80e0680d9b72c69263205d3/src/cubing/twisty/model/props/TwistyProp.ts
I know it can't be a universal solution, but it has done a great job of being performant, avoiding synchronization bugs, and getting most of the benefits of async code while supporting
There is a global
Each node has:
When a node is updated:
When someone calls
To receive updates of the freshest value for a given
There are some details around mutability rules and deduplication optimizations I've skipped over, but I think this captures enough core details.
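For readers who want a more concrete picture, here is a minimal sketch of the kind of node described above (my own reconstruction from this summary, not the actual TwistyProp code; the class and member names are illustrative):

```ts
// Sketch only: an async DAG node that caches a promise of its value, remembers which
// "generation" that promise was computed for, and marks itself and its dependents stale
// when an input changes. Freshness checks on resolution and deduplication are omitted.
abstract class AsyncNode<T> {
  #generation = 0;               // bumped whenever an input of this node changes
  #cachedFor = -1;               // generation the cached promise belongs to
  #cached: Promise<T> | undefined;
  readonly #dependents = new Set<AsyncNode<unknown>>();

  protected addDependent(node: AsyncNode<unknown>): void {
    this.#dependents.add(node);
  }

  // Called by inputs when they change: invalidate this node and everything downstream.
  markStale(): void {
    this.#generation++;
    for (const dependent of this.#dependents) dependent.markStale();
  }

  // Freshest-value read: reuse the cached promise unless the node went stale since.
  get(): Promise<T> {
    if (this.#cached === undefined || this.#cachedFor !== this.#generation) {
      this.#cachedFor = this.#generation;
      this.#cached = this.compute();
    }
    return this.#cached;
  }

  protected abstract compute(): Promise<T>;
}
```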
@lgarron I'm making a utility library here: proposal-signals/signal-utils#1 which will include a reactive async implementation built upon the polyfill (and will eventually phase out the polyfill when real implementation occurs). Would that help with your concerns?
Hmm, the main appeal of the Signals proposal is that the code would be built-in and standardized. I suppose a light async layer on top of the existing proposed spec could be handy, but using any library has costs and liabilities beyond using standard web platform code. In particular, if Signals becomes a standard for interoperable signaling code, then any async implementations on top of that would need to be careful to interoperate, probably to the point that… we would need a specification for now to interoperate. 😆
Like with other proposals, they need to be the minimal feature set possible to reduce bike-shedding / yak shaving / in general, longer discussion. If something can't be implemented with primitives, it makes sense to include it in the proposal -- but since a lot of things can be implemented with the currently described primitives, deferring decisions to future proposals via libraries feels like the best way to move the existing proposal forward.
The promise state would not be the only thing of interest. There will also be a need to cancel all pending evaluations, for example if you switch the whole page and need to cancel all the requests that are still pending.
What signals need is Structured Concurrency. I am already looking at this with: https://github.com/timonkrebs/MemoizR
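To make the cancellation side of this concrete, here is a minimal sketch (names like `switchPage` and `loadWidget` are purely illustrative, and this is not the MemoizR approach): one AbortController per page scope, handed to every evaluation started in that scope, so a navigation can cancel everything still in flight at once.

```ts
// Sketch only: a per-page AbortController that cancels all pending async work at once.
let pageScope = new AbortController();

function switchPage(render: (signal: AbortSignal) => void): void {
  pageScope.abort();                 // cancel every request the previous page still has pending
  pageScope = new AbortController(); // fresh scope for the new page
  render(pageScope.signal);
}

async function loadWidget(url: string, signal: AbortSignal): Promise<unknown> {
  // Rejects with an AbortError as soon as the owning scope is aborted.
  const response = await fetch(url, { signal });
  return response.json();
}
```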
I also don't see a way to make deduplication work with
@lgarron the way that you could handle that is via what I've been calling the "Relay" pattern, mentioned by @littledan above. I'm going to write up a document describing this later this week if I can find some time, but here's an example:
// $lib/data-utils.ts
import { client } from '$lib/juno';
export const fetchJson = (url: Signal<string>) => {
let state: Signal.State = new Signal.State({ isLoading: true, value: undefined });
let controller: AbortController;
const loadData = new Signal.Computed(async () => {
controller?.abort();
// Update isLoading to true, but keep the rest of the state
state.set({ ...state.get(), isLoading: true });
controller = new AbortController();
const response = await fetch(url.get(), { signal: controller.signal });
const value = await response.json();
state.set({ ...state.get(), value, isLoading: false });
});
return new Signal.Computed(() => {
loadData.get();
return state;
}, {
onObserved() {
loadData.get();
},
onUnobserved() {
controller?.abort();
}
});
}
The basic idea is that Relays absorb the async of a subgraph and expose it to a parent graph that is consuming the subgraph in a synchronous way. So, you can intercept those sync changes and watch for equality at that point.
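As a rough usage sketch (my own addition, assuming the hypothetical `onObserved`/`onUnobserved` options behave as described, that the polyfill is imported from the signal-polyfill package, and that the response has a `name` field), the consuming graph only ever performs synchronous reads:

```ts
import { Signal } from "signal-polyfill";
// `fetchJson` refers to the relay sketched above.

const url = new Signal.State("/api/user/1");
const user = fetchJson(url);

const greeting = new Signal.Computed(() => {
  // The relay's outer computed yields the inner state signal, hence the double get().
  const { isLoading, value } = user.get().get();
  return isLoading ? "Loading…" : `Hello, ${value.name}`;
});

// Changing the URL aborts the in-flight request and reloads; `greeting` stays synchronous.
url.set("/api/user/2");
```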
So, I figured out the minimal primitive needed to make async signals work without needing to deal with async functions directly: an "is updating" condition propagated similarly to the "is dirty" condition underlying
Then, you can implement async
// The `.set` options are purely hypothetical here
function asyncComputed(body, opts) {
const state = new Signal.State()
let token = {}
const invoker = new Signal.Computed(async () => {
const prev = Signal.subtle.untrack(() => state.get())
state.set(prev, {isPending: true})
const t = token = {}
let failed = false
let value
try {
value = await body()
} catch (e) {
value = e
failed = true
}
// Signal refreshed, drop the stale value
if (t !== token) {
if (failed) throw value
return
}
state.set(value, {
isException: failed,
isPending: false,
})
})
return new Signal.Computed(() => {
invoker.get()
return state.get()
}, opts)
}
When you use the above, it literally works like sync signals in every way, except that those who care can transparently know if a given value is ready. No inner function coloring involved.
// Define
function detailsInfo() {
let handler
const event = new Signal.State(undefined, {
[Signal.subtle.watched]() {
emitter.on("foo", handler = (v) => this.set(v))
},
[Signal.subtle.unwatched]() {
emitter.off("foo", handler)
},
})
const resource = asyncComputed(async () => {
const response = await fetch(`/thing/${Model.currentId.get()}/detail`)
if (!response.ok) throw new Error(`Response failed: ${await response.text()}`)
return response.json()
})
return new Signal.Computed(() => {
// combine `event` and `resource` somehow
return {
lastEvent: event.get(),
currentDetail: resource.get(),
}
})
}
// Use
const state = getStateSomehow()
effect(() => {
// Note how easily this could just be removed
if (state.isPending) showLoadingSymbol()
// Note how this is just the same as with sync signals
updateUI(state.get())
})
// And in your event listener
return <ThingCard
thing={thing}
onSelect={() => Model.currentId.set(thing.id)}
/>
@dead-claudia exactly! That's what I did here:
(Tho, I don't think untrack is needed)
@NullVoxPopuli That doesn't seem to propagate readiness through computeds? Or am I missing something? That propagation is pretty crucial to avoiding the function coloring problem in signals. Edit: Also, exceptions need to be writable into
What's the TL;DR on the coloring problem? Here is a test that shows behavior, if that helps: https://github.com/NullVoxPopuli/signal-utils/blob/ef3d29f2dd2ff943714c5f7c1bfd0c89af529ad7/tests/async-function.test.ts#L111 State changes are propagated appropriately (and non-duplicatively) when tracked states are consumed by the outer computed. But maybe the coloring problem will inform me what needs to be fixed 🤔
Edit: explain a bit more
@NullVoxPopuli Okay, maybe "function coloring" was a bit imprecise. I'm also conflating it with "setting to a thrown exception" and so I'll drop that for now. It's more so "value coloring" that's at play here. Without an "is pending" flag, you have to model your signal's value as follows:
type AsyncValue<T> =
| {thrown: false, value: T, isPending: boolean}
| {thrown: true, value: Error, isPending: boolean}
That
Now, suppose you want to filter the list by type. If you have a sync collection of records built by the user, and an async collection of records fetched from the network, you'll need two different functions to collect them:
function filterSync(type, source) {
return new Signal.Computed(() => {
const t = type.get()
return source.get().filter((s) => s.type === t)
})
}
function filterAsync(type, source) {
return new Signal.Computed(() => {
const t = type.get()
const result = source.get()
if (result.thrown) return result
return {
...result,
value: result.value.filter((s) => s.type === t),
}
})
}
To get rid of the value coloring problem, you need to get all the fields other than the value itself (namely,
Computed signals already have the ability to store exceptions and rethrow them on
// Extends `Signal.Computed` so it can be directly `.watch`ed.
class StateWithException extends Signal.Computed {
#state
constructor(initial, opts) {
const state = new Signal.State({thrown: false, value: initial})
super(() => {
const {thrown, value} = state.get()
if (thrown) throw value
return value
}, opts)
this.#state = state
}
setException(e) {
this.#state.set({thrown: true, value: e})
}
set(v) {
this.#state.set({thrown: false, value: v})
}
}
We still have another field left,
// Returned from `.get()`
type AsyncValue<T> = {value: T, isPending: boolean}
// Thrown from `.get()`
type AsyncError = {value: Error, isPending: boolean}
Adding a
function filter(type, source) {
return new Signal.Computed(() => {
const t = type.get()
return source.get().filter((s) => s.type === t)
})
}
Likewise, you can switch from async to sync without code change, and it'll just work. Suppose somewhere down the line, you make some changes to
Then, a month later, that test endpoint comes live and you need to make that change to
The beauty of this approach is that your view caller won't need to be rewritten, or even changed at all, in either case. You can even keep the
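To illustrate that claim with a sketch (my own addition, reusing the hypothetical `asyncComputed` and `filter` helpers from earlier in this thread; the endpoint and data shape are made up), the consumer is identical whether the source is async or sync:

```ts
import { Signal } from "signal-polyfill";
// `asyncComputed` and `filter` refer to the sketches earlier in this thread.

const recordType = new Signal.State("book");

// Async today...
let records = asyncComputed(async () => (await fetch("/records")).json());
// ...or sync tomorrow, with no change to any consumer:
// let records = new Signal.State([{ type: "book", title: "Dune" }]);

const visible = filter(recordType, records); // the same call works either way
```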
Ah ok, thanks for explaining! This is indeed a familiar concept that I've been trying to find ergonomics for, and one that @ef4 has been pushing @wycats and me to figure out with "Resources", which are like a computed, but with a lifetime and cleanup. Also! This reminds me a lot of railway-oriented programming (https://fsharpforfunandprofit.com/rop/), or Result types in other languages, where, if you can force the whole system to adhere to the same interface at each interaction, there are far fewer surprises. This is super important for composing multiple computeds representing async state... Without a shared interface at each layer, making sure the pending and error states are appropriately propagated out to the renderer can be cumbersome and error-prone 🤔
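For anyone unfamiliar with the "Resources" idea, here is a loose sketch of what a computed with a lifetime and cleanup can look like on top of this proposal's watched/unwatched hooks (my own illustration, not the Ember/Starbeam API; `makeTicker` is a made-up name):

```ts
import { Signal } from "signal-polyfill";

// Sketch only: the "resource" starts work when it gains its first watcher and
// cleans up when the last watcher goes away.
function makeTicker(intervalMs: number) {
  let timer: ReturnType<typeof setInterval> | undefined;
  const now = new Signal.State(Date.now(), {
    [Signal.subtle.watched]() {
      timer = setInterval(() => now.set(Date.now()), intervalMs);
    },
    [Signal.subtle.unwatched]() {
      clearInterval(timer);
    },
  });
  // Consumers just see a computed; the lifetime management stays inside.
  return new Signal.Computed(() => new Date(now.get()).toISOString());
}
```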
@NullVoxPopuli Admittedly, that style of programming was of some inspiration for my idea there, though I wasn't specifically thinking of it by name. (I have heard of the concept before, though.) Also, I was inspired by promises in that they also have this (implicit and internal) "pending" vs "settled" state, and I knew that exposing this data path would allow me to use that paradigm. I also come from a huge virtual DOM background, both as a user and as a framework implementor, and that kind of branching-on-state reigns supreme in render functions/methods. I also have a little bit of recent experience in 2D rendering. So it was only natural for me to think about branching like that. 🙂
@dead-claudia Hah, yeah, this is almost exactly the core construction of the bubble-reactivity approach that @modderme123 and I were fleshing out a while back, and which I tried to describe (maybe poorly) earlier in this thread. The core idea is that loading-ness propagates contagiously forwards on
"Every signal can be loading" avoids the function coloring problem in exactly the same way JS exceptions avoid the coloring problem you'd get by having things return
As far as side effects go, this is about as benign as you can get -- the side channel is just one bit that gets captured alongside the result, with no control flow implications (unlike
IIRC it's not a completely satisfying solution to async operations for all frameworks, because some frameworks want to let you write "effects that wait for things to load before running" without you having to manually check that things are loaded or perhaps without you having to see the default unloaded values of signals show up in their types at all. OTOH I don't know if anyone has any alternatives that actually deliver on that experience for effects without introducing Weird Stuff:tm: (throw-and-restart based suspense, and/or spreading what should be one effect across multiple microtasks because you have to
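As a toy illustration of that "one contagious bit" idea (my own sketch, not the actual bubble-reactivity implementation; `trackLoading` and `readAsyncSource` are made-up names), the bit is set by any loading source and carried outward through whatever computed on top of it:

```ts
// Sketch only: a single global "loading" bit that derived computations capture and
// propagate forward alongside their result, without changing the result's type.
let loadingBit = false;

function trackLoading<T>(body: () => T): { value: T; loading: boolean } {
  const outer = loadingBit;
  loadingBit = false;
  try {
    const value = body();
    return { value, loading: loadingBit };
  } finally {
    // Contagion: if anything inside was loading, the enclosing computation is too.
    loadingBit = outer || loadingBit;
  }
}

function readAsyncSource<T>(ready: boolean, current: T): T {
  if (!ready) loadingBit = true; // flag it, but still hand back the current/default value
  return current;
}
```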
Related: #178
I finally got around to my writeup on the Relay pattern; you can check it out here. My current feeling is that Relays would provide a primitive that would work well with all forms of async, including:
Basically, they can handle 1-to-1 connections AND 1-to-many connections quite well in a variety of ways. At the moment, I'm concerned about over-optimizing for 1-to-1/promise-oriented async with automatic propagation of
Has anyone yet considered the async context proposal? Basically, an async context could track dependencies of derived/computed signals, and an API set up along those routes would allow callers to do something like the following totally transparently, 'and it just works.' All the way down into any async function tapping any other observables, all captured and tracked through the async context flowing alongside the userland code.
Signal.derivedAsync(async abortSignal => {
const result = await fetch(url.get(), { signal: abortSignal });
return await processAsync({ result, dependency: otherDependency.get(), signal: abortSignal });
})
@rjgotten Fortunately, @littledan is involved both with async context and signals. I think we would like to integrate them. Dan can probably provide some additional thoughts.
My thoughts:
@rjgotten To add on to that, signal state is currently (at least in this repo) specified to use an internal async context variable for glue. Not to imply this will continue to be the case, just that it currently is the case.
This is resolvable if
It wouldn't be problematic wrt auto-tracking either. If an update were needed mid-execution (i.e. while the promise is not yet settled), then internally the delegate should re-execute, while externally what's observed could still remain the same pending promise. (Only when a promise has already settled would the
When such a mid-execution update is needed, the old monitoring/recording in the AsyncContext should be marked as dead - not to be responded to further - so that any dependents still being added or changing in the remainder of the old execution will no longer cause changes. And then the delegate should be re-executed with a new AsyncContext. An abort signal would probably also be passed as a parameter to the userland async delegate for cooperative cancellation of the old execution, which might be beneficial to avoid continuing expensive recomputation, to avoid no-longer-needed network requests, etc.
Then create additional helpers that take care of unwrapping the asynchronous promise into an observable signal's value change. E.g. if you'd want to just expose the promise's state in full:
Signal.fromPromise<T>(signal: Signal<Promise<T>>): { value?: T, error?: unknown, state: "resolved" | "rejected" | "pending" };
And from there, other variants such as holding onto the last known value or error are also possible. Basically, this means
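For what it's worth, a helper along those lines can be approximated on today's proposed primitives. The following is a rough sketch under that assumption (it is not a proposed `Signal.fromPromise` API; the subscription inside the computed is a side effect that a real version would want to manage more carefully):

```ts
import { Signal } from "signal-polyfill";

type PromiseState<T> =
  | { state: "pending" }
  | { state: "resolved"; value: T }
  | { state: "rejected"; error: unknown };

// Sketch only: unwrap a signal holding a promise into a signal of that promise's state.
function fromPromise<T>(promiseSignal: { get(): Promise<T> }): Signal.Computed<PromiseState<T>> {
  const settled = new Signal.State<{ promise: Promise<T>; result: PromiseState<T> } | null>(null);
  let attached: Promise<T> | null = null; // the promise we last subscribed to

  return new Signal.Computed<PromiseState<T>>(() => {
    const promise = promiseSignal.get();
    if (attached !== promise) {
      attached = promise;
      promise.then(
        (value) => { if (attached === promise) settled.set({ promise, result: { state: "resolved", value } }); },
        (error) => { if (attached === promise) settled.set({ promise, result: { state: "rejected", error } }); },
      );
    }
    const s = settled.get();
    // Ignore settlements that belong to a promise we have since moved away from.
    return s !== null && s.promise === promise ? s.result : { state: "pending" };
  });
}
```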
@littledan I still feel
Consider this code as an example:
const foo = new Signal.State()
const bar = new Signal.State()
async function doAsyncAction() {
const thing = await db.fetchThing()
doSomething(foo.get(), thing)
}
const baz = new Signal.Computed(() => {
doAsyncAction()
return bar.get()
})
For the second, throwing isn't an option as that's If
Okay, through some experimentation I've recently come up with a slightly different model that IMHO is useful in its own right:
What this does is run a block, catching errors and indirectly capturing pending signal status. It returns the same kind of object that
This also delimits calls: signal accesses in child
Note that this shouldn't use async context, only global state, as that limits object leakage and it's a lot cheaper.
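Since parts of that description were lost above, the following is only a loose guess at the shape being described (the `runBlock` name and the `pendingDepth` counter are entirely my assumptions about how pending status might be tracked):

```ts
// Sketch only: run a block, catch errors, and report whether anything read inside it
// was still pending. How pending sources bump the counter is intentionally left out.
let pendingDepth = 0;

function runBlock<T>(body: () => T): { value?: T; error?: unknown; thrown: boolean; isPending: boolean } {
  const before = pendingDepth;
  try {
    const value = body();
    return { value, thrown: false, isPending: pendingDepth > before };
  } catch (error) {
    return { error, thrown: true, isPending: pendingDepth > before };
  }
}
```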
Just coming to this discussion now, but I hit some of the needs described here when using signals to model data structures that have async edges, i.e. to lazy-loaded nodes in the graph or expensive computations. What I've made personally for this is a class called AsyncComputed, that works like this:
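The original example here did not survive formatting, so the following is only my reconstruction of an AsyncComputed with the behavior the test further down relies on (undefined until the first run settles, errors rethrown from `get()`, a `complete` promise for the latest run, and only reads before the first await tracked); it is not the author's actual class:

```ts
import { Signal } from "signal-polyfill";

type Result<T> = { status: "initial" | "complete" | "error"; value?: T; error?: unknown };

// Sketch only, assuming the behavior described above and exercised by the test below.
class AsyncComputed<T> {
  #result = new Signal.State<Result<T>>({ status: "initial" });
  #invoker: Signal.Computed<Promise<T>>;
  #watcher: Signal.subtle.Watcher;

  constructor(fn: () => Promise<T>) {
    this.#invoker = new Signal.Computed(() => {
      // Only reads made synchronously, before fn's first await, are tracked here.
      const run = fn();
      run.then(
        (value) => this.#result.set({ status: "complete", value }),
        (error) => this.#result.set({ status: "error", error }),
      );
      return run;
    });
    // Re-run eagerly when a dependency changes, so `complete` always reflects the latest run.
    this.#watcher = new Signal.subtle.Watcher(() => {
      queueMicrotask(() => {
        this.#watcher.watch();
        this.#invoker.get();
      });
    });
    this.#watcher.watch(this.#invoker);
    this.#invoker.get();
  }

  // Promise for the most recent run; rejects if that run threw.
  get complete(): Promise<T> {
    return this.#invoker.get();
  }

  get(): T | undefined {
    this.#invoker.get(); // keep consumers subscribed to re-runs
    const r = this.#result.get();
    if (r.status === "error") throw r.error;
    return r.value;
  }
}
```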
This is somewhat similar to
It's working well enough for me so far that the only thing I think I'm missing from the proposal is tracking signal reads after the first await of the compute function. It would be really great if computeds over async functions could "just work", although that seems difficult to polyfill. Propagating pending state through the
Chaining a regular Computed off an AsyncComputed just works.
function filterSync(type, source) {
return new Signal.Computed(() => {
const t = type.get();
return source.get().filter((s) => s.type === t);
});
}
const source = new AsyncComputed(() => getItems());
If
My test for chaining:
test('chaining a computed signal propagates error state', async () => {
const dep = new Signal.State('a');
const task = new AsyncComputed(async () => {
// Read dependencies before first await
const value = dep.get();
await 0;
if (value === 'b') {
throw new Error('b');
}
return value;
});
const computed = new Signal.Computed(() => task.get());
assert.strictEqual(computed.get(), undefined);
await task.complete;
assert.strictEqual(computed.get(), 'a');
dep.set('b');
await task.complete.catch(() => {});
assert.throws(() => computed.get());
dep.set('c');
await task.complete;
assert.strictEqual(computed.get(), 'c');
});
Lots of people here are interested in handling async/await in conjunction with signals. Some approaches that have been suggested: