Create useTracker hook and reimplement withTracker HOC on top of it #271
Conversation
- ….warn if available
- …fecycle deprecation notice
- …ithTracker behavior - compare deps in order to keep API consistency with React.useEffect()
- …cker() can be omitted - also React.memo() already has a check for prop changes, so there is no need to check for changed deps again
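The deps comparison mentioned in those commit notes can be pictured with a small sketch. This is illustrative only (the `depsChanged` helper is not from the PR); it simply mirrors the positional `Object.is` comparison that `React.useEffect` applies to its dependency array.

```js
// Illustrative only: mirrors React.useEffect's shallow deps comparison.
// A missing deps array means "always re-run", just like useEffect.
function depsChanged(prevDeps, nextDeps) {
  if (!prevDeps || !nextDeps) return true;
  if (prevDeps.length !== nextDeps.length) return true;
  return prevDeps.some((dep, i) => !Object.is(dep, nextDeps[i]));
}
```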
@menelike I think you have a different problem (described in the other issue), where you want to control the computation. This new question is about whether it's worth it to expose its lifecycle. I commented on the other issue to continue that conversation there.
Looking closer at React Suspense, it looks like it's mostly an async loading paradigm, which is not at all how Meteor works. To properly support that we'd probably want to make some more radical design changes to this hook (or create another one). For example, a …

The problem comes if someone uses a Suspense feature alongside (or after) our hook. I wonder if in the short term it's enough to explain that …

The ideal solution is for us to somehow get a safe way to create or clean up the side-effect we might create before a promise is thrown. Otherwise we have to accept an inefficient default. I explained it fairly clearly in the React ticket on the subject.
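To make that failure mode concrete, here is a hypothetical sketch of the concurrent-mode / discarded-render case being described (the component names, the `Todos` collection, and the lazily-loaded `Widget` are all made up): the hook starts a computation during render, a sibling inside the same Suspense boundary throws a promise, and React can throw that render away without committing it, so no effect-based cleanup ever runs for the computation.

```js
import React, { Suspense, lazy } from 'react';
import { useTracker } from 'meteor/react-meteor-data';
import { Todos } from './collections'; // hypothetical collection

// Hypothetical lazily-loaded component that suspends on its first render.
const LazyWidget = lazy(() => import('./Widget'));

function TodoList() {
  // The computation (and any subscriptions it starts) is created
  // synchronously during render.
  const todos = useTracker(() => Todos.find().fetch());
  return <ul>{todos.map(t => <li key={t._id}>{t.text}</li>)}</ul>;
}

function Page() {
  return (
    <Suspense fallback={<p>Loading...</p>}>
      {/* TodoList renders first and starts its computation... */}
      <TodoList />
      {/* ...then LazyWidget throws a promise. In concurrent mode React can
          discard this render instead of committing it, so TodoList's
          useEffect cleanup never runs, and when the promise resolves the
          subtree renders again from scratch - leaking the first computation
          unless something else stops it. */}
      <LazyWidget />
    </Suspense>
  );
}
```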
@yched Someone in the react ticket I opened suggested creating a custom Suspense component to help us out with making our solution work with Suspense - I like the idea. What do you think? We could do a custom …
@yched This is really a worm in my brain. I had another idea - what if we just use a timeout between the synchronous portion of the hook and the `useEffect`?

Would that solve it? We would just need to set the timeout to some appropriate length - but it would probably not have to be very long. Maybe 50ms?
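A rough sketch of what that timeout idea might look like follows. This is my illustration of the idea under discussion, not the code in this PR: the hook name is a placeholder, the 50ms grace period is arbitrary, and value handling is omitted so only the lifecycle is shown.

```js
import { useEffect, useRef } from 'react';
import { Tracker } from 'meteor/tracker';

// Sketch only: start the computation during render, then stop it if the render
// is never committed (e.g. it was discarded by Suspense) within a grace period.
function useTrackerTimeoutSketch(reactiveFn) {
  const refs = useRef({ computation: null, disposeId: null });

  if (!refs.current.computation) {
    refs.current.computation = Tracker.autorun(reactiveFn);
    // If useEffect hasn't confirmed a commit within ~50ms, assume the render
    // was thrown away and stop the computation so it doesn't leak.
    refs.current.disposeId = setTimeout(() => {
      if (refs.current.computation) {
        refs.current.computation.stop();
        refs.current.computation = null;
      }
    }, 50);
  }

  useEffect(() => {
    // The render was committed: cancel the scheduled disposal.
    clearTimeout(refs.current.disposeId);
    if (!refs.current.computation) {
      // The timeout already fired (we were held longer than the grace period),
      // so start the computation again now that we are actually mounted.
      refs.current.computation = Tracker.autorun(reactiveFn);
    }
    return () => {
      if (refs.current.computation) refs.current.computation.stop();
      refs.current.computation = null;
    };
  }, []);
}
```

The open question raised in the rest of the thread is how to size that grace period: too short and the computation gets stopped and restarted on a legitimate mount, too long and a discarded render leaks for that much longer.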
@CaptainN heh - that sounds a bit acrobatic, but right now I don't actually see where that would fail... A timeout introduces another level of asynchronicity that comes with its own potential for interesting race conditions :-), but that definitely looks promising. The penalty would be: if Suspense holds our component for more than (say) 50ms, then we'll stop the computation & subscriptions that were initiated, and will just start them again when it's actually mounted - that sounds acceptable. I'm not sure what would happen for reactive events triggered between the initial render and the actual mount, though. We wouldn't want them to trigger a rerender before useEffect has run and ensured our render was committed, would we?
I'm out of time to play with it - but it might be possible to detect whether the React component has been discarded from within the reactive function when the timeout happens. I'm not sure how to do that, though.
Another thing I've been thinking about lately (speaking of brain worms...): maybe having a version of just the withTracker HOC that is fully future-compatible with React Suspense - i.e. no componentWillXxx, no side effects in render, no double render - could be fairly simple. withTracker() is easier because a) we know the reactive func returns an object of props, and b) it wraps a component as a whole. So we could: …

That's some handwaving here, and I'd need to actually try it, but if it works it could give us some leeway to release an iso-API version of react-meteor-data to make existing apps ready for the deprecations in React 16.8.7 and Suspense in 16.9 (which remains the most urgent goal IMO), and give us time to figure out the more generic useTracker() hook if it's not yet clear how it would work with Suspense?
I actually got a version of this working - it's simpler than I thought, but there may be a race condition. It does this: …

The main question I have (the race condition) is: how long is the right amount of time to wait in setTimeout? Is there some guarantee of how long React will take before it executes `useEffect`?

I still prefer the current approach, because it works today, and even with Suspense it should work in 99% of cases. It would be a shame to hobble performance and complicate the API for the chance that Suspense messes with it. I'd still prefer some kind of fix or workaround.

Another thing I'm not sure about is how concurrency plays into all this. Can a render's …
We could probably just offer a second, Suspense-compatible version. BTW, I merged the …
As far as I can tell, the problem with Suspense is isolated to one specific scenario, one that I think …

BTW, I think react-cache may have something of a solution to the problem of cleaning up in concurrency mode, but it's not stable yet (neither is concurrency mode).
@yched Sorry, I didn't see this reply until now. Yes, that is the one gotcha I noticed as well. I think we'd have to accept a gap in service, so to speak, in that case, but I think this workaround will do the trick for 99.9% of cases, even without a service gap: …

Another thing I'd like to explore is what happens if you do try to update state in the computation from a dead-end React component - does it throw errors or anything we can catch? Then we wouldn't even need a timeout (a quick test didn't reveal anything, but maybe there is something we can latch on to).
Posting this here so as not to pollute the other PR too early - look what recently landed as a (for now private) standalone package in the official react repo: …

If you read the discussion in the associated PR facebook/react#15022, it's almost laughable how they hit the exact same pain points as we did :-D. Interestingly, they admit they're not able to fully optimize in concurrent mode at the moment.

Their hook over there is designed for an API with clearly separated "subscribe to something" and "receive a value from that subscription" steps, whereas Tracker.autorun mingles the two, so I'm not sure we'll be able to use it directly. But at the very least the code and discussion there are super relevant for us here... If need be, it could make sense to reach out to the PR author at some point to discuss the specifics of our case, since he now has a pretty good vision of what is or isn't possible at the moment or in the future with concurrent mode.
I figured out while writing up a comment on React's … I'm not sure React's own …

If not (if, for example, my assumption that their lack of a "semantic guarantee" is talking about concurrent mode), then we might actually be able to use the lifecycle from that to make …
Reading through what they did, we can't use that. They made an assumption that subscribe is "passive", which allowed them to avoid setting up any side-effects during render. I think they also assume the side-effects they set up in …
Yup, came to the same conclusion, sadly :-/ They split between a getCurrentValue() callback that is supposed to be side-effect-free, and a subscribe() callback where side-effects occur, so they can just call getCurrentValue() when needed to get the "sync value for current render". That's not true in our case: the function that gets the value also performs side effects - that's where I monkey-patched DDP subscriptions in my original PR... So, right, their useSubscription is not for us, it seems.
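To spell out the mismatch, here are the two shapes side by side. This is only an illustration: `store`, the `todos` publication, and the `Todos` collection are stand-ins assumed to exist elsewhere, and the first object merely mirrors the getCurrentValue/subscribe split described above rather than quoting the actual useSubscription code.

```js
import { Meteor } from 'meteor/meteor';
import { Tracker } from 'meteor/tracker';
// Assume `store` and `Todos` are defined elsewhere (stand-ins for this sketch).

// Roughly the shape useSubscription wants: reading is pure, side effects are
// confined to subscribe(), so getCurrentValue() can be called at any time.
const subscription = {
  getCurrentValue: () => store.read(),            // must be side-effect free
  subscribe: (callback) => {
    const unsubscribe = store.onChange(callback); // side effects live here
    return unsubscribe;
  },
};

// With Tracker there is no such split: running the reactive function is what
// registers the dependencies (and may start a DDP subscription), so "just read
// the current value" is itself a side-effecting operation.
const computation = Tracker.autorun(() => {
  Meteor.subscribe('todos');            // side effect
  const todos = Todos.find().fetch();   // reading also registers the dependency
});
```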
Also - for the timeout-based approach, you wrote in #271 (comment): …

Yup, apparently this is exactly what …
So one other thing we should look into is how to handle error scenarios (error boundaries). I'm concerned that when an error is thrown we can end up with the same sorts of cases where things leak memory and computations are not cleaned up.
@yched @menelike I think I got a version that satisfies all the requirements. Take a look. The only part I'm struggling with now is writing some unit tests for the edge cases of concurrent mode (I'm not sure how to get the test thingy to wait for the thrown promise to resolve in my simulated Suspense rigging, but it seems to actually work). Here's what it's doing: …

After chatting with Brian Vaughn in facebook/react#15022, it became clear that this is an important area to work around in multiple cases - Suspense, concurrent mode and other memory optimizations, and error boundaries. Suspense and error boundaries are shipping in some form or another today. What I've basically set up is a poor man's garbage collector based on …

Once I've gotten the TinyTests to work the way I want, and have updated the readme to explain all this (including a section on concurrent mode and avoiding side-effects in the reactiveFn), I'll merge this with PR #273.
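Purely as an illustration of that deferred-disposal idea (the actual mechanism in this PR may differ, and the function names and 50ms grace period here are placeholders): every computation created during render is registered for disposal, committed components claim theirs from useEffect, and anything left unclaimed is assumed to belong to a render that was thrown away by Suspense, an error boundary, or a concurrent-mode bailout.

```js
// Sketch of deferred disposal for computations created during render.
const unclaimed = new Set();

function scheduleDisposal(computation, graceMs = 50) {
  unclaimed.add(computation);
  setTimeout(() => {
    if (unclaimed.has(computation)) {
      // Nothing claimed it: the render that created it was never committed.
      unclaimed.delete(computation);
      computation.stop();
    }
  }, graceMs);
}

function claim(computation) {
  // Called from useEffect once the render has actually been committed.
  unclaimed.delete(computation);
}
```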
So, the way …

Another thing I thought about is: can we leverage some other browser APIs, such as …
This PR is a roundup of the discussion in #262, and supersedes: #242, #252, #256, #263, #266, #267, #268.
It does the following:
- Creates `useTracker` for use as an alternative to `withTracker` in hooks-based projects.
- Reimplements `withTracker` on top of `useTracker` (see the sketch below), which shores up `withTracker`'s compatibility with React strict mode and the forthcoming Suspense and concurrent mode (issues "Unsafe lifecycle methods were found within a strict-mode tree" #256, "React 16.3 => migration from componentWillMount and componentWillUpdate" #252, "componentWillMount vs componentDidMount" #242 / PR "[WIP] Update lifecycle methods for newer React" #261).
- Deprecates `createContainer`.
- Preserves synchronous reactivity in `withTracker` and in `useTracker` if no `deps` are specified, but also allows asynchronous reactivity after `firstRun` if `deps` are supplied, and properly responds to `deps` changes.

There aren't any unit tests yet, but I may write some if I have time. I'd especially like to test some of the lifecycle stuff, and make sure reactivity persists. I don't think that should hold up a beta release.
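As a hedged sketch of the "withTracker on top of useTracker" shape from the list above (not necessarily the exact code in this PR; the import path is illustrative, and deps handling and the legacy options form are left out), the HOC reduces to a thin wrapper that calls the hook and spreads the result into props, with React.memo providing the prop-change check mentioned in the commit notes:

```js
import React, { forwardRef, memo } from 'react';
import { useTracker } from './useTracker'; // illustrative path

// Sketch: withTracker(reactiveFn)(Component) just delegates to useTracker.
export const withTracker = (reactiveFn) => (Component) => {
  const WithTracker = forwardRef((props, ref) => {
    // All of the Tracker lifecycle handling lives in useTracker.
    const data = useTracker(() => reactiveFn(props) || {});
    return <Component ref={ref} {...props} {...data} />;
  });

  // memo() already checks for prop changes, so the HOC itself stays simple.
  return memo(WithTracker);
};
```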
Speaking of release - since this bumps the react version requirement to 16.8, there has been some discussion about how to properly release this with a major version bump. One suggestion was to bump the old version to 1.0.0 which can receive updates separately, then release this update as 2.0.0.
CC @hwillson @menelike @yched