Memory bloat #66

Open · jondayton opened this issue Sep 1, 2020 · 5 comments

@jondayton (Contributor)

Adding 20K records to the store seems to increase memory by about 400 MB, whereas the aggregate requests to the server return about 20 MB of data. I'm guessing a lot of this has to do with mobx observables. @jtorreggiani, have you seen this in the past?

[Screenshot: Screen Shot 2020-09-01 at 2 24 03 PM]

@jondayton (Contributor, Author)

https://github.com/artemis-ag/mobx-async-store/blob/9a58d4e4bb13b05561531c0d6b66b373729f1682/docs/files/src_Model.js.html#L511-L533

Adding a reaction to each attribute just to know whether it's dirty seems very expensive from a memory standpoint. @jordanstephens, thoughts? Could we switch to diffing against the snapshot, or to explicitly setting dirty attributes?
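A minimal sketch of the snapshot-diffing idea, assuming a plain-JS snapshot is captured when the record is initialized or saved (the `_takeSnapshot`, `_snapshot`, and `dirtyAttributes` names here are illustrative, not the store's actual API):

```js
import { toJS } from 'mobx'

class Model {
  // Capture a plain-JS copy of the observable attributes whenever the
  // record is initialized or successfully saved.
  _takeSnapshot () {
    this._snapshot = toJS(this.attributes)
  }

  // Dirty attributes are computed lazily at read time by diffing
  // against the snapshot, so no per-attribute reaction is needed.
  get dirtyAttributes () {
    return Object.keys(this.attributes).filter(
      (key) => this.attributes[key] !== this._snapshot[key]
    )
  }
}
```

This trades the per-attribute reactions for an enumeration of the attributes at read time.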

@jordanstephens (Contributor) commented Sep 1, 2020

IIRC, we did this to avoid enumerating over all of the attributes at "read time." Ideally, I wanted to slip a little bit of code to track dirty state into the set handler of the Proxy objects that mobx uses to wrap each property, but I couldn't find a way to do that while still preserving everything mobx was already doing. Ultimately we fell back on this, which should behave fairly similarly AFAIK, with the exception that dirty state will not be updated until the next tick of the event loop (if we could inject it directly into the proxy, it would happen synchronously).
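A rough sketch of what a per-attribute reaction for dirty tracking looks like (illustrative only; the real implementation is in the Model.js lines linked above, and the `trackDirty` and `_dirtyAttributes` names are hypothetical):

```js
import { reaction } from 'mobx'

// Watch a single observable attribute and record whether it differs
// from its original value. Returns the disposer created by reaction().
function trackDirty (model, attributeName) {
  const originalValue = model[attributeName]
  return reaction(
    () => model[attributeName],
    (value) => {
      // This runs after the observable has already been written, which
      // is why dirty state lags the assignment instead of being set
      // synchronously inside the proxy's set handler.
      if (value !== originalValue) {
        model._dirtyAttributes.add(attributeName)
      } else {
        model._dirtyAttributes.delete(attributeName)
      }
    }
  )
}
```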

I would double check that this is your culprit though. Why do you think it's very expensive from a memory standpoint? I could imagine these leaking memory if they were not disposed before their parent object was GC'd, but it doesn't sound like you are describing a leak. It might be worth double checking if you are removing records from the store w/o calling destroy on them though.
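To illustrate the disposal point, here is a hedged sketch of how reaction disposers would need to be cleaned up when a record is removed; the `_disposers` and `trackAttribute` names are illustrative and may not match the store's real API:

```js
import { reaction } from 'mobx'

class Record {
  constructor () {
    this._disposers = []
  }

  trackAttribute (key) {
    // reaction() returns a disposer function; keep it so it can be
    // called later.
    this._disposers.push(
      reaction(() => this[key], () => { /* mark `key` as dirty */ })
    )
  }

  destroy () {
    // If a record is removed from the store without this step, its
    // reactions keep referencing it and it can't be garbage collected.
    this._disposers.forEach((dispose) => dispose())
    this._disposers = []
  }
}
```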

Memory profiling is its own special hell, and I don't know of a good way to do it other than to take heap snapshots before and after each change you think might matter, and compare them to see whether heap sizes are consistently reduced. Have you tried removing dirty-state tracking altogether? Does that get you closer to your 20 MB here? Are you sure the extra memory doesn't have anything to do with rendering DOM nodes for all of this data?

An alternate strategy for this whole thing could be to wrap each of the attributes (which are already mobx proxies) with another proxy in order to track dirty state. @jtorreggiani and I talked about this a lot (I remember feeling like it was a little precarious and cumbersome, so we went with the reaction-based route instead), but you may find it to be worth it.
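A sketch of that alternate strategy: wrap the (already observable) attributes object in one more Proxy whose set handler records dirty state synchronously. `withDirtyTracking` is a hypothetical helper, not existing library code:

```js
function withDirtyTracking (attributes, dirtySet) {
  return new Proxy(attributes, {
    set (target, key, value, receiver) {
      // Record dirty state synchronously, before the write lands.
      if (target[key] !== value) {
        dirtySet.add(key)
      }
      // Delegate to the underlying mobx observable so all of its
      // existing behavior is preserved.
      return Reflect.set(target, key, value, receiver)
    }
  })
}
```

The precariousness mentioned above comes from stacking a proxy on top of mobx's own proxy: every trap mobx relies on still has to be forwarded correctly.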

@jtorreggiani (Contributor) commented Sep 2, 2020

@jondayton, I haven't seen this behavior in the profiler before. I would be interested to see if removing those reactions decreases the memory utilization.

I think @jordanstephens is correct: where we last left off on this problem, we had agreed the ideal way forward was to intercept attribute values at write time through the decorators (instead of with a reaction), but neither of us could get that to work.

Some possible solutions we haven't explored:

I think either of these could work and eliminate the need for the reactions altogether.

@jondayton (Contributor, Author)

I haven't tried removing anything yet; I wanted to get a feel for whether you've noticed anything in the past, since heap analysis is painful. The reason I thought it might be related is that, as seen above, memory consumption jumps after _makeObservable is called from the model; this happens while we're loading model data into the store from the server, and it steps up as each model is added. Thanks for the insight @jordanstephens @jtorreggiani

@jordanstephens (Contributor)

> The reason I thought it might be related is that, as seen above, memory consumption jumps after _makeObservable is called from the model

Just because the GC is happening here doesn't mean that the allocation is happening here. The CropBatchAction constructors that follow in your chart don't appear to result in increases in heap size, and we can't see on the chart where the allocation is actually happening.
