Profiler: Show "truly-in-use" memory usage line #1739
I was looking at this some yesterday. I realized that there's a bit of a discrepancy between how we currently visualize instance lifetimes and what we probably want to visualize for understanding out-of-memory conditions. There are actually two different timelines that matter for instances: one at the mapping stage of the pipeline and one at the execution stage of the pipeline. At the mapping stage of the pipeline, the timeline looks like the following:
At the execution stage of the pipeline, the timeline looks like the following:
What we render in the profiler today is the execution stage of the pipeline. It shows the actual memory usage by instances during execution so that it aligns with what the tasks and copies are actually doing (e.g., if a task or a copy is running, you can see that the instances it is using are alive at the same time). This makes sense if all we care about is seeing memory usage by the actual execution.

If we're trying to debug OOM conditions though, we don't actually care about the execution stage of the pipeline; instead we care about the mapping stage, because we want to see which instances are valid and which have had deferred deletions performed so that their space is eligible for deferred allocations. Unfortunately, you can't just overlay the mapping-stage timeline on top of the execution-stage timeline because the two are almost completely decoupled from each other. For example, if Legion is mapping ahead, an instance could be created, go through several cycles of becoming valid/invalid, and then have its destroy method invoked (with a precondition) all before the instance even becomes "ready" during the execution stage of the pipeline. In most cases the two timelines will overlap but be largely unrelated to each other, e.g. the cycles of becoming valid/invalid won't be correlated with when tasks/copies are actually using the instance.

This creates a conundrum: do we want to visualize the mapping stage of instance lifetimes, and if so how do we want to render it? I suspect the answer is that we do want to visualize it, but I also don't think we want to overlay the mapping and execution stages on top of each other, as I think that would be confusing. Do we want to have a separate timeline in the profiler for the mapping lifetime of instances? Should it be on by default, or should users be able to toggle the memories between the mapping stage and the execution stage? Other ideas? @manopapad @elliottslaughter What do you guys think?
Honestly it might not even be necessary to visualize the "history" of mapping-stage allocations for the purposes of OOM debugging. Just a visualization of the deferred memory state at the point of OOM might be enough. That gives enough information to understand which valid deferred allocations are stopping the incoming allocation from succeeding. No need to even visualize the invalid instances.
Right, so perhaps that deserves a different visualization, perhaps one with matplotlib or something similar that generates a static picture of all the instances: where they sit in memory, how much memory they take up, so you can see the holes, the fragmentation, and which instances are currently valid (uncollectable). Let's start a separate issue for that. I think we want to keep this issue as well, because users will want to see which instances are actually in use and which are not since, as you pointed out, some users get confused by our lazy garbage collection. We might actually be able to do that without any new logging statements, just using the instance-use information that we already have for tasks and copies to mark when instances are currently in use.
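As a rough illustration of what such a static view could look like, here is a minimal matplotlib sketch (entirely hypothetical instance data and names, not tied to any existing Legion Prof output format) that draws each instance as a bar at its offset within a memory, colored by whether it is currently valid:

```python
import matplotlib.pyplot as plt
import matplotlib.patches as patches

# Hypothetical snapshot of a single memory: (name, offset, size, currently valid?)
instances = [
    ("inst_A",   0 << 20,  64 << 20, True),
    ("inst_B",  64 << 20, 128 << 20, False),  # invalid: collectable if space is needed
    ("inst_C", 256 << 20,  32 << 20, True),
]
memory_size = 512 << 20

fig, ax = plt.subplots(figsize=(10, 1.5))
for name, offset, size, valid in instances:
    # Draw each instance at its offset; gaps between bars are the "holes".
    ax.add_patch(patches.Rectangle((offset, 0), size, 1,
                                   color="tab:red" if valid else "tab:gray"))
    ax.text(offset + size / 2, 0.5, name, ha="center", va="center")
ax.set_xlim(0, memory_size)
ax.set_yticks([])
ax.set_xlabel("offset (bytes)")
ax.set_title("Instance layout (red = valid/uncollectable, gray = invalid)")
plt.show()
```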
We can't run the profiler after OOM anyway (unless the mapper handles the error and falls back to some other memory, in which case it's not clear what to show), so unless we're fixing that along the way, Legion Prof won't help at the point of failure.

I do think having an execution timeline that is more accurate to what's actually being used would be desirable. E.g., you can take your largest case (that still fits), run it through, and at least observe whether memory usage is growing and what is using what. And execution is relevant here, because even though the decisions happen at mapping time, the lifetimes are ultimately correlated with some tasks. The duration of those tasks (the ones that use large instances) is highly relevant to how quickly you can get those allocations done, even if the mapping process happens far ahead of time. In contrast, it's not clear that a mapper-based timeline is useful, because the mapping timeline is tied to the duration of specific mapper calls (which in general you'd expect to be uncorrelated with actual execution times). You may see what's going on, but have no sense of the proportions of running time.

I also think an OOM-time live dump is potentially useful. I suppose if we make the profiler smart enough it could even be incorporated into the profiler itself. E.g., for a given point in the execution, visualize what Legion sees as the state of memory. Each popup is independent, which resolves the issue of matching timelines; the way to think about it is a query like "What does Legion think the state of memory is at time point X?" You'll see the mapper calls on the timeline, so if you really want to chase that down you can, but you can also just click at different points in the timeline and see how the state is changing (as I expect most users would do). It might be very expensive to do this though, I'm not sure.
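As a rough sketch of that kind of point-in-time query (hypothetical record format; it assumes the profiler has each instance's allocation interval and mapping-stage validity intervals available), a popup could simply report, for every instance allocated at the chosen time, whether Legion would consider it valid there:

```python
# Hypothetical per-instance records: allocation lifetime plus the intervals
# during which the instance was valid at the mapping stage.
instances = [
    {"name": "inst_A", "alloc": (0.0, 50.0), "valid": [(1.0, 10.0), (20.0, 35.0)]},
    {"name": "inst_B", "alloc": (5.0, 40.0), "valid": [(6.0, 30.0)]},
]

def memory_state_at(t, instances):
    """For each instance allocated at time t, report whether Legion would
    consider it valid (uncollectable) at that point."""
    state = {}
    for inst in instances:
        start, end = inst["alloc"]
        if start <= t <= end:
            state[inst["name"]] = any(lo <= t <= hi for lo, hi in inst["valid"])
    return state

print(memory_state_at(15.0, instances))  # {'inst_A': False, 'inst_B': True}
```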
I have the beginnings of fixing this in the
I agree we should maintain the current timeline because it accurately aligns with which instances are live at the time when tasks and copies are actually running.
This is actually one thing missing from the critical path analysis today, because we don't analyze which instances had to be freed up to allow an allocation to succeed. We could technically do this analysis, given that we know the offsets and sizes of all instances in each memory: we could align the allocation becoming ready with when the prior instances using the same space in memory were freed.
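A minimal sketch of that analysis, under the assumption that we can recover each prior instance's offset, size, and deferred-deletion time from the logs (the record layout below is made up for illustration): for a pending allocation, the prior instances whose address ranges overlap it are the ones that had to be freed, and the latest of their free times bounds when the allocation could have become ready.

```python
# Hypothetical records: each prior instance occupied [offset, offset + size)
# in the same memory and had its deferred deletion performed at free_time.
prior = [
    {"name": "inst_A", "offset":   0, "size": 100, "free_time": 12.5},
    {"name": "inst_B", "offset": 100, "size": 200, "free_time": 30.0},
    {"name": "inst_C", "offset": 400, "size": 100, "free_time":  8.0},
]

def blocking_instances(alloc_offset, alloc_size, prior):
    """Return the prior instances whose ranges overlap the new allocation,
    i.e. the ones that had to be freed before this allocation could succeed."""
    alloc_end = alloc_offset + alloc_size
    return [p for p in prior
            if p["offset"] < alloc_end and alloc_offset < p["offset"] + p["size"]]

blockers = blocking_instances(50, 200, prior)       # overlaps inst_A and inst_B
ready_time = max(p["free_time"] for p in blockers)  # bounded by inst_B freeing at 30.0
print([p["name"] for p in blockers], ready_time)
```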
Right, it's really only relevant at the point where you can't do an allocation and you need to understand what caused that. The history might be relevant if you wanted to see whether fragmentation was growing worse over time, but I agree the timing is definitely decoupled from the execution stage of the pipeline and therefore a bit hard to grasp.
I think this is the plan in #1797. There will be a mapper runtime call like:
Legion will dump out the mapper runtime state, and then there will be a post-processing tool that parses the log file and either gives you text output or presents a visualization. You can call that as many times as you want and make as many dumps as you want, so you don't have to wait until OOM to call it, although it will eagerly dump to the file and might be a bit expensive.
This would handle a common pitfall for users when looking at profiles: Users see the memory usage only go up, and assume that the program is leaking memory. In reality the instances that are accumulating are probably invalid, and would be removed by Legion if it actually got squeezed for memory.
IMHO what the user truly wants to know is how much "breathing room" they have on the memory at any given time.
We would need the profiler to be told when instances become valid and invalid, so it can draw a line of the "truly-in-use" memory usage, which only includes valid instances (possibly in addition to the "technically correct" utilization that we show today, where invalid instances are also included).
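As a sketch of how that line could be computed once the profiler receives those events (the event format here is made up; it is not existing Legion Prof logging), the "truly-in-use" curve is just a running sum over validity deltas:

```python
# Hypothetical events: (time, size_delta), where size_delta is +size when an
# instance becomes valid and -size when it becomes invalid (or is deleted).
events = [
    (1.0, +256),  # inst_A becomes valid
    (2.0, +512),  # inst_B becomes valid
    (3.0, -256),  # inst_A invalidated (still allocated, but collectable)
    (4.0, +128),  # inst_C becomes valid
]

def truly_in_use(events):
    """Return (time, valid_bytes) samples for the "truly-in-use" memory line."""
    samples, total = [], 0
    for time, delta in sorted(events):
        total += delta
        samples.append((time, total))
    return samples

print(truly_in_use(events))  # [(1.0, 256), (2.0, 768), (3.0, 512), (4.0, 640)]
```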
Assigning @lightsighter to give an estimate on the work, maybe make sub-tasks to assign.