Remove CUDART hijack #1730
#1059 is a blocker on the Regent side.
Noting here that the only way Legion Prof accurately represents the time that CUDA kernels for a GPU task spend on the device today is by relying on Realm's hijack. If the hijack is disabled, then Realm currently over-approximates the lifetime of the GPU kernels by assuming they are launched and start running on the GPU as soon as the GPU task starts, which is not always the case. Detecting when the first kernel is launched and enqueuing a CUDA event right before it is probably going to be challenging without the hijack.
@lightsighter If I were to replace the current timestamp reporting for kernel launches with a CUPTI activity timestamp difference, would that cover the Legion profiler use case? I am thinking I can enable the following and retrieve the completion field to get the information requested: These should not have any more overhead than what is already in place with the Realm CUDART hijack, but it does require CUPTI to be installed, which requires a CUDA Toolkit to be installed locally on the system somewhere. CUPTI is ABI stable IIRC, so a dynamic loader can be built and we can dynamically detect its presence on the system (and can toggle it via a cmdline arg or whatever if you'd like). Is that acceptable? @elliottslaughter I'm not sure I understand what the actual issue is here; would it be possible for you to summarize it in the issue? I've added myself to the issue and I can talk to @magnatelee about it next week for more clarity.
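For readers unfamiliar with the CUPTI activity path being proposed here, a minimal sketch follows. This is illustrative only, not Realm code: the buffer size is arbitrary, and the exact `CUpti_ActivityKernel` struct version varies by CUPTI release, so the cast below is an assumption.

```cpp
// Sketch: enabling CUPTI kernel activity records to recover device-side
// start/end timestamps without the CUDART hijack. Requires CUPTI from a
// locally installed CUDA Toolkit.
#include <cupti.h>
#include <cstdio>
#include <cstdlib>

static void CUPTIAPI buffer_requested(uint8_t **buf, size_t *size,
                                      size_t *max_records) {
  *size = 64 * 1024;                    // arbitrary buffer size for the sketch
  *buf = (uint8_t *)malloc(*size);
  *max_records = 0;                     // let CUPTI pack as many records as fit
}

static void CUPTIAPI buffer_completed(CUcontext, uint32_t /*stream_id*/,
                                      uint8_t *buf, size_t /*size*/,
                                      size_t valid) {
  CUpti_Activity *rec = nullptr;
  while (cuptiActivityGetNextRecord(buf, valid, &rec) == CUPTI_SUCCESS) {
    if (rec->kind == CUPTI_ACTIVITY_KIND_CONCURRENT_KERNEL) {
      // Struct version depends on the CUPTI release in use.
      auto *k = (CUpti_ActivityKernel5 *)rec;
      // k->start / k->end are device timestamps in nanoseconds; their
      // difference is the kernel's actual on-device lifetime.
      printf("kernel %s: %llu ns\n", k->name,
             (unsigned long long)(k->end - k->start));
    }
  }
  free(buf);
}

void enable_kernel_timing() {
  cuptiActivityRegisterCallbacks(buffer_requested, buffer_completed);
  cuptiActivityEnable(CUPTI_ACTIVITY_KIND_CONCURRENT_KERNEL);
  // ...run GPU work...; cuptiActivityFlushAll(0) later drains the records.
}
```

A dynamic loader would `dlopen` libcupti and resolve these entry points at runtime, so the dependency stays optional, as described above.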
That seems reasonable to me, but I'll let other folks comment as well, as I'm not the only one who expects that to work. Note that we don't need to profile every kernel. We mainly want this profiling response from Realm to provide tight bounds on when the kernels for the GPU task are actually running on the device:
@lightsighter Unless I'm mistaken, the hijack doesn't seem to come into play with this. When we add an OperationTimelineGPU to the operation, it enables this path: This just puts either a stream callback on the task stream, or records an event and schedules a bgworker to retrieve that event and record the CPU time. No tight bounding is done as far as I can tell.
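The two completion paths described here can be sketched roughly as follows. This is not Realm's actual code, just an illustration of the mechanism with the CUDA driver API:

```cpp
// Sketch of the two completion paths: a host callback enqueued on the
// task stream, or an event that a background worker polls.
#include <cuda.h>

static void CUDA_CB on_stream_done(void * /*arg*/) {
  // Runs once all prior work on the stream has finished; a profiler
  // would record the CPU timestamp here.
}

void instrument_task_stream(CUstream task_stream) {
  // Path 1: host callback fires after everything already enqueued.
  cuLaunchHostFunc(task_stream, on_stream_done, nullptr);

  // Path 2: record an event and let a background worker poll it.
  CUevent done;
  cuEventCreate(&done, CU_EVENT_DEFAULT);
  cuEventRecord(done, task_stream);
  // bgworker loop (elsewhere): cuEventQuery(done) until CUDA_SUCCESS,
  // then record the CPU time. Note: if the stream is idle when the
  // event is recorded, it completes immediately -- which is exactly the
  // "no tight bound" problem discussed in this thread.
}
```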
If I remember correctly, we do not record the tight bounds of each kernel, but only record the start of the first kernel and the end of the last kernel.
@eddy16112 I understand that, but where is this done? If it's done with GPUWorkStart, then it's not a tight bound at all.
We record an event before and after the task body respectively, and use the first event as the start and the second one as the end.
Yes, but those events are queued before the task is run. If the stream is idle at the start of the task body (which it must be, IIUC), then the event is recorded immediately, not when the first kernel starts. This logic also has nothing to do with the hijack.
Right, I think the CUDA runtime hijack used to do that by only enqueuing the event upon the first kernel launch rather than at the start of the task, but that code seems to have been lost. It could have happened anytime in the last nine years, since I wrote the first version of the hijack and then stopped working on it myself. I'll note that we should probably fix this regardless of whether it is for getting rid of the hijack or not. I know for certain that @jjwilke could use it right now for running code without the hijack.
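The lazy-event behavior described above (deferring the start event until the first kernel launch) might look something like this. The names `TaskState` and `wrapped_launch` are hypothetical; this is a sketch of the idea, not the hijack's actual implementation:

```cpp
// Sketch: defer recording the "start" event until the first kernel
// launch in the task, so an idle stream at task start doesn't produce
// an artificially early timestamp.
#include <cuda.h>

struct TaskState {
  CUevent start_event = nullptr;
  bool first_launch_seen = false;
};

// Wrapper a hijack-style interposer would substitute for the real
// launch entry point (grid/block dimensions elided for brevity).
CUresult wrapped_launch(TaskState *task, CUfunction f, CUstream s,
                        void **args) {
  if (!task->first_launch_seen) {
    task->first_launch_seen = true;
    cuEventCreate(&task->start_event, CU_EVENT_DEFAULT);
    // Enqueued immediately before the first kernel, so its timestamp
    // tightly bounds when GPU work actually begins for this task.
    cuEventRecord(task->start_event, s);
  }
  return cuLaunchKernel(f, /*grid*/ 1, 1, 1, /*block*/ 1, 1, 1,
                        /*sharedMem*/ 0, s, args, nullptr);
}
```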
@lightsighter Can you file a separate issue for this with all the requirements and steps to actually test whether this is set up properly? I haven't gotten familiar enough with Legion Prof to be sure I can see the issue and confirm it's fixed.
I've started an issue on improving the precision of the GPU profiling here: #1732
Is there any plan to automatically capture completion of events on the default stream associated with a GPU? One thing I noticed while porting the application in #1682 is that if you do not have the hijack, and use …
I am not sure if it is doable. @muraj? I think it is an application bug if they forgot to use the task stream. Actually, we have a Realm CUDA hook tool which can detect such leaks.
Correct, this would be an application bug. It is part of the contract in setting ctx_sync_required(false). That said, with ctx_sync enabled, on recent drivers (12.0+), we record an event at the end of the task which should minimize the over-synchronization that happens by quite a bit, so if the app can't make the contract of ctx_sync_required(false), it hopefully shouldn't be too expensive.
That is what ctx_sync_required(true) with recent drivers does already.
Ok, let me see if I understand the options here. After the hijack is removed, applications can either:
The issue, as far as Regent is concerned, is that it's (or can be) a mixed application. Regent itself always uses the task stream. So in a pure Regent application it would always be safe to call … Has anyone gone to measure how much of a performance impact the event record actually has in approach (1)? That's the conservative approach for mixed codes; it just feels a bit unfortunate, as the vast majority of Regent users would be getting a needless slowdown. Alternatively, if anyone can suggest another way for Regent to figure out if it's calling a blob of code that does not properly use the task stream, I'm all ears.
That is not necessary. An application can use whatever stream they wish, but the contract of
That is correct, which is why …
Not on a Legion or higher-level application, no, but on a Realm application, yes. I ran local performance tests with simultaneous DMA work running in the background and got negligible to slightly better performance compared to just calling cuCtxSynchronize in a background thread. So in the worst case, we do no worse than what was there without the hijack. In the best case, with no contention, we achieve about the same performance as with the hijack. Again, this assumes a driver that supports CUDA 12.5+ (I need the cuCtxRecordEvent API from the driver to achieve this). In addition, I know Legate folks like @manopapad have done their own analyses and have begun converting their codes over to using …
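For context, the cuCtxRecordEvent approach mentioned here (a CUDA 12.5+ driver API) might be used roughly like this. A sketch under that assumption, not Realm's actual implementation:

```cpp
// Sketch: end-of-task synchronization via a context-wide event instead
// of blocking in cuCtxSynchronize. cuCtxRecordEvent (CUDA 12.5+)
// captures all work pending in the context at the point of the call.
#include <cuda.h>

void end_of_task_sync(CUcontext ctx) {
  CUevent all_done;
  cuEventCreate(&all_done, CU_EVENT_DEFAULT);
  // Records completion of all streams in the context, including any
  // non-task streams the application may have launched work on.
  cuCtxRecordEvent(ctx, all_done);
  // The task can be marked complete when this event fires, e.g. from a
  // background poller, rather than stalling a worker thread here:
  //   while (cuEventQuery(all_done) == CUDA_ERROR_NOT_READY) { /* yield */ }
}
```

The advantage over cuCtxSynchronize is that no host thread has to block on the whole context; completion can be detected asynchronously, which matches the negligible-overhead measurements described above.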
Just to be very clear for @elliottslaughter, this is a per-dynamic-task API call that you can do to disable context synchronization after that task (and only that task) is done running. It means you can selectively opt in to disabling the context synchronization for individual tasks, so I don't think you have to worry about it being a global setting that will not compose well.
Ok, I guess I'd forgotten that, since this is the second time I've needed to be reminded: #1557 (comment)

There are still issues with tasks of the form:

```
__demand(__cuda)
task asdf(...)
  for ... do ... end -- Regent turns this into a kernel launch
  call_native_c_code() -- Internally launches more kernels
end
```

If the C code is used inside the same task and does not properly use the task stream, then we're still in the same situation. This limits the scope of the problem, but the issue still fundamentally occurs. (And empirically, based on the available evidence, this appears to be happening in S3D.)
Correct, again, the application needs to know if that's an assumption it can make in the task, and if it cannot, then it must not use …
This issue is to track progress on removing the need for the CUDART hijack and its eventual removal from the Realm codebase. Current known use cases (will be updated as use cases come up):
- Task kernel Legion Prof annotations