DEVICE/API: Wait for wireup completion in createGpuXferReq #947
base: main
Conversation
👋 Hi michal-shalev! Thank you for contributing to ai-dynamo/nixl. Your PR reviewers will review your contribution then trigger the CI to test your changes. 🚀
Signed-off-by: Michal Shalev <[email protected]>
```cpp
params.num_elements = ucp_elements.size();

const auto start = std::chrono::steady_clock::now();
constexpr auto timeout = std::chrono::seconds(5);
```
What do you think about making it configurable via environment variable?
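As a minimal sketch of that suggestion: the timeout could be read from an environment variable with the current 5-second value as the fallback. The variable name `NIXL_WIREUP_TIMEOUT_SEC` and the helper below are hypothetical, not part of the nixl API.

```cpp
#include <chrono>
#include <cstdlib>
#include <string>

// Hypothetical helper (name illustrative, not in nixl): read the wireup
// timeout from an environment variable, falling back to the hard-coded
// 5-second default on absence or a malformed value.
static std::chrono::seconds
getWireupTimeout() {
    constexpr auto kDefault = std::chrono::seconds(5);
    const char *env = std::getenv("NIXL_WIREUP_TIMEOUT_SEC");
    if (env == nullptr) {
        return kDefault;
    }
    try {
        return std::chrono::seconds(std::stol(env));
    } catch (...) {
        return kDefault; // not a number, or out of range
    }
}
```

The call site would then use `const auto timeout = getWireupTimeout();` instead of the `constexpr` literal.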
```cpp
if (std::chrono::steady_clock::now() - start > timeout) {
    throw std::runtime_error(
        "Timeout waiting for endpoint wireup completion has been exceeded");
}
```
I think it makes sense to swap the time check and the execution of progress on the workers. Otherwise, we may throw the exception even when the wireup completes on this iteration.
Optional: I'd prefer a timed loop, e.g.:
```cpp
for (const auto start = std::chrono::steady_clock::now();
     std::chrono::steady_clock::now() - start <= timeout;) {
    status = ucp_device_mem_list_create(ep.getEp(), &params, &ucx_handle);
    if (status != UCS_ERR_NOT_CONNECTED) {
        break;
    }
    for (const auto &w : workers) {
        w->progress();
    }
}
if (status == UCS_ERR_NOT_CONNECTED) {
    throw std::runtime_error("Timeout waiting for endpoint wireup completion has been exceeded");
} else if (status != UCS_OK) {
    throw std::runtime_error(std::string("Failed to create device memory list: ") +
                             ucs_status_string(status));
}
```
```cpp
nixlGpuXferReqH
createGpuXferReq(const nixlUcxEp &ep,
                 const std::vector<std::unique_ptr<nixlUcxWorker>> &all_workers,
```
It looks like the all_ prefix is redundant for this parameter; workers would be enough.
What?
Add a blocking wait for endpoint wireup completion in createGpuXferReq().

Why?
Previously, users had to implement workarounds in their applications to wait for wireup completion before calling createGpuXferReq() (as shown in the UCX tests). This PR moves the wireup handling into the library, simplifying the API and removing the burden from application code.

How?
- Call ucp_device_mem_list_create() in a loop while it returns UCS_ERR_NOT_CONNECTED
- Call worker.progress() in each iteration to advance the wireup state machine
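The loop described above can be sketched in isolation. In this sketch the UCX calls are replaced by std::function stand-ins so the control flow is testable: create() models ucp_device_mem_list_create() returning true once wireup has completed, and progress() models driving worker progress. The names and the helper itself are illustrative, not the actual PR code.

```cpp
#include <chrono>
#include <functional>
#include <stdexcept>

// Sketch of the wait-for-wireup pattern: retry the creation call,
// driving progress between attempts, until it succeeds or a timeout
// elapses. create() stands in for ucp_device_mem_list_create();
// progress() stands in for progressing all workers.
inline void
waitForWireup(const std::function<bool()> &create,
              const std::function<void()> &progress,
              std::chrono::milliseconds timeout) {
    const auto start = std::chrono::steady_clock::now();
    while (!create()) {
        progress(); // advance the wireup state machine
        if (std::chrono::steady_clock::now() - start > timeout) {
            throw std::runtime_error(
                "Timeout waiting for endpoint wireup completion has been exceeded");
        }
    }
}
```

Note that progress is driven before the timeout is checked, matching the reviewer's point that the check should not fire on an iteration where wireup could still complete.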