[Vulkan] Vulkan runtime reimplementation (stream approach) #3849
Conversation
Awesome, this is something that I overlooked and definitely should be the way to go. The main reason I took the current approach was that there is no clear way to do synchronization other than the fence, and the stream model is not supported by Vulkan natively. The proposed approach (adding stream support via lazy execution) is definitely better, and we should just move toward this new runtime IMO. As long as we implement all the features, please feel free to remove the old Vulkan runtime.
@nihui would you be interested in taking a look at this PR and this approach? Are there more useful things in NCNN's Vulkan backend that TVM should learn from?
I also found that it took a lot of time to wait for each operator's result, so I put multiple operators in a single command buffer and applied lazy execution. If you have any questions about the ncnn Vulkan backend, feel free to post an issue to https://github.com/Tencent/ncnn/issues
(force-pushed from 09675eb to 64b55bb)
Thanks @nihui, @tqchen, @jwfromm. I generalized this approach to handle devices that don't support push descriptors (see the Stream::LaunchDeferred APIs) or dedicated allocation APIs, and removed the previous runtime implementation as @tqchen suggested. I also added some more tests, which (under Vulkan API validation) revealed some thread-safety bugs. I think this is now closer to something acceptable, so I'll remove the RFC and WIP tags; this should be ready for review. The file/class structure is certainly up for debate, and I'm fine with whatever folks propose.
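For readers unfamiliar with the fallback path, here is a hypothetical sketch of the deferred-launch idea. The actual Stream::LaunchDeferred signature isn't shown in this thread, so the class shape, callback types, and Synchronize method below are all assumptions: when VK_KHR_push_descriptor is unavailable, descriptor writes can't be pushed inline into the command buffer, so each launch is captured as callbacks and replayed at the next synchronization point.

```cpp
#include <vulkan/vulkan.h>
#include <functional>
#include <utility>
#include <vector>

// Hypothetical sketch only; not the PR's actual API.
class Stream {
 public:
  // Queue a launch instead of recording it immediately.
  void LaunchDeferred(std::function<void()> prepare,
                      std::function<void(VkCommandBuffer)> record) {
    deferred_.push_back({std::move(prepare), std::move(record)});
  }

  // At a synchronization point: run all preparations (e.g.
  // vkUpdateDescriptorSets on pre-allocated sets), then record every
  // dispatch into the single command buffer before submitting it.
  void Synchronize(VkCommandBuffer cmd) {
    for (auto& d : deferred_) d.prepare();
    for (auto& d : deferred_) d.record(cmd);
    deferred_.clear();
    // ... vkEndCommandBuffer / vkQueueSubmit / vkWaitForFences elided ...
  }

 private:
  struct Deferred {
    std::function<void()> prepare;
    std::function<void(VkCommandBuffer)> record;
  };
  std::vector<Deferred> deferred_;
};
```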
(force-pushed from 64b55bb to 762250d)
(force-pushed from 24065ab to d200b51)
(force-pushed from a85f3d2 to 595107c)
This is really excellent work. I'm super impressed and excited to get better Vulkan performance merged into TVM. I like that you've now replaced all the old Vulkan runtime code; however, now that it's gone, I think you should remove all the 2s from your file, class, and function naming. I've also added a few minor comments; everything else LGTM.
// Use SPIR-V v1.0. This needs to be kept in sync with (or at least behind)
// `VkApplicationInfo.apiVersion` in `vulkan2.cc` to ensure Vulkan API
// validation passes.
header_.push_back(0x10000);
Why do we need to hard-code the version like this? Is it possible to use spv::Version both here and in vulkan2.cc and check that it's >= v1.0?
My understanding is that the semantically correct thing to do is:
- at code-generation time, emit the lowest possible SPIR-V version that supports all the features we require. Currently this is 1.0; we don't use features from 1.1 or higher.
- in VkApplicationInfo.apiVersion, use the minimum API version we require of the devices. This is also what NCNN uses; see https://github.com/Tencent/ncnn/blob/c65c215c2c1084303849f9adae2f7495a1b99409/src/gpu.cpp#L407.
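For concreteness, a minimal sketch of the version relationship under discussion. The constant names are illustrative, not the PR's code; only the standard Vulkan headers and their VK_MAKE_VERSION macro are assumed. The second word of a SPIR-V binary encodes the module's SPIR-V version, and the instance's VkApplicationInfo.apiVersion must be new enough to consume it:

```cpp
#include <cstdint>
#include <vulkan/vulkan.h>

// Lowest SPIR-V version whose features the codegen actually uses. The PR
// emits 1.0, encoded as (major << 16) | (minor << 8) = 0x00010000, which
// is what `header_.push_back(0x10000)` writes into the module header.
constexpr uint32_t kSpirvVersion = 0x00010000;

// Minimum Vulkan API version required of the instance/device; Vulkan 1.0
// is guaranteed to consume SPIR-V 1.0 modules.
constexpr uint32_t kVulkanApiVersion = VK_MAKE_VERSION(1, 0, 0);

VkApplicationInfo MakeAppInfo() {
  VkApplicationInfo info{};
  info.sType = VK_STRUCTURE_TYPE_APPLICATION_INFO;
  info.apiVersion = kVulkanApiVersion;  // keep >= what the SPIR-V needs
  return info;
}
```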
Thanks @jwfromm, will follow up on your suggestions.
(force-pushed from 595107c to 2654d3c)
@jwfromm, I believe I've addressed your comments (except the one re: the correct version semantics at SPIR-V emission time and Vulkan instance instantiation time).
(force-pushed from 2654d3c to db04992)
Thanks @ajtulloch @jwfromm @nihui!
In the Vulkan API, it is generally recommended to launch multiple compute kernels on a shared command buffer, instead of having a 1-1 relationship between compute kernels and command buffers. See http://on-demand.gputechconf.com/gtc/2016/events/vulkanday/High_Performance_Vulkan.pdf, https://devblogs.nvidia.com/vulkan-dos-donts/, etc. The extant TVM Vulkan runtime uses one command buffer per kernel, which can lead to significant overhead for smaller kernels (on the order of half a millisecond on some of the devices I looked at).
An alternative approach uses an abstraction similar to a CUDA stream: we record commands onto the command buffer and, at synchronization points, submit the command buffer to the queue and wait on the fence (see the sketch below). This is non-trivially more efficient, and it is similar to the approach taken by ncnn; there are some useful ideas in there that were applied here. In particular, it's quite convenient to depend on the KHR push descriptors extension, but that dependency could be removed without too much pain, similar to how ncnn does it.

This code isn't production-ready, and it's not super clear how much interest there is in the Vulkan side of things. I think it's quite promising, and I was planning on spending some time looking at codegen, but the difficulty of getting reasonable numbers for small bandwidth-bound kernels was the motivation for working on this in the first place.
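A minimal sketch of this stream model, assuming plain Vulkan without the push-descriptor details; LaunchBatch, pipelines, and group_counts are illustrative names, not the PR's actual API:

```cpp
#include <vulkan/vulkan.h>
#include <cstdint>
#include <vector>

// Illustrative only: record N compute dispatches into one command buffer
// and synchronize once, instead of one submit + fence wait per kernel.
void LaunchBatch(VkDevice device, VkQueue queue, VkCommandBuffer cmd,
                 const std::vector<VkPipeline>& pipelines,
                 const std::vector<uint32_t>& group_counts) {
  VkCommandBufferBeginInfo begin{};
  begin.sType = VK_STRUCTURE_TYPE_COMMAND_BUFFER_BEGIN_INFO;
  begin.flags = VK_COMMAND_BUFFER_USAGE_ONE_TIME_SUBMIT_BIT;
  vkBeginCommandBuffer(cmd, &begin);
  for (size_t i = 0; i < pipelines.size(); ++i) {
    vkCmdBindPipeline(cmd, VK_PIPELINE_BIND_POINT_COMPUTE, pipelines[i]);
    // Descriptor binding elided; with VK_KHR_push_descriptor the kernel
    // arguments can be pushed inline here.
    vkCmdDispatch(cmd, group_counts[i], 1, 1);
    // Between dependent dispatches only a barrier is needed, not a full
    // queue submission; this is where the savings come from.
    VkMemoryBarrier barrier{};
    barrier.sType = VK_STRUCTURE_TYPE_MEMORY_BARRIER;
    barrier.srcAccessMask = VK_ACCESS_SHADER_WRITE_BIT;
    barrier.dstAccessMask = VK_ACCESS_SHADER_READ_BIT;
    vkCmdPipelineBarrier(cmd, VK_PIPELINE_STAGE_COMPUTE_SHADER_BIT,
                         VK_PIPELINE_STAGE_COMPUTE_SHADER_BIT, 0, 1, &barrier,
                         0, nullptr, 0, nullptr);
  }
  vkEndCommandBuffer(cmd);

  // The single synchronization point: one submit, one fence wait.
  VkFenceCreateInfo fence_info{};
  fence_info.sType = VK_STRUCTURE_TYPE_FENCE_CREATE_INFO;
  VkFence fence;
  vkCreateFence(device, &fence_info, nullptr, &fence);
  VkSubmitInfo submit{};
  submit.sType = VK_STRUCTURE_TYPE_SUBMIT_INFO;
  submit.commandBufferCount = 1;
  submit.pCommandBuffers = &cmd;
  vkQueueSubmit(queue, 1, &submit, fence);
  vkWaitForFences(device, 1, &fence, VK_TRUE, UINT64_MAX);
  vkDestroyFence(device, fence, nullptr);
}
```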
If there's interest, we could probably figure out a way to merge this into the existing Vulkan runtime, perhaps gated by a feature flag?
Performance improves for simple pointwise kernels as expected, using a script like: