[HEXAGON] Auto-vectorization (fp16) for v68 #12397
Conversation
Hi @kparzysz-quic,

@tvm-bot rerun
    if not llvm_options:
        llvm_options = ""
    llvm_options += " -force-hvx-float"
Does this flag impose any new requirements regarding which LLVM versions TVM needs?
Also, are there any potential downsides or other side-effects of using this flag? Asking because with compiler flags, the word "force" sometimes implies that the resulting behavior isn't always a good idea.
It will work with any LLVM version that supports v69. It's fine to use it.
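For context, a minimal hedged sketch of how the appended flag would reach the LLVM backend. This is not the code from the PR: the logic below only mirrors the diff, and the llvm_options keyword of tvm.target.hexagon() is assumed from the diff context rather than confirmed here.

    # Hedged sketch, not the code from this PR: mirror the diff's string
    # handling and show where such options would be passed when creating
    # a Hexagon target. The llvm_options keyword of tvm.target.hexagon()
    # is an assumption based on the diff context.
    import tvm

    def append_hvx_float(llvm_options=None):
        # Same logic as the diff: start from the caller's options (if
        # any) and append the flag that forces HVX float (fp16) codegen.
        if not llvm_options:
            llvm_options = ""
        llvm_options += " -force-hvx-float"
        return llvm_options

    print(append_hvx_float())        # " -force-hvx-float"
    print(append_hvx_float("-O2"))   # "-O2 -force-hvx-float"

    # Creating a v68 target; extra LLVM flags can be supplied the same way.
    target = tvm.target.hexagon("v68", llvm_options="-force-hvx-float")
    print(target)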
@tvm-bot rerun
* Auto-vectorization (fp16) for v68
* use tvm.testing.main in fp16 test of tanh_slice op
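As a rough illustration of the second commit, TVM test files use tvm.testing.main() as their entry point so the standard test infrastructure is applied when the file is run directly. The test body below is a placeholder, not the actual tanh_slice test from this PR.

    # Hedged sketch of the tvm.testing.main() pattern named in the
    # commit message above; the test body is a placeholder, not the
    # real tanh_slice test from this PR.
    import numpy as np
    import tvm.testing

    def test_tanh_fp16():
        # Placeholder check; the real test builds and runs the fp16
        # tanh slice op on a Hexagon target or simulator.
        out = np.tanh(np.ones((64,), dtype="float16"))
        assert out.shape == (64,)

    if __name__ == "__main__":
        # Entry point used across TVM test files; runs pytest on this file.
        tvm.testing.main()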
Thanks for contributing to TVM! Please refer to the guideline https://tvm.apache.org/docs/contribute/ for useful information and tips. After the pull request is submitted, please request code reviews from Reviewers by @-mentioning them in the pull request thread.
cc @mehrdadh