Fmla support added #285
Conversation
Hi @adcastel, thanks for the PR!
Codecov Report
@@ Coverage Diff @@
## master #285 +/- ##
==========================================
+ Coverage 86.28% 86.30% +0.01%
==========================================
Files 73 73
Lines 16386 16416 +30
==========================================
+ Hits 14139 14168 +29
- Misses 2247 2248 +1
Looks good; please add the corresponding tests.
Vfmla test added
Thanks for adding a test! Please use golden
functionality to compare the result.
tests/test_neon.py
Outdated
@@ -83,7 +83,53 @@ def simple_math_neon(n: size, x: R[n] @ DRAM, y: R[n] @ DRAM): # pragma: no cov
    fn(None, n, x, y)
    assert np.allclose(x, expected)

#@pytest.fixture
Suggested change:
-#@pytest.fixture
+@pytest.mark.isa("neon")
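The suggested `@pytest.mark.isa("neon")` marker tags the test as NEON-specific so it can be skipped on machines without that ISA. How the marker is wired up is not shown in this thread; the following is a common, hypothetical way to implement such a marker in a `conftest.py` (an assumption, not Exo's actual implementation):

```python
import platform
import pytest

# Machines on which each ISA's tests can actually execute (illustrative).
_ISA_MACHINES = {"neon": {"arm64", "aarch64"}}

def isa_available(isa: str, machine: str = "") -> bool:
    """Return True if tests marked @pytest.mark.isa(isa) can run on `machine`."""
    machine = machine or platform.machine()
    return machine in _ISA_MACHINES.get(isa, set())

# conftest.py hook: skip ISA-marked tests on machines lacking that ISA.
def pytest_runtest_setup(item):
    marker = item.get_closest_marker("isa")
    if marker is not None and not isa_available(marker.args[0]):
        pytest.skip(f"requires {marker.args[0]} ISA")
```

With this in place, running the suite on an x86 machine skips the NEON tests instead of failing while trying to compile ARM intrinsics.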
tests/test_neon.py
Outdated
p = test_neon_vfmla()
print(p)
print(p)
tests/test_neon.py
Outdated
p = test_neon_vfmla()
p = test_neon_vfmla()
tests/test_neon.py
Outdated
p = autofission(p, p.find('B_vec[_] = _').after(), n_lifts=2)
p = replace(p, 'for l in _: _ #0', neon_vld_4xf32)
p = set_memory(p, 'B_vec', Neon4f)
print(p)
print(p)
LGTM, thanks!
This adds support for the NEON fmla assembly instruction.
It works with microkernel generation.
I can show the resulting code in the next meeting.
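For context, NEON `fmla` is a fused multiply-accumulate: across the four float32 lanes of a vector register it computes `dst[i] += a[i] * b[i]` in a single instruction, which is why it is the workhorse of microkernel inner loops. A NumPy reference model of those semantics (the function name is hypothetical, not code from this PR):

```python
import numpy as np

def vfmla_ref(dst: np.ndarray, a: np.ndarray, b: np.ndarray) -> np.ndarray:
    """Reference semantics of a 4-lane fmla: dst[i] += a[i] * b[i] per lane."""
    assert dst.shape == a.shape == b.shape == (4,)
    dst += a * b  # in hardware, the multiply and add are fused (single rounding)
    return dst
```

A hardware `fmla` additionally rounds only once after the multiply-add, so results can differ from this two-step model in the last bit; tests comparing against a NumPy reference usually allow for that with `np.allclose`, as the existing tests in `tests/test_neon.py` do.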