Consolidate address calculations for atomics #3143
Conversation
This commit consolidates all calculations of guest addresses into one `prepare_addr` function. This notably removes the atomics-specific paths as well as the `prepare_load` function (now renamed to `prepare_addr` and folded into `get_heap_addr`). The goal of this commit is to simplify how addresses are managed in the code generator for atomics so that they use all the shared infrastructure of other loads/stores as well. This additionally fixes bytecodealliance#3132 via the use of `heap_addr` in clif for all operations.

I also added a number of tests for loads/stores with varying alignments. Originally I was going to allow loads/stores to be unaligned since that's what the current formal specification says, but the overview of the threads proposal disagrees with the formal specification, so I figured I'd leave it as-is; adding tests probably doesn't hurt.

Closes bytecodealliance#3132
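To illustrate what a single consolidated address-computation path looks like, here is a minimal, hypothetical sketch in plain Rust. It is not Wasmtime's actual `prepare_addr` (which emits CLIF instructions such as `heap_addr`); the names and signature are assumptions made for illustration. It shows the two checks such a helper unifies: the alignment trap required for atomic accesses and the linear-memory bounds check shared by all loads/stores.

```rust
/// Hypothetical model of a consolidated `prepare_addr`-style helper.
/// `addr` is the guest (wasm linear-memory) address operand, `offset` is the
/// static offset immediate, `access_size` is the width of the access in bytes,
/// and `must_be_aligned` is true for atomic operations.
fn prepare_addr(
    memory_len: u64,
    addr: u64,
    offset: u64,
    access_size: u64,
    must_be_aligned: bool,
) -> Result<u64, &'static str> {
    // Compute the effective guest address, trapping on overflow.
    let effective = addr.checked_add(offset).ok_or("out of bounds")?;

    // Atomic accesses trap on misalignment; plain loads/stores do not.
    if must_be_aligned && effective % access_size != 0 {
        return Err("misaligned atomic");
    }

    // Bounds check against the linear memory length -- the role `heap_addr`
    // plays in CLIF for all operations after this change.
    let end = effective.checked_add(access_size).ok_or("out of bounds")?;
    if end > memory_len {
        return Err("out of bounds");
    }

    Ok(effective)
}

fn main() {
    // An aligned, in-bounds 4-byte atomic access succeeds.
    assert_eq!(prepare_addr(65536, 0, 4, 4, true), Ok(4));
    // The same address is fine for a plain load even when misaligned.
    assert_eq!(prepare_addr(65536, 1, 0, 4, false), Ok(1));
    // But a misaligned atomic traps.
    assert_eq!(prepare_addr(65536, 1, 0, 4, true), Err("misaligned atomic"));
    println!("ok");
}
```

The point of folding the atomic paths into this one function is that the bounds-check logic (and any guard-page configuration behind it) is written once, so atomics can no longer diverge from ordinary loads/stores.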
LGTM; thanks very much for this, it's a lot simpler and cleaner than what we had before!
Very good catch on the wasm linear address vs. native pointer documentation and signature fix.
It's not immediately apparent to me what's going on with the aarch64 failure (the misalignment check looks like it should be platform-agnostic); happy to help debug if you're not sure either.
Looks like #3144 is what this was running into. The trap showing up was different for emulation vs non-emulation (due to configurations around guard pages and linear memory). I'm not sure which trap should be showing up, so I've modified the tests to allow only one possible trap instead of two.
(Branch updated from a0b9e45 to 87885e8.)
Hm, well, actually, thinking about it more: the in-progress spec does specify that alignment checks happen first, and I think that's the most reasonable thing to do, so I'm going to go ahead and implement that. The codegen is inefficient in that it generates an
Yeah, GVN may actually combine the adds; if it doesn't, we might be able to do something special for this later.
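The ordering the spec prescribes can be made concrete with a small sketch: when an atomic access is both misaligned and out of bounds, the misalignment trap must win because the alignment check runs first. This is a hypothetical model written for this discussion, not Wasmtime or Cranelift code.

```rust
/// Traps an atomic access can raise, in the order they are checked.
#[derive(Debug, PartialEq)]
enum Trap {
    MisalignedAtomic,
    OutOfBounds,
}

/// Models the check ordering from the in-progress threads spec:
/// alignment is validated before bounds.
fn check_atomic_access(memory_len: u64, addr: u64, size: u64) -> Result<(), Trap> {
    // 1. Alignment check happens first.
    if addr % size != 0 {
        return Err(Trap::MisalignedAtomic);
    }
    // 2. Then the bounds check; `addr + size` here is the "extra add" that
    //    a later optimization pass (e.g. GVN) might merge with the address
    //    computation for the access itself.
    match addr.checked_add(size) {
        Some(end) if end <= memory_len => Ok(()),
        _ => Err(Trap::OutOfBounds),
    }
}

fn main() {
    // Misaligned AND out of bounds: misalignment is reported.
    assert_eq!(check_atomic_access(16, 17, 4), Err(Trap::MisalignedAtomic));
    // Aligned but out of bounds: bounds trap.
    assert_eq!(check_atomic_access(16, 16, 4), Err(Trap::OutOfBounds));
    // Aligned and in bounds: no trap.
    assert_eq!(check_atomic_access(16, 8, 4), Ok(()));
    println!("ok");
}
```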