Closed
Labels
- A-atomic (Area: Atomics, barriers, and sync primitives)
- A-codegen (Area: Code generation)
- A-intrinsics (Area: Intrinsics)
- A-strict-provenance (Area: Strict provenance for raw pointers)
- C-optimization (Category: An issue highlighting optimization opportunities or PRs implementing such)
Description
Currently, the type of our atomic RMW intrinsics looks like

```rust
fn atomic_xadd_seqcst<T: Copy>(_dst: *mut T, _src: T) -> T
```

However, this is not quite what we want: for atomic operations on a pointer, we want `dst` to be something like `*mut *mut T`, but `src` should be `usize`. The return type should be `*mut T`.
This would let us avoid some unnecessary casts in `AtomicPtr`, and shift the burden of mapping this operation to something LLVM supports into the backend. It also makes the semantics of these operations clearer: only the provenance of the in-memory data matters; `src` carries no provenance.
scottmcm