Add simple data-race detector #1617

Merged
merged 17 commits on Nov 29, 2020
Changes from 1 commit
56 changes: 26 additions & 30 deletions src/data_race.rs
@@ -39,21 +39,14 @@
//!
//! The timestamps used in the data-race detector assign each sequence of non-atomic operations
//! followed by a single atomic or concurrent operation a single timestamp.
//! Write, Read, Write, ThreadJoin will be represented by a single timestamp value on a thread
//! Write, Read, Write, ThreadJoin will be represented by a single timestamp value on a thread.
//! This is because extra increment operations between the operations in the sequence are not
//! required for accurate reporting of data-race values.
//!
//! If the timestamp was not incremented after the atomic operation, then data-races would not be detected:
//! Example - this should report a data-race but does not:
//! t1: (x,0), atomic[release A], t1=(x+1, 0),   write(var B)
//! t2: (0,y), atomic[acquire A], t2=(x+1, y+1), write(var B)
//!
//! The timestamp is not incremented before an atomic operation, since the result is indistinguishable
//! from the value not being incremented.
//! t: (x, 0), atomic[release _], (x + 1, 0) || (0, y), atomic[acquire _], (x, _)
//! vs t: (x, 0), atomic[release _], (x + 1, 0) || (0, y), atomic[acquire _], (x+1, _)
//! Both result in the sequence on thread x up to and including the atomic release as happening
//! before the acquire.
//! As per the paper, a thread's timestamp is only incremented after a release operation is
//! performed, so some atomic operations that only perform acquires do not increment the timestamp.
//! Due to shared code, some atomic operations may increment the timestamp when not necessary,
//! but this has no effect on the data-race detection code.
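The increment-after-release rule described above can be illustrated with a minimal, hypothetical vector-clock sketch. The `Clock` type and its methods here are illustrative stand-ins, not Miri's actual `VClock`/`ThreadClockSet` API:

```rust
// Minimal vector-clock sketch; names are illustrative, not Miri's API.
#[derive(Clone, Debug, PartialEq)]
struct Clock(Vec<u32>);

impl Clock {
    fn new(n: usize) -> Self { Clock(vec![0; n]) }
    /// Pointwise maximum, performed on acquire.
    fn join(&mut self, other: &Clock) {
        for (a, b) in self.0.iter_mut().zip(&other.0) {
            *a = (*a).max(*b);
        }
    }
    /// Advance this thread's own component, done only after a release.
    fn increment(&mut self, thread: usize) { self.0[thread] += 1; }
}

fn main() {
    let (mut t0, mut t1) = (Clock::new(2), Clock::new(2));

    t0.increment(0); // some prior write on thread 0 -> [1, 0]

    // Release on t0: publish the clock, then increment, so that writes
    // performed after the release are not ordered before the acquire.
    let released = t0.clone();
    t0.increment(0); // t0 is now [2, 0]

    // Acquire on t1: join only; no increment, since (as noted above)
    // the result would be indistinguishable either way.
    t1.join(&released);

    assert_eq!(t0.0, vec![2, 0]);
    assert_eq!(t1.0, vec![1, 0]); // t1 sees t0's pre-release write,
    assert!(t1.0[0] < t0.0[0]);   // but not t0's post-release writes.
}
```

With the increment after the release, a write that t0 performs after releasing carries timestamp 2 in slot 0, which t1's acquired clock (1 in slot 0) does not cover, so a race on that location is detectable.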
//!
//! FIXME:
//! currently we have our own local copy of the currently active thread index and names, this is due
@@ -516,7 +509,7 @@ pub trait EvalContextExt<'mir, 'tcx: 'mir>: MiriEvalContextExt<'mir, 'tcx> {
Ok(old)
}

/// Perform an atomic compare and exchange at a given memory location
/// Perform an atomic compare and exchange at a given memory location.
/// On success an atomic RMW operation is performed and on failure
/// only an atomic read occurs.
fn atomic_compare_exchange_scalar(
@@ -640,7 +633,9 @@ pub trait EvalContextExt<'mir, 'tcx: 'mir>: MiriEvalContextExt<'mir, 'tcx> {
// Either Release | AcqRel | SeqCst
clocks.apply_release_fence();
}
Ok(())

// Increment the timestamp if the fence has release semantics.
Ok(atomic != AtomicFenceOp::Acquire)
})
} else {
Ok(())
@@ -651,9 +646,7 @@ pub trait EvalContextExt<'mir, 'tcx: 'mir>: MiriEvalContextExt<'mir, 'tcx> {
/// Vector clock metadata for a logical memory allocation.
#[derive(Debug, Clone)]
pub struct VClockAlloc {
/// Range of vector clocks, this gives each byte a potentially
/// unique set of vector clocks, but merges identical information
/// together for improved efficiency.
/// Assigns each byte a `MemoryCellClocks`.
alloc_ranges: RefCell<RangeMap<MemoryCellClocks>>,

// Pointer to global state.
@@ -935,10 +928,12 @@ trait EvalContextPrivExt<'mir, 'tcx: 'mir>: MiriEvalContextExt<'mir, 'tcx> {
true,
place_ptr,
size,
);
).map(|_| true);
}
}
Ok(())

// This conservatively assumes all operations have release semantics
Ok(true)
})?;

// Log changes to atomic memory.
@@ -1159,6 +1154,7 @@ impl GlobalState {
created.join_with(current);

// Advance both threads after the synchronized operation.
// Both operations are considered to have release semantics.
current.increment_clock(current_index);
created.increment_clock(created_index);
}
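The spawn handshake above can be sketched with plain `Vec<u32>` clocks (hypothetical helper names, not Miri's `ThreadClockSet`): the created thread first joins the spawner's clock, then both clocks are incremented because spawn is treated as a release on both sides:

```rust
// Illustrative spawn handshake; types and names are not Miri's actual API.
fn join(dst: &mut Vec<u32>, src: &[u32]) {
    for (a, b) in dst.iter_mut().zip(src) { *a = (*a).max(*b); }
}

fn spawn_handshake(current: &mut Vec<u32>, created: &mut Vec<u32>,
                   current_idx: usize, created_idx: usize) {
    // Everything the spawner did so far happens-before the new thread.
    let snapshot = current.clone();
    join(created, &snapshot);
    // Both operations are considered to have release semantics,
    // so both clocks advance after the synchronized operation.
    current[current_idx] += 1;
    created[created_idx] += 1;
}

fn main() {
    let mut current = vec![3, 0]; // spawner, index 0
    let mut created = vec![0, 0]; // new thread, index 1
    spawn_handshake(&mut current, &mut created, 0, 1);
    assert_eq!(current, vec![4, 0]);
    assert_eq!(created, vec![3, 1]);
}
```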
@@ -1185,11 +1181,9 @@ impl GlobalState {

// The join thread happens-before the current thread
// so update the current vector clock.
// This is not a release operation, so the clock is not incremented.
current.clock.join(join_clock);

// Increment clocks after atomic operation.
current.increment_clock(current_index);

// Check the number of active threads, if the value is 1
// then test for potentially disabling multi-threaded execution.
let active_threads = self.active_thread_count.get();
@@ -1287,13 +1281,14 @@ impl GlobalState {
/// operation may create.
fn maybe_perform_sync_operation<'tcx>(
&self,
op: impl FnOnce(VectorIdx, RefMut<'_, ThreadClockSet>) -> InterpResult<'tcx>,
op: impl FnOnce(VectorIdx, RefMut<'_, ThreadClockSet>) -> InterpResult<'tcx, bool>,
) -> InterpResult<'tcx> {
if self.multi_threaded.get() {
let (index, clocks) = self.current_thread_state_mut();
op(index, clocks)?;
let (_, mut clocks) = self.current_thread_state_mut();
clocks.increment_clock(index);
if op(index, clocks)? {
let (_, mut clocks) = self.current_thread_state_mut();
clocks.increment_clock(index);
}
}
Ok(())
}
@@ -1313,10 +1308,11 @@ impl GlobalState {

/// Acquire a lock, express that the previous call of
/// `validate_lock_release` must happen before this.
/// As this is an acquire operation, the thread timestamp is not
/// incremented.
pub fn validate_lock_acquire(&self, lock: &VClock, thread: ThreadId) {
let (index, mut clocks) = self.load_thread_state_mut(thread);
let (_, mut clocks) = self.load_thread_state_mut(thread);
clocks.clock.join(&lock);
clocks.increment_clock(index);
}
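The release/acquire asymmetry for locks can be sketched as follows. The helper names are hypothetical (Miri's real `validate_lock_release`/`validate_lock_acquire` operate on `VClock` and `ThreadId`): release publishes the thread's clock into the lock and then increments, while acquire only joins, with no increment:

```rust
// Illustrative lock-clock sketch; names are not Miri's actual API.
fn join(dst: &mut Vec<u32>, src: &[u32]) {
    for (a, b) in dst.iter_mut().zip(src) { *a = (*a).max(*b); }
}

/// Release: publish the releasing thread's clock into the lock, then
/// increment its own component (release semantics advance the timestamp).
fn lock_release(lock: &mut Vec<u32>, thread: &mut Vec<u32>, idx: usize) {
    lock.clone_from(thread);
    thread[idx] += 1;
}

/// Acquire: only join with the lock's clock; the timestamp is not
/// incremented, matching `validate_lock_acquire` above.
fn lock_acquire(lock: &[u32], thread: &mut Vec<u32>) {
    join(thread, lock);
}

fn main() {
    let mut lock = vec![0, 0];
    let mut t0 = vec![2, 0]; // thread 0
    let mut t1 = vec![0, 5]; // thread 1

    lock_release(&mut lock, &mut t0, 0); // lock = [2,0], t0 = [3,0]
    lock_acquire(&lock, &mut t1);        // t1 = [2,5]; no increment

    assert_eq!(lock, vec![2, 0]);
    assert_eq!(t0, vec![3, 0]);
    assert_eq!(t1, vec![2, 5]);
}
```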

/// Release a lock handle, express that this happens-before
@@ -1335,8 +1331,8 @@ impl GlobalState {
/// any subsequent calls to `validate_lock_acquire` as well
/// as any previous calls to this function after any
/// `validate_lock_release` calls.
/// For normal locks this should be equivalent to `validate_lock_release`
/// this function only exists for joining over the set of concurrent readers
/// For normal locks this should be equivalent to `validate_lock_release`.
/// This function only exists for joining over the set of concurrent readers
/// in a read-write lock and should not be used for anything else.
pub fn validate_lock_release_shared(&self, lock: &mut VClock, thread: ThreadId) {
let (index, mut clocks) = self.load_thread_state_mut(thread);
16 changes: 0 additions & 16 deletions src/shims/posix/sync.rs
@@ -62,8 +62,6 @@ fn mutex_get_kind<'mir, 'tcx: 'mir>(
mutex_op: OpTy<'tcx, Tag>,
) -> InterpResult<'tcx, ScalarMaybeUninit<Tag>> {
let offset = if ecx.pointer_size().bytes() == 8 { 16 } else { 12 };
//FIXME: this has been made atomic to fix data-race reporting inside the internal
// mutex implementation, it may not need to be atomic.
ecx.read_scalar_at_offset_atomic(
mutex_op, offset, ecx.machine.layouts.i32,
AtomicReadOp::Relaxed
@@ -76,8 +74,6 @@ fn mutex_set_kind<'mir, 'tcx: 'mir>(
kind: impl Into<ScalarMaybeUninit<Tag>>,
) -> InterpResult<'tcx, ()> {
let offset = if ecx.pointer_size().bytes() == 8 { 16 } else { 12 };
//FIXME: this has been made atomic to fix data-race reporting inside the internal
// mutex implementation, it may not need to be atomic.
ecx.write_scalar_at_offset_atomic(
mutex_op, offset, kind, ecx.machine.layouts.i32,
AtomicWriteOp::Relaxed
@@ -88,8 +84,6 @@ fn mutex_get_id<'mir, 'tcx: 'mir>(
ecx: &MiriEvalContext<'mir, 'tcx>,
mutex_op: OpTy<'tcx, Tag>,
) -> InterpResult<'tcx, ScalarMaybeUninit<Tag>> {
//FIXME: this has been made atomic to fix data-race reporting inside the internal
// mutex implementation, it may not need to be atomic.
ecx.read_scalar_at_offset_atomic(
mutex_op, 4, ecx.machine.layouts.u32,
AtomicReadOp::Relaxed
@@ -101,8 +95,6 @@ fn mutex_set_id<'mir, 'tcx: 'mir>(
mutex_op: OpTy<'tcx, Tag>,
id: impl Into<ScalarMaybeUninit<Tag>>,
) -> InterpResult<'tcx, ()> {
//FIXME: this has been made atomic to fix data-race reporting inside the internal
// mutex implementation, it may not need to be atomic.
ecx.write_scalar_at_offset_atomic(
mutex_op, 4, id, ecx.machine.layouts.u32,
AtomicWriteOp::Relaxed
@@ -135,8 +127,6 @@ fn mutex_get_or_create_id<'mir, 'tcx: 'mir>(
fn rwlock_get_id<'mir, 'tcx: 'mir>(
ecx: &MiriEvalContext<'mir, 'tcx>,
rwlock_op: OpTy<'tcx, Tag>,
//FIXME: this has been made atomic to fix data-race reporting inside the internal
// rw-lock implementation, it may not need to be atomic.
) -> InterpResult<'tcx, ScalarMaybeUninit<Tag>> {
ecx.read_scalar_at_offset_atomic(
rwlock_op, 4, ecx.machine.layouts.u32,
@@ -149,8 +139,6 @@ fn rwlock_set_id<'mir, 'tcx: 'mir>(
rwlock_op: OpTy<'tcx, Tag>,
id: impl Into<ScalarMaybeUninit<Tag>>,
) -> InterpResult<'tcx, ()> {
//FIXME: this has been made atomic to fix data-race reporting inside the internal
// rw-lock implementation, it may not need to be atomic.
ecx.write_scalar_at_offset_atomic(
rwlock_op, 4, id, ecx.machine.layouts.u32,
AtomicWriteOp::Relaxed
@@ -207,8 +195,6 @@ fn cond_get_id<'mir, 'tcx: 'mir>(
ecx: &MiriEvalContext<'mir, 'tcx>,
cond_op: OpTy<'tcx, Tag>,
) -> InterpResult<'tcx, ScalarMaybeUninit<Tag>> {
//FIXME: this has been made atomic to fix data-race reporting inside the internal
// cond-var implementation, it may not need to be atomic.
ecx.read_scalar_at_offset_atomic(
cond_op, 4, ecx.machine.layouts.u32,
AtomicReadOp::Relaxed
@@ -220,8 +206,6 @@ fn cond_set_id<'mir, 'tcx: 'mir>(
cond_op: OpTy<'tcx, Tag>,
id: impl Into<ScalarMaybeUninit<Tag>>,
) -> InterpResult<'tcx, ()> {
//FIXME: this has been made atomic to fix data-race reporting inside the internal
// cond-var implementation, it may not need to be atomic.
ecx.write_scalar_at_offset_atomic(
cond_op, 4, id, ecx.machine.layouts.u32,
AtomicWriteOp::Relaxed