Further reduction in allocations (from callers) #54
Conversation
Force-pushed from 1afa550 to 9bbaca9 (compare)
Code looks great! There's one remaining allocation (24 B/op) in BenchmarkLockUnlock. Do we know what it is? Is it the beforeAfter key alloc in preLock (l.order)? Have you tried running the stress tests under -race?
@sasha-s The last remaining alloc is the stackGID entry that's appended to the holders slice when it needs to grow past its pooled capacity.
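A minimal standalone sketch of that behaviour, with a made-up pooled capacity and a placeholder stackGID type (illustrative only, not the library's actual code):

```go
package main

import (
	"fmt"
	"sync"
)

// Placeholder for the real holder entry; the exact fields don't matter for
// the allocation behaviour being illustrated.
type stackGID struct {
	gid   int64
	stack []uintptr
}

// Assumed pooled capacity of 4; the real value is whatever the pool hands out.
var holdersPool = sync.Pool{
	New: func() any { return make([]stackGID, 0, 4) },
}

func main() {
	holders := holdersPool.Get().([]stackGID)

	// Appends within the pooled capacity reuse the pooled backing array...
	for i := 0; i < cap(holders); i++ {
		holders = append(holders, stackGID{gid: int64(i)})
	}

	// ...but one more append grows past cap and allocates a fresh backing
	// array -- the kind of residual allocation seen in BenchmarkLockUnlock.
	holders = append(holders, stackGID{gid: -1})

	fmt.Println("len:", len(holders), "cap:", cap(holders))
	holdersPool.Put(holders[:0]) // return the (now larger) slice to the pool
}
```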
Just ran the tests with -race; one test fails, but it also fails on master. Going to see if it fails prior to my first allocations commit as well.
Okay, yeah, TestLockDuplicate also fails the race check at 7c2aeed ("fix: use weak pointers in lock order map to prevent GC leak"). I'll try to fix this test now.
Thank you.
Other changes: 1. add new unit tests; 2. fix existing -race unit test failures.
Force-pushed from 9bbaca9 to c7663db (compare)
Fixed! The issue is that the tests write and read Opts fields like Opts.DeadlockTimeout (not really something that happens in production), but sequential tests will do this, and leaked goroutines from previous tests read Opts.* while Opts.* is being written to in new tests. Caching Opts.DeadlockTimeout locally, so leaked goroutines never read Opts.DeadlockTimeout again, solves this. Also removed a few "restore" calls, since those could cause races on Opts.* fields between subsequent tests as well.
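A hedged sketch of the caching idea, with a stand-in Opts struct and a hypothetical watcher goroutine (the real code paths differ):

```go
package main

import (
	"time"
)

// Stand-in for the library's global options; in the real package this is
// deadlock.Opts, which the tests mutate between runs.
var Opts = struct {
	DeadlockTimeout time.Duration
}{DeadlockTimeout: 30 * time.Second}

// watchForDeadlock reads Opts.DeadlockTimeout exactly once, up front. A
// goroutine leaked from an earlier test then never touches the global again,
// so a later test rewriting Opts.DeadlockTimeout cannot race with it.
func watchForDeadlock(done <-chan struct{}) {
	timeout := Opts.DeadlockTimeout // cached locally; no later reads of Opts

	t := time.NewTimer(timeout)
	defer t.Stop()

	select {
	case <-done:
		// lock released in time
	case <-t.C:
		// would report a potential deadlock after `timeout`
	}
}

func main() {
	done := make(chan struct{})
	go watchForDeadlock(done)
	close(done)
	time.Sleep(10 * time.Millisecond) // give the watcher a moment to exit
}
```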
Thank you @kevin-pan-skydio
Continuation of the allocation-reduction work in dcbba57 (which replaced per-lock goroutines and channels with pooled pendingEntry structs and time.AfterFunc). This change tackles the remaining hot-path allocations: the stack-trace buffer allocated on every callers() call and the []stackGID holder slices grown in postLock.

What changed

- stackBufPool -> callers() now draws its [50]uintptr backing array from a sync.Pool instead of make([]uintptr, 50) on every lock. The buffer pointer is threaded through postLock -> stackGID.buf -> postUnlock -> releaseStackBuf so it gets returned to the pool when the lock is released. (A hedged sketch of this pattern follows the list.)
- holdersPool -> the []stackGID slices stored in lo.cur[p] are recycled. When a mutex has no remaining holders the backing slice is returned to a pool; postLock pulls from this pool before allocating a new one.
- copyStack for l.order -> because stacks are now backed by pooled buffers that get recycled, entries persisted in the lock-order map (l.order) need independent copies via copyStack() to avoid use-after-recycle bugs.
- New tests: TestManyReadersFewWriters (100 readers, 3 writers), TestConcurrentLockOrderDetection (20 goroutines triggering order violations), and TestDeadlockTimeoutTransientNoHolder (exercises the reschedule path when the deadlock timer fires during a transient no-holder window).
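The sketch below shows the pooling pattern described above. The names mirror the description (stackBufPool, callers, releaseStackBuf, copyStack), but the bodies are illustrative, not the actual diff:

```go
package main

import (
	"fmt"
	"runtime"
	"sync"
)

// Pool of fixed-size stack buffers, so capturing a stack no longer does
// make([]uintptr, 50) on every lock.
var stackBufPool = sync.Pool{
	New: func() any { return new([50]uintptr) },
}

// callers captures the current stack into a pooled buffer. The caller is
// responsible for handing buf back via releaseStackBuf once the lock is
// released (in the PR the pointer travels postLock -> stackGID.buf ->
// postUnlock -> releaseStackBuf).
func callers(skip int) (pcs []uintptr, buf *[50]uintptr) {
	buf = stackBufPool.Get().(*[50]uintptr)
	n := runtime.Callers(skip+2, buf[:])
	return buf[:n], buf
}

// releaseStackBuf returns a buffer to the pool.
func releaseStackBuf(buf *[50]uintptr) {
	stackBufPool.Put(buf)
}

// copyStack makes an independent copy for stacks that outlive the pooled
// buffer, e.g. entries persisted in the lock-order map.
func copyStack(pcs []uintptr) []uintptr {
	out := make([]uintptr, len(pcs))
	copy(out, pcs)
	return out
}

func main() {
	pcs, buf := callers(0)
	persisted := copyStack(pcs) // safe to keep after the buffer is recycled
	releaseStackBuf(buf)
	fmt.Println("captured", len(persisted), "frames")
}
```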
Benchmarks

Prior benchmark:
Test plan
- Existing tests pass (go test ./...)
- Benchmarks with -benchmem -count=3 confirm the allocation reduction
- TestDeadlockTimeoutTransientNoHolder validates no false-positive deadlock reports from the reschedule path