[CORE-2698] Transaction log store #149
base: main
Conversation
📊 Benchmark Results

encoding.bench.ts
Key encoding > ordered-binary keys - strings (100 records)
Key encoding > ordered-binary keys - numbers (100 records)
Key encoding > ordered-binary keys - mixed types (100 records)
Value encoding > msgpack values - strings (100 records)
Value encoding > msgpack values - numbers (100 records)
Value encoding > msgpack values - arrays (100 records)
Value encoding > msgpack values - small objects (100 records)
Value encoding > msgpack values - large objects (100 records)
get-sync.bench.ts
getSync() > random keys - small key size (100 records)
get.bench.ts
get() > rocksdb - random vs sequential keys (100 records)
get() > random keys - max 1978 lmdb key size (100 records)
get() > rocksdb - async vs sync
put-sync.bench.ts
putSync() > random keys - insert - small key size (100 records)
putSync() > random keys - update - small key size (100 records)
putSync() > random keys - insert - max 1978 lmdb key size (100 records)
putSync() > random keys - update - max 1978 lmdb key size (100 records)
putSync() > sequential keys - insert (100 records)
putSync() > put 100KB value (100 records)
putSync() > put 1MB value (100 records)
putSync() > get 10MB value (100 records)
put.bench.ts
put > small dataset (100 records)
put > async vs sync overhead
ranges.bench.ts
getRange() > small range (100 records, 50 range)
getRange() > full scan vs range scan
getKeys() > keys only (100 records, 50 range)
Reverse iteration > reverse range (100 records, 50 range)
Reverse iteration > rocksdb - reverse vs forward
Range query patterns > prefix scan performance
Sparse data patterns > sparse - range over gaps
Sparse data patterns > sparse - prefix with gaps
remove-sync.bench.ts
removeSync() > random keys - small key size (100 records)
removeSync() > sequential keys - small key size (100 records)
removeSync() > rocksdb - random vs sequential keys (100 records)
removeSync() > random keys - max 1978 lmdb key size (100 records)
removeSync() > random access pattern (100 records)
removeSync() > non-existent keys (100 records)
transaction-sync.bench.ts
transaction sync > optimistic > simple put operations (100 records)
transaction sync > optimistic > batch operations (100 records)
transaction sync > optimistic > read-write operations (100 records)
transaction sync > optimistic > concurrent non-conflicting operations (100 records)
transaction sync > optimistic > rollback operations (100 records)
transaction sync > optimistic > rocksdb - large transaction vs many small
transaction sync > optimistic > lmdb - large transaction vs many small
transaction sync > optimistic > empty transaction overhead
transaction sync > optimistic > transaction with only reads (100 records)
transaction sync > pessimistic > simple put operations (100 records)
transaction.bench.ts
transaction > optimistic > simple put operations (100 records)
transaction > optimistic > batch operations (100 records)
transaction > optimistic > read-write operations (100 records)
transaction > optimistic > concurrent non-conflicting operations (100 records)
transaction > optimistic > rollback operations (100 records)
transaction > optimistic > rocksdb - large transaction vs many small
transaction > optimistic > lmdb - large transaction vs many small
transaction > optimistic > empty transaction overhead
transaction > optimistic > transaction with only reads (100 records)
transaction > pessimistic > simple put operations (100 records)
worker-get-sync.bench.ts
Worker > random keys - small key size (100 records, 1 worker)
Worker > random keys - small key size (100 records, 2 workers)
Worker > random keys - small key size (100 records, 10 workers)
worker-put-sync.bench.ts
putSync() > random keys - small key size (100 records, 1 worker)
putSync() > random keys - small key size (100 records, 2 workers)
putSync() > random keys - small key size (100 records, 10 workers)
Results from commit 5160fa8
// For writes at a specific offset, update the stored size if we wrote beyond the current end.
size_t newEnd = static_cast<size_t>(offset) + static_cast<size_t>(bytesWritten);
size_t currentSize = this->size.load();
// compare_exchange_weak reloads currentSize on failure, so the loop retries until
// either the CAS succeeds or another writer has already published a larger size.
while (newEnd > currentSize && !this->size.compare_exchange_weak(currentSize, newEnd)) {
FWIW, I think calls to write to the log file would need to be synchronized; I think we would expect a lock at a higher level (in the eventual commit that calls this) to ensure we acquire a mutex, do all the writes for a transaction, increment the offset, and then release the mutex, so anything inside this function can expect that we are already in synchronized/locked execution for the file.
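A minimal sketch of the commit-level locking described above, using hypothetical names (LogStore, commitMutex, writeAt) for illustration only; this is not the PR's actual API, and the in-memory buffer stands in for the real log file:

#include <algorithm>
#include <atomic>
#include <cstddef>
#include <mutex>
#include <string>
#include <vector>

// Hypothetical sketch: LogStore, commitMutex, and writeAt are assumed names.
struct LogStore {
    std::mutex commitMutex;        // serializes whole-transaction commits
    std::atomic<size_t> size{0};   // logical end of the log
    std::vector<char> data;        // in-memory stand-in for the log file

    // Assumed low-level positional write; callers are expected to hold commitMutex.
    size_t writeAt(size_t offset, const std::string& bytes) {
        if (data.size() < offset + bytes.size())
            data.resize(offset + bytes.size());
        std::copy(bytes.begin(), bytes.end(), data.begin() + offset);
        return bytes.size();
    }

    // Acquire the mutex, perform all writes for one transaction, advance the
    // end-of-log offset once, then release: the pattern the comment describes.
    void commit(const std::vector<std::string>& entries) {
        std::lock_guard<std::mutex> lock(commitMutex);
        size_t offset = size.load();
        for (const auto& e : entries)
            offset += writeAt(offset, e);
        size.store(offset);  // single publication of the new end offset
    }
};

Under this assumption, the per-write CAS loop in the diff would only be defending against a race that the commit-level mutex already prevents, which matches the reviewer's point that the function can assume it runs in already-synchronized execution for the file.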
WIP