consensus/ethash: improve cache/dataset handling #15864
Conversation
There are two fixes in this commit:

- Unmap the memory through a finalizer, like the libethash wrapper did. The release logic was incorrect and freed the memory while it was still being used, leading to crashes like in ethereum#14495 or ethereum#14943.
- Track caches and datasets using simplelru instead of reinventing LRU logic. This should make it easier to see whether it's correct.
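The finalizer fix can be sketched in isolation. This is a hedged, stdlib-only illustration of the pattern (the type and function names here are hypothetical, and the real code unmaps an mmapped region rather than a plain slice):

```go
package main

import (
	"fmt"
	"runtime"
)

// memMap stands in for an mmap-backed cache; not an actual go-ethereum type.
type memMap struct {
	data []byte // would be the mmapped region in the real code
}

// release would call munmap on the real region; here it just drops the slice
// so the sketch stays runnable without syscalls.
func (m *memMap) release() {
	m.data = nil
}

// newMemMap allocates the region and registers a finalizer, so the memory is
// only unmapped once the garbage collector proves nobody can still reach it --
// avoiding the premature-free crashes described above.
func newMemMap(size int) *memMap {
	m := &memMap{data: make([]byte, size)}
	runtime.SetFinalizer(m, (*memMap).release)
	return m
}

func main() {
	m := newMemMap(64)
	m.data[0] = 42
	fmt.Println(len(m.data), m.data[0])
}
```

The point of the pattern is that no explicit reference counting is needed: the GC itself decides when the mapping is unreachable.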
karalabe
left a comment
Minor nitpick, otherwise seems good to me.
```go
	return calcCacheSize(epoch)
}

func calcCacheSize(epoch int) uint64 {
```
Please document this method. I know it's private, but all methods in ethash are fully documented and I'd like to keep it so.
```go
	return calcDatasetSize(epoch)
}

func calcDatasetSize(epoch int) uint64 {
```
Please document this method. I know it's private, but all methods in ethash are fully documented and I'd like to keep it so.
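For context, both size calculations follow the ethash spec: grow the size linearly per epoch, then back off until the row count is prime. A runnable sketch of the spec's formulas (not the exact go-ethereum code):

```go
package main

import "fmt"

// Constants from the ethash specification.
const (
	cacheInitBytes     = 1 << 24 // bytes in cache at genesis
	cacheGrowthBytes   = 1 << 17 // cache growth per epoch
	datasetInitBytes   = 1 << 30 // bytes in dataset at genesis
	datasetGrowthBytes = 1 << 23 // dataset growth per epoch
	hashBytes          = 64      // hash length in bytes
	mixBytes           = 128     // width of the dataset mix
)

// isPrime uses naive trial division; fine for the magnitudes involved here.
func isPrime(n uint64) bool {
	if n < 2 {
		return false
	}
	for i := uint64(2); i*i <= n; i++ {
		if n%i == 0 {
			return false
		}
	}
	return true
}

// calcCacheSize computes the verification cache size for an epoch: start on
// the linear growth curve, then shrink until the 64-byte row count is prime.
func calcCacheSize(epoch int) uint64 {
	size := cacheInitBytes + cacheGrowthBytes*uint64(epoch) - hashBytes
	for !isPrime(size / hashBytes) {
		size -= 2 * hashBytes
	}
	return size
}

// calcDatasetSize does the same for the mining dataset with 128-byte rows.
func calcDatasetSize(epoch int) uint64 {
	size := datasetInitBytes + datasetGrowthBytes*uint64(epoch) - mixBytes
	for !isPrime(size / mixBytes) {
		size -= 2 * mixBytes
	}
	return size
}

func main() {
	// Epoch 0 values match the spec's pre-generated tables.
	fmt.Println(calcCacheSize(0), calcDatasetSize(0)) // 16776896 1073739904
}
```

The prime row count matters because ethash's FNV-based lookups index modulo the row count, and a prime modulus avoids cycling over a small subset of rows.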
```go
	}
}

func TestCacheFileEvict(t *testing.T) {
```
Please add a small doc describing what this tests.
Using mmap even in test mode makes it possible to shorten the time taken for TestCacheFileEvict.
@karalabe PTAL
karalabe
left a comment
Fixed some tiny nitpicks, otherwise LGTM.
```go
func datasetSize(block uint64) uint64 {
	// If we have a pre-generated value, use that
	epoch := int(block / epochLength)
	if epoch < len(datasetSizes) {
```
`len(datasetSizes)` -> `maxEpoch` (you changed this everywhere else).
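The pattern under review is a pre-generated lookup table bounded by a `maxEpoch` constant, with a computed fallback beyond it. A minimal sketch, assuming a tiny illustrative table (go-ethereum's real table is far larger, and the fallback is the actual prime-search computation, stubbed out here):

```go
package main

import "fmt"

const (
	maxEpoch    = 3     // illustrative only; the real constant is much larger
	epochLength = 30000 // blocks per ethash epoch
)

// datasetSizes holds pre-generated sizes; these are the spec's values for
// epochs 0-2.
var datasetSizes = [maxEpoch]uint64{1073739904, 1082130304, 1090514816}

// calcDatasetSize stands in for the on-demand computation used once the
// table runs out; returning 0 just makes the fallback path visible.
func calcDatasetSize(epoch int) uint64 {
	return 0 // placeholder for the real prime-search computation
}

// datasetSize returns the dataset size for a block: table lookup for known
// epochs, computed fallback otherwise. Bounds-checking against maxEpoch
// rather than len(datasetSizes) is the nitpick from the review above.
func datasetSize(block uint64) uint64 {
	epoch := int(block / epochLength)
	if epoch < maxEpoch {
		return datasetSizes[epoch]
	}
	return calcDatasetSize(epoch)
}

func main() {
	fmt.Println(datasetSize(0), datasetSize(90000)) // table hit, then fallback
}
```

Using the named constant keeps the array length and the bounds check in sync by construction instead of by convention.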
```go
	futureItem interface{}
}

func newlru(what string, maxItems int, new func(epoch uint64) interface{}) *lru {
```
// newlru creates a new least-recently-used cache for either the verification
// caches or the mining datasets.
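The "future item" logic mentioned in the commit list can be sketched as follows. This is a hedged, stdlib-only illustration: the real wrapper delegates eviction to hashicorp/golang-lru's simplelru, whereas this sketch uses a plain map with no eviction to stay self-contained.

```go
package main

import "fmt"

const maxEpoch = 2048 // illustrative bound

// lru caches per-epoch items and additionally pins the item for the next
// epoch (the "future item") so crossing an epoch boundary does not stall
// while the next cache/dataset is generated from scratch.
type lru struct {
	what       string
	new        func(epoch uint64) interface{}
	cache      map[uint64]interface{} // stand-in for a real bounded LRU
	future     uint64
	futureItem interface{}
}

// newlru creates a new least-recently-used cache for either the verification
// caches or the mining datasets.
func newlru(what string, new func(epoch uint64) interface{}) *lru {
	return &lru{what: what, new: new, cache: make(map[uint64]interface{})}
}

// get retrieves or generates the item for an epoch, promoting a previously
// pre-generated future item when possible, and speculatively generates the
// item for the following epoch.
func (l *lru) get(epoch uint64) (item, future interface{}) {
	item, ok := l.cache[epoch]
	if !ok {
		if l.futureItem != nil && l.future == epoch {
			item = l.futureItem // promote the pre-generated future item
		} else {
			item = l.new(epoch)
		}
		l.cache[epoch] = item
	}
	// Pre-generate the next epoch's item if it is not tracked yet.
	if epoch < maxEpoch-1 && l.future < epoch+1 {
		l.future = epoch + 1
		l.futureItem = l.new(epoch + 1)
		future = l.futureItem
	}
	return item, future
}

// made counts generator invocations to show the future item being reused.
var made = 0

func main() {
	l := newlru("cache", func(epoch uint64) interface{} { made++; return epoch })
	l.get(1) // generates epoch 1, plus epoch 2 speculatively
	l.get(2) // promotes the pre-generated epoch 2, generates epoch 3
	fmt.Println(made) // 3 generations instead of 4
}
```

Without the promotion step, the second `get` would regenerate epoch 2 and the speculative work would be wasted.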
```go
	used time.Time  // Timestamp of the last use for smarter eviction
	once sync.Once  // Ensures the cache is generated only once
	lock sync.Mutex // Ensures thread safety for updating the usage time

func newCache(epoch uint64) interface{} {
```
// newCache creates a new ethash verification cache and returns it as a plain Go
// interface to be usable in an LRU cache.
```go
	used time.Time  // Timestamp of the last use for smarter eviction
	once sync.Once  // Ensures the cache is generated only once
	lock sync.Mutex // Ensures thread safety for updating the usage time

func newDataset(epoch uint64) interface{} {
```
// newDataset creates a new ethash mining dataset and returns it as a plain Go
// interface to be usable in an LRU cache.
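The `once sync.Once` field shown in the diff above is what makes the lazy generation safe: many goroutines can request the same cache, but only one generates it. A hedged sketch of the idea, with the generation itself stubbed out (the real code mmaps or derives the actual verification cache):

```go
package main

import (
	"fmt"
	"sync"
)

// cache pairs an epoch with lazily generated content; field names mirror the
// snippet above, but the data layout here is a stand-in.
type cache struct {
	epoch uint64
	once  sync.Once // Ensures the cache is generated only once
	data  []uint32  // stand-in for the real verification cache
}

// newCache creates a new ethash verification cache and returns it as a plain
// Go interface to be usable in an LRU cache.
func newCache(epoch uint64) interface{} {
	return &cache{epoch: epoch}
}

// generate fills in the content exactly once, no matter how many callers race.
func (c *cache) generate() {
	c.once.Do(func() {
		c.data = make([]uint32, 8) // real code derives the cache from the seed
		for i := range c.data {
			c.data[i] = uint32(c.epoch)
		}
	})
}

func main() {
	c := newCache(42).(*cache)
	var wg sync.WaitGroup
	for i := 0; i < 4; i++ {
		wg.Add(1)
		go func() { defer wg.Done(); c.generate() }()
	}
	wg.Wait()
	fmt.Println(len(c.data), c.data[0]) // generated once, visible to all
}
```

Returning `interface{}` from the constructor is what lets both cache and dataset types share the single generic `lru` wrapper.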
* consensus/ethash: add maxEpoch constant
* consensus/ethash: improve cache/dataset handling

  There are two fixes in this commit: Unmap the memory through a finalizer like the libethash wrapper did. The release logic was incorrect and freed the memory while it was being used, leading to crashes like in ethereum#14495 or ethereum#14943. Track caches and datasets using simplelru instead of reinventing LRU logic. This should make it easier to see whether it's correct.

* consensus/ethash: restore 'future item' logic in lru
* consensus/ethash: use mmap even in test mode

  This makes it possible to shorten the time taken for TestCacheFileEvict.

* consensus/ethash: shuffle func calc*Size comments around
* consensus/ethash: ensure future cache/dataset is in the lru cache
* consensus/ethash: add issue link to the new test
* consensus/ethash: fix vet
* consensus/ethash: fix test
* consensus: tiny issue + nitpick fixes
There are two fixes in this PR:

- Unmap the memory through a finalizer, like the libethash wrapper did. The release logic was incorrect and freed the memory while it was still being used, leading to crashes like in #14495 or #14943.
- Track caches and datasets using simplelru instead of reinventing LRU logic. This should make it easier to see whether it's correct.

Fixes #14495, #14943