Iterator readEntry crashes #222
Comments
Well that's exciting! Can you provide a quick reproduction of the bug so it's a bit easier to understand the full context? I think I understand where it's breaking, but a code example will help debug it a bit more.
Hey @mxplusb - It's tricky to know how to reproduce it exactly, as it is not something that occurs consistently; it seems that some unusual condition is met sometimes. Here is a mockup that represents roughly what we are doing:

```go
package main

import (
	"encoding/binary"
	"fmt"
	"math/rand"
	"time"

	"github.com/allegro/bigcache"
)

var cache *bigcache.BigCache

type Entry struct {
	id uint32
	v  []byte
}

func uint32Bytes(v uint32) []byte {
	dAtA := make([]byte, 4)
	binary.LittleEndian.PutUint32(dAtA, v)
	return dAtA
}

func (e Entry) Marshal() []byte {
	dAtA := uint32Bytes(e.id)
	dAtA = append(dAtA, e.v...)
	return dAtA
}

func assignRand(p []byte) {
	for i := range p {
		p[i] = byte(rand.Int63() & 0xff)
	}
}

func seed() {
	for i := 0; i < 100000; i++ {
		v := make([]byte, rand.Intn(1024))
		assignRand(v)
		e := Entry{
			id: uint32(i),
			v:  v,
		}
		cache.Set(string(uint32Bytes(e.id)), e.Marshal())
	}
}

func init() {
	conf := bigcache.DefaultConfig(60 * 24 * time.Hour)
	conf.HardMaxCacheSize = 100 // actually dynamically calculated
	conf.MaxEntrySize = 500
	var err error
	cache, err = bigcache.NewBigCache(conf)
	if err != nil {
		panic(err)
	}
	seed()
}

func main() {
	rand.Seed(1)
	it := cache.Iterator()
	var current bigcache.EntryInfo
	var err error
	for it.SetNext() {
		current, err = it.Value()
		if err != nil {
			panic(fmt.Sprint("iterator value error: ", err))
		}
		fmt.Println("len-value:", len(current.Value()))
	}
}
```
@mxplusb We have had another crash incident from a different method call:

Any ideas?

I think this may be related to #148
This program will panic almost immediately with an error:

```go
package main

import (
	"bytes"
	"log"
	"math/rand"
	"runtime"
	"strconv"
	"time"

	"github.com/allegro/bigcache/v2"
)

func main() {
	cache, _ := bigcache.NewBigCache(bigcache.Config{
		Shards:             1,
		LifeWindow:         time.Second,
		MaxEntriesInWindow: 100,
		MaxEntrySize:       256,
		HardMaxCacheSize:   1,
		Verbose:            true,
	})

	// Writer goroutine: constantly overwrites a small key space with
	// large values, forcing the byte queue to wrap and evict entries.
	go func() {
		defer func() {
			if err := recover(); err != nil {
				buf := make([]byte, 16*1024*1024)
				buf = buf[:runtime.Stack(buf, false)]
				log.Fatalf("err: %v\n\ncrash: %s\n", err, string(buf))
			}
		}()
		for {
			cache.Set(strconv.Itoa(rand.Intn(100)), blob('a', 1024*100))
		}
	}()

	// Reader goroutine: iterates concurrently; the iterator's stale
	// indexes eventually point at overwritten regions and panic.
	go func() {
		defer func() {
			if err := recover(); err != nil {
				buf := make([]byte, 16*1024*1024)
				buf = buf[:runtime.Stack(buf, false)]
				log.Fatalf("err: %v\n\ncrash: %s\n", err, string(buf))
			}
		}()
		for {
			iter := cache.Iterator()
			for iter.SetNext() {
				value, err := iter.Value()
				log.Printf("key: %v, err: %v", value.Key(), err)
			}
		}
	}()

	select {}
}

func blob(char byte, len int) []byte {
	return bytes.Repeat([]byte{char}, len)
}
```
In `Iterator`, it copies each entry's byte-queue index when the iterator is created, but those indexes go stale once the writer wraps the queue. Why not just copy the hash values of the keys instead, and look up the newest byte-queue index at read time in `Value()`? Something like:

```go
// shards.go
func (s *cacheShard) copyKeys() (keys []uint64, next int) {
	s.lock.RLock()
	keys = make([]uint64, len(s.hashmap))
	for key := range s.hashmap {
		keys[next] = key
		next++
	}
	s.lock.RUnlock()
	return keys, next
}
```

```go
// iterator.go

// Value returns current value from the iterator
func (it *EntryInfoIterator) Value() (EntryInfo, error) {
	it.mutex.Lock()
	if !it.valid {
		it.mutex.Unlock()
		return emptyEntryInfo, ErrInvalidIteratorState
	}
	wrappedEntry, err := it.cache.shards[it.currentShard].getWrappedEntryWithLock(it.elementKeys[it.currentIndex])
	if err != nil {
		it.mutex.Unlock()
		return emptyEntryInfo, err
	}
	it.mutex.Unlock()
	return EntryInfo{
		timestamp: readTimestampFromEntry(wrappedEntry),
		hash:      readHashFromEntry(wrappedEntry),
		key:       readKeyFromEntry(wrappedEntry),
		value:     readEntry(wrappedEntry),
	}, nil
}
```
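To make the proposal concrete, here is a minimal, self-contained model of the snapshot-hashes-then-resolve-at-read-time idea. This is a simplified stand-in, not bigcache's actual shard implementation: the `shard` type, `copyKeys`, and `get` below are hypothetical names modeling the same pattern with a plain map.

```go
package main

import (
	"errors"
	"fmt"
	"sync"
)

// shard is a simplified stand-in for a cache shard: hash -> value.
type shard struct {
	mu   sync.RWMutex
	data map[uint64][]byte
}

var errNotFound = errors.New("entry not found")

// copyKeys snapshots the current hashes under a read lock,
// mirroring the proposed copyKeys() on cacheShard.
func (s *shard) copyKeys() []uint64 {
	s.mu.RLock()
	defer s.mu.RUnlock()
	keys := make([]uint64, 0, len(s.data))
	for k := range s.data {
		keys = append(keys, k)
	}
	return keys
}

// get resolves a hash against the live data at read time, the way
// the proposed Value() resolves via getWrappedEntryWithLock.
func (s *shard) get(hash uint64) ([]byte, error) {
	s.mu.RLock()
	defer s.mu.RUnlock()
	v, ok := s.data[hash]
	if !ok {
		return nil, errNotFound // entry was evicted after the snapshot
	}
	return v, nil
}

func main() {
	s := &shard{data: map[uint64][]byte{1: []byte("a"), 2: []byte("b")}}
	keys := s.copyKeys()

	// Concurrent eviction between snapshot and read: key 2 disappears.
	s.mu.Lock()
	delete(s.data, 2)
	s.mu.Unlock()

	// Stale snapshot entries surface as lookup errors instead of
	// reads from overwritten buffer offsets.
	for _, k := range keys {
		if v, err := s.get(k); err != nil {
			fmt.Printf("hash %d: %v\n", k, err)
		} else {
			fmt.Printf("hash %d: %s\n", k, v)
		}
	}
}
```

The key property is that an evicted entry yields an error from `get`, which the iterator can report, rather than a dereference of a stale byte-queue index.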
I don't actively maintain this project anymore (I was Allegro's first outside collaborator for many years), so I personally won't be providing a fix; I just review community contributions and help make it easier for others to provide a fix if they choose to. Sorry!

Oh cool, we've been happy users for years, but it does seem there are quite a few issues related to this now. If there is any way to get more attention on it, or a better place to get an update from the contributor community, that would certainly be appreciated.

@WideLee would you like to contribute a patch?

@janisz Sure ^_^ I want to change
What is the issue you are having?
Sometimes when using EntryInfoIterator to read all entries from the cache, the readEntry() call crashes with the following stack trace:

We only recently updated from an old version of bigcache. This error seems to indicate the object being read is somehow larger than the maximum slice length, which should not be possible, given we marshal the structs to bytes using protobuf's Marshal() method. We are running on a 64-bit system, and we limit the size of all entries to ~1MB anyhow.

What is BigCache doing that it shouldn't?
It is not clear from reading the code, but it seems related to how readEntry() determines entry length; perhaps some kind of overflow issue.
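As a rough illustration of how this class of crash can arise (a hypothetical model, not bigcache's actual header layout): if a reader holds a stale offset into the byte queue and the bytes at that offset have since been overwritten by payload data, the length prefix read there is garbage, and slicing by it fails. The `readLen` helper below is an assumption made up for this sketch.

```go
package main

import (
	"encoding/binary"
	"fmt"
)

// readLen mimics a generic length-prefixed entry read: the first four
// bytes at offset are interpreted as a little-endian payload length.
func readLen(buf []byte, offset int) uint32 {
	return binary.LittleEndian.Uint32(buf[offset : offset+4])
}

func main() {
	// A valid entry: length prefix 3, payload "abc".
	buf := []byte{3, 0, 0, 0, 'a', 'b', 'c'}
	fmt.Println(readLen(buf, 0)) // 3

	// Now pretend the writer wrapped around and overwrote this region
	// with payload bytes. A stale reader still dereferences offset 0.
	for i := range buf {
		buf[i] = 0xff
	}
	n := readLen(buf, 0)
	fmt.Println(n) // 4294967295: an absurdly large "length"

	// Slicing the queue by this value (buf[4 : 4+int(n)]) would panic
	// with "slice bounds out of range", the crash class observed here,
	// even though no real entry ever exceeded ~1MB.
}
```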
Environment:

- OS (e.g. from /etc/os-release or winver.exe): Ubuntu 16.04