Otter is one of the most powerful caching libraries for Go, based on research in caching and concurrent data structures. Otter also draws on the experience of designing caching libraries in other languages (for example, Caffeine).
- Simple API: Just set the parameters you want in the builder and enjoy
- Autoconfiguration: Otter is automatically configured based on the parallelism of your application
- Generics: You can safely use any comparable types as keys and any types as values
- TTL: Expired values will be automatically deleted from the cache
- Cost-based eviction: Otter supports eviction based on the cost of each entry
- Deletion listener: You can pass a callback function in the builder that will be called when an entry is deleted from the cache (both options are sketched in the example after this list)
- Stats: You can collect various usage statistics
- Excellent throughput: Otter can handle a huge number of requests
- Great hit ratio: the new S3-FIFO eviction algorithm is used, which shows excellent results
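For illustration, the cost function and deletion listener mentioned above can be set together in the builder. This is a minimal sketch rather than a canonical example: the byte-length cost function is an arbitrary choice, and the DeletionListener option and DeletionCause type are assumed to be available as in recent otter releases.

package main

import (
    "fmt"
    "time"

    "github.com/maypok86/otter"
)

func main() {
    // capacity is interpreted as a total cost budget (here: bytes of value data),
    // because the cost function below returns the value length instead of 1
    cache, err := otter.MustBuilder[string, string](1_000_000).
        Cost(func(key string, value string) uint32 {
            return uint32(len(value))
        }).
        // assumed API: the listener is called when an entry is removed,
        // whatever the cause (explicit deletion, expiration, eviction, ...)
        DeletionListener(func(key string, value string, cause otter.DeletionCause) {
            fmt.Printf("removed %q (cause: %v)\n", key, cause)
        }).
        WithTTL(time.Hour).
        Build()
    if err != nil {
        panic(err)
    }

    cache.Set("key", "value")
    cache.Delete("key") // triggers the deletion listener
    cache.Close()
}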
Otter is based on the following papers:
- BP-Wrapper: A Framework Making Any Replacement Algorithms (Almost) Lock Contention Free
- FIFO queues are all you need for cache eviction
- A large scale analysis of hundreds of in-memory cache clusters at Twitter
- Go 1.19+
go get -u github.com/maypok86/otter
Otter uses a builder pattern that allows you to conveniently create a cache instance with different parameters.
Cache with const TTL
package main

import (
    "fmt"
    "time"

    "github.com/maypok86/otter"
)

func main() {
    // create a cache with capacity equal to 10000 elements
    cache, err := otter.MustBuilder[string, string](10_000).
        CollectStats().
        Cost(func(key string, value string) uint32 {
            return 1
        }).
        WithTTL(time.Hour).
        Build()
    if err != nil {
        panic(err)
    }

    // set item with ttl (1 hour)
    cache.Set("key", "value")

    // get value from cache
    value, ok := cache.Get("key")
    if !ok {
        panic("not found key")
    }
    fmt.Println(value)

    // delete item from cache
    cache.Delete("key")

    // delete data and stop goroutines
    cache.Close()
}
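Since the cache above was built with CollectStats(), the collected counters can also be read back. A short continuation of the example, assuming the cache exposes a Stats() accessor with Hits, Misses and Ratio methods (as in recent otter releases):

// continuing the example above: read the statistics before calling Close
stats := cache.Stats()
fmt.Printf("hits: %d, misses: %d, hit ratio: %.2f\n", stats.Hits(), stats.Misses(), stats.Ratio())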
Cache with variable TTL
package main

import (
    "fmt"
    "time"

    "github.com/maypok86/otter"
)

func main() {
    // create a cache with capacity equal to 10000 elements
    cache, err := otter.MustBuilder[string, string](10_000).
        CollectStats().
        Cost(func(key string, value string) uint32 {
            return 1
        }).
        WithVariableTTL().
        Build()
    if err != nil {
        panic(err)
    }

    // set item with ttl (1 hour)
    cache.Set("key1", "value1", time.Hour)

    // set item with ttl (1 minute)
    cache.Set("key2", "value2", time.Minute)

    // get value from cache
    value, ok := cache.Get("key1")
    if !ok {
        panic("not found key")
    }
    fmt.Println(value)

    // delete item from cache
    cache.Delete("key1")

    // delete data and stop goroutines
    cache.Close()
}
The benchmark code can be found here.
Throughput benchmarks are a Go port of the caffeine benchmarks. This microbenchmark compares the throughput of caches on a zipf distribution, which makes it possible to expose various inefficiencies in the implementations.
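The sketch below is not the benchmark itself, but it illustrates the shape of such a workload: zipf-distributed keys are generated with math/rand and replayed against the cache, and a simple hit ratio is computed. The distribution parameters and capacity are arbitrary.

package main

import (
    "fmt"
    "math/rand"
    "time"

    "github.com/maypok86/otter"
)

func main() {
    cache, err := otter.MustBuilder[int, int](10_000).
        Cost(func(key int, value int) uint32 {
            return 1
        }).
        WithTTL(time.Hour).
        Build()
    if err != nil {
        panic(err)
    }
    defer cache.Close()

    // zipf-distributed keys: a few hot keys are requested far more often than the rest
    r := rand.New(rand.NewSource(1))
    zipf := rand.NewZipf(r, 1.01, 1, 1_000_000)

    hits, total := 0, 1_000_000
    for i := 0; i < total; i++ {
        key := int(zipf.Uint64())
        if _, ok := cache.Get(key); ok {
            hits++
        } else {
            cache.Set(key, key)
        }
    }
    fmt.Printf("hit ratio: %.2f\n", float64(hits)/float64(total))
}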
You can find results here.
The hit ratio simulator tests caches on various traces:
- Synthetic (zipf distribution)
- Traditional (widely known and used in various projects and papers)
- Modern (recently collected from production systems at some of the world's largest companies)
You can find results here.
The memory overhead benchmark shows how much additional memory the cache will require at different capacities.
You can find results here.
Contributions are welcome as always; before submitting a new PR, please open an issue first so community members can discuss it. For more information, please see the contribution guidelines.
Additionally, you might find existing open issues that could use your help.
This project follows a standard code of conduct so that you can understand what actions will and will not be tolerated.
This project is Apache 2.0 licensed, as found in the LICENSE file.