Inadequate memory consumption #292

Open · AlexBlack772 opened this issue Oct 26, 2021 · 4 comments

@AlexBlack772
What is the issue you are having?
Despite a configured maximum cache size of 64 MB, the program consumes considerably more memory.

I've set HardMaxCacheSize to 64 MB (see the code below), but after the program ran, VmRSS was much higher than 64 MB. In a real setup I've seen the cache grow past 5 GB, and the program was killed by the OOM killer.

keys count = 1
vmRSS: Pre run 19 890 176
vmRSS: Post run 20 430 848

keys count = 100
vmRSS: Pre run 19 890 176
vmRSS: Post run 74 498 048

keys count = 1000
vmRSS: Pre run 19 886 080
vmRSS: Post run 385 257 472

keys count = 5000
vmRSS: Pre run 19 894 272
vmRSS: Post run 556 134 400

What is BigCache doing that it shouldn't?
It consumes far more memory than the configured HardMaxCacheSize.

Minimal, Complete, and Verifiable Example

package main

import (
	"fmt"
	"log"
	"strconv"
	"time"

	"github.com/allegro/bigcache/v3"
	"github.com/prometheus/procfs"
)

func main() {
	config := bigcache.Config{
		Shards:             1024,
		LifeWindow:         10 * time.Minute,
		CleanWindow:        5 * time.Minute,
		MaxEntriesInWindow: 5000 * 10 * 60,
		MaxEntrySize:       1024,
		Verbose:            true,
		HardMaxCacheSize:   64,
		OnRemove:           nil,
		OnRemoveWithReason: nil,
	}

	cache, initErr := bigcache.NewBigCache(config)
	if initErr != nil {
		log.Fatal(initErr)
	}

	memStat("Pre run")

	// Number of keys to insert
	count := 1000
	val := make([]byte, 512*1024)

	for count > 0 {
		key := fmt.Sprintf("key-%d", count)
		if err := cache.Set(key, val); err != nil {
			log.Fatal(err)
		}
		count--
	}

	memStat("Post run")
}

// memStat prints the process's current VmRSS, read from /proc via procfs.
func memStat(stage string) {
	p, err := procfs.Self()
	if err != nil {
		return
	}

	stat, err := p.NewStatus()
	if err != nil {
		return
	}

	fmt.Printf("vmRSS: %s %d\n", stage, int64(stat.VmRSS))
}

Environment:
go version go1.17 linux/amd64

@janisz (Collaborator) commented Oct 27, 2021

Hello. Thanks for submitting this bug.
I slightly modified your example to run pprof on it; the results are below (a sketch of a similar profiling setup follows the call graph).

Showing nodes accounting for 270.35MB, 99.82% of 270.85MB total
Dropped 7 nodes (cum <= 1.35MB)
Showing top 10 nodes out of 17
      flat  flat%   sum%        cum   cum%
  135.54MB 50.04% 50.04%   202.56MB 74.79%  github.com/allegro/bigcache/v3.initNewShard
   67.02MB 24.74% 74.79%    67.02MB 24.74%  github.com/allegro/bigcache/v3/queue.NewBytesQueue (inline)
   65.29MB 24.10% 98.89%    65.29MB 24.10%  github.com/allegro/bigcache/v3.wrapEntry
    2.50MB  0.92% 99.82%     2.50MB  0.92%  runtime.allocm
         0     0% 99.82%    65.29MB 24.10%  github.com/allegro/bigcache/v3.(*BigCache).Set
         0     0% 99.82%    65.29MB 24.10%  github.com/allegro/bigcache/v3.(*cacheShard).set
         0     0% 99.82%   202.56MB 74.79%  github.com/allegro/bigcache/v3.NewBigCache (inline)
         0     0% 99.82%   202.56MB 74.79%  github.com/allegro/bigcache/v3.newBigCache
         0     0% 99.82%   267.84MB 98.89%  main.main
         0     0% 99.82%   267.84MB 98.89%  runtime.main

(attached image: pprof call graph, profile001)

As you can see, the underlying bytes queue is allocated correctly; the additional memory comes from the Set function, where each entry is wrapped, and from the 1024 shards that were created at startup.
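For reference, here is a minimal sketch of how such a heap profile can be captured. The exact modification isn't shown in this thread; this assumes the standard runtime/pprof package, with the helper dropped into the example above and called right after the Set loop:

func writeHeapProfile(path string) {
	// Requires "os", "runtime", and "runtime/pprof" in the import block.
	f, err := os.Create(path)
	if err != nil {
		log.Fatal(err)
	}
	defer f.Close()

	runtime.GC() // flush pending allocation statistics into the profile
	if err := pprof.WriteHeapProfile(f); err != nil {
		log.Fatal(err)
	}
}

The resulting file can then be inspected with `go tool pprof -top heap.prof`, which produces output like the table above.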

I understand that the documentation might be misleading. Would you like to fix it? We should add information that this limit applies only to the bytes queues (not the shards), and that real memory usage depends on the number of shards, the number of entries, and HardMaxCacheSize.

@AlexBlack772 (Author)
Thank you for the explanation and the additional tests.

It seems there is a dependency between cache size and shard count. Could anybody clarify how to predict memory consumption for different shard counts?

@janisz (Collaborator) commented Oct 28, 2021

Probably the easiest option would be to create some tests for it. Here are some hints about where memory is allocated (a rough estimate based on these hints is sketched after the list):

  1. HardMaxCacheSize - limits the memory used by the BytesQueue.
  2. Shards - every shard consumes additional memory for its map of keys and statistics (map[uint64]uint32); the size of this map is proportional to the number of entries in the cache, roughly 2×(64+32)×n bits plus the overhead of the map itself.
  3. Every operation on the cache may require additional memory, but it should be freed when the operation finishes.
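To make the hints concrete, here is a back-of-the-envelope estimator. This is illustrative only: estimateMemoryMB is not a bigcache function, and the per-shard overhead constant is a guess.

package main

import "fmt"

// estimateMemoryMB roughly estimates BigCache memory usage from the
// hints above. It is an approximation, not part of the bigcache API.
func estimateMemoryMB(hardMaxCacheSizeMB, shards, mapEntries int) float64 {
	// 1. The bytes queues are capped at HardMaxCacheSize MB in total.
	queueBytes := float64(hardMaxCacheSizeMB) * 1024 * 1024

	// 2. The per-shard index maps (map[uint64]uint32) cost roughly
	//    2×(64+32) bits = 24 bytes per entry, plus the map's own overhead.
	indexBytes := float64(mapEntries) * 24

	// Fixed per-shard bookkeeping (map headers, statistics): a rough guess.
	shardBytes := float64(shards) * 1024

	return (queueBytes + indexBytes + shardBytes) / (1024 * 1024)
}

func main() {
	// The config from this issue: 64 MB hard cap and 1024 shards. The
	// profile above attributes large allocations to initNewShard, which is
	// consistent with the shard maps being sized up front from
	// MaxEntriesInWindow (3 000 000), so we use that as the entry count.
	fmt.Printf("~%.1f MB\n", estimateMemoryMB(64, 1024, 3000000))
}

Note that this is a lower bound: transient allocations, such as wrapEntry in the profile above, come on top of it.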

@hanshuhuang (Contributor)
@janisz I modified the documentation in #298. Could you review the change?
