
Thousands of "WARNING: This entry should have been caught." log messages #1031

Closed
markadev opened this issue Sep 9, 2019 · 13 comments

Labels
area/gc Garbage Collection issues. kind/bug Something is broken. priority/P1 Serious issue that requires eventual attention (can wait a bit). status/accepted We accept to investigate or work on it. status/confirmed The issue has been triaged but still not reproduced.

Comments

@markadev

markadev commented Sep 9, 2019

What version of Go are you using (go version)?

$ go version
go version go1.12.6 linux/amd64

What version of Badger are you using?

master (badger 2.0 prerelease hash a1ff348)

I've also added a few extra log messages in the GC code to try to figure out what's going on, but made no other changes to the code.

Does this issue reproduce with the latest master?

Yes

What are the hardware specifications of the machine (RAM, OS, Disk)?

SLES 12 Linux (64 bit) VM, 8 GB RAM

What did you do?

I ran a test program that mimics our application's behavior. The test program is just doing some writes, reads, and value log GC. After 10-24 hours, this warning starts showing up.

main.go.txt
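For reference, a minimal sketch of the reproducer's shape (a steady stream of writes and reads plus periodic value log GC); the attached main.go.txt is the actual program, and the path, key format, value size, and interval here are made up:

```go
package main

import (
	"fmt"
	"log"
	"time"

	badger "github.com/dgraph-io/badger/v2"
)

func main() {
	db, err := badger.Open(badger.DefaultOptions("/tmp/badger-gc-test"))
	if err != nil {
		log.Fatal(err)
	}
	defer db.Close()

	// Run value log GC periodically in the background.
	go func() {
		for range time.Tick(time.Minute) {
			// Keep collecting while GC makes progress.
			for db.RunValueLogGC(0.5) == nil {
			}
		}
	}()

	// Steady stream of writes and reads.
	for i := 0; ; i++ {
		key := []byte(fmt.Sprintf("key%010d", i))
		if err := db.Update(func(txn *badger.Txn) error {
			return txn.Set(key, make([]byte, 1024))
		}); err != nil {
			log.Fatal(err)
		}
		_ = db.View(func(txn *badger.Txn) error {
			_, err := txn.Get(key)
			return err
		})
	}
}
```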

What did you expect to see?

I expected the program to run forever with no warning messages.

What did you see instead?

I saw tens of thousands of warning messages like:

badger 2019/09/06 20:11:59 WARNING: This entry should have been caught. e={Key:[33 98 97 100 103 101 114 33 109 111 118 101 48 48 48 ... 51 52 53 48 49 57 255 255 255 255 255 255 242 132] Value:[97 97 97 ... 97 97 97] UserMeta:0 ExpiresAt:0 meta:0 offset:1017934987 skipVlog:false hlen:7}, vp={Fid:5 Len:1823 Offset:847085971}

(The long byte arrays are elided here: the key decodes to `!badger!move` followed by a long run of ASCII digits and an 8-byte version suffix, and the value is a long run of `a` (97) bytes.)

As far as I can tell, this warning message happens when a value log is GC'd and the entry in the LSM tree points to a value in a vlog file before the one being GC'd, which should be theoretically impossible. In this case, it was rewriting vlog fid 14 but the value pointer in the LSM tree for that timestamp was pointing into vlog fid 5. Vlog fid 5 had already been rewritten a few hours earlier.
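To make the failing check concrete, here is a simplified sketch of the per-entry decision the value log rewrite makes, paraphrased from the behavior described above (the names are illustrative, not Badger's actual identifiers):

```go
package main

import "fmt"

// valuePointer mirrors the vp printed in the warning above.
type valuePointer struct {
	Fid    uint32
	Len    uint32
	Offset uint32
}

// shouldRewrite sketches the per-entry decision during a vlog rewrite.
func shouldRewrite(lsmVp valuePointer, entryOffset, fidBeingGCed uint32) string {
	switch {
	case lsmVp.Fid == fidBeingGCed && lsmVp.Offset == entryOffset:
		return "rewrite" // LSM still points into this file: move the value
	case lsmVp.Fid > fidBeingGCed:
		return "skip" // LSM already points into a newer vlog file
	default:
		// LSM points into an *older* vlog file, which the rewrite code
		// assumed was impossible -- this is the warning path.
		return "warn"
	}
}

func main() {
	// The situation from the log line above: GC of fid 14 finds an LSM
	// pointer into fid 5.
	lsmVp := valuePointer{Fid: 5, Len: 1823, Offset: 847085971}
	fmt.Println(shouldRewrite(lsmVp, 1017934987, 14)) // warn
}
```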

@manishrjain
Contributor

@jarifibrahim please look into this urgently.

@markadev
Author

markadev commented Sep 9, 2019

I did a little analysis last week so I'll post what I have so far.

In the 2 reproductions I've dug into so far (1.6.0 and master), I saw that the versions and vlog ptr reported in the log statement were retrieved from an entry down in an L2 table and that L2 table was fairly old. The key was also always a !move key, if that makes any difference.

The LSM compaction code caught my eye and I'm wondering if there's an issue keeping track of the "latest" vlog ptr for a version when an entry at a specific version is rewritten with a new vlog ptr. (Because the same version is rewritten with a new value pointer, you can lose track of which value pointer is the latest one for a specific version of a key.)

My best theory on how to reproduce is:

  1. Store 'key'@v1 -> FID1
  2. GC vlog FID1: creates '!badger!movekey'@v1 -> FID2
  3. Compact '!badger!movekey'@v1 down to an L2 sst
  4. GC vlog FID2: creates '!badger!movekey'@v1 -> FID3 in the memtable
  5. Flush the memtables out to L0

Right now a 'get' of '!badger!movekey'@v1 will return a pointer to vlog FID3 (from L0). All OK. The LSM tree should contain something like:

  • L0: '!badger!movekey'@v1 -> FID3
  • L2: '!badger!movekey'@v1 -> FID2

  6. Store 'key'@v2 -> FID3
  7. GC vlog FID3: creates '!badger!movekey'@v1 -> FID4 and '!badger!movekey'@v2 -> FID4
  8. Flush memtables out to L0

At this point I think the LSM tree will look like:

  • L0: '!badger!movekey'@v1 -> FID4
  • L0: '!badger!movekey'@v2 -> FID4
  • L2: '!badger!movekey'@v1 -> FID2

A get of '!badger!movekey'@v1 will return a pointer into FID4. Still all OK.

  9. Compact L0 into L1

Looking at the compaction code, I think it will discard the '!badger!movekey'@v1 in L0 because there's already a newer version and it only keeps the number of versions set in the options struct. In that case, the SSTs will end up with these entries for the movekey:

  • L0: none
  • L1: '!badger!movekey'@v2 -> FID4
  • L2: '!badger!movekey'@v1 -> FID2 (even though vlog FID4 contains the value for V1)

Then when the FID4 rewrite eventually runs, it only finds the older vlog pointer in the LSM tree.
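The version-dropping step at the heart of this theory can be shown with a toy simulation (this is not badger code; entry, compactKeys, and the FIDs are all illustrative):

```go
package main

import (
	"fmt"
	"sort"
)

type entry struct {
	key     string
	version int
	vlogFid int
}

// compactKeys keeps only the newest numVersionsToKeep versions of each
// key, mimicking the compaction behavior described above.
func compactKeys(entries []entry, numVersionsToKeep int) []entry {
	// Compaction iterates keys in order, newest version first.
	sort.Slice(entries, func(i, j int) bool {
		if entries[i].key != entries[j].key {
			return entries[i].key < entries[j].key
		}
		return entries[i].version > entries[j].version
	})
	kept := entries[:0]
	seen := map[string]int{}
	for _, e := range entries {
		if seen[e.key] >= numVersionsToKeep {
			continue // dropped, even if a lower level holds an older copy
		}
		seen[e.key]++
		kept = append(kept, e)
	}
	return kept
}

func main() {
	l0 := []entry{
		{"!badger!movekey", 1, 4},
		{"!badger!movekey", 2, 4},
	}
	fmt.Println(compactKeys(l0, 1))
	// Output: [{!badger!movekey 2 4}] -- the v1 -> FID4 entry is dropped,
	// while the stale v1 -> FID2 entry survives untouched down on L2.
}
```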

@jarifibrahim jarifibrahim self-assigned this Sep 10, 2019
@jarifibrahim jarifibrahim added kind/bug Something is broken. priority/P0 Critical issue that requires immediate attention. status/confirmed The issue has been triaged but still not reproduced. labels Sep 10, 2019
@connorgorman
Contributor

@jarifibrahim Any update on this one? Seems pretty rare, but impactful

@jarifibrahim
Contributor

Hey @connorgorman, I was able to reproduce this issue using the provided script. Interestingly, if I change any of the DB options in that script, the issue is no longer reproducible.
I also noticed that the warnings automatically disappeared after some time (> 30 hours). I think we could be leaving behind stale move keys which are eventually cleaned up by compaction (I can't be sure until we have a test for this). The rewrite function looks up a specific version of the key; a user would never do that. A user would call txn.Get(..), which always shows the latest version. So this issue shouldn't affect the value a user would see.

I'm trying to write a test that can reliably reproduce the issue.

  • L0: '!badger!movekey'@v1 -> FID4
  • L0: '!badger!movekey'@v2 -> FID4
  • L2: '!badger!movekey'@v1 -> FID2

A get of '!badger!movekey'@v1 will return a pointer into FID4. Still all OK.

  9. Compact L0 into L1

Looking at the compaction code, I think it will discard the '!badger!movekey'@v1 in L0 because there's already a newer version and it only keeps the number of versions set in the options struct. In that case, the SSTs will end up with these entries for the movekey:

@markadev This wouldn't happen because of

badger/levels.go

Lines 554 to 558 in 86a77bb

} else if hasOverlap {
// If this key range has overlap with lower levels, then keep the deletion
// marker with the latest version, discarding the rest. We have set skipKey,
// so the following key versions would be skipped.
} else {

If we find that a key has overlap with lower levels, we keep the key. We don't remove it.

Also, if a user were to do txn.Get([]byte("key")), they would get the latest version of the key, key@v2, which points to '!badger!movekey'@v2, stored in Level 1.
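For completeness, here's what that user-level read looks like; with the badger v2 API, txn.Get only ever surfaces the newest version (the path and values here are illustrative):

```go
package main

import (
	"fmt"
	"log"

	badger "github.com/dgraph-io/badger/v2"
)

func main() {
	db, err := badger.Open(badger.DefaultOptions("/tmp/badger-get"))
	if err != nil {
		log.Fatal(err)
	}
	defer db.Close()

	// Write two versions of the same key.
	for _, v := range []string{"v1", "v2"} {
		if err := db.Update(func(txn *badger.Txn) error {
			return txn.Set([]byte("key"), []byte(v))
		}); err != nil {
			log.Fatal(err)
		}
	}

	// A read sees only the latest version.
	err = db.View(func(txn *badger.Txn) error {
		item, err := txn.Get([]byte("key"))
		if err != nil {
			return err
		}
		return item.Value(func(val []byte) error {
			fmt.Printf("txn.Get sees: %s\n", val) // v2
			return nil
		})
	})
	if err != nil {
		log.Fatal(err)
	}
}
```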

@markadev
Author

Thanks @jarifibrahim , you're right about the overlap checking.

I also agree that there's not really any functional impact (no data loss or unavailability in this case for sure). In my application I can't have thousands of spurious warning messages clogging up my logs though. For a temporary workaround I think I will just make a local modification to turn the log message into a counter and log the aggregate count.
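A minimal sketch of that workaround, assuming a plain atomic counter and a periodic logger (the names are illustrative; the real change would go wherever value.go logs the warning):

```go
package main

import (
	"log"
	"sync/atomic"
	"time"
)

// entryNotCaught counts warnings instead of logging each one.
var entryNotCaught uint64

// recordEntryNotCaught stands in for the per-entry warning call.
func recordEntryNotCaught() { atomic.AddUint64(&entryNotCaught, 1) }

// logAggregates periodically emits one line with the suppressed count.
func logAggregates(interval time.Duration) {
	for range time.Tick(interval) {
		if n := atomic.SwapUint64(&entryNotCaught, 0); n > 0 {
			log.Printf("suppressed %d 'entry should have been caught' warnings", n)
		}
	}
}

func main() {
	go logAggregates(time.Minute)
	recordEntryNotCaught() // called wherever the warning used to be logged
	select {}
}
```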

@jarifibrahim jarifibrahim added priority/P1 Serious issue that requires eventual attention (can wait a bit) and removed priority/P0 Critical issue that requires immediate attention. labels Sep 13, 2019
@jarifibrahim
Contributor

@markadev That makes sense, please make a temporary change in the logs. I'll send a PR to fix this as soon as I am able to build a test for it 💭

Since this isn't a data loss or a crash issue, I've reduced its priority to P1.

@jarifibrahim jarifibrahim added the area/gc Garbage Collection issues. label Sep 13, 2019
@stale

stale bot commented Oct 13, 2019

This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.

@stale stale bot added the status/stale The issue hasn't had activity for a while and it's marked for closing. label Oct 13, 2019
@jarifibrahim jarifibrahim removed the status/stale The issue hasn't had activity for a while and it's marked for closing. label Oct 14, 2019
@stale

stale bot commented Nov 13, 2019

@stale stale bot added the status/stale The issue hasn't had activity for a while and it's marked for closing. label Nov 13, 2019
@stale

stale bot commented Nov 20, 2019

This issue was marked as stale and no activity has occurred since then, therefore it will now be closed. Please, reopen if the issue is still relevant.

@stale stale bot closed this as completed Nov 20, 2019
@jarifibrahim jarifibrahim removed the status/stale The issue hasn't had activity for a while and it's marked for closing. label Nov 20, 2019
@jarifibrahim jarifibrahim reopened this Nov 27, 2019
@stale

stale bot commented Dec 27, 2019

@stale stale bot added the status/stale The issue hasn't had activity for a while and it's marked for closing. label Dec 27, 2019
@stale

stale bot commented Jan 3, 2020

@stale stale bot closed this as completed Jan 3, 2020
@lgalatin
Contributor

lgalatin commented Jan 3, 2020

@jarifibrahim is working on this. re-opening.

@lgalatin lgalatin reopened this Jan 3, 2020
@stale stale bot removed the status/stale The issue hasn't had activity for a while and it's marked for closing. label Jan 3, 2020
@lgalatin lgalatin added the status/accepted We accept to investigate or work on it. label Jan 3, 2020
jarifibrahim pushed a commit that referenced this issue Jan 8, 2020
Fixes - #1031
(There wasn't a bug to fix. The log statement shouldn't have been there)

This PR removes the warning message `WARNING: This entry should have
been caught.`. The warning message assumed that we will always find the
**newest key if two keys have the same version**. This assumption is
valid in the case of a normal key, but it is **NOT TRUE** in the case of
**move keys**.

Here's how we can end up fetching the older version of a move key if
two move keys have the same version.

```
It might be possible that the entry read from LSM Tree points to an
older vlog file. This can happen in the following situation. Assume DB
is opened with numberOfVersionsToKeep=1

Now, if we have ONLY one key in the system "FOO" which has been updated
3 times and the same key has been garbage collected 3 times, we'll have
3 versions of the movekey for the same key "FOO".
NOTE: moveKeyi is the moveKey with version i
Assume we have 3 move keys in L0.
- moveKey1 (points to vlog file 10),
- moveKey2 (points to vlog file 14) and
- moveKey3 (points to vlog file 15).

Also, assume there is another move key "moveKey1" (pointing to vlog
file 6; this is also a move key for key "FOO") on a lower level (let's
say level 3). The move key "moveKey1" on level 0 was inserted because
vlog file 6 was GCed.

Here's what the arrangement looks like
L0 => (moveKey1 => vlog10), (moveKey2 => vlog14), (moveKey3 => vlog15)
L1 => ....
L2 => ....
L3 => (moveKey1 => vlog6)

When L0 compaction runs, it keeps only moveKey3 because the number of
versions to keep is set to 1 (we've dropped the newest copy of moveKey1,
the one pointing to vlog10).

The new arrangement of keys is
L0 => ....
L1 => (moveKey3 => vlog15)
L2 => ....
L3 => (moveKey1 => vlog6)

Now if we try to GC vlog file 10, the entry read from vlog file will
point to vlog10 but the entry read from LSM Tree will point to vlog6.
The move key read from LSM tree will point to vlog6 because we've asked
for version 1 of the move key.

This might seem like an issue, but it's not really an issue, because the
user has set the number of versions to keep to 1 and the latest version
of the moveKey points to the correct vlog file and offset. The stale move
key on L3 will eventually be dropped by compaction because there is a
newer version in the upper levels.
```
@jarifibrahim
Contributor

This has been fixed via #1170.

jarifibrahim pushed a commit that referenced this issue Mar 5, 2020
Fixes - #1031

(cherry picked from commit 2a90c66)
jarifibrahim pushed a commit that referenced this issue Mar 6, 2020
* Cast sz to uint32 to fix compilation on 32 bit (#1175)

`env GOOS=linux GOARCH=arm GOARM=7 go build` no longer fails with overflow.

Similar to commit fb0cdb8.

Signed-off-by: Christian Stewart <[email protected]>
(cherry picked from commit 465f28a)

* Fix commit sha for WithInMemory in CHANGELOG. (#1172)

(cherry picked from commit 03af216)

* Fix windows build (#1177)

Fix windows build and some deepsource warnings

(cherry picked from commit 0f2e629)

* Fix checkOverlap in compaction (#1166)

The overlap check in compaction would keep additional keys in case
the levels under compaction had overlap amongst themselves.
This commit fixes it.

This commit also fixes the `numVersionsToKeep` check. Without this
commit, we would end up keeping `2` versions of a key even when the
number of versions to keep was set to `1`. See
`level 0 to level 1 with lower overlap` test.

Fixes #1053

The following test fails on master but works with this commit
```go
func TestCompaction(t *testing.T) {
	t.Run("level 0 to level 1", func(t *testing.T) {
		dir, err := ioutil.TempDir("", "badger-test")
		require.NoError(t, err)
		defer removeDir(dir)

		// Disable compactions and keep single version of each key.
		opt := DefaultOptions(dir).WithNumCompactors(0).WithNumVersionsToKeep(1)
		db, err := OpenManaged(opt)
		require.NoError(t, err)

		l0 := []keyValVersion{{"foo", "bar", 3}, {"fooz", "baz", 1}}
		l01 := []keyValVersion{{"foo", "bar", 2}}
		l1 := []keyValVersion{{"foo", "bar", 1}}
		// Level 0 has table l0 and l01.
		createAndOpen(db, l0, 0)
		createAndOpen(db, l01, 0)
		// Level 1 has table l1.
		createAndOpen(db, l1, 1)

		// Set a high discard timestamp so that all the keys are below the discard timestamp.
		db.SetDiscardTs(10)

		getAllAndCheck(t, db, []keyValVersion{
			{"foo", "bar", 3}, {"foo", "bar", 2}, {"foo", "bar", 1}, {"fooz", "baz", 1},
		})
		cdef := compactDef{
			thisLevel: db.lc.levels[0],
			nextLevel: db.lc.levels[1],
			top:       db.lc.levels[0].tables,
			bot:       db.lc.levels[1].tables,
		}
		require.NoError(t, db.lc.runCompactDef(0, cdef))
		// foo version 2 should be dropped after compaction.
		getAllAndCheck(t, db, []keyValVersion{{"foo", "bar", 3}, {"fooz", "baz", 1}})
	})
}
```

(cherry picked from commit 0a06173)

* Remove the 'this entry should've caught' log from value.go (#1170)

Fixes - #1031

(cherry picked from commit 2a90c66)

* Avoid sync in inmemory mode (#1190)

This makes db.Sync() a no-op when badger is running in in-memory mode.
The previous code would unnecessarily load an atomic and acquire locks.

(cherry picked from commit 2698bfc)
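A self-contained sketch of the described change (the types below are stand-ins, not badger's; only the InMemory early return reflects the commit):

```go
package main

import (
	"fmt"
	"math"
)

type valueLog struct{}

func (vl *valueLog) sync(maxFid uint32) error {
	fmt.Println("fsync value log up to fid", maxFid)
	return nil
}

type Options struct{ InMemory bool }

type DB struct {
	opt  Options
	vlog valueLog
}

// Sync mirrors the fix: skip atomics, locks, and fsync entirely
// when running in in-memory mode.
func (db *DB) Sync() error {
	if db.opt.InMemory {
		return nil
	}
	return db.vlog.sync(math.MaxUint32)
}

func main() {
	inMem := &DB{opt: Options{InMemory: true}}
	fmt.Println(inMem.Sync()) // <nil>, no fsync performed
}
```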

* Use fastRand instead of locked-rand in skiplist (#1173)

The math/rand package (https://golang.org/src/math/rand/rand.go) uses
a global lock to allow concurrent access to the rand object.

The PR replaces `math.Rand` with `ristretto/z.FastRand()`. `FastRand`
is much faster than `math.Rand` and `rand.New(..)` generators.

The change improves concurrent writes to skiplist by ~30%
```go
func BenchmarkWrite(b *testing.B) {
	value := newValue(123)
	l := NewSkiplist(int64((b.N + 1) * MaxNodeSize))
	defer l.DecrRef()
	b.ResetTimer()
	b.RunParallel(func(pb *testing.PB) {
		rng := rand.New(rand.NewSource(time.Now().UnixNano()))
		for pb.Next() {
			l.Put(randomKey(rng), y.ValueStruct{Value: value, Meta: 0, UserMeta: 0})
		}
	})
}
```
```
name      old time/op  new time/op  delta
Write-16   657ns ± 3%   441ns ± 1%  -32.94%  (p=0.000 n=9+10)
```

(cherry picked from commit 9d6512b)

* Add Jaegar to list of projects (#1192)


(cherry picked from commit 01a00cb)

* Run all tests on CI (#1189)

This could possibly be a bug in the `go test` command:
golang/go#36527.

The `go test` command would skip tests in sub-packages if
the top-level package has a `custom flag` defined and the
sub-packages don't define it.

Issue golang/go#36527 (comment)
has an example of this.

This PR also removes the code from the test that would unnecessarily
start a web server. I see two problems here:
1. An unnecessary web server running.
2. We cannot run multiple tests at the same time, since the second
    run of the test would try to start a web server and
    crash saying `port already in use`.

(cherry picked from commit 5870b7b)

* Improve write stalling on level 0 and 1

We don't need to stall writes if Level 1 does not have enough space.
Level 1 is stored on disk and it should be okay to have more tables
(more data) on Level 1 than the `max level 1 size`. These tables will
eventually be compacted to lower levels.
These tables will eventually be compacted to lower levels.

This commit changes the following
- We no longer stall writes if L1 doesn't have enough space.
- We stall writes on level 0 only if `KeepL0InMemory` is true.
- Upper levels (L0, L1, etc) get priority in compaction (previously,
    the level with the higher priority score would get preference)

(cherry picked from commit 3747be5)

* Update ristretto to version  8f368f2 (#1195)

This commit pulls following changes from ristretto
```
git log c1f00be0418e...8f368f2 --oneline
```
```

8f368f2 (HEAD -> master) Fix DeepSource warnings
adb35f0 delete item immediately 
fce5e91 Support nil *Cache values in Clear and Close
4e224f9 Add .travis.yml 
8350826 Fix the way metrics are handled for deletions
99d1bbb (tag: v0.0.1) default to 128bit hashing for collision checks
1eea1b1 atomic Get-Delete operation for eviction
8d6a8a7 add cache test for identifying MaxCost overflows
ae09250 use the custom KeyToHash function if one is set
1dd5a4d #19 Fixes memory leak due to Oct 1st regression in processItems
```

(cherry picked from commit 82381ac)

* Support disabling the cache completely. (#1183) (#1185)

The cache can be disabled by setting `opt.MaxCacheSize=0`

(cherry picked from commit 7e5a956)
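Hypothetical usage, assuming the v2 option setter WithMaxCacheSize (the directory is made up):

```go
package main

import (
	"log"

	badger "github.com/dgraph-io/badger/v2"
)

func main() {
	// MaxCacheSize = 0 disables the cache completely.
	opt := badger.DefaultOptions("/tmp/badger-nocache").WithMaxCacheSize(0)
	db, err := badger.Open(opt)
	if err != nil {
		log.Fatal(err)
	}
	defer db.Close()
}
```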

* Fix L0/L1 stall test (#1201)

The tests `TestL0Stall` and `TestL1Stall` would never fail
because of errors in the manifest file. This commit fixes that.

(cherry picked from commit 0acb3f6)

* Disable compression and set ZSTD Compression Level to 1 (#1191)

This PR
- Disables compression. By default, badger does not use any compression.
- Sets the default ZSTD compression level to 1.
    Level 15 is very slow for any practical use of badger.

```
no_compression-16              10	 502848865 ns/op	 165.46 MB/s
zstd_compression/level_1-16     7	 739037966 ns/op	 112.58 MB/s
zstd_compression/level_3-16     7	 756950250 ns/op	 109.91 MB/s
zstd_compression/level_15-16    1	11135686219 ns/op	   7.47 MB/s
```

(cherry picked from commit c3333a5)
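Hypothetical usage reflecting the new defaults (option names assumed from the v2 API; compression stays off unless you opt in):

```go
package main

import (
	"log"

	badger "github.com/dgraph-io/badger/v2"
	"github.com/dgraph-io/badger/v2/options"
)

func main() {
	// Opt back in to ZSTD at the new default level of 1.
	opt := badger.DefaultOptions("/tmp/badger-zstd").
		WithCompression(options.ZSTD).
		WithZSTDCompressionLevel(1)
	db, err := badger.Open(opt)
	if err != nil {
		log.Fatal(err)
	}
	defer db.Close()
}
```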

* Add support for caching bloomfilters (#1204)

This PR adds support for caching bloom filters in ristretto.
The bloom filters and blocks are removed from the cache 
when the table is deleted.

(cherry picked from commit 4676ca9)

* Rework concurrency semantics of valueLog.maxFid (#1184) (#1187)

Move all access to `valueLog.maxFid` under `valueLog.filesLock`, while
all mutations happen either with writes stopped or sequentially under
valueLog.write.

Fixes a concurrency issue in `valueLog.Read` where the maxFid variable
and the `writableLogOffset` variable could point to two different log files.

(cherry picked from commit 3e25d77)

* Change else-if statements to idiomatic switch statements. (#1207)



(cherry picked from commit eee1602)

* Fix flaky TestPageBufferReader2 test (#1210)

Fixes #1197

The `TestPageBufferReader2` test would fail often because of an
`off-by-1` issue. The problem can be reproduced by statically setting
`randOffset` to the biggest number that `rand.Int31n` may return, like:

```
    //randOffset := int(rand.Int31n(int32(b.length)))
    randOffset := int(int32(b.length-1))
```

This makes the problem reliably reproducible as the offset is now
pointing at EOF.

Thus changing the line to this should hopefully solve the problem:
`randOffset := int(rand.Int31n(int32(b.length - 1)))`

(cherry picked from commit c51748e)

* Replace t.Fatal with require.NoError in tests (#1213)

We are using the following pattern in tests that can be
replaced with `require.NoError(t, err)`.
```go
if err != nil {
    t.Fatal(err)
}
```

(cherry picked from commit 78d405a)

* Add missing package to README for badger.NewEntry (#1223)


(cherry picked from commit 8734e3a)

* Remove ExampleDB_Subscribe Test (#1214)

The test ExampleDB_Subscribe doesn't run reliably on appveyor.
This commit removes it.

(cherry picked from commit e029e93)

* Fix int overflow for 32bit (#1216)

- Fix tests for 32 bit systems
- Enable 32-bit builds on Travis

(cherry picked from commit bce069c)

* Update CHANGELOG for Badger 2.0.2 release. (#1230)

Co-authored-by: Ibrahim Jarif <[email protected]>
(cherry picked from commit e908818)

* Fix changelog for v2.0.2

(cherry picked from commit b81faa5)

Co-authored-by: Christian Stewart <[email protected]>
Co-authored-by: Leyla Galatin <[email protected]>
Co-authored-by: Damien Tournoud <[email protected]>
Co-authored-by: Martin Martinez Rivera <[email protected]>
Co-authored-by: Dieter Plaetinck <[email protected]>
Co-authored-by: Apoorv Vardhan <[email protected]>
jarifibrahim pushed a commit that referenced this issue Mar 12, 2020
Fixes - #1031

(cherry picked from commit 2a90c66)
hpucha pushed a commit to neevaco/badger that referenced this issue Mar 20, 2020
Remove the 'this entry should've caught' log from value.go (dgraph-io#1170)

Fixes - dgraph-io#1031
jarifibrahim pushed a commit that referenced this issue Mar 24, 2020
Fixes - #1031

(cherry picked from commit 2a90c66)
manishrjain pushed a commit to outcaste-io/outserv that referenced this issue Jul 6, 2022
Fixes - dgraph-io/badger#1031