
Blog post about new hash table (WIP) #195

Draft · wants to merge 1 commit into main
Conversation

zuiderkwast (Contributor):

A first stab.

The new hash table is one of the highlights of the upcoming 8.1 release.

  • Improve structure and content of the text
  • Replace ascii art with other art
  • Decide which benchmarks we want
  • Add benchmark results

Signed-off-by: Viktor Söderqvist <[email protected]>
| Valkey 8.0 | ? bytes |
| Valkey 8.1 | ? bytes |

The benchmarks below were run using a key size of N and a value size of M bytes, without pipelining.


Let's add something for set/zset/hash and see if we get even more performance and memory savings since those datatypes are hashtables inside of a hashtable. :)

zuiderkwast (Author):


Yes... Feel free to replace these tables with some completely different tests.

There's a fixed overhead per key and then an overhead per field-value pair. Still, I'd like to see a table of memory savings per element/field/etc. for these types.

I want to do hash value embedding (to save the value pointer and an extra allocation), and Ran noticed that our embedded sds (key and field) are sds8 even when they should be sds5, so we could save two more bytes for those. That's because they're copied from an EMBSTR robj value, and those are always sds8. I have an idea to fix that too, though.

Comment on lines +61 to +63
Why not use an open-source state-of-the-art hash table implementation such as
Swiss tables? The answer is that we require some specific features, apart from
the basic operations like add, lookup, replace, delete:


Another reason: a Swiss table is very fast, but it stores the elements directly in a contiguous array, which requires that all elements be the same size. Because our elements vary in size, we had to choose a different design: cache-line-sized buckets with element pointers. (This idea was mentioned at the end of the Swiss table talk; up to you if you want to make that reference though.)

zuiderkwast (Author):


No, you can store pointers in a Swiss table, just like we store pointers in our bucket layout. The pointers are the fixed-size elements, no?

I don't think it allows a custom key-value entry design like we do though. It can be either a set or a map (key and value) IIUC.

zuiderkwast (Author):


I think we could have picked an off-the-shelf implementation even if we couldn't embed the key and value the way we do, as long as it was better than dict. There's value in using a battle-tested, ready-to-use one too: it's easier to get right, and less work... I think scan and incremental rehashing were clearly blockers though.
