WIP diff viewer: utilize NEW_TOKEN frames #8

Draft · wants to merge 24 commits into base: main
Conversation

gretchenfrage (Owner) commented Jun 16, 2024

Key points:

  • Token is generalized to have both a "retry token" variant and a "new token frame token" variant
    • An additional byte acts as the discriminant. It is not encrypted, but is instead treated as the "additional data" of the token's AEAD encryption. Other than that, the "retry token" variant remains the same.
    • The NewToken variant's aead_from_hkdf key derivation is based on an empty byte slice &[] rather than on the retry_src_cid.
    • The NewToken variant's encrypted data consists of 128 randomly generated bits, the client's IP address (not including the port), and the issued timestamp.
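The two token variants described above might be modeled roughly as follows. This is an illustrative sketch, not the PR's actual types: the names `Token` and `discriminant`, and the field names, are assumptions.

```rust
// Sketch of the generalized token layout (illustrative; the PR's
// actual types and field names may differ).
use std::net::IpAddr;
use std::time::SystemTime;

enum Token {
    // Unchanged from before, and still keyed off the retry_src_cid.
    Retry {
        orig_dst_cid: Vec<u8>,
        issued: SystemTime,
    },
    // Encrypted payload of a token delivered via a NEW_TOKEN frame.
    NewToken {
        rand: u128,         // 128 randomly generated bits
        ip: IpAddr,         // client IP address, port excluded
        issued: SystemTime, // issuance timestamp
    },
}

impl Token {
    // The discriminant byte: prepended to the token unencrypted,
    // but authenticated as the AEAD's "additional data".
    fn discriminant(&self) -> u8 {
        match self {
            Token::Retry { .. } => 0,
            Token::NewToken { .. } => 1,
        }
    }
}
```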
  • The server sends the client 2 NEW_TOKEN frames whenever the client's path is validated (configurable via ServerConfig.new_tokens_sent_upon_validation)
  • The ClientConfig.new_token_store: Option<Arc<dyn NewTokenStore>> object stores NEW_TOKEN tokens received by the client, and dispenses them for one-time use when connecting to the same server_name again
    • The default implementation, InMemNewTokenStore, stores the 2 newest unused tokens for up to 256 servers, with LRU eviction of server names, so as to pair well with rustls::client::ClientSessionMemoryCache
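A client-side store with these semantics might look like the following sketch. The trait shape and names here are hypothetical (the PR's actual trait is behind `Arc<dyn NewTokenStore>` and presumably uses interior mutability), and this minimal implementation keeps only the newest tokens per server, without the LRU eviction of server names that InMemNewTokenStore does.

```rust
use std::collections::{HashMap, VecDeque};

// Hypothetical sketch of a NEW_TOKEN store: tokens are keyed by
// server_name and dispensed for one-time use.
trait NewTokenStore {
    // Called when the client receives a NEW_TOKEN frame.
    fn store(&mut self, server_name: &str, token: Vec<u8>);
    // Called when connecting; the returned token is consumed.
    fn take(&mut self, server_name: &str) -> Option<Vec<u8>>;
}

// Keeps the N newest unused tokens per server. (Unlike the PR's
// InMemNewTokenStore, there is no cap on the number of servers.)
struct SimpleStore {
    max_per_server: usize,
    tokens: HashMap<String, VecDeque<Vec<u8>>>,
}

impl NewTokenStore for SimpleStore {
    fn store(&mut self, server_name: &str, token: Vec<u8>) {
        let q = self.tokens.entry(server_name.to_owned()).or_default();
        q.push_back(token);
        while q.len() > self.max_per_server {
            q.pop_front(); // evict the oldest token
        }
    }

    fn take(&mut self, server_name: &str) -> Option<Vec<u8>> {
        // Dispense the newest token; once taken it is gone.
        self.tokens.get_mut(server_name)?.pop_back()
    }
}
```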
  • ServerConfig.token_reuse_preventer: Option<Arc<Mutex<Box<dyn TokenReusePreventer>>>> object is responsible for mitigating reuse of NEW_TOKEN tokens
    • Default implementation BloomTokenReusePreventer:

      Divides all time into periods of length new_token_lifetime, starting at the unix epoch. Two "filters" are always maintained, each tracking the used tokens that expire in one of the two current periods. Turning over the filters as time passes prevents infinite accumulation of tracked tokens.

      Filters start out as FxHashSets. This achieves the desirable property of linear-ish memory usage: if few NEW_TOKEN tokens are actually being used, the server's bloom token reuse preventer uses negligible memory.

      Once a hash set filter would exceed a configurable maximum memory consumption, it is converted to a bloom filter. This places an upper bound on the number of bytes allocated by the reuse preventer. The trade-off is that, as more tokens are added to the bloom filter, the false positive rate (tokens not actually reused, but considered reused and thus ignored anyway) increases.
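The two-filter rotation can be sketched as follows. This uses std::collections::HashSet in place of FxHashSet and omits the hash-set-to-bloom-filter conversion; the names and signatures are hypothetical, not the PR's.

```rust
use std::collections::HashSet;
use std::time::{Duration, SystemTime, UNIX_EPOCH};

// Sketch of the two-filter rotation: time is divided into fixed
// periods starting at the unix epoch, and each filter tracks the
// tokens expiring within one period. (The PR additionally converts
// a filter into a bloom filter once it grows too large; that step
// is omitted here.)
struct ReusePreventer {
    lifetime: Duration,
    // Index of the period covered by filters[0]; filters[1] covers
    // the following period.
    base_period: u64,
    filters: [HashSet<u128>; 2],
}

impl ReusePreventer {
    fn period_of(&self, t: SystemTime) -> u64 {
        t.duration_since(UNIX_EPOCH).unwrap().as_secs() / self.lifetime.as_secs()
    }

    // Returns true if the token should be rejected (already used,
    // or expired).
    fn check_and_insert(&mut self, token_rand: u128, expires: SystemTime) -> bool {
        let period = self.period_of(expires);
        // Turn over filters as time passes, dropping tokens from
        // periods that have fully expired.
        while period > self.base_period + 1 {
            self.filters.swap(0, 1);
            self.filters[1].clear();
            self.base_period += 1;
        }
        if period < self.base_period {
            return true; // token already expired
        }
        let idx = (period - self.base_period) as usize;
        // HashSet::insert returns false if the value was present.
        !self.filters[idx].insert(token_rand)
    }
}
```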

  • ServerConfig.new_token_lifetime is different from ServerConfig.retry_token_lifetime and defaults to 2 weeks.
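Taken together, the configuration surface described above might be exercised roughly like this. This is a non-compiling fragment: the field names come from the PR description, but the construction around them is assumed.

```rust
// Sketch only; field names are from the PR description, everything
// else (construction, crypto setup) is assumed.
let mut server_config = quinn::ServerConfig::with_crypto(crypto);
// How long a NEW_TOKEN token stays valid (distinct from
// retry_token_lifetime); the PR's default is 2 weeks.
server_config.new_token_lifetime = Duration::from_secs(14 * 24 * 60 * 60);
// How many NEW_TOKEN frames to send once the client's path is validated.
server_config.new_tokens_sent_upon_validation = 2;

let mut client_config = quinn::ClientConfig::with_root_certificates(roots);
// Store received tokens for one-time use per server_name.
client_config.new_token_store = Some(Arc::new(InMemNewTokenStore::default()));
```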

TODO:

  • Send when validated rather than upon first connecting
  • Send upon path change
  • Update stats
  • Tests
  • Reuse prevention
    • Simplify it; it's not even used concurrently
  • Make sure encryption is good
  • Don't break if a Retry is received in response to a request that used a NEW_TOKEN token
  • NEW_TOKEN tokens should not encode the port (?)
  • We don't need a top-level Token.encode
