Placeholders for kafka codec and simple kafka-broker filter #4869
adamkotwasinski wants to merge 4 commits into envoyproxy:master
Conversation
```cpp
class Encoder {
public:
  template <typename T>
  size_t encode(const T& arg, char* dst);
```
The current implementation just assumes there's enough space, which is wrong.
What would be the favoured API? `size_t encode(const T& arg, char* dst, size_t dst_size)`, with `dst_size` carrying the maximum number of bytes that can be written?
Or are there existing Envoy classes I should use instead?
```cpp
ParserSharedPtr createParser(INT16 api_key, INT16 api_version, RequestContextSharedPtr context_) const;
```
```cpp
static const RequestParserResolver KAFKA_0_11;
```
I left that separation in explicitly; I intend to have various versions such as KAFKA_10, KAFKA_11, etc. depending on configuration - this might allow us to e.g. perform request rewrites when someone sends a request that cannot be handled by the cluster (or the other way around?).
```cpp
};
request.apiVersion() = 4;
request.correlationId() = 10;
request.clientId() = "client-id";
```
The API for request construction is not perfect, especially considering that some versions are structurally identical - so it's not like we can create a constructor for v1, for v2, etc., as they'd be identical.
It might need some rework; I'm thinking about static factory functions, e.g. `MetadataRequest::makeV1Request(.....)` - that should simplify the kafka-cluster filter code.
Assigning myself for review/shepherding.
@adamkotwasinski what is the best way to go about this? Ultimately we need to break this down into smaller PRs that I can review. My preference is to start with a codec similar to what we have for Mongo (per offline discussion). Can we do an independent PR with just the codec and tests and then go from there?
@mattklein123 Sure, will do. Just bear in mind that the (request) codec will still contain most of https://github.com/envoyproxy/envoy/pull/4869/files#diff-8d4ae011af8e71e202b1a64a07f33192R220 - the code there is an implementation of https://kafka.apache.org/protocol#protocol_messages, with 40-something types of messages (and not ~9, as is the case with Mongo), each also having structurally different versions (which has a certain impact on how the encoder API looks - I just don't want to enumerate every possible type :) )
Yes, understood that there are 40 messages. If you want to split it further, just start with a portion of the messages.
@mattklein123 Raised #4950 |
I'm going to go ahead and close this one for now. We can refer back to it as needed but I suspect there will be enough changes in the current PR to make this one not that useful to have to look at. |
Description: stub for Kafka filter [not mergeable] (relates to #2852)
_V1, _V2, ..., _Vn variants)
Risk Level: medium
Testing: manual testing using kafka broker 1.0 + kafka client 0.11
Docs Changes: to be implemented
Release Notes: n/a