
Placeholders for kafka codec and simple kafka-broker filter #4869

Closed
adamkotwasinski wants to merge 4 commits into envoyproxy:master from adamkotwasinski:review

Conversation

@adamkotwasinski (Contributor) commented Oct 26, 2018

Description: stub for Kafka filter [not mergeable] (relates to #2852)

  • a very simple Kafka filter that just captures payloads sent and received and logs them
  • a Kafka codec capable of handling (some) Kafka 0.11 request types
  • the purpose of this PR is to check whether the committed code is acceptable by Envoy's standards and whether the further work (remaining request types, response types, etc.) can follow
  • some questions to Envoy SMEs are left throughout the review
  • an outline of what still needs to be implemented (codec-side)

Risk Level: medium
Testing: manual testing using kafka broker 1.0 + kafka client 0.11
Docs Changes: to be implemented
Release Notes: n/a

```cpp
class Encoder {
public:
  template <typename T>
  size_t encode(const T& arg, char* dst);
```
@adamkotwasinski (Contributor, Author):

The current implementation just assumes there is enough space in the destination, which is wrong. What would be the favoured API? `size_t encode(const T& arg, char* dst, size_t dst_size)`, with `dst_size` carrying the maximum number of bytes that can be written? Or are there any Envoy classes I should use instead?
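A minimal sketch of the bounded-buffer variant floated above, assuming the signature `encode(const T&, char*, size_t)` from the comment; the overflow check, the exception type, and the raw `memcpy` are illustrative choices, not the PR's actual implementation:

```cpp
#include <cstdint>
#include <cstring>
#include <stdexcept>

class Encoder {
public:
  // Returns the number of bytes written; refuses to write past `dst_size`
  // instead of silently overrunning the destination buffer.
  template <typename T>
  size_t encode(const T& arg, char* dst, size_t dst_size) {
    if (sizeof(T) > dst_size) {
      throw std::out_of_range("encode: destination buffer too small");
    }
    // A real Kafka codec would serialize in network (big-endian) byte order;
    // a plain memcpy stands in for that here.
    std::memcpy(dst, &arg, sizeof(T));
    return sizeof(T);
  }
};
```

With this shape the caller owns the buffer and the encoder can fail loudly, at the cost of a size check on every call.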


```cpp
ParserSharedPtr createParser(INT16 api_key, INT16 api_version, RequestContextSharedPtr context_) const;

static const RequestParserResolver KAFKA_0_11;
```
@adamkotwasinski (Contributor, Author):

I left that separation explicit; I intend to have various versions such as KAFKA_10, KAFKA_11, etc., depending on configuration. This might allow us to e.g. perform request rewrites when someone sends a request that cannot be handled by the cluster (or the other way around?).
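The per-version resolver idea could look roughly like the sketch below. Everything here is hypothetical scaffolding around the `createParser(api_key, api_version)` signature shown above: the `registerParser` helper, the factory map, and the stub `Parser` type are not from the PR.

```cpp
#include <cstdint>
#include <functional>
#include <map>
#include <memory>
#include <utility>

using INT16 = int16_t;

struct Parser {}; // stand-in for the real parser hierarchy
using ParserSharedPtr = std::shared_ptr<Parser>;

// Each resolver instance holds only the (api_key, api_version) pairs that a
// given Kafka release understands, so constants like KAFKA_0_11, KAFKA_10,
// etc. can coexist and be selected from configuration.
class RequestParserResolver {
public:
  void registerParser(INT16 api_key, INT16 api_version,
                      std::function<ParserSharedPtr()> factory) {
    factories_[{api_key, api_version}] = std::move(factory);
  }

  // Returns nullptr when this broker version has no parser for the request,
  // which is the hook where a request rewrite could be triggered.
  ParserSharedPtr createParser(INT16 api_key, INT16 api_version) const {
    auto it = factories_.find({api_key, api_version});
    return it == factories_.end() ? nullptr : it->second();
  }

private:
  std::map<std::pair<INT16, INT16>, std::function<ParserSharedPtr()>> factories_;
};
```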

```cpp
};
request.apiVersion() = 4;
request.correlationId() = 10;
request.clientId() = "client-id";
```
@adamkotwasinski (Contributor, Author) commented Oct 26, 2018:

The API for request construction is not perfect, especially considering that some versions are structurally identical, so it's not as if we can create a constructor for v1, for v2, etc., as they would be identical. It might need some rework; I am thinking about static factory functions, e.g. MetadataRequest::makeV1Request(.....), which should simplify the kafka-cluster filter code.
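A sketch of the static-factory idea mentioned above: structurally identical versions share one struct, and the factory pins down the api_version so callers cannot mix them up. The field layout and `makeV2Request` are assumptions for illustration; only the `MetadataRequest::makeV1Request` name appears in the comment.

```cpp
#include <cstdint>
#include <string>
#include <utility>

struct MetadataRequest {
  int16_t api_version;
  int32_t correlation_id;
  std::string client_id;

  // Each factory fixes the version, so the mutable apiVersion() setter from
  // the snippet above is no longer needed at call sites.
  static MetadataRequest makeV1Request(int32_t correlation_id,
                                       std::string client_id) {
    return {1, correlation_id, std::move(client_id)};
  }

  static MetadataRequest makeV2Request(int32_t correlation_id,
                                       std::string client_id) {
    return {2, correlation_id, std::move(client_id)};
  }
};
```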

@adamkotwasinski adamkotwasinski changed the title stub of kafka(broker) filter / codec Placeholders for kafka codec and simple kafka-broker filter Oct 26, 2018
@mattklein123 mattklein123 self-assigned this Oct 27, 2018
@mattklein123 (Member) commented:
Assigning myself for review/shepherding.

@mattklein123 mattklein123 added the no stalebot Disables stalebot from closing an issue label Oct 30, 2018
@mattklein123 (Member) commented:

@adamkotwasinski what is the best way to go about this? Ultimately we need to break this down into smaller PRs that I can review. My preference is to start with a codec similar to what we have for Mongo (per offline discussion). Can we do an independent PR with just the codec and tests and then go from there?

@adamkotwasinski (Contributor, Author) commented Oct 30, 2018:

@mattklein123 Sure, will do. Just bear in mind that the codec (request codec) will still contain most of https://github.com/envoyproxy/envoy/pull/4869/files#diff-8d4ae011af8e71e202b1a64a07f33192R220. The code there is an implementation of https://kafka.apache.org/protocol#protocol_messages, with forty-something types of messages (and not ~9, as is the case with Mongo), each also having structurally different versions, which has a certain impact on how the encoder API looks (I just don't want to enumerate every possible type :) ).

@mattklein123 (Member) commented:

Yes, understood that there are ~40 messages. If you want to split it further, just start with a portion of the messages.

@adamkotwasinski (Contributor, Author) commented:

@mattklein123 Raised #4950

@mattklein123 (Member) commented:

I'm going to go ahead and close this one for now. We can refer back to it as needed but I suspect there will be enough changes in the current PR to make this one not that useful to have to look at.

@adamkotwasinski adamkotwasinski deleted the review branch May 7, 2019 18:31
