This repository has been archived by the owner on May 5, 2022. It is now read-only.
Overview
Through using reactive-nakadi, and from feedback from different users, it has been decided to focus on a 1.0 version. This new version will bring a number of major improvements and stability fixes. One such major improvement is rewriting the core consumer implementation. To achieve this, it is suggested to take advantage of Akka's JSON streaming as opposed to Akka's lower-level Actor subscriber/publisher model.
Another improvement is to take advantage of Nakadi's high-level API in order to persist offsets. Ideally this implementation could be extended to provide offset checkpointing in other storage services, such as a database or DynamoDB.
Finally, we would like to provide a richer and simpler API interface.
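To make the discussion concrete, here is a hypothetical sketch of what the two DSLs mentioned in the TODO (Consumer.scala and Producer.scala) could look like. All names and signatures here are assumptions for debate, not the final API; an in-memory stand-in replaces the actual Nakadi client.

```scala
// Hypothetical sketch of the 1.0 DSL surface; names and shapes are
// assumptions for discussion, not the final API.
final case class Event(eventType: String, payload: String)

trait Consumer {
  // Register a handler that is invoked for every event of the given type.
  def consume(eventType: String)(handler: Event => Unit): Unit
}

trait Producer {
  // Publish a single event.
  def publish(event: Event): Unit
}

// Minimal in-memory stand-in for the real Nakadi-backed implementation,
// useful for tests and for discussing the API shape.
class InMemoryBus extends Consumer with Producer {
  private var handlers =
    Map.empty[String, Event => Unit].withDefaultValue((_: Event) => ())

  def consume(eventType: String)(handler: Event => Unit): Unit =
    handlers = handlers.updated(eventType, handler)

  def publish(event: Event): Unit =
    handlers(event.eventType)(event)
}
```

Keeping the two concerns in separate, minimal traits would let users depend only on the side they need.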
Approach
The plan is to keep the current 0.0.x version available in the current master branch. All version 1.0 work will happen on a 1.0 branch. When we are ready to release, the main repo branch will point to 1.0 and the master branch will be renamed to alpha. All other branches that currently exist will be removed.
Workflow
1.0 will be treated as the main branch for the new version; all work will be reviewed as a pull request between a given branch and 1.0. Once work is merged, the branch can be removed. A Travis build will be set up to watch the 1.0 branch.
TODO
Very high-level overview of what to do:
- Re-implement the core consumer using Akka's JSON streaming, supporting both Nakadi's low- and high-level streaming APIs.
- Re-write the API interface, breaking it into two very simple DSLs: Consumer.scala and Producer.scala.
- Create an extendable offset-management trait with multiple implementations.
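The extendable offset-management trait could take roughly the following shape. This is a hypothetical sketch: the trait and method names are assumptions, and the in-memory implementation stands in for the real backends (Nakadi's high-level API, a database, DynamoDB), each of which would provide its own implementation.

```scala
// Hypothetical shape for the extendable offset-management trait.
// Backends (Nakadi's high-level API, a DB, DynamoDB) would each
// implement it; this in-memory version is a stand-in for illustration.
final case class Partition(eventType: String, id: String)

trait OffsetManager {
  // Record that all events up to `offset` on `partition` were processed.
  def commit(partition: Partition, offset: Long): Unit
  // The last committed offset for a partition, if any.
  def lastCommitted(partition: Partition): Option[Long]
}

class InMemoryOffsetManager extends OffsetManager {
  private var offsets = Map.empty[Partition, Long]

  def commit(partition: Partition, offset: Long): Unit =
    // Never move a committed offset backwards.
    if (lastCommitted(partition).forall(_ < offset))
      offsets = offsets.updated(partition, offset)

  def lastCommitted(partition: Partition): Option[Long] =
    offsets.get(partition)
}
```

A consumer would ask its OffsetManager for `lastCommitted` on startup to resume from the right position, and call `commit` as batches are processed.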
Note
This is a rough draft of how I see the 1.0 version. Everything mentioned above is up for comment, change, and debate. Please feel free to leave comments or make changes.
Please try to keep all dependencies to a minimum. It may be worth splitting the library into multiple sub-modules, each providing different functionality: for example, reactive-nakadi-core providing the core streaming abilities, reactive-nakadi-db-commit that can be added as an extra dependency, and so on. How we split into sub-modules can be discussed later as we know more.
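For illustration, a sub-module split along those lines might look like the following build.sbt fragment. This is only a sketch under the assumption that the module names above stick; the actual split is still open for discussion.

```scala
// Hypothetical build.sbt sketch of the sub-module split; module names
// follow the examples above and are not final.
lazy val core = (project in file("reactive-nakadi-core"))
  .settings(name := "reactive-nakadi-core")

// Optional DB-backed offset committing, pulled in as an extra dependency.
lazy val dbCommit = (project in file("reactive-nakadi-db-commit"))
  .settings(name := "reactive-nakadi-db-commit")
  .dependsOn(core)

lazy val root = (project in file("."))
  .aggregate(core, dbCommit)
```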