Implement restore #96
Conversation
Ignore this for now; after this PR is merged, an update to the branch status check settings will fix it up.
```diff
@@ -73,7 +73,10 @@ class Entry(val initializedApp: AtomicReference[Option[App[_]]] = new AtomicRefe
   block.withBootstrapServers(value.toList.mkString(","))

   Some(block).validNel
 case None if Options.checkConfigKeyIsDefined("kafka-client.bootstrap.servers") => None.validNel
 case None
```
This is a quick fix; there are multiple configurations: a global one, which is shared via `kafka-client.bootstrap.servers`, and a specific one just for the consumer at `akka.kafka.consumer.kafka-clients.bootstrap.servers`.
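The precedence between the two keys can be sketched with a plain `Map` standing in for the loaded config (the object and helper names here are illustrative, not the project's actual API; only the two key names come from the comment above):

```scala
// Sketch of resolving bootstrap servers: the consumer-specific key wins,
// the global key is the fallback. A Map stands in for the parsed config.
object BootstrapServersResolution {
  def resolveBootstrapServers(config: Map[String, String]): Option[String] =
    config
      .get("akka.kafka.consumer.kafka-clients.bootstrap.servers") // specific wins
      .orElse(config.get("kafka-client.bootstrap.servers"))       // global fallback

  def main(args: Array[String]): Unit = {
    val both = Map(
      "kafka-client.bootstrap.servers" -> "global:9092",
      "akka.kafka.consumer.kafka-clients.bootstrap.servers" -> "consumer:9092"
    )
    val globalOnly = Map("kafka-client.bootstrap.servers" -> "global:9092")

    println(resolveBootstrapServers(both))       // Some(consumer:9092)
    println(resolveBootstrapServers(globalOnly)) // Some(global:9092)
  }
}
```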
```diff
@@ -105,3 +99,17 @@ class KafkaClient(
   override def batchCursorContext(cursors: immutable.Iterable[CommittableOffset]): CommittableOffsetBatch =
     CommittableOffsetBatch(cursors.toSeq)
 }

 object KafkaClient {
```
This was refactored out since we use it in a lot of places.
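The shape of that refactoring can be sketched with simplified stand-in types (the real `CommittableOffset`/`CommittableOffsetBatch` come from Alpakka Kafka; this only shows shared logic moving into a companion-style object so it is callable from many places without an instance):

```scala
// Simplified stand-ins for the Alpakka Kafka types.
final case class CommittableOffset(partition: Int, offset: Long)
final case class CommittableOffsetBatch(offsets: Seq[CommittableOffset])

// Helper hoisted out of the class so every call site can reuse it.
object KafkaClientSketch {
  def batchOf(cursors: Iterable[CommittableOffset]): CommittableOffsetBatch =
    CommittableOffsetBatch(cursors.toSeq)

  def main(args: Array[String]): Unit = {
    val batch = batchOf(List(CommittableOffset(0, 1L), CommittableOffset(1, 5L)))
    println(batch.offsets.size) // 2
  }
}
```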
```diff
@@ -92,6 +93,7 @@ lazy val core = project
   "org.scalatest" %% "scalatest" % scalaTestVersion % Test,
   "org.scalatestplus" %% "scalacheck-1-15" % scalaTestScalaCheckVersion % Test,
   "org.mdedetrich" %% "scalacheck" % scalaCheckVersion % Test,
+  "com.rallyhealth" %% "scalacheck-ops_1-15" % scalaCheckOpsVersion % Test,
```
This is an extension library for ScalaCheck that adds generators for various types. Specifically, we needed a generator for `OffsetDateTime`.
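The kind of value such a generator has to produce can be sketched with the standard library alone (this is an illustration, not scalacheck-ops's actual API; the real library wires this into ScalaCheck's `Gen`/`Arbitrary` machinery, and the bounds/helper names here are made up). Assumes Scala 2.13 for `Random.nextLong(n)` and `Random.between`:

```scala
import java.time.{Instant, OffsetDateTime, ZoneOffset}
import scala.util.Random

// Sketch of generating a random OffsetDateTime between two instants, the
// sort of generator scalacheck-ops packages as an Arbitrary instance.
object OffsetDateTimeGen {
  private val rnd = new Random()

  def genOffsetDateTime(min: Instant, max: Instant): OffsetDateTime = {
    val span   = max.getEpochSecond - min.getEpochSecond
    val second = min.getEpochSecond + rnd.nextLong(span + 1)
    val hours  = rnd.between(-18, 19) // valid UTC offsets are -18h..+18h
    Instant.ofEpochSecond(second).atOffset(ZoneOffset.ofHours(hours))
  }

  def main(args: Array[String]): Unit = {
    val dt = genOffsetDateTime(Instant.EPOCH, Instant.parse("2030-01-01T00:00:00Z"))
    println(dt) // a random OffsetDateTime within the bounds
  }
}
```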
About this change - What it does
This PR implements the restore functionality for S3 persistence and adds unit/mock tests, as well as a full round-trip end-to-end test that backs up data from an actual Kafka cluster into S3 and then restores that S3 backup into an actual Kafka cluster under a different set of topics.
Why this way
This PR has a lot of significant changes, so here is a breakdown:

- Restore functionality was added, with the following optional settings:
  - `fromWhen`: a setting I think will be useful; it only restores messages after the `fromWhen` timestamp (according to Kafka's own internal timestamp).
  - A topic map: restoring `backupTopic` to `newRestoredTopic` would be expressed as `Map("backupTopic" -> "newRestoredTopic")`. `RealS3RestoreSpec` uses this config when restoring an S3 backup to avoid the hassle of having to create a new Kafka cluster (as a bonus it also tests this functionality).
- Common code from `RealS3BackupSpec` was moved into shared traits since it was also needed for `RealS3RestoreSpec`.
- In `build.sbt` the project was restructured a bit to introduce a `coreRestore` project. Also, the `s3Restore` project includes the `s3Backup` main classpath, but ONLY in test, which is what allows us to do a full end-to-end test.
- Due to an `ISO_OFFSET_DATE_TIME` bug in JDK 8, it's a good idea to specify JDK 11 as the minimum version.
- `akka.http.client.stream-cancellation-delay` was set to 1000 milliseconds because it was causing random issues in GitHub Actions due to the fast network speed in data centers. See https://discuss.lightbend.com/t/about-nomoreelementsneeded-exception/8599/10 and "NoMoreElementsNeeded Exception on Upgrade to Akka HTTP 10.1.12 with Akka 2.5.30" (akka/akka-http#3201).
- There is an issue with the `Producer` in the `Restore` module: `Transactional.source` only works if the source is a Kafka cluster rather than another source (even if you construct the `ProducerRecord` the exact same way). I will create an issue for this to investigate further.
- `Utils.runSequentially` was added to make sure the `Future`s run sequentially (`Future`s are strict by default in Scala).
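How the `fromWhen` cutoff and the topic map interact during a restore can be sketched with a simplified record type standing in for Kafka's `ConsumerRecord` (the type and function names here are illustrative, not the project's actual code):

```scala
import java.time.Instant

// Simplified stand-in for a consumed Kafka record; the real code works with
// ConsumerRecord and Kafka's broker-assigned timestamp.
final case class BackupRecord(topic: String, timestamp: Instant, value: String)

object RestoreFilterSketch {
  // fromWhen: drop records older than the cutoff (both settings optional);
  // topicMap: rename topics, Map("backupTopic" -> "newRestoredTopic") style.
  def selectForRestore(
      records: Seq[BackupRecord],
      fromWhen: Option[Instant],
      topicMap: Map[String, String]
  ): Seq[BackupRecord] =
    records
      .filter(r => fromWhen.forall(cutoff => !r.timestamp.isBefore(cutoff)))
      .map(r => r.copy(topic = topicMap.getOrElse(r.topic, r.topic)))

  def main(args: Array[String]): Unit = {
    val records = Seq(
      BackupRecord("backupTopic", Instant.parse("2021-01-01T00:00:00Z"), "old"),
      BackupRecord("backupTopic", Instant.parse("2021-06-01T00:00:00Z"), "new")
    )
    val restored = selectForRestore(
      records,
      fromWhen = Some(Instant.parse("2021-03-01T00:00:00Z")),
      topicMap = Map("backupTopic" -> "newRestoredTopic")
    )
    println(restored.map(r => s"${r.topic}:${r.value}"))
    // prints List(newRestoredTopic:new)
  }
}
```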
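Because a Scala `Future` starts running as soon as it is constructed, sequencing requires chaining thunks rather than already-built futures. A sketch of what a helper like `Utils.runSequentially` has to do (the signature is a guess, not the project's actual one):

```scala
import scala.collection.mutable.ListBuffer
import scala.concurrent.{Await, ExecutionContext, Future}
import scala.concurrent.duration._

object RunSequentiallySketch {
  // Chains by-name Future factories so each starts only after the previous
  // completes. Taking () => Future[T] instead of Future[T] is the key point:
  // a plain Future would already be running when passed in.
  def runSequentially[T](tasks: Seq[() => Future[T]])(
      implicit ec: ExecutionContext
  ): Future[Seq[T]] =
    tasks.foldLeft(Future.successful(Vector.empty[T])) { (acc, task) =>
      acc.flatMap(results => task().map(results :+ _))
    }

  def main(args: Array[String]): Unit = {
    implicit val ec: ExecutionContext = ExecutionContext.global
    val order = ListBuffer.empty[Int]
    val tasks = (1 to 3).map(i => () => Future { order += i; i })
    val results = Await.result(runSequentially(tasks), 5.seconds)
    println(results)      // Vector(1, 2, 3)
    println(order.toList) // List(1, 2, 3) -- the side effects ran in order
  }
}
```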