Scala client for Amazon Kinesis with Apache Spark support.
For Apache Spark, reading from Kinesis is supported by the Spark Streaming Kinesis Integration, but writing to Kinesis is not. This library makes it possible to write Spark's RDDs and Spark Streaming's DStreams to Kinesis.
Add the following dependency to your build.sbt:
core only:
libraryDependencies += "jp.co.bizreach" %% "aws-kinesis-scala" % "0.0.12"
with Spark integration:
libraryDependencies += "jp.co.bizreach" %% "aws-kinesis-spark" % "0.0.12"
First, create an AmazonKinesis client.
import com.amazonaws.ClientConfiguration
import com.amazonaws.auth.InstanceProfileCredentialsProvider
import com.amazonaws.regions.Regions
import jp.co.bizreach.kinesis._
implicit val region = Regions.AP_NORTHEAST_1
// use DefaultAWSCredentialsProviderChain
val client = AmazonKinesis()
// specify an explicit Provider
val client = AmazonKinesis(new InstanceProfileCredentialsProvider())
// specify an explicit client configuration
val client = AmazonKinesis(new ClientConfiguration().withProxyHost("proxyHost"))
// both
val client = AmazonKinesis(
  new InstanceProfileCredentialsProvider(),
  new ClientConfiguration().withProxyHost("proxyHost")
)
Then you can access Kinesis as follows:
val request = PutRecordRequest(
  streamName   = "streamName",
  partitionKey = "partitionKey",
  data         = "data".getBytes("UTF-8")
)

// no retry
client.putRecord(request)

// on failure, retry up to 3 times (SDK default)
client.putRecordWithRetry(request)
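As a sketch of how a request might be built from domain data: the `Event` type and the two helpers below are hypothetical (not part of the library), but they illustrate that records sharing a partition key are routed to the same shard, so keying by user id preserves per-user ordering.

```scala
// Hypothetical event type and helpers; only PutRecordRequest and
// putRecordWithRetry come from the library itself.
case class Event(userId: String, payload: String)

// Same partition key => same shard => per-user ordering is preserved.
def partitionKey(e: Event): String = e.userId

def data(e: Event): Array[Byte] = e.payload.getBytes("UTF-8")

// val request = PutRecordRequest(
//   streamName   = "streamName",
//   partitionKey = partitionKey(e),
//   data         = data(e)
// )
// client.putRecordWithRetry(request)
```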
First, create an AmazonKinesisFirehose client.
import com.amazonaws.regions.Regions
import jp.co.bizreach.kinesisfirehose._
implicit val region = Regions.US_EAST_1
// use DefaultAWSCredentialsProviderChain
val client = AmazonKinesisFirehose()
... as with Kinesis, you can also pass an explicit credentials provider and/or client configuration ...
Then you can access Kinesis Firehose as follows:
val request = PutRecordRequest(
  deliveryStreamName = "firehose-example",
  record             = "data".getBytes("UTF-8")
)

// no retry
client.putRecord(request)

// on failure, retry up to 3 times (SDK default)
client.putRecordWithRetry(request)
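One practical note when the delivery target is S3: Firehose concatenates records without adding delimiters, so a common pattern is to append a newline to each record before sending. A minimal sketch (the `asRecord` helper is hypothetical, not part of the library):

```scala
// Firehose does not insert delimiters between records, so append one
// yourself when the destination expects newline-delimited data.
def asRecord(line: String): Array[Byte] = (line + "\n").getBytes("UTF-8")

// val request = PutRecordRequest(
//   deliveryStreamName = "firehose-example",
//   record             = asRecord("data")
// )
// client.putRecordWithRetry(request)
```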
aws-kinesis-spark provides integration with Spark: methods that write any RDD to Kinesis.
Import jp.co.bizreach.kinesis.spark._ to gain the saveToKinesis method on your RDDs:
import jp.co.bizreach.kinesis.spark._
val rdd: RDD[Map[String, Option[Any]]] = ...
rdd.saveToKinesis(
  streamName = "streamName",
  region     = Regions.AP_NORTHEAST_1,
  chunk      = 30
)
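As a sketch of the element shape: each RDD element is a Map[String, Option[Any]]. A hypothetical helper (not part of the library) for building such records might look like:

```scala
// Hypothetical record builder; the Map[String, Option[Any]] shape is what
// saveToKinesis expects for each RDD element.
def record(id: Int, name: String, note: Option[String]): Map[String, Option[Any]] =
  Map("id" -> Some(id), "name" -> Some(name), "note" -> note)

// val rdd = sc.parallelize(Seq(record(1, "alice", None)))
// rdd.saveToKinesis(streamName = "streamName", region = Regions.AP_NORTHEAST_1)
```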
You can also write data to Kinesis from Spark Streaming with DStreams.
import jp.co.bizreach.kinesis.spark._
val dstream: DStream[Map[String, Option[Any]]] = ...
dstream.foreachRDD { rdd =>
  rdd.saveToKinesis( ... )
}