Op-Rabbit

An opinionated RabbitMQ library for Scala and Apache Pekko.

Documentation

See https://github.com/SpinGo/op-rabbit for the main documentation.

This fork switches from Akka to Apache Pekko (releases v3 and above).

Releases in the v2 series targeted Akka; Akka is no longer supported by this project.

Installation

Op-Rabbit is available on Maven Central.

val opRabbitVersion = "3.0.0"

libraryDependencies ++= Seq(
  "com.github.pjfanning" %% "op-rabbit-core"         % opRabbitVersion,
  "com.github.pjfanning" %% "op-rabbit-play-json"    % opRabbitVersion,
  "com.github.pjfanning" %% "op-rabbit-json4s"       % opRabbitVersion,
  "com.github.pjfanning" %% "op-rabbit-airbrake"     % opRabbitVersion,
  "com.github.pjfanning" %% "op-rabbit-pekko-stream" % opRabbitVersion
)

A high-level overview of the available components:

  • op-rabbit-core
    • Implements basic patterns for serialization and message processing.
  • op-rabbit-play-json
    • Easily use Play Json formats to publish or consume messages; automatically sets RabbitMQ message headers to indicate content type.
  • op-rabbit-json4s
    • Easily use Json4s to serialize messages; automatically sets RabbitMQ message headers to indicate content type.
  • op-rabbit-airbrake
    • Report consumer exceptions to Airbrake, using the Airbrake Java library.
  • op-rabbit-pekko-stream
    • Process or publish messages using pekko-stream.

Usage

Set up RabbitMQ connection information in application.conf:

op-rabbit {
  topic-exchange-name = "amq.topic"
  channel-dispatcher = "op-rabbit.default-channel-dispatcher"
  default-channel-dispatcher {
    # Dispatcher is the name of the event-based dispatcher
    type = Dispatcher

    # What kind of ExecutionService to use
    executor = "fork-join-executor"

    # Configuration for the fork join pool
    fork-join-executor {
      # Min number of threads to cap factor-based parallelism number to
      parallelism-min = 2

      # Parallelism (threads) ... ceil(available processors * factor)
      parallelism-factor = 2.0

      # Max number of threads to cap factor-based parallelism number to
      parallelism-max = 4
    }
    # Throughput defines the maximum number of messages to be
    # processed per actor before the thread jumps to the next actor.
    # Set to 1 for as fair as possible.
    throughput = 100
  }
  connection {
    virtual-host = "/"
    hosts = ["127.0.0.1"]
    username = "guest"
    password = "guest"
    port = 5672
    ssl = false
    connection-timeout = 3s
  }
}

Note that hosts is an array; connection attempts will be made to the hosts in that order, with a default timeout of 3s. This way you can specify the addresses of your RabbitMQ cluster, and if one of the instances goes down, your application will automatically reconnect to another member of the cluster.

topic-exchange-name is the default topic exchange to use; this can be overridden by passing exchange = "my-topic" to TopicBinding or Message.topic.
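
For example, a publish targeting a different topic exchange might look like the following sketch (it reuses the rabbitControl actor and the Person marshalling set up in the examples below; the exchange and routing-key values are placeholders):

rabbitControl ! Message.topic(
  Person(name = "Exchange Override", age = 1),
  routingKey = "some-topic.custom",
  exchange = "my-topic")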

Boot up the RabbitMQ control actor:

import com.github.pjfanning.op_rabbit.RabbitControl
import org.apache.pekko.actor.{ActorSystem, Props}

implicit val actorSystem = ActorSystem("such-system")
val rabbitControl = actorSystem.actorOf(Props[RabbitControl])

Set up a consumer (topic subscription):

(this example uses op-rabbit-play-json)

import com.github.pjfanning.op_rabbit.PlayJsonSupport._
import com.github.pjfanning.op_rabbit._
import play.api.libs.json._

import scala.concurrent.ExecutionContext.Implicits.global
case class Person(name: String, age: Int)
// setup play-json serializer
implicit val personFormat = Json.format[Person]
implicit val recoveryStrategy = RecoveryStrategy.none

val subscriptionRef = Subscription.run(rabbitControl) {
  import Directives._
  // A qos of 3 will cause up to 3 concurrent messages to be processed at any given time.
  channel(qos = 3) {
    consume(topic(queue("such-message-queue"), List("some-topic.#"))) {
      (body(as[Person]) & routingKey) { (person, key) =>
        /* do work; this body is executed in a separate thread, as
           provided by the implicit execution context */
        println(s"""A person named '${person.name}' with age
          ${person.age} was received over '${key}'.""")
        ack
      }
    }
  }
}

Now, test the consumer by sending a message:

subscriptionRef.initialized.foreach { _ =>
  rabbitControl ! Message.topic(
    Person("Your name here", 33), "some-topic.cool")
}

Stop the consumer:

subscriptionRef.close()

Note: if your handler produces a Future, you can pass it to ack; the message will be acked if the Future succeeds and nacked if it fails (in which case the configured RecoveryStrategy is applied):

  // ...
      (body(as[Person]) & routingKey) { (person, key) =>
        /* do work; this body is executed in a separate thread, as
           provided by the implicit execution context */
        val result: Future[Unit] = myApi.methodCall(person)
        ack(result)
      }
  // ...

Consuming from existing queues

If the queue already exists and doesn't match the expected configuration, topic subscription will fail. To bind to an externally configured queue use Queue.passive:

  channel(qos = 3) {
    consume(Queue.passive("very-exist-queue")) { ...

It is also possible to optionally create the queue if it doesn't exist, by providing a QueueDefinition instead of a String:

  channel(qos = 3) {
    consume(Queue.passive(topic(queue("wow-maybe-queue"), List("some-topic.#")))) { ...
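
Put together, a complete passive subscription might look like the following sketch (it reuses the Person handler, play-json support, and recovery strategy from the consumer example above):

val passiveSubscriptionRef = Subscription.run(rabbitControl) {
  import Directives._
  channel(qos = 3) {
    consume(Queue.passive("very-exist-queue")) {
      body(as[Person]) { person =>
        // handle messages from the externally configured queue
        ack
      }
    }
  }
}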

Accessing additional headers

As seen in the example above, you can extract headers in addition to the message body using op-rabbit's Directives. You can use multiple directives by nesting them, as follows:

import com.github.pjfanning.op_rabbit.properties._

// Nested directives
// ...
      body(as[Person]) { person =>
        optionalProperty(ReplyTo) { replyTo =>
          // do work
          ack
        }
      }
// ...

Or, you can combine directives using & to form a compound directive, as follows:

// Compound directive
// ...
      (body(as[Person]) & optionalProperty(ReplyTo)) { (person, replyTo) =>
        // do work
        ack
      }
// ...

See the documentation on Directives for more details.

Shutting down a consumer

The following methods are available on a SubscriptionRef and allow control over the subscription.

/* stop receiving new messages from RabbitMQ immediately; shut down
   consumer and channel as soon as pending messages are completed. A
   grace period of 30 seconds is given, after which the subscription
   forcefully shuts down. (Default of 5 minutes used if duration not
   provided) */
subscription.close(30 seconds)

/* Shut down the subscription immediately; don't wait for messages to
   finish processing. */
subscription.abort()

/* Future[Unit] which completes once the provided binding has been
   applied (IE: queue has been created and topic bindings
   configured). Useful if you need to assert you don't send a message
   before a message queue is created in which to place it. */
subscription.initialized

// Future[Unit] which completes when the subscription is closed.
subscription.closed

Recovery strategy:

A recovery strategy defines how a subscription should handle exceptions, and one must be provided. Should failed messages be redelivered a limited number of times? Or should they be dropped? Several pre-defined recovery strategies, with corresponding documentation, are defined in the RecoveryStrategy companion object.

implicit val recoveryStrategy = RecoveryStrategy.nack()

Publish a message:

rabbitControl ! Message.topic(
  Person(name = "Mike How", age = 33),
  routingKey = "some-topic.very-interest")

rabbitControl ! Message.queue(
  Person(name = "Ivanah Tinkle", age = 25),
  queue = "such-message-queue")

By default:

  • Messages will be queued up until a connection is available

  • Messages are monitored via publisherConfirms; if a connection is lost before RabbitMQ confirms receipt of the message, then the message is published again. This means that the message may be delivered twice, the default opinion being that at-least-once is better than at-most-once. You can use UnconfirmedMessage if you'd like at-most-once delivery, instead.

  • If you would like to be notified of confirmation, use the ask pattern:

    import org.apache.pekko.pattern.ask
    import org.apache.pekko.util.Timeout
    import scala.concurrent.duration._
    implicit val timeout = Timeout(5 seconds)
    val received = (
      rabbitControl ? Message.queue(
        Person(name = "Ivanah Tinkle", age = 25),
        queue = "such-message-queue")
    ).mapTo[ConfirmResponse]
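
    The returned Future can then be inspected like any other Future; for example (a sketch, assuming an ExecutionContext is in scope):

    import scala.concurrent.ExecutionContext.Implicits.global
    import scala.util.{Failure, Success}

    received.onComplete {
      case Success(confirm) => println(s"Publish confirmed: ${confirm}")
      case Failure(ex)      => println(s"No confirmation received: ${ex}")
    }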

Consuming using Pekko streams

(this example uses op-rabbit-play-json and op-rabbit-pekko-stream)

import Directives._
implicit val recoveryStrategy = RecoveryStrategy.drop()
RabbitSource(
  rabbitControl,
  channel(qos = 3),
  consume(queue(
    "such-queue",
    durable = true,
    exclusive = false,
    autoDelete = false)),
  body(as[Person])). // marshalling is automatically hooked up using implicits
  runForeach { person =>
    greet(person)
  } // after each successful iteration the message is acknowledged.

Note: RabbitSource yields an AckedSource, which can be combined with an AckedFlow and an AckedSink (such as MessagePublisherSink). You can convert an acked stream into a normal stream by calling AckedStream.acked; once messages flow past the acked component they are considered acknowledged, and acknowledgement tracking is no longer a concern (so you are free to use the pekko-stream library in its entirety).
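
A minimal sketch of converting to a plain stream, reusing the queue and Person marshalling from the example above:

RabbitSource(
  rabbitControl,
  channel(qos = 3),
  consume(queue("such-queue")),
  body(as[Person])).
  acked. // beyond this point, every element is already acknowledged
  runForeach { person =>
    println(person)
  }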

Stream failures and recovery strategies

When using the DSL as described in the consumer setup section, recovery strategies are triggered if fail is called or if a failed future is passed to ack. For streams, we have to do something a little different.

To trigger the specified recovery strategy when using op-rabbit-pekko-stream and its acked components, an exception should be thrown within the acked part of the graph. However, the default exception-handling behavior in pekko-stream is to stop the graph, which in op-rabbit's case means stopping the consumer and preventing further messages from being processed. To allow the graph to keep running, declare a resumingDecider supervision strategy. (To learn more about supervision strategies, refer to the Pekko Streams docs.)

  // pekko imports needed by this snippet; the op-rabbit imports (RabbitControl,
  // RabbitSource, Directives._, RecoveryStrategy, Queue, AckedFlow, AckedSink)
  // are omitted here, as in the other examples in this README.
  import org.apache.pekko.actor.{ActorSystem, Props}
  import org.apache.pekko.stream.{ActorMaterializer, ActorMaterializerSettings, Supervision}
  import org.apache.pekko.stream.Supervision.Decider
  import scala.concurrent.duration._

  implicit val system = ActorSystem()
  private val rabbitControl = system.actorOf(Props[RabbitControl], name = "op-rabbit")
  // We define an ActorMaterializer with a resumingDecider supervision strategy,
  // which prevents the graph from stopping when an exception is thrown.
  implicit val materializer = ActorMaterializer(
    ActorMaterializerSettings(system)
      .withSupervisionStrategy(Supervision.resumingDecider: Decider)
  )
  // As a recovery strategy, let's suppose we want all nacked messages to go to
  // an existing queue called "failed-events"
  implicit private val recoveryStrategy = RecoveryStrategy.abandonedQueue(
    7.days,
    abandonQueueName = (_: String) => "failed-events"
  )

  private val src = RabbitSource(
    rabbitControl,
    channel(qos = 3),
    consume(Queue("events")),
    body(as[String])
  )

  // This may throw an exception, in which case the defined recovery strategy
  // will be triggered and our flow will continue thanks to the resumingDecider.
  private val flow = AckedFlow[String].map(_.toInt)

  private val sink = AckedSink.foreach[Int](println)

  src.via(flow).to(sink).run()

Error notification

It's important to know when your consumers fail. Out of the box, op-rabbit ships with support for logging to slf4j (and therefore syslog), and also airbrake via op-rabbit-airbrake. Without any additional signal provided by you, slf4j will be used, making error visibility a default.

You can report errors to multiple sources by combining error logging strategies; for example, if you'd like to report to both slf4j and to airbrake, import / set the following implicit RabbitErrorLogging in the scope where your consumer is instantiated:

import com.github.pjfanning.op_rabbit.{Slf4jLogger, AirbrakeLogger}

implicit val rabbitErrorLogging = Slf4jLogger + AirbrakeLogger.fromConfig

Implementing your own error reporting strategy is simple; here's the source code for the Slf4jLogger:

object Slf4jLogger extends RabbitErrorLogging {
  def apply(
    name: String,
    message: String,
    exception: Throwable,
    consumerTag: String,
    envelope: Envelope,
    properties: BasicProperties,
    body: Array[Byte]): Unit = {

    val logger = LoggerFactory.getLogger(name)
    logger.error(s"${message}. Body=${bodyAsString(body, properties)}. Envelope=${envelope}", exception)
  }
}
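
For illustration, a minimal custom strategy might look like the following sketch (the StderrLogger name is hypothetical); custom strategies can be combined with others using + just as shown above:

object StderrLogger extends RabbitErrorLogging {
  def apply(
    name: String,
    message: String,
    exception: Throwable,
    consumerTag: String,
    envelope: Envelope,
    properties: BasicProperties,
    body: Array[Byte]): Unit =
    // bodyAsString is provided by RabbitErrorLogging, as in Slf4jLogger above
    System.err.println(
      s"${message}. Body=${bodyAsString(body, properties)}. Envelope=${envelope}. Cause=${exception}")
}

implicit val rabbitErrorLogging = Slf4jLogger + StderrLogger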

Credits

Op-Rabbit was created by Tim Harper.

This library builds upon the Pekko RabbitMQ client.
