Dispatchers (Scala)


Module stability: SOLID

The Dispatcher is an important piece that allows you to configure the right semantics and parameters for optimal performance, throughput and scalability. Different Actors have different needs.


Akka supports dispatchers both for event-driven lightweight threads, allowing creation of millions of threads on a single workstation, and for thread-based Actors, where each dispatcher is bound to a dedicated OS thread.

The event-based Actors currently consume approximately 600 bytes per Actor, which means that you can create more than 6.5 million Actors on 4 GB of RAM.
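The arithmetic behind that figure is a simple capacity estimate, sketched below (the 600-byte per-Actor overhead is the number quoted above):

```scala
// Rough capacity estimate: available RAM divided by per-actor overhead.
val bytesPerActor = 600L
val ramBytes = 4L * 1024 * 1024 * 1024   // 4 GB
val maxActors = ramBytes / bytesPerActor // roughly 7.1 million, so "more than 6.5 million" holds
```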

Default dispatcher


For most scenarios the default settings are the best. Here we have one single event-based dispatcher for all Actors created. The dispatcher used is this one:

Dispatchers.globalExecutorBasedEventDrivenDispatcher

But if you feel that you are starting to contend on the single dispatcher (the 'Executor' and its queue) or want to group a specific set of Actors for a dedicated dispatcher for better flexibility and configurability then you can override the defaults and define your own dispatcher. See below for details on which ones are available and how they can be configured.

Setting the dispatcher


Normally you set the dispatcher from within the Actor itself. The dispatcher is defined by the 'dispatcher: MessageDispatcher' member field in 'ActorRef'.

class MyActor extends Actor {
  self.dispatcher = ... // set the dispatcher
   ...
}

You can also set the dispatcher for an Actor before it has been started:

actorRef.dispatcher = dispatcher

Types of dispatchers


There are five different types of message dispatchers:

  • Thread-based
  • Event-based
  • Work-stealing event-based
  • Single-threaded event-based
  • Reactor-based event-driven

Factory methods for all of these, including global versions of some of them, are in the 'se.scalablesolutions.akka.dispatch.Dispatchers' object.

Let's now walk through the different dispatchers in more detail.

Thread-based


The 'ThreadBasedDispatcher' binds a dedicated OS thread to each specific Actor. The messages are posted to a 'LinkedBlockingQueue' which feeds the messages to the dispatcher one by one. A 'ThreadBasedDispatcher' cannot be shared between actors. This dispatcher has worse performance and scalability than the event-based dispatcher, but works great for creating "daemon" Actors that consume messages at a low frequency and are allowed to go off and do their own thing for a longer period of time. Another advantage of this dispatcher is that Actors do not block threads for each other.

val dispatcher = Dispatchers.newThreadBasedDispatcher(actorRef)

It would normally be used from within the actor like this:

class MyActor extends Actor {
  self.dispatcher = Dispatchers.newThreadBasedDispatcher(self)
   ...
}
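The mechanics described above (a dedicated thread draining a per-actor 'LinkedBlockingQueue') can be modeled without Akka. The sketch below is purely illustrative and is not Akka's actual implementation:

```scala
import java.util.concurrent.{LinkedBlockingQueue, TimeUnit}

// Minimal model of a thread-per-actor dispatcher: one dedicated OS thread
// drains the actor's mailbox and feeds messages to the handler one by one.
class ThreadBasedMailbox(handler: String => Unit) {
  private val mailbox = new LinkedBlockingQueue[String]()
  @volatile private var running = true

  private val thread = new Thread(() => {
    // Keep draining until shut down AND the mailbox is empty.
    while (running || !mailbox.isEmpty) {
      val msg = mailbox.poll(10, TimeUnit.MILLISECONDS)
      if (msg != null) handler(msg)
    }
  })
  thread.start()

  def post(msg: String): Unit = mailbox.put(msg)

  def shutdown(): Unit = { running = false; thread.join() }
}
```

Because the queue is drained by a single thread, messages are processed one at a time and in the order they were posted, which mirrors the one-by-one feeding described above.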

Event-based


The 'ExecutorBasedEventDrivenDispatcher' binds a set of Actors to a thread pool backed up by a 'BlockingQueue'. This dispatcher is highly configurable and supports a fluent configuration API to configure the 'BlockingQueue' (type of queue, max items etc.) as well as the thread pool.

The event-driven dispatchers can be shared between multiple Active Objects and/or Actors. One best practice is to let each top-level Actor, e.g. the Actors you define in the declarative supervisor config, get its own dispatcher, but reuse that dispatcher for each new Actor that the top-level Actor creates. You can also share a dispatcher between multiple top-level Actors. This is very use-case specific and needs to be tried out on a case-by-case basis. The important thing is that Akka tries to provide you with the freedom you need to design and implement your system in the most efficient way with regard to performance, throughput and latency.

It comes with many different predefined BlockingQueue configurations:
  • Bounded LinkedBlockingQueue
  • Unbounded LinkedBlockingQueue
  • Bounded ArrayBlockingQueue
  • SynchronousQueue

You can also set the rejection policy that should be used, i.e. what should be done if the dispatcher (that is, the Actor) can't keep up and the mailbox grows up to its defined limit. You can choose between four different rejection policies:

  • java.util.concurrent.ThreadPoolExecutor.CallerRunsPolicy - runs the message processing in the caller's thread as a way to slow the producer down and balance producer/consumer
  • java.util.concurrent.ThreadPoolExecutor.AbortPolicy - rejects the message by throwing a 'RejectedExecutionException'
  • java.util.concurrent.ThreadPoolExecutor.DiscardPolicy - discards the message (throws it away)
  • java.util.concurrent.ThreadPoolExecutor.DiscardOldestPolicy - discards the oldest message in the mailbox (throws it away)

You can read more about these policies here.
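Since these are the standard 'java.util.concurrent' policies, their effect can be observed with a plain 'ThreadPoolExecutor', independent of Akka. The sketch below uses a one-thread pool with a one-slot queue so that a third task is guaranteed to be rejected, and 'DiscardPolicy' drops it silently:

```scala
import java.util.concurrent.{ArrayBlockingQueue, CountDownLatch, ThreadPoolExecutor, TimeUnit}
import java.util.concurrent.atomic.AtomicInteger

// One worker thread, a one-slot queue, and DiscardPolicy on overflow.
val pool = new ThreadPoolExecutor(
  1, 1, 0L, TimeUnit.MILLISECONDS,
  new ArrayBlockingQueue[Runnable](1),
  new ThreadPoolExecutor.DiscardPolicy)

val latch = new CountDownLatch(1)
val counter = new AtomicInteger(0)

pool.execute(() => { latch.await(); counter.incrementAndGet(); () }) // runs, blocked on the latch
pool.execute(() => { counter.incrementAndGet(); () })                // sits in the one-slot queue
pool.execute(() => { counter.incrementAndGet(); () })                // queue full: silently discarded

latch.countDown()
pool.shutdown()
pool.awaitTermination(5, TimeUnit.SECONDS)
// counter.get == 2: only two tasks ever ran
```

Swapping in 'AbortPolicy' would instead make the third 'execute' call throw a 'RejectedExecutionException', and 'CallerRunsPolicy' would run it on the submitting thread.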

Here is an example:

class MyActor extends Actor {
  self.dispatcher = Dispatchers.newExecutorBasedEventDrivenDispatcher(name)
    .withNewThreadPoolWithBoundedBlockingQueue(100)
    .setCorePoolSize(16)
    .setMaxPoolSize(128)
    .setKeepAliveTimeInMillis(60000)
    .setRejectionPolicy(new CallerRunsPolicy)
    .buildThreadPool
  ...
}

This 'ExecutorBasedEventDrivenDispatcher' allows you to define the 'throughput' it should have. This defines the number of messages for a specific Actor the dispatcher should process in one single sweep.
Setting this to a higher number will increase throughput but lower fairness, and vice versa. If you don't specify it explicitly then it uses the default value defined in the 'akka.conf' configuration file:

<actor>
  throughput = 5
</actor>

If you don't define the 'throughput' option in the configuration file, then the default value of '5' will be used.
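The throughput/fairness trade-off can be modeled with a toy 'sweep' function (hypothetical, not Akka's code): each sweep processes at most 'throughput' messages from one mailbox before moving on to the next, so low values interleave actors tightly (fair) while high values keep each actor running longer (higher throughput):

```scala
import scala.collection.mutable

// Toy model of the 'throughput' setting: per sweep, take at most
// `throughput` messages from each mailbox before moving to the next one.
def sweep(mailboxes: Seq[mutable.Queue[String]], throughput: Int): Seq[String] = {
  val processed = mutable.ArrayBuffer[String]()
  for (mb <- mailboxes) {
    var n = 0
    while (n < throughput && mb.nonEmpty) {
      processed += mb.dequeue()
      n += 1
    }
  }
  processed.toSeq
}

// With throughput = 2, two mailboxes of three messages each are
// interleaved in batches of two: a1, a2, b1, b2 in the first sweep.
```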

Browse the ScalaDoc or look at the code for all the options available.

Work-stealing event-based


The 'ExecutorBasedEventDrivenWorkStealingDispatcher' is a variation of the 'ExecutorBasedEventDrivenDispatcher' in which Actors of the same type can be set up to share the dispatcher; at runtime, actors with fewer messages to process will steal messages from the mailboxes of busier actors. This can be a great way to improve throughput at the cost of a slightly higher latency.

Normally the way you use it is to create an Actor companion object to hold the dispatcher and then set it in the Actor explicitly.

object MyActor {
  val dispatcher = Dispatchers.newExecutorBasedEventDrivenWorkStealingDispatcher(name)
}
 
class MyActor extends Actor {
  self.dispatcher = MyActor.dispatcher
  ...
}

Here is an article with some more information: Load Balancing Actors with Work Stealing Techniques
Here is another article discussing this particular dispatcher: Flexible load balancing with Akka in Scala
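The core idea behind work stealing (a worker drains its own mailbox from the front, and when that is empty it steals from the back of a busier peer's mailbox) can be sketched with a plain concurrent deque. This is an illustration of the technique, not Akka's implementation:

```scala
import java.util.concurrent.ConcurrentLinkedDeque

// Toy work-stealing worker: take from the head of your own mailbox;
// if it is empty, steal from the tail of the first non-empty peer.
class StealingWorker(val mailbox: ConcurrentLinkedDeque[String]) {
  def next(peers: Seq[StealingWorker]): Option[String] =
    Option(mailbox.pollFirst()).orElse {
      // The iterator is lazy, so only the first non-empty peer is polled.
      peers.iterator
        .map(p => Option(p.mailbox.pollLast()))
        .collectFirst { case Some(m) => m }
    }
}
```

Stealing from the tail while the owner takes from the head keeps the two ends of the deque from contending, which is the standard reason work-stealing schedulers use deques.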

Event-based single-threaded


The 'ReactorBasedSingleThreadEventDrivenDispatcher' binds a set of Actors to a single OS thread where the execution is interleaved. This dispatcher has the best performance but zero scalability, and is very dangerous since one single Actor can block all other Actors.

class MyActor extends Actor {
  self.dispatcher = Dispatchers.globalReactorBasedSingleThreadEventDrivenDispatcher
  ...
}

Reactor-based event-driven


The 'ReactorBasedThreadPoolEventDrivenDispatcher' implements the Reactor pattern with a single queue and a demultiplexer.

class MyActor extends Actor {
  self.dispatcher = Dispatchers.globalReactorBasedThreadPoolEventDrivenDispatcher
  ...
}
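The Reactor pattern itself (one shared event queue plus a demultiplexer that routes each event to its registered handler) can be sketched without Akka. The 'Reactor' class below is purely illustrative:

```scala
import java.util.concurrent.LinkedBlockingQueue

// Minimal Reactor: all events funnel through a single queue; the
// demultiplexer looks up the handler registered for each event's key.
class Reactor {
  private val queue = new LinkedBlockingQueue[(String, String)]() // (key, payload)
  private var handlers = Map.empty[String, String => Unit]

  def register(key: String)(handler: String => Unit): Unit =
    handlers += key -> handler

  def dispatch(key: String, payload: String): Unit = queue.put((key, payload))

  // Drain the queue on the calling thread, demultiplexing each event
  // to its handler; events with no registered handler are dropped.
  def runOnce(): Unit =
    while (!queue.isEmpty) {
      val (key, payload) = queue.take()
      handlers.get(key).foreach(_(payload))
    }
}
```

The single queue is what gives the pattern its serialized, interleaved execution, and also why one slow handler delays every other event behind it.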

Java API


MessageDispatcher dispatcher = Dispatchers.newExecutorBasedEventDrivenDispatcher(name);

The dispatcher for an Active Object can be defined in the declarative configuration:
... // part of configuration
new Component(
  MyActiveObject.class,
  new LifeCycle(new Permanent()),
  dispatcher, // <<== set it here
  1000)
...

It can also be set when creating a new Active Object programmatically.
MyPOJO pojo = (MyPOJO) ActiveObject.newInstance(MyPOJO.class, 1000, dispatcher);