Fault Tolerance Through Supervisor Hierarchies

Module stability: SOLID

The "let it crash" approach to fault handling, implemented by linking actors, is very different from what Java and most non-concurrency-oriented languages and frameworks have adopted. It is a way of dealing with failure that is designed for concurrent and distributed systems.


Throwing an exception in concurrent code (assuming we are using non-linked actors) will simply blow up the thread that is currently executing the actor:

  1. There is no way to find out that things went wrong (apart from inspecting the stack trace).
  2. There is nothing you can do about it.

Linked actors provide a clean way of being notified of the error and doing something about it.

Linking actors also allows you to create sets of actors where you can be sure that either:
  1. All are dead
  2. None are dead

This is very useful when you have thousands of concurrent actors. Some actors might have implicit dependencies and together implement a service, a computation, a user session, etc.

It encourages non-defensive programming. Don't try to prevent things from going wrong, because they will, whether you want them to or not. Instead, expect failure as a natural state in the life-cycle of your app: crash early and let someone else (who sees the whole picture) deal with it.

Distributed actors

You can't build a fault-tolerant system on a single box - you need at least two. You also (usually) need to know when one box is down, or when the service you are talking to on the other box is down. Here actor supervision and linking is a critical tool, not only for monitoring the health of remote services, but for actually managing the service: doing something about the problem when the actor or node is down, such as restarting actors on the same node or on another node.

In short, it is a very different way of thinking, but one that is very useful (if not critical) for building fault-tolerant, highly concurrent and distributed applications. This holds whether you are writing applications for the JVM or for the Erlang VM (the origin of the idea of "let it crash" and actor supervision).


Supervisor hierarchies originate from Erlang’s OTP framework.

A supervisor is responsible for starting, stopping and monitoring its child processes. The basic idea is that the supervisor keeps its child processes alive by restarting them when necessary. This makes for a completely different view on how to write fault-tolerant servers: instead of trying everything possible to prevent an error from happening, this approach embraces failure. Errors are treated as something natural that will happen, rather than something to be prevented at all costs. Just 'Let It Crash™', since the components will be reset to a stable state and restarted upon failure.
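The gist of 'let it crash' can be sketched as a toy model in plain Java (illustrative only, not the Akka API; all names here are made up): the supervisor catches the crash, resets the component to a stable state, and retries, instead of the component defending against every possible error itself.

```java
import java.util.function.Supplier;

// Toy "let it crash" loop: run a task; on failure, reset to a stable
// state and retry, instead of defending against every error in the task.
class ToySupervisor {
    static <T> T supervise(Supplier<T> task, Runnable reset, int maxRestarts) {
        for (int attempt = 0; ; attempt++) {
            try {
                return task.get();                    // happy path: just do the work
            } catch (RuntimeException e) {
                if (attempt >= maxRestarts) throw e;  // limit reached: escalate the failure
                reset.run();                          // restore a stable state, then retry
            }
        }
    }
}
```

A task that crashes twice and then succeeds is transparently carried to completion; a task that keeps crashing eventually escalates the failure to whoever sees the whole picture.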

Akka has two different restart strategies, All-For-One and One-For-One, best explained with a couple of pictures:


The OneForOne fault handler will restart only the component that has crashed.
[image: sup4.gif]


The AllForOne fault handler will restart all the components that the supervisor is managing, including the one that has crashed. This strategy should be used when you have a set of components that are coupled in such a way that, if one crashes, they all need to be reset to a stable state before continuing.
[image: sup5.gif]
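The difference between the two strategies can be made concrete with a toy model in plain Java (illustrative only, not Akka's implementation; the class and child names are made up): on a child failure, One-For-One restarts only the crashed child, while All-For-One restarts every child the supervisor manages.

```java
import java.util.HashMap;
import java.util.List;
import java.util.Map;

// Toy model of the two restart strategies: we only track how many times
// each child has been restarted, to show which children a failure touches.
class StrategyDemo {
    final List<String> children;
    final Map<String, Integer> restarts = new HashMap<>();
    final boolean allForOne;

    StrategyDemo(List<String> children, boolean allForOne) {
        this.children = children;
        this.allForOne = allForOne;
        for (String c : children) restarts.put(c, 0);
    }

    // Called when 'failed' crashes.
    void onFailure(String failed) {
        if (allForOne) {
            for (String c : children) restart(c);  // reset the whole coupled set
        } else {
            restart(failed);                       // reset only the crashed child
        }
    }

    void restart(String child) { restarts.merge(child, 1, Integer::sum); }
}
```

With children a, b and c, a failure of b bumps only b's restart count under One-For-One, but all three counts under All-For-One.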

Restart callbacks

There are two different callbacks that the Active Object (the Java "actor") and the Actor can hook into:
  • Pre restart
  • Post restart

These are called before and after the restart upon failure and can be used to clean up and reset/reinitialize state. This is important in order to recover from the failure and leave the component in a fresh and stable state before it consumes further messages.
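Where the two callbacks fit in the restart sequence can be sketched as a toy model in plain Java (illustrative only, not the Akka runtime; the class name is made up): cleanup runs before the failed instance is discarded, re-initialization runs once the fresh instance is in place.

```java
import java.util.ArrayList;
import java.util.List;

// Toy restart sequence: preRestart cleans up the failed instance,
// postRestart re-initializes the fresh one before it takes messages again.
class RestartableComponent {
    final List<String> log = new ArrayList<>();
    boolean healthy = true;

    void preRestart(Throwable reason)  { log.add("cleanup: " + reason.getMessage()); }
    void postRestart(Throwable reason) { log.add("reinit"); healthy = true; }

    // What a supervisor's restart of this component boils down to.
    void restart(Throwable reason) {
        preRestart(reason);   // clean up before the old instance is discarded
        healthy = false;      // (stand-in for swapping in a fresh instance)
        postRestart(reason);  // reset to a fresh, stable state
    }
}
```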

Defining a supervisor's restart strategy

Both the Active Object supervisor configuration and the Actor supervisor configuration take a 'RestartStrategy' instance that defines the fault management policy. The different strategies are:
  • AllForOne
  • OneForOne
These have the semantics outlined in the section above.

Here is an example of how to define a restart strategy:
  RestartStrategy(
    AllForOne,   // restart policy (AllForOne or OneForOne)
    3,           // maximum number of restart retries
    5000,        // within time in millis
    List(classOf[Exception]))  // exceptions to trap
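The 'maximum number of retries within a time range' pair amounts to a sliding-window check. Here is a standalone sketch of that bookkeeping in plain Java (illustrative only, not Akka's actual implementation; the class name is made up):

```java
import java.util.ArrayDeque;
import java.util.Deque;

// Sliding-window restart counter: allow at most maxRetries restarts
// within any window of 'windowMillis'; beyond that, stop restarting.
class RestartWindow {
    final int maxRetries;
    final long windowMillis;
    final Deque<Long> restartTimes = new ArrayDeque<>();

    RestartWindow(int maxRetries, long windowMillis) {
        this.maxRetries = maxRetries;
        this.windowMillis = windowMillis;
    }

    // Returns true if a restart at time 'now' is still allowed.
    boolean tryRestart(long now) {
        while (!restartTimes.isEmpty() && now - restartTimes.peekFirst() > windowMillis)
            restartTimes.removeFirst();          // drop restarts outside the window
        if (restartTimes.size() >= maxRetries)
            return false;                        // limit reached: give up on this actor
        restartTimes.addLast(now);
        return true;
    }
}
```

With 3 retries per 5000 ms, a fourth restart inside the window is refused; once older restarts have fallen outside the window, restarting is allowed again.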

Defining actor life-cycle

The other common configuration element is the 'LifeCycle', which defines how a supervised actor is treated upon failure. The supervised actor can define one of two different life-cycle configurations:
  • Permanent: the actor will always be restarted.
  • Temporary: the actor will not be restarted; it will be shut down through the regular shutdown process, so the 'shutdown' callback will be called.

Here is an example of how to define the life-cycle:
  LifeCycle(Permanent) // 'Permanent' means that the component will always be restarted
                       // 'Temporary' means that it will not be restarted, but will be shut
                       // down through the regular shutdown process, so the 'shutdown' hook will be called

Supervising Actors

Declarative supervisor configuration

The Actor’s supervision can be declaratively defined by creating a ‘Supervisor’ factory object. Here is an example:
// MyActor1 and MyActor2 are hypothetical actor classes
val supervisor = Supervisor(
  SupervisorConfig(
    RestartStrategy(AllForOne, 3, 1000, List(classOf[Exception])),
    Supervise(actorOf[MyActor1], LifeCycle(Permanent)) ::
    Supervise(actorOf[MyActor2], LifeCycle(Permanent)) ::
    Nil))
Supervisors created like this are implicitly instantiated and started.

You can link and unlink actors from a declaratively defined supervisor using the 'link' and 'unlink' methods:

val supervisor = Supervisor(...)
supervisor.link(actor)
supervisor.unlink(actor)

You can also create declarative supervisors through the 'SupervisorFactory' factory object. Use this factory instead of the 'Supervisor' object if you want to control instantiation and starting of the Supervisor yourself; if not, it is easier and better to use the 'Supervisor' factory object.

Example usage:

// MyActor1 and MyActor2 are hypothetical actor classes
val factory = SupervisorFactory(
  SupervisorConfig(
    RestartStrategy(OneForOne, 3, 10, List(classOf[Exception])),
    Supervise(actorOf[MyActor1], LifeCycle(Permanent)) ::
    Supervise(actorOf[MyActor2], LifeCycle(Permanent)) ::
    Nil))

Then create a new instance of our Supervisor and start it up explicitly:

val supervisor = factory.newInstance
supervisor.start // start up all managed servers

Declaratively define actors as remote services

You can declaratively define an actor to be available as a remote actor using the 'RemoteAddress(hostname: String, port: Int)' config element.

Here is an example:
val supervisor = Supervisor(
  SupervisorConfig(
    RestartStrategy(AllForOne, 3, 1000, List(classOf[Exception])),
    Supervise(
      actorOf[MyActor],
      LifeCycle(Permanent),
      RemoteAddress("localhost", 9999)) ::
    Nil))

Programmatic linking and supervision of Actors

Actors can create, spawn, link and supervise other actors at runtime. Linking and unlinking is done using the 'link' and 'unlink' methods available on the 'ActorRef' (therefore prefixed with 'self' in these examples).

Here is the API and how to use it from within an 'Actor':

self.link(actorRef)        // link and unlink actors
self.unlink(actorRef)
self.startLink(actorRef)   // starts and links an actor atomically
self.spawn[MyActor]        // spawns (creates and starts) an actor
self.spawnLink[MyActor]    // spawns and links an actor atomically

A child actor can tell the supervising actor to unlink it by sending it the 'Unlink(this)' message. When the supervisor receives the message it will unlink and shut down the child. The supervisor of an actor is available through the 'supervisor: Option[ActorRef]' method on the 'ActorRef' class. Here is how it can be used:

if (supervisor.isDefined) supervisor.get ! Unlink(this)
// Or shorter using 'foreach':
supervisor.foreach(_ ! Unlink(this))

The supervising actor's side of things

If a linked Actor fails and throws an exception, then an 'Exit(deadActor, cause)' message will be sent to the supervisor. (You should never try to catch this message in your own message handler; it is managed by the runtime.)

If the supervisor wants to be able to do something with this message, e.g. to restart the linked Actor according to one of the predefined fault handling schemes, then it has to set its 'trapExit' flag to a list of the exceptions it wants to trap:
class MyActor extends Actor {
  self.trapExit = List(classOf[MyException], classOf[IOException])
  ... // implementation omitted
}

The supervising Actor also needs to define a fault handler that specifies the restart strategy to apply when it traps an 'Exit' message. This is done by setting the 'faultHandler' field, declared as:
protected var faultHandler: Option[FaultHandlingStrategy] = None

The different options are:
  • AllForOneStrategy(maxNrOfRetries, withinTimeRange)
  • OneForOneStrategy(maxNrOfRetries, withinTimeRange)

Here is an example:
self.faultHandler = Some(AllForOneStrategy(3, 1000))

Putting all this together it can look something like this:

class MySupervisor extends Actor {
  self.trapExit = List(classOf[Throwable])
  self.faultHandler = Some(OneForOneStrategy(5, 5000))
  def receive = {
    case Register(actor) =>
      self.link(actor)
  }
}
The supervised actor's side of things

The supervised actor needs to define a life-cycle. This is done by setting the lifeCycle field as follows:
self.lifeCycle = Some(LifeCycle(Permanent)) // Permanent or Temporary

In the supervised Actor you can override the 'preRestart' and 'postRestart' callback methods to hook into the restart process. These methods take as argument the reason for the failure, i.e. the exception that caused the termination and restart of the actor. It is in these methods that you add code to do cleanup before termination and initialization after restart. Here is an example:
class FaultTolerantService extends Actor {
  override def preRestart(reason: Throwable) {
    ... // clean up before restart
  }
  override def postRestart(reason: Throwable) {
    ... // reinit stable state after restart
  }
}

Handling too many actor restarts within a specific time limit

Recall that when you defined the 'RestartStrategy' you also specified the maximum number of restart retries within a given time range:

  RestartStrategy(
    AllForOne,   // restart policy (AllForOne or OneForOne)
    3,           // maximum number of restart retries
    5000,        // within time in millis
    List(classOf[Exception]))  // exceptions to trap

Now, what happens if this limit is reached?

If the limit is reached, the failing actor will send a system message called 'MaximumNumberOfRestartsWithinTimeRangeReached' to its supervisor, with the following signature:

case class MaximumNumberOfRestartsWithinTimeRangeReached(
  victim: ActorRef, maxNrOfRetries: Int, withinTimeRange: Int, lastExceptionCausingRestart: Throwable)

If you want to be able to take action upon this event (highly recommended), then you have to create a message handler for it in the supervisor.

Here is an example:

val supervisor = actorOf(new Actor {
  self.trapExit = List(classOf[Throwable])
  self.faultHandler = Some(OneForOneStrategy(5, 5000))
  protected def receive = {
    case MaximumNumberOfRestartsWithinTimeRangeReached(
      victimActorRef, maxNrOfRetries, withinTimeRange, lastExceptionCausingRestart) =>
      ... // handle the error situation
  }
})

You will also see a log warning similar to this:

WAR [20100715-14:05:25.821] actor: Maximum number of restarts [5] within time range [5000] reached.
WAR [20100715-14:05:25.821] actor:     Will *not* restart actor [Actor[$CountDownActor:1279195525812]] anymore.
WAR [20100715-14:05:25.821] actor:     Last exception causing restart was [$FireWorkerException: Fire the worker!].

If you don't define a message handler for this message, you will not get an error; the message is simply not delivered to the supervisor's message handler. You will, however, still get the log warning.

Supervising Active Objects (Java API)

Declarative supervisor configuration

To configure Active Objects for supervision you use the 'ActiveObjectConfigurator' class and its 'configure' method. This method takes a 'RestartStrategy' and an array of 'Component' definitions, defining the Active Objects and their 'LifeCycle'. Finally you invoke the 'supervise' method to start everything up. The Java configuration elements reside in the 'se.scalablesolutions.akka.config.JavaConfig' class and need to be imported statically.

Here is an example:
import static se.scalablesolutions.akka.config.JavaConfig.*;
private ActiveObjectConfigurator manager = new ActiveObjectConfigurator();

manager.configure(
  new RestartStrategy(new AllForOne(), 3, 1000, new Class[]{Exception.class}),
  new Component[] {
    new Component(
      Foo.class,                       // Foo and Bar are your Active Object classes
      new LifeCycle(new Permanent()),
      1000),                           // timeout in millis
    new Component(
      Bar.class,
      new LifeCycle(new Permanent()),
      1000)
  }).supervise();

Then you can retrieve the Active Object as follows:
Foo foo = (Foo) manager.getInstance(Foo.class);

Restart callbacks

For Active Objects, restart callbacks can be defined in two different ways. The first is to use annotations. You can annotate a no-argument method with 'void' return type with:
  • @se.scalablesolutions.akka.annotation.prerestart
  • @se.scalablesolutions.akka.annotation.postrestart

The methods can be arbitrarily named. For example:
@prerestart
public void preRestart() {
  ... // clean up before restart
}
@postrestart
public void postRestart() {
  ... // reinit stable state after restart
}

These methods will then be invoked upon restart.

The second one is to define the names of these callback methods in the declarative supervisor configuration:
new Component(
  Foo.class,
  new LifeCycle(new Permanent()),
  1000, // timeout in millis
  new RestartCallbacks("preRestart", "postRestart")),

Shutdown callback

Active Objects configured with a Temporary life-cycle will be shut down by the supervisor (and not restarted). A shutdown callback can be defined in two different ways. The first is to use the '@se.scalablesolutions.akka.annotation.shutdown' annotation. The method can be arbitrarily named.

@shutdown
public void shutdown() {
  ... // cleanup
}

This will invoke the 'shutdown' method when the Active Object is being shut down by the supervisor. The second is to define the name of the callback method in the declarative supervisor configuration:

new Component(
  Foo.class,
  new LifeCycle(new Temporary()),
  1000, // timeout in millis
  new ShutdownCallback("shutdown")),

Programmatic linking and supervision of Active Objects

Active Objects can be linked and unlinked just like actors; in fact, the linking is done on the underlying actor:

ActiveObject.link(supervisor, supervised);
ActiveObject.unlink(supervisor, supervised);

If the parent Active Object (the supervisor) wants to be able to handle failing child Active Objects, e.g. to restart the linked Active Object according to a given fault handling scheme, then it has to set its 'trapExit' flag to an array of the exceptions it wants to trap:

ActiveObject.trapExit(supervisor, new Class[]{IOException.class});
ActiveObject.faultHandler(supervisor, new AllForOneStrategy(3, 2000));

For convenience there is an overloaded link that takes trapExit and faultHandler for the supervisor as arguments. Here is an example:

import static se.scalablesolutions.akka.actor.ActiveObject.*;

Foo foo = newInstance(Foo.class, 1000);
Bar bar = newInstance(Bar.class, 1000);
link(foo, bar, new AllForOneStrategy(3, 2000), new Class[]{IOException.class});

// alternative: chaining
bar = trapExit(foo, new Class[]{IOException.class})
  .faultHandler(foo, new AllForOneStrategy(3, 2000))
  .newInstance(Bar.class, 1000);
link(foo, bar);