Lazy initialisation and threads (in C#)

Lazy initialisation is a way to set up everything needed to initialise something, but to put off the actual initialisation until later, usually until the first time the value is needed.  You might want to use lazy initialisation because something is expensive to initialise and might never be needed.  Or because deferring the work avoids a big bang of initialisation up front, spreading it out over a longer time to give better performance or at least a better impression of performance.

This is a general principle across much of computing, but I will get into details that are specific to C#.  In C# lazy initialisation happens automatically, i.e. the code that first reads the value triggers the deferred initialisation behind the scenes.

Basics

There is a good page explaining how to do lazy initialisation on the Microsoft site, and this article will use the code from there as the basis of some examples.

To summarise, instead of doing e.g.

Person author = new Person();

you do

Lazy<Person> author = new Lazy<Person>(initPerson, option);

where initPerson is a Func that creates and returns a Person object, and option will be covered in Multi-threading below.

It is initPerson that is the mechanism for deferment – when the variable author is declared, initPerson is attached to it, but initPerson isn’t called, and hence author stays uninitialised, until something tries to read the value of author.  In fact Lazy<T> x is like Nullable<T> x in that both are wrappers that introduce a layer of indirection.  When you want to read things, in both cases you read x.Value rather than x directly.
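As a small sketch of the deferral (the empty Person class here is a stand-in for illustration; IsValueCreated is the built-in property that reports whether initialisation has happened yet):

```csharp
using System;

class Person
{
   public Person() { Console.WriteLine("Person constructor called"); }
}

class LazyBasicsDemo
{
   // Returns true if initialisation was deferred until the first read
   // of .Value, and re-reading .Value returned the same instance.
   public static bool Run()
   {
      Lazy<Person> author = new Lazy<Person>(() => new Person());

      // Declaring the Lazy<T> only stores the initialiser Func;
      // nothing has been constructed yet.
      bool createdBefore = author.IsValueCreated;   // false

      // Reading .Value triggers the deferred initialisation.
      Person p = author.Value;

      // Subsequent reads return the same cached instance.
      bool sameInstance = ReferenceEquals(p, author.Value);

      return !createdBefore && author.IsValueCreated && sameInstance;
   }
}
```

Calling LazyBasicsDemo.Run() from a console app returns true, and "Person constructor called" is printed exactly once, at the first read of .Value rather than at declaration.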

Multi-threading

Like many things, lazy initialisation gets more interesting with multi-threading, i.e. it turns into a trap that can lead to unexpected pain.

There are two options you need to choose between when using lazy initialisation in multi-threaded code, based on whether you want the threads reading the value to experience identical behaviour or not.  The major difference between the options relates to what happens when the initialiser Func throws an exception.

In one option, where LazyThreadSafetyMode.ExecutionAndPublication is passed to the constructor of the Lazy<T>, the following behaviour will happen when N threads share a lazy initialised object:

  1. The first thread to read the value finds the object uninitialised and so calls the initialiser Func. In our scenario this throws an exception, which the thread needs to deal with in some way.  The lazy initialised object still has no value because the initialiser Func has failed.
  2. When any of the N-1 other threads reads the lazy initialised object’s value, it will find it uninitialised. However, instead of calling the Func to initialise the object, the C# run-time replays the same exception that the first thread received.  This is known as exception caching.  In this way, all threads get identical behaviour (the same exception).
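Exception caching is easy to demonstrate with a small sketch (the Lazy<int> and the InvalidOperationException are arbitrary choices for illustration):

```csharp
using System;
using System.Threading;

class ExceptionCachingDemo
{
   static int initAttempts = 0;

   // Returns the number of times the initialiser Func actually ran.
   public static int Run()
   {
      var lazy = new Lazy<int>(
         () =>
         {
            Interlocked.Increment(ref initAttempts);
            throw new InvalidOperationException("init failed");
         },
         LazyThreadSafetyMode.ExecutionAndPublication);

      for (int i = 0; i < 3; i++)
      {
         try { int unused = lazy.Value; }
         catch (InvalidOperationException)
         {
            // Every read of .Value gets the exception, but only the
            // first read actually ran the Func -- later reads replay
            // the cached exception.
         }
      }

      return initAttempts;   // 1, despite three reads of .Value
   }
}
```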

In the other option, where LazyThreadSafetyMode.PublicationOnly is passed to the constructor of Lazy<T>, the following behaviour will happen:

  1. When the first thread reads the value things happen as in the previous example.
  2. Some number X of the N-1 other threads will read the lazy initialised object’s value around the same time and find it uninitialized. They will each call the Func to initialise the object which will create a new instance ready to write back as the lazy initialised object’s value.  However, the C# run-time guarantees that only one thread will be able to write its new object back as the value.  The X-1 other instances will be created but not used.  Any other side-effects of calling the initialiser Func X-1 times too many aren’t handled automatically, for instance lines written to a file won’t be deleted.
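The lack of exception caching means a failed initialisation isn’t fatal: a later read re-runs the Func and can succeed.  A minimal single-threaded sketch (the attempt counter and the value 42 are arbitrary choices for illustration):

```csharp
using System;
using System.Threading;

class PublicationOnlyDemo
{
   static int initAttempts = 0;

   // Returns the value eventually read from the Lazy<int>: with
   // PublicationOnly the Func is re-run on each failed read, so a
   // later read can still succeed.
   public static int Run()
   {
      var lazy = new Lazy<int>(
         () =>
         {
            int attempt = Interlocked.Increment(ref initAttempts);
            if (attempt < 3)
               throw new InvalidOperationException($"attempt {attempt} failed");
            return 42;
         },
         LazyThreadSafetyMode.PublicationOnly);

      int value = 0;
      for (int i = 0; i < 5; i++)
      {
         try { value = lazy.Value; break; }
         catch (InvalidOperationException)
         {
            // No exception caching: the next read calls the Func again.
         }
      }

      return value;   // 42, after two failed attempts
   }
}
```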

Multi-threaded example

The code

This is based on the example code in the page linked to above, but modified such that the initialiser Func throws exceptions.

The class used in the lazy initialisation is LargeObject.  It keeps track of which thread initialised it, and logs when its constructor is called, but otherwise has no real purpose:

class LargeObject
{
   public int InitializedBy { get { return initBy; } }

   int initBy = 0;
   public LargeObject(int initializedBy)
   {
      initBy = initializedBy;
      Console.WriteLine($"{initBy}: In constructor for LargeObject");
   }

   public long[] Data = new long[100000000];
}

For the code itself we will work from the top level down, starting with Main:

static readonly int numThreads = 5;
static Lazy<LargeObject> lazyLargeObject = null;

static void Main()
{
   // to switch between the two different examples, switch which of these lines is commented out
   LazyThreadSafetyMode option = LazyThreadSafetyMode.PublicationOnly;
   // LazyThreadSafetyMode option = LazyThreadSafetyMode.ExecutionAndPublication;

   lazyLargeObject = new Lazy<LargeObject>(ThrowExceptionInsteadOfInitLargeObject, option);

   Console.WriteLine("\r\nLargeObject is not created until you access the Value property of the lazy" +
                     "\r\ninitializer. Press Enter to create LargeObject.");
   Console.ReadLine();

   // Create and start several threads, each of which uses LargeObject.
   Thread[] threads = new Thread[numThreads];
   for (int i = 0; i < numThreads; i++)
   {
      threads[i] = new Thread(ThreadProc);
      threads[i].Start();
   }

   // Wait for all threads to finish. 
   foreach (Thread t in threads)
   {
      t.Join();
   }

   Console.WriteLine("\r\nPress Enter to end the program");
   Console.ReadLine();
}

It spawns 5 threads, and each of them attempts to initialise a shared instance of LargeObject in a lazy way, by reading its value.  This happens inside ThreadProc:

static void ThreadProc(object state)
{
   try
   {
      Console.WriteLine($"{Thread.CurrentThread.ManagedThreadId}: About to get the value of LargeObject");
      LargeObject large = lazyLargeObject.Value;

      lock (large)
      {
         large.Data[0] = Thread.CurrentThread.ManagedThreadId;
         Console.WriteLine($"{Thread.CurrentThread.ManagedThreadId}: Initialized by thread {large.InitializedBy}; last used by thread {large.Data[0]}.");
      }
   }
   catch
   {
      Console.WriteLine($"{Thread.CurrentThread.ManagedThreadId}: Exception handled when trying to get the value");
   }
}

The initialiser Func for the lazy initialised object is designed to throw exceptions for the first few threads – it’s not thread-safe, but good enough for our purposes.

static int numInitAttempts = 0;

static LargeObject ThrowExceptionInsteadOfInitLargeObject()
{
   // this isn't thread-safe, but it's good enough
   numInitAttempts++;

   Console.WriteLine($"{Thread.CurrentThread.ManagedThreadId}: numInitAttempts = {numInitAttempts}");

   if (numInitAttempts < numThreads - 2)
   {
      Console.WriteLine($"{Thread.CurrentThread.ManagedThreadId}: About to throw exception instead of constructing LargeObject");
      throw new Exception($"{Thread.CurrentThread.ManagedThreadId}: Throwing exception instead of constructing LargeObject");
   }
   else
   {
      Console.WriteLine($"{Thread.CurrentThread.ManagedThreadId}: Creating LargeObject");
      return InitLargeObject();
   }
}

It acts as a wrapper around the original initialiser Func from the Microsoft website:

static LargeObject InitLargeObject()
{
   LargeObject large = new LargeObject(Thread.CurrentThread.ManagedThreadId);
   // Perform additional initialization here.
   return large;
}

Note that each line in the log output starts with the thread that wrote it.

Behaviour when LazyThreadSafetyMode = PublicationOnly

Exception caching doesn’t happen in this case.

Thread 5 is the first to attempt to initialise the object, but the initialiser Func throws an exception rather than creating an object.  Threads 6, 3, 7, 4 all call the initialiser Func because the value is still unset.  They each create an instance, but only thread 3 gets to assign its instance back to the Lazy<T> object.

The single-threaded check isn’t on who gets to call the initialiser Func, so the Func can be called by every thread, resulting in each thread creating its own copy of the T object.  The single-threaded check is on who gets to assign their instance back to the shared Lazy<T> variable.

[Screenshot: log output when LazyThreadSafetyMode = PublicationOnly]
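This “only one write wins” guarantee can be demonstrated directly.  In the sketch below (Widget is a hypothetical stand-in for LargeObject), several threads race to read the value: more than one may run the initialiser Func and construct an instance, but every thread ends up observing the same published instance.

```csharp
using System;
using System.Threading;

class PublicationRaceDemo
{
   class Widget { }   // stand-in for LargeObject

   static int instancesCreated = 0;

   // Returns true if every thread saw the same published instance,
   // even though several instances may have been constructed.
   public static bool Run()
   {
      var lazy = new Lazy<Widget>(
         () =>
         {
            Interlocked.Increment(ref instancesCreated);
            Thread.Sleep(50);   // widen the race window
            return new Widget();
         },
         LazyThreadSafetyMode.PublicationOnly);

      var seen = new Widget[4];
      var threads = new Thread[4];
      for (int i = 0; i < 4; i++)
      {
         int idx = i;
         threads[i] = new Thread(() => seen[idx] = lazy.Value);
         threads[i].Start();
      }
      foreach (var t in threads) t.Join();

      // All threads observe the one instance that won the publication
      // race; any other constructed instances are discarded.
      bool allSame = true;
      for (int i = 1; i < 4; i++)
         allSame &= ReferenceEquals(seen[0], seen[i]);
      return allSame;
   }
}
```

Run() always returns true, while instancesCreated may end up anywhere between 1 and 4 depending on how the threads interleave, which is exactly the PublicationOnly trade-off described above.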

Behaviour when LazyThreadSafetyMode = ExecutionAndPublication

With the mode set to ExecutionAndPublication, there is exception caching.  This means that after the first thread has triggered an exception in the initialiser Func, the later threads don’t even call the Func.  The exception they receive is replayed from the cache, rather than created fresh by the Func.

[Screenshot: log output when LazyThreadSafetyMode = ExecutionAndPublication]

Summary

Lazy initialisation can be a way of avoiding wasting time and resources, or at least of giving the appearance of better performance, by deferring slow operations until later (when the delay they cause might not be as obvious). The speed-up isn’t guaranteed, and in some cases (often involving databases) lazy initialisation can really hurt performance and so should be avoided.

In single-threaded applications, things should just happen automatically behind the scenes.  As with many things, multi-threading makes lazy initialisation tricky.  There is no right or wrong answer; just be aware of the options and choose the one that fits your situation best.

 

2 thoughts on “Lazy initialisation and threads (in C#)”

  1. Great insight Bob,

    Surely though the C# authors are deferring expensive construction, which is an anti-pattern. If it’s as cheap as copying values or references, adding the indirection would pad that, leading to more I/O.

    If it’s fast (below a millisecond), then wouldn’t splitting and bounding with deferred page access be both faster and simpler? Instead of processing n (unbounded) you enforce by convention an upper bound on n, say 50, which also saves processing time as n grows.


  2. I think that there are two cases where it might make sense. The first is that the expensive thing is needed only in certain circumstances, e.g. if the user clicks button A rather than button B, or a packet read over the network has flag A set rather than flag B. It means that the first time the crucial condition is true you will get a spike in latency, but there is no delay for all the time up to that point (when the condition is false). If the user never clicks on button A, or the packets never have flag A set, then the code will have been slowed down unnecessarily.

    The other case is that it will be needed but not immediately. For instance, a web browser might need to check for updates (to its own software) but that isn’t needed immediately. In order to give the user a good experience, the updates object is deferred and there’s a thread set on a timer that will ask for the updates object in e.g. 2 seconds’ time. During those 2 seconds all the things the user needs for right now can happen e.g. fetching and rendering the starting page. Then the user has something to be reading (causing them to stop making fresh demands on the browser to e.g. fetch a new page), which gives the browser a window of opportunity to check for updates without the user noticing the delay.

