Threads and sequences

Introduction

Chromium introduces a lot of new concepts to facilitate writing faster and safer multithreaded code. It is crucial to understand them to fully utilize the provided tools. libbase tries to bring the most useful of them to you.

Concepts

Task

A single unit of work. It can be represented with a base::OnceCallback<...> or base::RepeatingCallback<...> callback; most often it is a base::OnceClosure (see the sketch below).

See also

See the Callbacks page for more details.
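
For illustration, here is a minimal sketch of creating such a task with base::BindOnce (the DoWork function below is hypothetical; see the Callbacks page for details on binding):

#include "base/bind.h"
#include "base/callback.h"

void DoWork(int value) { /* ... */ }

base::OnceClosure MakeTask() {
  // Binding a function together with its arguments produces a
  // base::OnceClosure - a single unit of work that can be posted
  // to any task runner.
  return base::BindOnce(&DoWork, 42);
}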

Thread

Threads are provided by the operating system and are used to execute code in parallel. Special care has to be taken if multiple threads can access the same data simultaneously.

base::Thread is a utility class that wraps a physical thread and allows you to execute arbitrary tasks queued on its task queue.

Thread pool

A thread pool is a pool of threads that share a common task queue. Each task enqueued on it will be executed by one of the threads within the pool, but which one is not specified.

base::ThreadPool is a utility class that provides the thread pool functionality while also exposing multiple different task runners to schedule work on it.

Sequence

A sequence (sometimes also called a virtual thread) is a logical construct that specifies the rules of execution of tasks. All tasks that are to be executed on a given sequence will be executed in a specified order and will never overlap (only one task can be executed at a time) on some physical thread(s), but there are no guarantees as to which physical thread of execution a given task will run on, and it may change between tasks.

Task runner

Task runners are objects that can be used to enqueue tasks on a specific task queue to which they are bound. Such tasks will later be executed on the corresponding physical thread of execution. Queuing (or scheduling) tasks through a task runner is often referred to as post-tasking them.

There are 3 types of task runners available to use:

  • base::TaskRunner

    Provides no guarantees about the order of execution of posted tasks.

  • base::SequencedTaskRunner

    All posted tasks will be invoked in the same order in which they were posted and none will overlap. There are no guarantees as to which physical thread of execution a given task will run on.

  • base::SingleThreadTaskRunner

    All posted tasks will be invoked in the same order in which they were posted, on the same physical thread of execution.

Task types

When dealing with a group of tasks to be executed in parallel, we can categorize them into these groups:

  • Thread unsafe

    These tasks do not use any synchronization primitives and may need external synchronization if executed in parallel on the same set of data. Alternatively, they can be executed on a single thread or a single sequence (see the sketch after this list).

  • Thread-affine

    These tasks must be run only on a single thread (possibly on a specific thread instance).

  • Thread-safe

    These tasks can be safely executed from any thread and/or sequence in parallel.
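
A minimal sketch of the thread-unsafe case (the Counter class and helper functions below are hypothetical): a plain counter without any synchronization primitives is still safe to use as long as every access to it is posted to the same sequence.

#include <memory>

#include "base/bind.h"
#include "base/callback.h"
#include "base/sequenced_task_runner.h"

// Thread-unsafe: no synchronization primitives are used.
class Counter {
 public:
  void Increment() { ++value_; }

 private:
  int value_ = 0;
};

void IncrementCounter(Counter* counter) {
  counter->Increment();
}

void ScheduleIncrement(
    std::shared_ptr<base::SequencedTaskRunner> task_runner,
    Counter* counter) {
  // Safe as long as *all* accesses to `counter` go through this one
  // sequence (tasks on a sequence never overlap) and `counter` outlives
  // the posted task.
  task_runner->PostTask(FROM_HERE,
                        base::BindOnce(&IncrementCounter, counter));
}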

Sequences vs Threads

Important

There are a number of benefits to executing your tasks on sequences rather than on physical threads and, as such, it is highly preferable to write code that can be executed on any sequence instead of being thread-affine.

Some benefits to using sequences over physical threads:

  • Code is easier to understand and reason about.

  • Code is easier to reuse in different components.

  • Code is easier to parallelize.

  • Fewer physical threads means smaller overhead.

  • You can post tasks to any number of sequences that are tied to a number of threads that match your hardware. This allows you to fully utilize the CPU power without the overhead of context switches.

base::TaskRunner

The base::TaskRunner interface has two main methods and two additional helpers to make it easier to write your code. The main methods are:

  • base::TaskRunner::PostTask()

    This function takes two arguments - a task to be executed and a location in the source code (acquired via the FROM_HERE macro) from which the post-task operation is done. When called, the passed task will be queued on the task queue associated with that task runner.

    Important

    Remember: there are no guarantees as to the order of execution of two tasks posted to the same base::TaskRunner, nor whether they will be executed on the same physical thread at all.

    Example

    void ScheduleTwoTasks(std::shared_ptr<base::TaskRunner> task_runner) {
      DCHECK(task_runner) << "task_runner should be provided";
    
      base::OnceClosure task_1 = /* acquire task_1 */;
      base::OnceClosure task_2 = /* acquire task_2 */;
    
      task_runner->PostTask(FROM_HERE, std::move(task_1));
      task_runner->PostTask(FROM_HERE, std::move(task_2));
    
      // `task_1` and `task_2` will be executed in some order in the future
    }
    
  • base::TaskRunner::PostDelayedTask()

    This function behaves similarly to the above one, but takes one more parameter (base::TimeDelta delay) and ensures that the posted task will not be executed before the delay has passed.

    Example

    void ScheduleTwoDelayedTasks(std::shared_ptr<base::TaskRunner> task_runner) {
      DCHECK(task_runner) << "task_runner should be provided";
    
      base::OnceClosure task_1 = /* acquire task_1 */;
      base::OnceClosure task_2 = /* acquire task_2 */;
    
      task_runner->PostDelayedTask(FROM_HERE, std::move(task_1), base::Seconds(1));
      task_runner->PostDelayedTask(FROM_HERE, std::move(task_2), base::Seconds(2));
    
      // `task_1` will be executed after at least one second has passed
      // `task_2` will be executed after at least two seconds have passed
    }
    

    Caution

    In the above example it is still not guaranteed that task_1 will be executed before task_2!
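
    If the ordering between the two delayed tasks does matter, one option is to chain them and post the second task only from within the first one. A minimal sketch of that approach (the step functions below are hypothetical):

    void RunSecondStep() { /* ... */ }

    void RunFirstStepAndPostSecond(
        std::shared_ptr<base::TaskRunner> task_runner) {
      // ... first step's work ...

      // Posting the follow-up from within the first task guarantees that
      // it cannot start before the first task has finished.
      task_runner->PostDelayedTask(FROM_HERE, base::BindOnce(&RunSecondStep),
                                   base::Seconds(1));
    }

    void ScheduleChainedDelayedTasks(
        std::shared_ptr<base::TaskRunner> task_runner) {
      task_runner->PostDelayedTask(
          FROM_HERE,
          base::BindOnce(&RunFirstStepAndPostSecond, task_runner),
          base::Seconds(1));
    }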

There are also two additional helper functions defined in that class:

  • base::TaskRunner::PostTaskAndReply()

    This helper can be used to post a task that - when finished - will automatically post a reply task back to the task runner from which the original call was made.

    Note

    In this case, the reply callback is guaranteed to be run after the task callback.

    Caution

    This method can be called only from a thread with a task queue (base::Thread or base::ThreadPool)!

  • base::TaskRunner::PostTaskAndReplyWithResult()

    Similar to the above, but the task callback returns a result that is then passed to the reply callback (see the sketch below).
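
    A minimal sketch of both helpers, assuming their signatures mirror Chromium's (the functions below are hypothetical):

    void ComputeSomething() { /* ... */ }
    void OnComputeDone() { /* runs back on the posting task runner */ }

    int ComputeValue() { return 42; }
    void OnValueComputed(int value) { /* receives `ComputeValue()`'s result */ }

    void PostWithReplies(std::shared_ptr<base::TaskRunner> task_runner) {
      // `ComputeSomething()` runs on `task_runner`; once it finishes,
      // `OnComputeDone()` is posted back to the current task runner.
      task_runner->PostTaskAndReply(FROM_HERE,
                                    base::BindOnce(&ComputeSomething),
                                    base::BindOnce(&OnComputeDone));

      // Same idea, but `ComputeValue()`'s return value is passed on to
      // `OnValueComputed()`.
      task_runner->PostTaskAndReplyWithResult(FROM_HERE,
                                              base::BindOnce(&ComputeValue),
                                              base::BindOnce(&OnValueComputed));
    }

    Remember that, per the caution above, PostWithReplies() itself must be running on a thread with a task queue.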

base::SequencedTaskRunner

This interface inherits from base::TaskRunner and adds an additional method, base::SequencedTaskRunner::RunsTasksInCurrentSequence(), that can be used to check whether the currently executed task is running within the same sequence as the one associated with that task runner.
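
For example, an object that stores the task runner it works on can assert that it is used from the correct sequence (the Model class below is hypothetical):

#include <memory>
#include <utility>

#include "base/sequenced_task_runner.h"

class Model {
 public:
  explicit Model(std::shared_ptr<base::SequencedTaskRunner> task_runner)
      : task_runner_(std::move(task_runner)) {}

  void Update() {
    // Verify that this call was made on the expected sequence.
    DCHECK(task_runner_->RunsTasksInCurrentSequence());
    // ...
  }

 private:
  std::shared_ptr<base::SequencedTaskRunner> task_runner_;
};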

Important

All tasks posted with task runners of this type will be executed in the same sequence, in the order in which they were posted.

Example

void ScheduleTwoSequencedTasks(
    std::shared_ptr<base::SequencedTaskRunner> sequenced_task_runner) {
  DCHECK(sequenced_task_runner) << "sequenced_task_runner should be provided";

  base::OnceClosure task_1 = /* acquire task_1 */;
  base::OnceClosure task_2 = /* acquire task_2 */;

  sequenced_task_runner->PostTask(FROM_HERE, std::move(task_1));
  sequenced_task_runner->PostTask(FROM_HERE, std::move(task_2));

  // It is guaranteed that `task_1` will finish before `task_2` will be
  // started and that `task_2` will *see* all effects of `task_1`'s work.
}

base::SingleThreadTaskRunner

This interface inherits from base::SequencedTaskRunner and adds an additional method called base::SingleThreadTaskRunner::BelongsToCurrentThread() which is just an alias for base::SequencedTaskRunner::RunsTasksInCurrentSequence().

Important

All tasks posted with task runners of this type will be executed on the same physical thread, in the order in which they were posted.

Example

void ScheduleTwoSingleThreadedTasks(
    std::shared_ptr<base::SingleThreadTaskRunner> single_thread_task_runner) {
  DCHECK(single_thread_task_runner)
      << "single_thread_task_runner should be provided";

  base::OnceClosure task_1 = /* acquire task_1 */;
  base::OnceClosure task_2 = /* acquire task_2 */;

  single_thread_task_runner->PostTask(FROM_HERE, std::move(task_1));
  single_thread_task_runner->PostTask(FROM_HERE, std::move(task_2));

  // It is guaranteed that both tasks will be executed on the same physical
  // thread and that `task_1` will finish before `task_2` will be started.
}

base::Thread

This class can be used to create a new physical thread of execution. Once created, it needs to be started (with base::Thread::Start()) to start execution of tasks on its task queue. If not stopped before being destroyed, it will stop and join in its destructor.

After the thread is started, you can obtain a base::SingleThreadTaskRunner by calling the base::Thread::TaskRunner() member function.

Example - base::Thread

Program
#include <iostream>

#include "base/bind.h"
#include "base/callback.h"
#include "base/single_thread_task_runner.h"
#include "base/threading/thread.h"

void SayHello(const std::string& text) {
  std::cout << "Hello " << text << "!" << std::endl;
}

int main() {
  base::Thread thread;
  thread.Start();

  auto task_runner = thread.TaskRunner();

  task_runner->PostTask(FROM_HERE, base::BindOnce(&SayHello, "World"));
  task_runner->PostTask(FROM_HERE, base::BindOnce(&SayHello, "Everyone"));

  thread.Stop();
  return 0;
}
Program output
Hello World!
Hello Everyone!

base::ThreadPool

This class can be used to create a pool of physical threads of execution. To create it, you need to specify the initial number of physical threads that will be created in that pool. Once created, it needs to be started (with base::ThreadPool::Start()) to start execution of tasks on its task queue. If not stopped before being destroyed, it will stop and join in its destructor.

After the thread pool is started, you can obtain or create different task runners tied to this thread pool with these methods:

  • base::ThreadPool::GetTaskRunner()

    This member function returns a base::TaskRunner that schedules tasks for execution on the thread pool, without any ordering guarantees with respect to other tasks scheduled on it and without specifying on which thread within the pool a given task will be executed.

  • base::ThreadPool::CreateSequencedTaskRunner()

    This member function creates a new base::SequencedTaskRunner that schedules tasks for execution on the thread pool within a single sequence.

    Caution

    Calling this method multiple times will return you task runners belonging to new and unique sequences! If you want to ensure that tasks end up being posted to the same sequence, you need to hold on to the already obtained task runners and reuse them.

  • base::ThreadPool::CreateSingleThreadTaskRunner()

    This member function creates a new base::SingleThreadTaskRunner that schedules tasks for execution on a single (but unspecified) physical thread within the thread pool.

    Caution

    Calling this method multiple times will return task runners that may be bound to different physical threads! If you want to ensure that tasks end up being posted to the same physical thread, you need to hold on to the already obtained task runners and reuse them (see the sketch after this list).
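
To address both cautions, a common pattern is to create a given task runner once and keep reusing it, e.g. by storing it as a class member. A minimal sketch of that pattern, assuming the thread pool outlives the object (the Database class below is hypothetical and the thread pool header path is assumed):

#include <memory>
#include <string>
#include <utility>

#include "base/bind.h"
#include "base/sequenced_task_runner.h"
#include "base/threading/thread_pool.h"

class Database {
 public:
  explicit Database(base::ThreadPool& thread_pool)
      : db_task_runner_(thread_pool.CreateSequencedTaskRunner()) {}

  void SaveAsync(std::string data) {
    // Every save is posted to the *same* sequence, so writes never overlap
    // and are executed in the order in which they were requested.
    db_task_runner_->PostTask(
        FROM_HERE,
        base::BindOnce(&Database::SaveOnDbSequence, std::move(data)));
  }

 private:
  static void SaveOnDbSequence(std::string data) { /* ... */ }

  std::shared_ptr<base::SequencedTaskRunner> db_task_runner_;
};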

Example - base::ThreadPool

Program
#include <iostream>
#include <mutex>

#include "base/bind.h"
#include "base/callback.h"
#include "base/single_thread_task_runner.h"
#include "base/threading/thread_pool.h"

std::mutex g_cout_mutex;

void Print(const std::string& text) {
  std::lock_guard<std::mutex> guard(g_cout_mutex);
  std::cout << text << std::endl;
}

int main() {
  base::ThreadPool thread_pool{4};
  thread_pool.Start();

  auto task_runner = thread_pool.GetTaskRunner();
  auto seq_task_runner = thread_pool.CreateSequencedTaskRunner();
  auto st_task_runner = thread_pool.CreateSingleThreadTaskRunner();

  task_runner->PostTask(FROM_HERE, base::BindOnce(&Print, "Generic1"));
  task_runner->PostTask(FROM_HERE, base::BindOnce(&Print, "Generic2"));

  seq_task_runner->PostTask(FROM_HERE, base::BindOnce(&Print, "Seq1"));
  seq_task_runner->PostTask(FROM_HERE, base::BindOnce(&Print, "Seq2"));
  seq_task_runner->PostTask(FROM_HERE, base::BindOnce(&Print, "Seq3"));

  st_task_runner->PostTask(FROM_HERE, base::BindOnce(&Print, "St1"));
  st_task_runner->PostTask(FROM_HERE, base::BindOnce(&Print, "St2"));
  st_task_runner->PostTask(FROM_HERE, base::BindOnce(&Print, "St3"));

  thread_pool.Stop();
  return 0;
}
Possible program output
St1
St2
Generic2
Seq1
St3
Generic1
Seq2
Seq3

Hint

The only guarantees about the output of the above program are that:

  • Seq1 will be printed before Seq2 and both will be printed before Seq3.

  • St1 will be printed before St2, both will be printed before St3, and all three of these will be printed from the same physical thread.

Obtaining current base::SequencedTaskRunner

You can obtain the base::SequencedTaskRunner on which the current task is being executed with the base::SequencedTaskRunnerHandle::Get() static function.

Warning

This method may only be called from tasks executed within a sequence. If you’re not sure where the task is executed, first call the base::SequencedTaskRunnerHandle::IsSet() static function to check whether the current task is running on a sequence.
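
A minimal sketch of such a check (the function below is hypothetical):

#include <utility>

#include "base/callback.h"
#include "base/sequenced_task_runner.h"

void PostFollowUpToCurrentSequence(base::OnceClosure follow_up) {
  if (!base::SequencedTaskRunnerHandle::IsSet()) {
    // Not running within a sequence - there is no task runner to obtain.
    return;
  }
  // Safe to call Get() now; post the follow-up task back to the current
  // sequence.
  base::SequencedTaskRunnerHandle::Get()->PostTask(FROM_HERE,
                                                   std::move(follow_up));
}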

Example - base::SequencedTaskRunnerHandle

#include "base/bind.h"
#include "base/callback.h"
#include "base/sequenced_task_runner.h"

void VerifyRunOnSpecificTaskRunner(
    std::shared_ptr<base::SequencedTaskRunner> task_runner) {
  CHECK(task_runner == base::SequencedTaskRunnerHandle::Get());
}

void Test(std::shared_ptr<base::SequencedTaskRunner> task_runner) {
  task_runner->PostTask(
      FROM_HERE,
      base::BindOnce(&VerifyRunOnSpecificTaskRunner, task_runner));
}

Ensuring sequence affinity

When writing code that must be executed within the same sequence, it is good practice to add (possibly debug-only) checks that verify that - for example - all calls to such a class are made from the correct sequence.

You can use the base::SequenceChecker helper class to do this. Objects of this class, when constructed, bind to the sequence the current task runs in; later on, you can verify that the object is used on the correct (the same) sequence by calling the base::SequenceChecker::CalledOnValidSequence() member function. Invalid usage of code protected with such a check will trigger a CHECK() macro that will crash your application.

You can also detach a bound base::SequenceChecker from its current sequence and allow it to bind to the sequence on which it is used next. This must be done from the previously bound sequence and is often useful when creating objects on one sequence and then passing them to another sequence, possibly for the rest of their lifetime. This approach also allows you to acquire - for example - a weak pointer to such an object, so that you can safely post tasks to it even if you’re not sure whether it is still alive.
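
A minimal sketch of using the class directly, without the helper macros mentioned below (the Cache class is hypothetical):

#include "base/sequence_checker.h"

class Cache {
 public:
  void Put() {
    // Crashes (via CHECK()) if this method is called from a different
    // sequence than the one the checker is bound to.
    CHECK(sequence_checker_.CalledOnValidSequence());
    // ...
  }

 private:
  base::SequenceChecker sequence_checker_;
};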

Hint

There are also helper macros defined for you that can be used to create a base::SequenceChecker and perform the checks, but only in debug builds:

Example - base::SequenceChecker

#include "base/sequence_checker.h"

// This class can be created on any sequence, and then all other usages
// (including its destruction) must be made on a single sequence (possibly
// a different one than the one on which the object was created).
class Foo {
 public:
  Foo() {
    DETACH_FROM_SEQUENCE(sequence_checker_);
    // ...
  }

  ~Foo() {
    DCHECK_CALLED_ON_VALID_SEQUENCE(sequence_checker_);
    // ...
  }

  void Bar() {
    DCHECK_CALLED_ON_VALID_SEQUENCE(sequence_checker_);
    // ...
  }

 private:
  SEQUENCE_CHECKER(sequence_checker_);
  // ...
};

Hint

It is best to use the provided macros when working with base::SequenceChecker to avoid overhead in release builds.

Canceling posted task

By default, once a task is posted, you have no control over whether it will be executed or not. To be able to cancel an already posted task (but only if it hasn’t been executed yet), you can bind the callback to a base::WeakPtr and - if needed - invalidate it, which will prevent the task from being executed (as sketched below).

See also

See more on the Weak pointers page.
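
A minimal sketch of that approach, assuming a base::WeakPtrFactory API similar to Chromium’s (see the Weak pointers page; the Worker class and the weak_ptr header path below are assumptions):

#include <memory>

#include "base/bind.h"
#include "base/memory/weak_ptr.h"
#include "base/sequenced_task_runner.h"

class Worker {
 public:
  void Process() { /* ... */ }

  void ScheduleProcess(
      std::shared_ptr<base::SequencedTaskRunner> task_runner) {
    // If the bound weak pointer gets invalidated before the task runs,
    // the task will be skipped instead of being executed.
    task_runner->PostTask(
        FROM_HERE,
        base::BindOnce(&Worker::Process, weak_factory_.GetWeakPtr()));
  }

  void CancelPendingWork() {
    // Invalidates all previously vended weak pointers; already posted but
    // not-yet-run tasks bound to them will not be executed.
    weak_factory_.InvalidateWeakPtrs();
  }

 private:
  base::WeakPtrFactory<Worker> weak_factory_{this};
};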

Blocking post-tasks

Caution

Using this mechanism is not recommended unless there is a good reason for it. Overusing it may complicate your code and severely affect the performance of your application. In most cases a better approach is to design components to work asynchronously whenever possible. Take special care to minimize usages of blocking post-tasks and ensure that they are used in a safe and correct way to avoid problems (e.g. deadlocks).

If you need to block execution of the current task or thread until some other task finishes on another thread, you can use the base::WaitableEvent class.

To create it, you need to specify two parameters that decide how the object will behave: its reset policy (whether the event resets automatically after releasing a waiter, or has to be reset manually) and its initial state (whether it starts out already signaled or not).

Once created, you can pass a reference or a pointer to it to some callback that will - eventually - signal it, and wait for it on your thread. This will stop processing the current and any other tasks on this thread and sequence until the waitable event is signaled from a different thread.

Hint

To write safer code, you can also use base::AutoSignaller, which will automatically signal the waitable event on its destruction. This can help you ensure that the logic will be unblocked even if post-tasking fails or something unexpected happens.

Example - base::WaitableEvent

void DoSomethingOnOtherSequence(base::AutoSignaller) {
  DoSomething();
}

void PostDoSomethingOnSequenceAndWait(
    std::shared_ptr<base::TaskRunner> task_runner) {
  base::WaitableEvent event{};

  task_runner->PostTask(
      FROM_HERE,
      base::BindOnce(&DoSomethingOnOtherSequence,
                     base::AutoSignaller{&event}));
  event.Wait();

  // `DoSomething()` has finished by this point (or post-tasking has failed)
}

Attention

If you need to use base::WaitableEvent together with std::mutex, you should probably use std::mutex with std::condition_variable instead.
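
For reference, a minimal sketch of that alternative using only the standard library (independent of libbase):

#include <condition_variable>
#include <mutex>

std::mutex g_mutex;
std::condition_variable g_cv;
bool g_done = false;

void SignalDone() {
  {
    std::lock_guard<std::mutex> lock(g_mutex);
    g_done = true;
  }
  // Wake up the waiting thread outside of the critical section.
  g_cv.notify_one();
}

void WaitForDone() {
  std::unique_lock<std::mutex> lock(g_mutex);
  // Handles spurious wake-ups by re-checking the predicate.
  g_cv.wait(lock, [] { return g_done; });
}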

See also

For more details, please refer to the Chromium’s Threading and tasks documentation page.