Contextual Async

Project goals


The goal is to make it easy to mix sync and async code. Sync code can be left unchanged, yet it becomes async automatically if it calls async code (and is called by a future)!

You will see in the examples that sync and async methods mix perfectly.


Async tasks and thread logic become even more similar

Imagine you have some code written for a single-threaded environment, and you want to include it in a multithreaded one. Do you need to adapt all of your code? The answer is no: migrating to multithreaded code does not force you to rewrite every function.

These constraints are justified neither functionally nor technically.

Do we have the tools to do this? Yes: thanks to boost::context we can switch context between tasks. When a task suspends, it simply calls a function (the event loop, or reactor) that may switch to another task, just like threads switch contexts…

Async/await logic relies on a symmetric relation, which introduces unnecessary constraints. We should simply use the same logic as threads.

The code below has the same simplicity as async/await logic (it uses a future, but it could use await syntax), except that it does not need this syntax in all the intermediate code. There is an await at one end, plain function calls in between, and an async primitive at the other end.


The project is called 5a5 (pronounced "sync async" in French, because it mixes sync with async).

The link:

See the GitHub link.

Important warning

This is only a prototype…

Maybe there are important considerations that would lead to the conclusion that this is not a good idea. I am rather confident that it could have many interesting applications, but I still have to challenge the concept.


Illustration by examples

Again, the examples could be rewritten with await syntax for people more used to it.

The future.get is simply an await.


First version

Let’s say you have developed a complex library with multiple levels of calculation. For simplicity we will only have 3 levels.

#include "pch.h"
#include <iostream>
#include "async.h"

int f2(int x) {
	return x + 1;
}

int f1(int x) {
	return f2(x + 1);
}

int calculate(int x) {
	return f1(x + 1);
}

int main() {
	std::cout << calculate(1) << std::endl;
}

A very basic example.

Second version

Now, in the last level, you are going to compute the result with an asynchronous call. It could be a call to a server via a TCP request; as a first step, let’s replace it with a simpler form, a plain delay. So we replace f2 with:

int f2(int x) {
	wait_duration(5);
	return x + 1;
}

And it displays 4 after a 5-second delay, just like synchronous code. In fact it is synchronous, in the sense that there is no coroutine/context switching, just normal function calls.

So if a sync caller calls async code, it remains synchronous. So far this is not impressive, but let’s continue.

Third version

Let’s make a change in the caller:

int main() {
	future<int> fut_calc(
		[]() { return calculate(1); });

	std::cout << "start future" << std::endl;

	std::cout << "waiting for result in main task" << std::endl;
	std::cout << fut_calc.get() << std::endl;
}

The output is:

start future
sleeping 5 seconds
waiting for result in main task // displayed immediately !!
4 // displayed after 5 seconds

It is clearly async!

No changes were needed in the “library” code: the calculate and f1 functions had zero changes; they are written just like synchronous functions.

Fourth version (just for curiosity)

One last use case: let’s comment out the wait_duration in the library, which makes it 100% synchronous. Is that going to be a problem, because a future waits for it? Let’s try:

// wait_duration(5)

It works perfectly. All the combinations work.

A more complex example

In this example we will have:

  • The main task

It makes its own computation, calc1, which alone takes about 5 seconds (on my computer).

  • A calculation task

It computes calc2 in parallel with the main task. For this it adds the results of 2 requested servers, and the servers take time to answer:

  1. The first request will take 6 seconds
  2. The second request will take 10 seconds

  • A clock

Every second, it displays a counter.

#include "pch.h"
#include <iostream>
#include <winsock2.h>
#include <ws2tcpip.h>
#include <time.h>
#include <cmath>
#include "async.h"

#define DEFAULT_PORT "27015"

extern SOCKET do_connect();

int calc2() {
	future<int> pricing1([]() { return call_server("6"); });
	future<int> pricing2([]() { return call_server("10"); });

	return 1 + pricing1.get() + pricing2.get();
}

int clock1s() {
	int count = 0;
	while (1) {
		wait_duration(1); // suspend for one second between ticks
		std::cerr << "count: " << count++ << std::endl;
	}
	return 0;
}

double calc1() {
	std::cout << "start calc1" << std::endl;
	double z = 0;
	int x;
	for (x = 0; x < 100; x++) {
		wait_duration(0); // give other tasks (like the clock) a chance to run
		for (int y = 0; y < 500000; y++)
			z += sin(x) * sin(y);
	}
	std::cout << "end calc1 " << x << std::endl;
	return 0;
}

int main() {
	time_t start, endt; // we will display the total time.
	time(&start);

	future<int> fut_clock(clock1s);

	future<int> fut_calc(
		[]() { return calc2(); });

	std::cout << "start future" << std::endl;
	calc1(); // the main task's own computation
	std::cout << "waiting for result in main task" << std::endl;
	std::cout << "Result: " << fut_calc.get() << std::endl;
	time(&endt);
	std::cout << "It took: " << difftime(endt, start) << std::endl;
	int a;
	std::cin >> a;
}

Here is the output; thanks to the clock you can see the duration of the different tasks:

start future
start calc1
count: 0
count: 1
count: 2
count: 3
count: 4
end calc1 100
waiting for result in main task
count: 5
count: 6
count: 7
count: 8
count: 9
Result: 17
It took: 10

So instead of taking 6 + 10 + 5 = 21 seconds, it takes 10 seconds to compute everything. And for the same price you have a clock running concurrently.


  • wait_duration(0) in calc1 is simply used to give other tasks (like the clock) a chance to run.
  • An example of server is available in the repository: it receives a string, converts it to a number, and uses that number as the delay before answering the client. So if the client sends “6”, the server waits 6 seconds and replies “6”.

Extension of the concept: Generators !

Based on the same concept, we could imagine being able to easily add generators to a complex library.

How this works (very briefly)

The limitation of current implementations is due to symmetric coroutines. Instead, we use asymmetric coroutines, thanks to boost::context: these are continuations.

A continuation is a task. When a task calls an asynchronous primitive (wait_duration, wait_socket_rcv), it runs the reactor directly. The reactor is a function that polls the events. If the event is for the task itself, it simply returns from the reactor function, without any context switching!

Conversely, it can receive an event meant to wake another continuation (which is also running a reactor); in that case, it resumes the other continuation.

It is in fact very simple. I hope I haven’t forgotten anything; it seems too simple to be true.
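In pseudocode, the mechanism described above looks something like this (wait_duration is from the text; register_timer, poll_events, current_task and the Event fields are illustrative names, not the real API):

```
// A suspending task runs the reactor itself:
void wait_duration(int seconds) {
    register_timer(current_task, seconds);
    reactor();
}

void reactor() {
    while (true) {
        Event e = poll_events(); // e.g. timer expiry, socket readiness
        if (e.task == current_task)
            return; // our own event: return directly, no context switch
        e.task->continuation.resume(); // wake the other task's continuation
    }
}
```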
