Algorithm of Massively Parallel Networking in C++

  • In Russian

There are 3 reasons to have a backend written in C/C++:

  • Legacy.
  • Truly exceptional team expertise in C/C++.
  • Performance is the highest priority.

This talk focuses mainly on the last reason - performance. I've had the opportunity to work on projects where it was critical to maximize network throughput, minimize latency, and scale efficiently across many CPU cores with multithreading.

What I've learned is that in 99.999% of cases boost::asio will do the job. It is a stable, battle-tested project that has become the de facto standard for async networking in C++. I would honestly stick to it when possible. However, sometimes it can't be used. I've seen these common reasons:

  • Boost is C++-only and might not be available at all on some platforms.
  • The company has in-house "libraries of everything" and avoids large external dependencies like boost.
  • boost::asio's performance under stress load is questionable, and its source code is at times hard to comprehend.

On one of my projects I had to find an alternative to boost::asio for the reasons above. As a result, I designed a new generic task scheduling algorithm and built a networking library around it. In this talk I am presenting my solution, focusing on the following features:

  • Fair scheduling - even distribution of CPU usage across worker threads, enabling smooth scaling on many cores.
  • C++ coroutine support - asynchronous code can be written in a linear style (see the sketch after this list).
  • Benchmarks - reproducible and automated; the library outperforms boost::asio by N times.
  • Formal correctness - the core algorithm is formally verified in TLA+.
  • Simplicity - the algorithm is easy to understand and can be ported to other, non-C++ languages.
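
To give an idea of what "linear asynchronous code" with C++ coroutines looks like, here is a minimal, self-contained C++20 sketch. The Task type, the AsyncRead awaitable, and handle_client are hypothetical names invented for this illustration; they are not the library's actual API, only an example of the coroutine style such a library can support.

    // A minimal sketch of linear asynchronous code with C++20 coroutines.
    // All names here are hypothetical and illustrative only.
    #include <coroutine>
    #include <cstdio>

    // Hypothetical fire-and-forget coroutine task type.
    struct Task {
        struct promise_type {
            Task get_return_object() { return {}; }
            std::suspend_never initial_suspend() { return {}; }
            std::suspend_never final_suspend() noexcept { return {}; }
            void return_void() {}
            void unhandled_exception() {}
        };
    };

    // Hypothetical awaitable standing in for an asynchronous socket read.
    // Here it completes immediately; a real implementation would suspend
    // the coroutine and let the scheduler resume it when data arrives.
    struct AsyncRead {
        bool await_ready() const noexcept { return true; }
        void await_suspend(std::coroutine_handle<>) const noexcept {}
        int await_resume() const noexcept { return 42; /* pretend byte count */ }
    };

    // The coroutine body reads top to bottom, like blocking code, even
    // though each co_await could yield control back to the event loop.
    Task handle_client() {
        int bytes = co_await AsyncRead{};
        std::printf("received %d bytes\n", bytes);
        co_return;
    }

    int main() {
        handle_client();
        return 0;
    }

With real suspend-capable awaitables the body of handle_client would stay exactly the same; only the awaitables and the scheduler behind them would change, which is what makes the code "linear" despite being asynchronous.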

The entire project is battle-tested, open-source, and available here: https://github.com/Gerold103/serverbox.
