The uvco C++ async library

Sun, Nov 26, 2023 tags: [ Programming Async C++ Uvco ]

Updated: May 7, 2024

New article going more in depth on uvco: Design of the uvco async library

Since starting my new position at Zurich Instruments, I’ve worked a lot with the lovely kj framework, in conjunction with capnproto RPC. kj is an asynchronous network programming library that gained support for C++ coroutines in version 1.0. For this use case, coroutines are great: they simplify usage and may even yield performance improvements.

To get a better understanding, I tried my hand at writing a small coroutine framework driven by the libuv event loop. While there were a few snags to navigate, this turned out to be a great fit, and I quickly managed to map a good part of libuv’s functionality onto a coroutine-based implementation.

Which functionality? For example TCP and UDP sockets, name resolution, and timers; the full list is further down in the Code section.

As the foundation of uvco, libuv not only provides standard event loop functionality, but also, through the Linux kernel’s new io_uring API, gives us the chance to significantly reduce the number of system calls and thus improve performance. Whether this results in a net benefit, after accounting for all the dynamic allocation required by coroutines, remains to be benchmarked :-)

My main goal was ergonomics: writing coroutine code should be as simple as writing conventional single-threaded, non-concurrent code. The approach chosen by uvco spares the developer from installing and managing callbacks, which is complex and unpleasant. In essence, uvco automates away the callback hell and provides a nice “flat” experience.

Here’s a simple example of using the curl integration to download a file:

#include <uvco/loop/loop.h>
#include <uvco/promise/promise.h>
#include <uvco/run.h>
#include <uvco/integrations/curl/curl.h>
 
// and other includes ...
 
using namespace uvco;
 
Promise<void> testCurl(const Loop& loop) {
  Curl curl{loop};
  // API subject to change! (request must outlive download)
  auto request = curl.get("https://borgac.net/~lbo/doc/uvco");
  auto download = request.start();

  try {
    // Download the file bit by bit.
    while (true) {
      auto result = co_await download;
      if (!result) {
        fmt::print("Downloaded file\n");
        break;
      }
      fmt::print("Received data: {}\n", *result);
    }
  } catch (const CurlException &e) {
    // Catch this one first: if CurlException derives from UvcoException,
    // a handler for the base class would otherwise shadow it.
    fmt::print("Caught curl exception: {}\n", e.what());
  } catch (const UvcoException &e) {
    fmt::print("Caught exception: {}\n", e.what());
  }

  co_await curl.close();
}
 
int main() {
  // Entry point to asynchronous code.
  runMain<void>([](const Loop& loop) -> Promise<void> {
    return testCurl(loop);
  });
}

You will have noticed that there’s not much of uvco to see here - just a Promise<void> here and a co_await there. That’s intentional: uvco wants to get out of the way and let you write your code as if it were synchronous.

And if you’re old school, maybe you just want to write your HTTP/1.0 request straight to a TCP socket? Not a problem either:

#include <fmt/format.h>
#include <sys/socket.h> // for AF_INET

#include <optional>
#include <string>

// Assuming the header files are installed appropriately:

#include <uvco/loop/loop.h>
#include <uvco/promise/promise.h>
#include <uvco/run.h>
#include <uvco/tcp.h>

using namespace uvco;

Promise<void> testHttpRequest(const Loop& loop) {
  // A client is prepared with the target host.
  TcpClient client{loop, "borgac.net", 80, AF_INET};
  // But it first must be connected; this is an asynchronous operation
  // and therefore can't occur in the constructor.
  TcpStream stream = co_await client.connect();

  // Write out the request. However long this takes, just continue once it's done.
  // Other coroutines may run in the meantime! Here, we don't take notice of them.
  co_await stream.write(
      fmt::format("HEAD / HTTP/1.0\r\nHost: borgac.net\r\n\r\n"));
  // The control flow is very natural.
  while (true) {
    std::optional<std::string> chunk = co_await stream.read();
    if (chunk) {
      fmt::print("Got chunk: >> {} <<\n", *chunk);
    } else {
      break;
    }
  }
  // Closing is also an asynchronous operation; this is mostly owed to libuv.
  co_await stream.closeReset();
}

void run_loop() {
  runMain<void>([](const Loop& loop) -> Promise<void> {
    // Return the coroutine's promise; runMain drives the event loop
    // until it has completed. (A coroutine could also co_await it.)
    return testHttpRequest(loop);
  });
}

int main() {
  run_loop();
  return 0;
}
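
Because uvco coroutines start running as soon as they are called (more on that further down), running several such requests concurrently needs no extra machinery: call the coroutine functions first, then co_await the resulting promises. Here is a minimal sketch reusing testHttpRequest from above; testTwoRequests is just an illustrative name, and the sketch assumes that a returned Promise can be stored and awaited later:

Promise<void> testTwoRequests(const Loop& loop) {
  // Both coroutines begin executing immediately; while one is suspended in a
  // co_await on network I/O, the event loop runs the other.
  Promise<void> first = testHttpRequest(loop);
  Promise<void> second = testHttpRequest(loop);

  // The requests have been in flight concurrently; here we only wait for each
  // of them to finish.
  co_await first;
  co_await second;
}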

You can try playing with uvco yourself: follow the installation instructions in the README. The library is exposed as a CMake module and should therefore be reasonably easy to integrate into other code bases.

Idea

The idea was to build the simplest coroutine framework that is still useful, without a complex reactor/task/scheduler system. This has led to some interesting outcomes; the design article linked at the top goes into them in depth.
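
To make the “no scheduler” part concrete, here is a generic C++20 sketch of an eagerly started task type. This is not uvco’s implementation; the Task name and every detail are illustrative only. But it shows the core mechanism such a framework relies on: the coroutine body starts running at the call site, and on completion it resumes its awaiter directly via symmetric transfer, so no scheduler queue is needed.

#include <coroutine>
#include <cstdio>
#include <exception>

// Illustrative only -- not uvco's code. An eagerly-started task: the body runs
// as soon as the coroutine is called, and finishing it resumes the awaiter
// directly, without any scheduler in between.
struct Task {
  struct promise_type {
    std::coroutine_handle<> continuation;

    Task get_return_object() {
      return Task{std::coroutine_handle<promise_type>::from_promise(*this)};
    }
    // Eager start: do not suspend before running the body.
    std::suspend_never initial_suspend() noexcept { return {}; }

    struct FinalAwaiter {
      bool await_ready() noexcept { return false; }
      std::coroutine_handle<>
      await_suspend(std::coroutine_handle<promise_type> h) noexcept {
        // Resume the coroutine that co_awaited us, if any.
        if (auto cont = h.promise().continuation) return cont;
        return std::noop_coroutine();
      }
      void await_resume() noexcept {}
    };
    FinalAwaiter final_suspend() noexcept { return {}; }

    void return_void() {}
    void unhandled_exception() { std::terminate(); }
  };

  explicit Task(std::coroutine_handle<promise_type> h) : handle{h} {}
  Task(const Task &) = delete;
  Task &operator=(const Task &) = delete;
  // Sketch only: a real task type must manage the frame's lifetime carefully.
  ~Task() {
    if (handle && handle.done()) handle.destroy();
  }

  // Awaiting a Task: if it already finished, just continue; otherwise register
  // ourselves to be resumed when it reaches its final suspend point.
  bool await_ready() const noexcept { return handle.done(); }
  void await_suspend(std::coroutine_handle<> awaiting) noexcept {
    handle.promise().continuation = awaiting;
  }
  void await_resume() const noexcept {}

  std::coroutine_handle<promise_type> handle;
};

Task leaf() {
  std::puts("leaf: runs during the call, before anyone awaits");
  co_return;
}

Task parent() {
  Task t = leaf();  // already running (and, being trivial, already finished)
  co_await t;       // no suspension necessary here
  std::puts("parent: done");
}

int main() {
  Task p = parent();
  return 0;
}

Both lines are printed during the call to parent(), before anything is awaited, which is exactly the eager-launch behavior described further down.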

Code

The uvco code has been extended to support UDP, TCP, name resolution, stdin/stdout, buffered intra-process channels (like Go’s), files (io_uring, hell yeah!), curl, and timers. This is already enough for many simpler applications.

If you’d like to learn more about uvco, good starting points are the README and the design article linked at the top of this post.

For development, I am using AddressSanitizer (ASan) to track down the occasionally hard-to-find memory leaks and use-after-frees (the dynamic allocations behind coroutines can make these hard to trace), and gcov and lcov for generating coverage information. Both can be enabled in the CMake project.

At the moment, all coroutines are launched immediately after construction. Callbacks schedule coroutines for resumption on the currently active event loop. Previously, coroutines would be resumed synchronously, which sometimes triggered bugs when shared state was accessed from multiple coroutines concurrently (though not in parallel).
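
Expressed with the types shown above, this means a Promise-returning coroutine does its work during the call itself, and co_await only waits for whatever is still outstanding. A tiny sketch: worker and caller are made-up names, the includes are the same as in the earlier examples, and it assumes an already-finished promise can still be awaited.

Promise<void> worker() {
  // This runs during the call to worker(). Since this trivial worker never
  // suspends, it even finishes before worker() returns.
  fmt::print("worker: launched\n");
  co_return;
}

Promise<void> caller() {
  Promise<void> pending = worker(); // already running (here: already done)
  fmt::print("caller: continuing before awaiting\n");
  co_await pending; // waits only for whatever the worker has left to do
}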

Goal

I will keep working on uvco for the time being. Ultimately, I hope to support all of libuv’s functionality, on multiple platforms, exposing an ergonomic and (where possible) safe API on top of which to build fast networking applications in C++. Maybe as nice and simple as Go or node.js? :-)

Pull requests, comments, and other contributions are welcome!