asio-grpc v3.4.0
Asynchronous gRPC with Asio/unified executors
Using Asio io_context

Note
Due to limitations of the gRPC CompletionQueue and Callback API, an asio::io_context cannot be used to handle RPCs directly. See the end of this document for a detailed explanation.

This article describes how to interoperate between a GrpcContext and an asio::io_context.

Implicitly constructed io_context

Since a GrpcContext is also an asio::execution_context, it supports Asio's Service mechanism. The following code will therefore implicitly create an io_context and a background thread, run the io_context on that thread, and post the completion of async_wait onto the GrpcContext, where the lambda is then invoked.

agrpc::GrpcContext grpc_context;
asio::signal_set signals{grpc_context, SIGINT, SIGTERM};
signals.async_wait(
    [](const std::error_code&, int)
    {
        // executed in the thread that called grpc_context.run().
    });
grpc_context.run();

The signal_set is just used as an example; it could be any Asio I/O object, like asio::ip::tcp::socket.
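
For instance, a minimal sketch of the same pattern with a TCP socket (the address and port are made up for illustration):

agrpc::GrpcContext grpc_context;
asio::ip::tcp::socket socket{grpc_context};  // implicitly creates the hidden io_context and its background thread
socket.async_connect(asio::ip::tcp::endpoint{asio::ip::make_address("127.0.0.1"), 8000},
                     [](const std::error_code&)
                     {
                         // executed in the thread that called grpc_context.run().
                     });
grpc_context.run();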

While this is the most convenient approach, it also has some downsides:

  • The io_context cannot be run on more than one thread.
  • There is runtime overhead due to non-customizable thread switching.

Explicitly constructed io_context

A GrpcContext and an io_context can also be created directly and used as usual: submit work and run. It is often convenient to use one of them as the "main" context. For example, a gRPC server might use the io_context only for HTTP client operations and the GrpcContext for everything else.

In the following example the io_context is used as the "main" context. When its main coroutine runs to completion, it will signal the GrpcContext to stop (by releasing the work guard):

asio::io_context io_context{1};
agrpc::GrpcContext grpc_context; // for gRPC servers this would be constructed using `grpc::ServerBuilder::AddCompletionQueue`
asio::co_spawn(
    io_context, // Spawning onto the io_context means that completed operations will switch back to it before
                // resuming the coroutine. This can be customized on a per-operation basis using
                // `asio::bind_executor`.
    [&, grpc_context_work_guard = asio::make_work_guard(grpc_context)]() mutable -> asio::awaitable<void>
    {
        using namespace asio::experimental::awaitable_operators;
        co_await (make_grpc_request(grpc_context, stub) && make_tcp_request(tcp_port));
        grpc_context_work_guard.reset();
    },
    example::RethrowFirstArg{});

For running the contexts there are two choices:

Run on separate threads

std::thread grpc_context_thread{[&]
                                {
                                    grpc_context.run();
                                }};
io_context.run();
grpc_context_thread.join();
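
With the two contexts running on different threads, completion handlers may touch shared state concurrently. A common way to avoid explicit locking is to post onto one designated context before accessing that state. The following is only a sketch; shared_result is a made-up variable:

// Hypothetical state that is only ever accessed from the thread running the io_context.
std::string shared_result;
asio::post(grpc_context,
           [&]
           {
               // ... work that must happen on the GrpcContext thread ...
               // Hop over to the io_context before touching shared_result.
               asio::post(io_context,
                          [&]
                          {
                              shared_result = "done";
                          });
           });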

Run on same thread

Until the GrpcContext stops:

// First, initialize the io_context's thread_local variables by posting onto it. The io_context uses them to
// optimize dynamic memory allocations. This step is optional but can improve performance.
asio::post(io_context,
           [&]
           {
               agrpc::run(grpc_context, io_context,
                          [&]
                          {
                              return grpc_context.is_stopped();
                          });
           });
io_context.run();

Or until both contexts stop:

// First, initialize the io_context's thread_local variables by posting onto it. The io_context uses them to
// optimize dynamic memory allocations. This step is optional but can improve performance.
// Then undo the work counting of asio::post.
// Run the GrpcContext and the io_context until both stop.
// Finally, redo the work counting.
asio::post(io_context,
           [&]
           {
               io_context.get_executor().on_work_finished();
               agrpc::run(grpc_context, io_context);
               io_context.get_executor().on_work_started();
           });
io_context.run();

Conclusion

Both approaches come with their own kind of overhead. Running on two threads might require additional synchronization in user code, while running on the same thread reduces peak performance. In the Performance section of the README you can find results for an idle io_context combined with a busy GrpcContext running on the same thread (look for cpp_asio_grpc_io_context_coro).

Why not use io_context for gRPC directly?

Event loops like the ones used in Asio and gRPC typically utilize system APIs (epoll, I/O completion ports, kqueue, ...) in the following order:

  1. Create file descriptors for network operations (e.g. sockets and pipes).
  2. Initiate some operations on those descriptors (e.g. read and write).
  3. Perform a system call (e.g. poll) to sleep on ALL descriptors until one or more are ready (e.g. received data), as sketched below the list.
  4. Notify some part of the application, typically by invoking a function pointer.
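
The following is a minimal sketch of steps 3 and 4 using POSIX poll(); fd_a and fd_b stand in for descriptors created during steps 1 and 2:

#include <poll.h>

void wait_and_notify(int fd_a, int fd_b)
{
    pollfd fds[] = {{fd_a, POLLIN, 0}, {fd_b, POLLIN, 0}};
    // Step 3: a single system call that sleeps on ALL descriptors at once.
    if (::poll(fds, 2, -1) > 0)
    {
        for (const auto& entry : fds)
        {
            if (entry.revents & POLLIN)
            {
                // Step 4: notify the part of the application that owns this
                // descriptor, e.g. by invoking a stored function pointer.
            }
        }
    }
}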

The important part is to wait on ALL descriptors at once. This means that for Asio and gRPC to interoperate nicely, we would need to collect the descriptors first and then perform the system call to wait. However, file descriptors are created deep in the implementation details of those libraries and the sleep is performed even deeper. gRPC is working on an EventEngine which should make it possible to use Asio sockets for gRPC. Whether it will be enough to fully use Asio for all gRPC network operations remains to be seen.