Tags: c++, boost, boost-asio, libuv

How does libuv compare to Boost/ASIO?


I'd be interested in aspects like scope, features, performance, and maturity.


Solution

  • Scope

    Boost.Asio is a C++ library that started with a focus on networking, but its asynchronous I/O capabilities have been extended to other resources. Additionally, with Boost.Asio being part of the Boost libraries, its scope is slightly narrowed to prevent duplication with other Boost libraries. For example, Boost.Asio will not provide a thread abstraction, as Boost.Thread already provides one.

    On the other hand, libuv is a C library designed to be the platform layer for Node.js. It provides an abstraction for IOCP on Windows, kqueue on macOS, and epoll on Linux. Additionally, it looks as though its scope has increased slightly to include abstractions and functionality, such as threads, threadpools, and inter-thread communication.

    At their core, each library provides an event loop and asynchronous I/O capabilities. They overlap on some of the basic features, such as timers, sockets, and asynchronous operations. libuv has a broader scope, providing additional functionality such as thread and synchronization abstractions, synchronous and asynchronous file system operations, process management, etc. In contrast, Boost.Asio's original networking focus shows, as it provides a richer set of network-related capabilities, such as ICMP, SSL, synchronous blocking and non-blocking operations, and higher-level operations for common tasks, including reading from a stream until a newline is received.


    Feature List

    Here is a brief side-by-side comparison of some of the major features. Since developers using Boost.Asio often have other Boost libraries available, I have opted to consider additional Boost libraries if they are either directly provided or trivial to implement.

                             libuv          Boost
    Event Loop:              yes            Asio
    Threadpool:              yes            Asio + Threads
    Threading:              
      Threads:               yes            Threads
      Synchronization:       yes            Threads
    File System Operations:
      Synchronous:           yes            FileSystem
      Asynchronous:          yes            Asio + Filesystem
    Timers:                  yes            Asio
    Scatter/Gather I/O[1]:   no             Asio
    Networking:
      ICMP:                  no             Asio
      DNS Resolution:        async-only     Asio
      SSL:                   no             Asio
      TCP:                   async-only     Asio
      UDP:                   async-only     Asio
    Signal:
      Handling:              yes            Asio
      Sending:               yes            no
    IPC:
      UNIX Domain Sockets:   yes            Asio
      Windows Named Pipe:    yes            Asio
    Process Management:
      Detaching:             yes            Process
      I/O Pipe:              yes            Process
      Spawning:              yes            Process
    System Queries:
      CPU:                   yes            no
      Network Interface:     yes            no
    Serial Ports:            no             yes
    TTY:                     yes            no
    Shared Library Loading:  yes            Extension[2]

    1. Scatter/Gather I/O is also known as vectored I/O.

    2. Boost.Extension was never submitted for review to Boost. Its author considers it to be complete.

    Event Loop

    While both libuv and Boost.Asio provide event loops, there are some subtle differences between the two:

    Threadpool

    Threading and Synchronization

    File System Operations

    Networking

    Signal

    IPC


    API Differences

    While the APIs are different based on the language alone, here are a few key differences:

    Operation and Handler Association

    Within Boost.Asio, there is a one-to-one mapping between an operation and a handler. For instance, each async_write operation will invoke the WriteHandler once. This is true for many of libuv's operations and handlers as well. However, libuv's uv_async_send supports a many-to-one mapping: multiple uv_async_send calls may result in the uv_async_cb being called only once.

    Call Chains vs. Watcher Loops

    When dealing with tasks, such as reading from a stream/UDP, handling signals, or waiting on timers, Boost.Asio's asynchronous call chains are a bit more explicit. With libuv, a watcher is created to designate interest in a particular event. A loop is then started for the watcher, with a callback provided. Upon receiving the event of interest, the callback is invoked. Boost.Asio, on the other hand, requires an operation to be issued each time the application is interested in handling the event.

    To help illustrate this difference, here is an asynchronous read loop with Boost.Asio, where the async_receive call will be issued multiple times:

    void start()
    {
      socket.async_receive( buffer, handle_read ); ----.
    }                                                  |
        .----------------------------------------------'
        |      .---------------------------------------.
        V      V                                       |
    void handle_read( ... )                            |
    {                                                  |
      std::cout << "got data" << std::endl;            |
      socket.async_receive( buffer, handle_read );   --'
    }    
    

    And here is the same example with libuv, where handle_read is invoked each time the watcher observes that the socket has data:

    uv_read_start( socket, alloc_buffer, handle_read ); --.
                                                          |
        .-------------------------------------------------'
        |
        V
    void handle_read( ... )
    {
      fprintf( stdout, "got data\n" );
    }
    

    Memory Allocation

    As a result of the asynchronous call chains in Boost.Asio and the watchers in libuv, memory allocation often occurs at different times. With watchers, libuv defers allocation until after it receives an event that requires memory to handle. The allocation is done through a user callback, invoked internally by libuv, and deallocation responsibility is left to the application. On the other hand, many of Boost.Asio's operations require that memory be allocated before issuing the asynchronous operation, as is the case with the buffer for async_read. Boost.Asio does provide null_buffers, which can be used to listen for an event, allowing applications to defer memory allocation until memory is needed, although this approach is now deprecated.

    This memory allocation difference also presents itself within the bind->listen->accept loop. With libuv, uv_listen starts a watcher loop that will invoke the user callback when a connection is ready to be accepted. This allows the application to defer the allocation of the client socket until a connection is being attempted. On the other hand, Boost.Asio's listen only changes the state of the acceptor. The async_accept operation listens for the connection event, and requires the peer socket to be allocated before it is invoked.


    Performance

    Unfortunately, I do not have any concrete benchmark numbers to compare libuv and Boost.Asio. However, I have observed similar performance using the libraries in real-time and near-real-time applications. If hard numbers are desired, libuv's benchmark test may serve as a starting point.

    Additionally, while profiling should be done to identify actual bottlenecks, be aware of memory allocations. For libuv, the memory allocation strategy is primarily limited to the allocator callback. Boost.Asio's API, on the other hand, does not provide an allocator callback, and instead pushes the allocation strategy to the application. That said, the handlers/callbacks in Boost.Asio may be copied, allocated, and deallocated, and Boost.Asio allows applications to provide custom memory allocation functions to implement an allocation strategy for handlers.


    Maturity

    Boost.Asio

    Asio's development dates back to at least OCT-2004, and it was accepted into Boost 1.35 on 22-MAR-2006 after undergoing a 20-day peer review. It also served as the reference implementation and API for the Networking Library Proposal for TR2. Boost.Asio has a fair amount of documentation, although its usefulness varies from user to user.

    The API also has a fairly consistent feel. Additionally, asynchronous operations are explicit in the operation's name. For example, accept is synchronous blocking and async_accept is asynchronous. The API provides free functions for common I/O tasks, for instance, reading from a stream until a \r\n is read. Attention has also been given to hiding some network-specific details, such as ip::address_v4::any() representing the "all interfaces" address 0.0.0.0.

    Finally, Boost 1.47+ provides handler tracking, which can prove to be useful when debugging, as well as C++11 support.

    libuv

    Based on their GitHub graphs, Node.js's development dates back to at least FEB-2009, and libuv's development dates to MAR-2011. The uvbook is a great place for a libuv introduction, and the project also provides API documentation.

    Overall, the API is fairly consistent and easy to use. One anomaly that may be a source of confusion is that uv_tcp_listen creates a watcher loop. This is different from other watchers, which generally have a uv_*_start and uv_*_stop pair of functions to control the life of the watcher loop. Also, some of the uv_fs_* operations take a decent number of arguments (up to 7). With synchronous or asynchronous behavior determined by the presence of a callback (the last argument), the visibility of the synchronous behavior can be diminished.

    Finally, a quick glance at the libuv commit history shows that the developers are very active.