c++ boost boost-asio boost-beast

boost-beast specify source endpoint to bind to


I'm using boost::beast for TCP connections, with an async flow. I'm wondering what the most appropriate way is to specify the source IP/port to bind to.

I illustrate one attempt below:

        template <typename Stream, typename Endpoints, typename Handler>
        static void connect(Stream& stream, const Endpoints& endpoints, Handler handler) {
            auto& tcpStream = beast::get_lowest_layer(stream);
            tcpStream.expires_after(std::chrono::seconds(10));

            // one attempt.
            tcpStream.socket().open(boost::asio::ip::tcp::v4());
            tcpStream.socket().bind( boost::asio::ip::tcp::endpoint(boost::asio::ip::address::from_string("127.0.0.1"), 0));

            tcpStream.async_connect(
                endpoints,
                [&stream, handler](const auto ec, const auto&){ onConnect(stream, ec, handler); }
            );
        }

This appears to work fine for a single endpoint (though it is perhaps non-idiomatic?). However, with an endpoint sequence as in https://www.boost.org/doc/libs/develop/libs/beast/doc/html/beast/ref/boost__beast__basic_stream/async_connect/overload2.html, presumably the socket gets opened and closed repeatedly on failure, and then this wouldn't behave correctly.

Is there a more idiomatic approach? Thank you.


Solution

  • but presumably the socket gets opened and closed repeatedly on failure

    I was skeptical. The documentation specifically states that the underlying socket is opened automatically if needed, and that an automatically opened socket is not returned to the closed state upon failure.

    It seems fair to expect that if the socket was already open, it will also not be returned to the closed state.

    However, you're right to smell the opportunity for a problem here, so I went to check.

    The Surprise

    Indeed, the bind doesn't appear to hold, not even when there's only a single endpoint, so long as the endpoint-sequence overload is used. Here's a comprehensive demonstration:

    #include <boost/asio.hpp>
    #include <boost/beast/core/tcp_stream.hpp>
    #include <iostream>
    namespace asio = boost::asio;
    using asio::ip::tcp;
    using error_code = boost::system::error_code;
    
    auto fake_dns_result() {
        tcp::endpoint eps[]{
            {{}, 7878}, // not available
            {{}, 6767}, // running netcat
        };
        return tcp::resolver::results_type::create( //
            std::begin(eps), std::end(eps), "localhost", "dummysvc");
    }
    
    void test(tcp::endpoint local_ep) {
        asio::io_context ioc;
    
        auto endpoints = fake_dns_result();
       
        boost::beast::tcp_stream s(ioc);
    
        s.socket().open(tcp::v4());
        s.socket().bind(local_ep);
    
        auto log = [&s, local_ep](error_code ec, tcp::endpoint const& next) {
            std::cout << ec.message() << " next:" << next << " "
                      << (s.socket().is_open() ? "open" : "closed")
                      << " bound:" << s.socket().local_endpoint()
                      << std::endl;
            return true;
        };
    
        s.async_connect(endpoints, log, [&s](auto ec, const auto& /*ep*/) {
            std::cout << " --> Final " << ec.message() << " local "
                      << s.socket().local_endpoint() << " to "
                      << s.socket().remote_endpoint() << "\n\n";
        });
    
        ioc.run();
    }
    
    int main() {
        using A = asio::ip::address_v4;
        test({});                          // 0.0.0.0:0
        test({A::loopback(), 0});          // 127.0.0.1:0
        test({A{{127, 0, 0, 42}}, 0});     // 127.0.0.42:0
        test({A{{192, 168, 50, 225}}, 0}); // 192.168.50.225:0
    }
    

    See it Live On Coliru:

    Success next:0.0.0.0:7878 open bound:0.0.0.0:36700
    Connection refused next:0.0.0.0:6767 open bound:127.0.0.1:35126
     --> Final Success local 127.0.0.1:59672 to 127.0.0.1:6767
    
    Success next:0.0.0.0:7878 open bound:127.0.0.1:51845
    Connection refused next:0.0.0.0:6767 open bound:127.0.0.1:35130
     --> Final Success local 127.0.0.1:59676 to 127.0.0.1:6767
    
    Success next:0.0.0.0:7878 open bound:127.0.0.42:37227
    Connection refused next:0.0.0.0:6767 open bound:127.0.0.1:35134
     --> Final Success local 127.0.0.1:59680 to 127.0.0.1:6767
    
    Success next:0.0.0.0:7878 open bound:173.203.57.63:49499
    Connection refused next:0.0.0.0:6767 open bound:127.0.0.1:35138
     --> Final Success local 127.0.0.1:59684 to 127.0.0.1:6767
    

    As you suggested, the bound endpoint is not honoured. Even moving the bind into the connection condition doesn't help:

    s.socket().open(tcp::v4());
    
    auto log = [&s, local_ep](error_code ec, tcp::endpoint const& next) {
        std::cout << ec.message() << " next:" << next << " "
                  << (s.socket().is_open() ? "open" : "closed")
                  << " bound:" << s.socket().local_endpoint()
                  << std::endl;
        s.socket().bind(local_ep);
        return true;
    };
    

    Still prints the same output:

    Success next:0.0.0.0:7878 open bound:173.203.57.63:49499
    Connection refused next:0.0.0.0:6767 open bound:127.0.0.1:35138
     --> Final Success local 127.0.0.1:59684 to 127.0.0.1:6767
    

    Stripping The Beast

    Reviewing the implementation, it looks like asio::[async_]connect should have the exact same behaviour, because Beast only adds the timeout logic. Let's reduce: Coliru.
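
    For reference, the reduction might look roughly like this; it is only a sketch (reusing fake_dns_result and the dummy ports from the demo above), not necessarily the exact code behind the Coliru link:

    #include <boost/asio.hpp>
    #include <iostream>
    namespace asio = boost::asio;
    using asio::ip::tcp;
    using error_code = boost::system::error_code;
    
    // same fake resolver results as in the Beast demo above
    auto fake_dns_result() {
        tcp::endpoint eps[]{
            {{}, 7878}, // not available
            {{}, 6767}, // running netcat
        };
        return tcp::resolver::results_type::create( //
            std::begin(eps), std::end(eps), "localhost", "dummysvc");
    }
    
    void test(tcp::endpoint local_ep) {
        asio::io_context ioc;
    
        tcp::socket s(ioc); // plain Asio socket, no beast::tcp_stream
    
        s.open(tcp::v4());
        s.bind(local_ep);
    
        auto log = [&s](error_code ec, tcp::endpoint const& next) {
            std::cout << ec.message() << " next:" << next << " "
                      << (s.is_open() ? "open" : "closed")
                      << " bound:" << s.local_endpoint() << std::endl;
            return true; // keep trying the next endpoint
        };
    
        // range-connect with a connect condition, directly on the Asio socket
        asio::async_connect(s, fake_dns_result(), log,
            [&s](error_code ec, tcp::endpoint const& /*ep*/) {
                // assumes the final attempt succeeded (netcat on 6767)
                std::cout << " --> Final " << ec.message() << " local "
                          << s.local_endpoint() << " to " << s.remote_endpoint()
                          << "\n\n";
            });
    
        ioc.run();
    }
    
    int main() {
        using A = asio::ip::address_v4;
        test({});                 // 0.0.0.0:0
        test({A::loopback(), 0}); // 127.0.0.1:0
    }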

    As expected, same output. On to reviewing the Asio implementation.

    Stripping The Async

    Just to simplify the review, let's also check the synchronous connect behaves the same:

    error_code ec;
    /*auto ep =*/ asio::connect(s, endpoints, log, ec);
    
    std::cout << " --> Final " << ec.message() << " local "
              << s.local_endpoint() << " to " << s.remote_endpoint() << "\n\n";
    

    Still the same: Coliru
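
    For completeness, a self-contained sketch of that synchronous check could look as follows (again only an approximation, using a plain tcp::socket and the same dummy ports; not necessarily the exact code behind the link):

    #include <boost/asio.hpp>
    #include <iostream>
    namespace asio = boost::asio;
    using asio::ip::tcp;
    using error_code = boost::system::error_code;
    
    int main() {
        asio::io_context ioc;
        tcp::socket s(ioc);
    
        s.open(tcp::v4());
        s.bind({asio::ip::address_v4::loopback(), 0}); // bind before connecting
    
        // same dummy endpoints as before: 7878 closed, 6767 running netcat
        tcp::endpoint eps[]{{{}, 7878}, {{}, 6767}};
        auto endpoints = tcp::resolver::results_type::create(
            std::begin(eps), std::end(eps), "localhost", "dummysvc");
    
        auto log = [&s](error_code ec, tcp::endpoint const& next) {
            std::cout << ec.message() << " next:" << next << " "
                      << (s.is_open() ? "open" : "closed")
                      << " bound:" << s.local_endpoint() << std::endl;
            return true;
        };
    
        error_code ec;
        /*auto ep =*/ asio::connect(s, endpoints, log, ec); // synchronous range-connect
    
        // assumes the final attempt succeeded (netcat on 6767)
        std::cout << " --> Final " << ec.message() << " local "
                  << s.local_endpoint() << " to " << s.remote_endpoint() << "\n\n";
    }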

    The Culprit

    Indeed, the implementation of the range-connect in Asio looks like this:

      for (Iterator iter = begin; iter != end; ++iter)
      {
        iter = (detail::call_connect_condition(connect_condition, ec, iter, end));
        if (iter != end)
        {
          s.close(ec);
          s.connect(*iter, ec);
          if (!ec)
            return iter;
        }
        else
          break;
      }
    

    It follows that it will be impossible to get the required behavior on range-connect, short of