I read an HTTP request using boost::beast like this:
// ...
auto buffer = std::make_shared<boost::beast::flat_buffer>();
auto request = std::make_shared<boost::beast::http::request<boost::beast::http::string_body>>();
boost::beast::http::async_read(*socket, *buffer, *request,
    [socket, buffer, request](const boost::system::error_code& ec, std::size_t bytes_transferred)
    {
        // time to cache request
    });
where socket is a std::shared_ptr<ip::tcp::socket>. In the http::async_read callback I need to cache the http::request object for later use in a different method. My question is: should I also cache buffer and extend its lifetime for as long as the http::request exists? In other words, can the http::request be used after buffer is destroyed?
Good question. I guess the confusion stems from the interfaces all taking and returning non-owning representations (beast::string_view), which raises the question: who owns that data?
http::message<> owns its fields independently of the source buffer. In fact, the parser discards the consumed parts of the buffer during the read.
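To see that in action, here is a minimal sketch (my own illustration, not the asker's code; it uses a synchronous read and a hypothetical helper name for brevity) where the buffer is deliberately destroyed before the request is inspected:
#include <boost/asio/ip/tcp.hpp>
#include <boost/beast/core.hpp>
#include <boost/beast/http.hpp>
#include <iostream>

namespace http = boost::beast::http;

// Hypothetical helper: reads one request, destroys the buffer, then uses the
// message. Everything accessed below is owned by `req`, not by the buffer.
void read_then_drop_buffer(boost::asio::ip::tcp::socket& sock)
{
    http::request<http::string_body> req;
    {
        boost::beast::flat_buffer buf;   // scoped on purpose
        http::read(sock, buf, req);      // synchronous read for brevity
    }                                    // buffer destroyed here

    std::cout << req.method_string() << ' ' << req.target() << '\n';
    std::cout << req[http::field::host] << '\n';  // header copy owned by req
    std::cout << req.body() << '\n';              // std::string owned by req
}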
Looking at the implementation of e.g. basic_fields, the default Fields model that serves as a base class for all standard beast::http::message<> instantiations:
template<class Allocator>
auto
basic_fields<Allocator>::
try_create_new_element(
field name,
string_view sname,
string_view value,
error_code& ec) -> element*
{
if(sname.size() > max_name_size)
{
BOOST_BEAST_ASSIGN_EC(ec, error::header_field_name_too_large);
return nullptr;
}
if(value.size() > max_value_size)
{
BOOST_BEAST_ASSIGN_EC(ec, error::header_field_value_too_large);
return nullptr;
}
value = detail::trim(value);
std::uint16_t const off =
static_cast<off_t>(sname.size() + 2);
std::uint16_t const len =
static_cast<off_t>(value.size());
auto a = rebind_type{this->get()};
auto const p = alloc_traits::allocate(a,
(sizeof(element) + off + len + 2 + sizeof(align_type) - 1) /
sizeof(align_type));
return ::new(p) element(name, sname, value);
}
Note how the placement-new at the end constructs the element from copies of the name and value, in storage allocated by the fields container itself, so the header data does not refer back to the read buffer.
For the body it depends on the choice of Body type. Since you're using string_body, the model again owns its storage, so the request is fine to use until the end of the message object's lifetime.
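A quick way to convince yourself of that (my addition for illustration): string_body keeps the body in an owned std::string inside the message, whereas a non-owning body such as buffer_body would require you to keep the underlying storage alive yourself.
#include <boost/beast/http/string_body.hpp>
#include <string>
#include <type_traits>

// string_body stores its data in an owned std::string inside the message,
// so the body does not reference the read buffer.
static_assert(std::is_same<
    boost::beast::http::string_body::value_type,
    std::string>::value,
    "string_body owns its data as a std::string");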
You still need to keep the read buffer around if you are going to receive more data on the same stream (e.g. socket). Because of the nature of TCP, it is very possible that data arrives in packets that are not message-aligned, and the last read may already have received part (or all) of a subsequent request.
For this reason, it's customary not to allocate buffer dynamically, but to make it a member of the same object that owns the stream.
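A sketch of that layout (illustrative names, not code from the question; it assumes the usual Beast/Asio setup):
#include <boost/asio/ip/tcp.hpp>
#include <boost/beast/core.hpp>
#include <boost/beast/http.hpp>
#include <memory>

// Illustrative session type: the read buffer lives next to the socket, so any
// leftover bytes from one read are handed to the next read on the same stream.
class session : public std::enable_shared_from_this<session>
{
    boost::asio::ip::tcp::socket socket_;
    boost::beast::flat_buffer    buffer_;   // reused across requests

public:
    explicit session(boost::asio::ip::tcp::socket socket)
        : socket_(std::move(socket)) {}

    void do_read()
    {
        // One request object per read; the buffer persists with the session.
        auto req = std::make_shared<
            boost::beast::http::request<boost::beast::http::string_body>>();

        boost::beast::http::async_read(socket_, buffer_, *req,
            [self = shared_from_this(), req]
            (boost::system::error_code ec, std::size_t)
            {
                if (ec)
                    return;
                // Cache or hand off *req as needed; buffer_ stays alive with
                // the session and is ready for the next do_read().
            });
    }
};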