If we have multiple buffers, we can call `iter()` on each buffer, then `chain()` the iterators, and read all these buffers as if they were a single continuous collection.
But can we do the opposite? That is, allocate some buffers with capacity, put them as targets into a "write iterator", and append elements to that iterator just as we would with a single collection, so that each buffer fills up to its capacity, and once all the buffers are full the iterator returns an error.
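For concreteness, a minimal sketch: the read side that already works today, and (in comments) the kind of write-side API I'm imagining. The `write_chain`/`push` names there are made up purely to illustrate the question.

```rust
fn main() {
    // Reading: chaining iterators over several buffers already works.
    let a = [1, 2, 3];
    let b = [4, 5];
    let all: Vec<i32> = a.iter().chain(b.iter()).copied().collect();
    assert_eq!(all, vec![1, 2, 3, 4, 5]);

    // Writing: roughly what I'm after (hypothetical API, does not exist):
    // let mut buf1 = Vec::with_capacity(3);
    // let mut buf2 = Vec::with_capacity(2);
    // let mut sink = write_chain(&mut buf1, &mut buf2);
    // sink.push(10)?; // fills buf1 first, then buf2,
    //                 // and returns Err once both are at capacity.
}
```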
Iterators can yield mutable references (commonly via `.iter_mut()`), so writing into them is just iteration and assignment, but you'd have to handle the boundary conditions yourself (i.e. the input being shorter, or the buffers being shorter). It doesn't seem idiomatic.
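For example, a minimal sketch of that approach, chaining `iter_mut()` over two buffers and zipping with the input; `zip` silently stops at the shorter side, so detecting leftover input is entirely up to you:

```rust
fn main() {
    let mut buf1 = [0u8; 3];
    let mut buf2 = [0u8; 2];
    let input = [1u8, 2, 3, 4, 5, 6]; // one element too many

    let mut input_iter = input.iter();
    // Write through chained mutable references.
    for (slot, value) in buf1.iter_mut().chain(buf2.iter_mut()).zip(&mut input_iter) {
        *slot = *value;
    }

    // Boundary handling is manual: check whether any input was left over.
    if input_iter.next().is_some() {
        eprintln!("ran out of buffer space");
    }
    assert_eq!(buf1, [1, 2, 3]);
    assert_eq!(buf2, [4, 5]);
}
```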
There is the `Extend` trait, which is designed to take an iterable and store the results in a collection. However, there are no implementations for space-limited collections, nor for "chaining". You could make a struct wrapping multiple buffers and implement `Extend` for it yourself. Note, though, that `Extend` is not designed to be fallible, so hitting the capacity would likely mean the iterator was not exhausted, which you'd have to check for separately.
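A rough illustration of that idea (the `ChainedBufs` type is made up, and the capacity limits are tracked explicitly rather than via `Vec::capacity`, which is only guaranteed to be "at least" what was requested): the wrapper fills each buffer in turn, and because `Extend` is infallible, anything that doesn't fit is silently dropped.

```rust
// Hypothetical wrapper: fills `first` up to `cap_first` elements, then `second`.
struct ChainedBufs {
    first: Vec<u8>,
    cap_first: usize,
    second: Vec<u8>,
    cap_second: usize,
}

impl Extend<u8> for ChainedBufs {
    fn extend<T: IntoIterator<Item = u8>>(&mut self, iter: T) {
        for byte in iter {
            if self.first.len() < self.cap_first {
                self.first.push(byte);
            } else if self.second.len() < self.cap_second {
                self.second.push(byte);
            } else {
                // `Extend` is infallible, so anything that doesn't fit is
                // simply dropped; the caller can't be told about it here.
                break;
            }
        }
    }
}

fn main() {
    let mut bufs = ChainedBufs {
        first: Vec::with_capacity(3),
        cap_first: 3,
        second: Vec::with_capacity(2),
        cap_second: 2,
    };
    bufs.extend(1u8..=6); // the 6 does not fit and is silently lost
    assert_eq!(bufs.first, vec![1, 2, 3]);
    assert_eq!(bufs.second, vec![4, 5]);
}
```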
In the same vein as `Extend`, but for writing bytes, there is the `Write` trait. However, I similarly don't see an existing implementation that would work across multiple backing buffers; again, you'd have to implement that yourself. If your buffers hold bytes, this is likely what you want.
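If the buffers do hold bytes, a sketch of such a custom `io::Write` implementation could look like this (the `ChainedWriter` type and its choice of error are just one possible design, not an existing API):

```rust
use std::io::{self, Write};

// Hypothetical writer that fills the given buffers one after another, in order.
struct ChainedWriter<'a> {
    buffers: Vec<&'a mut [u8]>,
    current: usize, // index of the buffer currently being filled
    offset: usize,  // write position within that buffer
}

impl<'a> Write for ChainedWriter<'a> {
    fn write(&mut self, data: &[u8]) -> io::Result<usize> {
        let mut written = 0;
        while written < data.len() && self.current < self.buffers.len() {
            let buf = &mut self.buffers[self.current];
            let space = buf.len() - self.offset;
            if space == 0 {
                // Current buffer is full: move on to the next one.
                self.current += 1;
                self.offset = 0;
                continue;
            }
            let n = space.min(data.len() - written);
            buf[self.offset..self.offset + n].copy_from_slice(&data[written..written + n]);
            self.offset += n;
            written += n;
        }
        if written == 0 && !data.is_empty() {
            // Every buffer is full: surface that to the caller.
            return Err(io::Error::new(io::ErrorKind::WriteZero, "all buffers are full"));
        }
        Ok(written)
    }

    fn flush(&mut self) -> io::Result<()> {
        Ok(())
    }
}

fn main() -> io::Result<()> {
    let mut a = [0u8; 3];
    let mut b = [0u8; 2];
    {
        let mut writer = ChainedWriter {
            buffers: vec![&mut a[..], &mut b[..]],
            current: 0,
            offset: 0,
        };
        writer.write_all(&[1, 2, 3, 4, 5])?;
        assert!(writer.write_all(&[6]).is_err()); // no space left
    }
    assert_eq!(a, [1, 2, 3]);
    assert_eq!(b, [4, 5]);
    Ok(())
}
```

With something like this, `write_all` naturally spills from one buffer into the next, and an error surfaces once every buffer is full.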
In other contexts, a fallible receiver of values may be called a "sink", but I don't see that pattern used very commonly, and it's not something available in the standard library. There is the `Sink` trait from the futures crate, but that is for async operations.
Maybe not the most satisfying answer, but there's stuff out there to build from.