I'm trying to design a SwiftNIO server where multiple clients (like 2 or 3) can connect to the server, and when connected, they can all receive information from the server. To do this, I create a `ServerHandler` class which is shared and added to each connected client's pipeline.
```swift
import NIOCore
import NIOPosix

let group = MultiThreadedEventLoopGroup(numberOfThreads: 2)
let handler = ServerHandler()
let bootstrap = ServerBootstrap(group: group)
    .serverChannelOption(ChannelOptions.backlog, value: 2)
    .serverChannelOption(ChannelOptions.socketOption(.so_reuseaddr), value: 1)
    .childChannelInitializer { $0.pipeline.addHandler(handler) }
    .childChannelOption(ChannelOptions.socketOption(.so_reuseaddr), value: 1)
```
The above code is inspired by https://github.com/apple/swift-nio/blob/main/Sources/NIOChatServer/main.swift.
In the `ServerHandler` class, whenever a new client connects, that channel is added to an array. Then, when I'm ready to send data to all the clients, I just loop through the channels in the `ServerHandler` and call `writeAndFlush`.
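Roughly, the handler looks something like this (a simplified sketch; the `broadcast(_:)` name and the bookkeeping details are just illustrative):

```swift
import NIOCore

final class ServerHandler: ChannelInboundHandler {
    typealias InboundIn = ByteBuffer

    // All currently connected client channels.
    private var channels: [Channel] = []

    func channelActive(context: ChannelHandlerContext) {
        channels.append(context.channel)
    }

    func channelInactive(context: ChannelHandlerContext) {
        channels.removeAll { $0 === context.channel }
    }

    // Send the same buffer to every connected client.
    // NOTE: this assumes `broadcast` and the channel callbacks never race,
    // i.e. everything runs on the same EventLoop or is externally synchronised.
    func broadcast(_ buffer: ByteBuffer) {
        for channel in channels {
            channel.writeAndFlush(buffer, promise: nil)
        }
    }
}
```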
This seems to work pretty well, but there are a couple of things I'm concerned about:

1. Is sharing a single `ChannelHandler` across the pipelines of multiple connected clients like this actually recommended?
2. Why does `Channel.write` not seem to do anything? My client is unable to receive any data if I use `Channel.write` instead of `writeAndFlush` in the server.

I apologize if these questions are stupid, I just started with SwiftNIO and networking in general very recently.
If anybody could give me some insight, that would be awesome.
Your questions aren't stupid at all!
Yeah, sharing a `ChannelHandler` probably counts as "not recommended". But not because it doesn't work; it's more that it's unusual and probably not something other NIO programmers would expect. But if you're comfortable with it, it's fine. If you're high-performance enough that you worry about the exact number of allocations per `Channel`, then you may be able to save some by sharing handlers. But I really wouldn't optimise that prematurely.
If you didn't want to share handlers, then you could use multiple handlers that share a reference to some kind of coordinator object. Don't get me wrong, it's really still the same thing: one shared reference across multiple network connections. The only real difference is that testing may be a little easier and it would possibly feel more natural to other NIO programmers. (In any case, be careful to either make sure that all those `Channel`s are on the same `EventLoop`, or use external synchronisation, say with a lock, which might not be ideal from a performance point of view.)
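As a rough sketch of what I mean (the `Broadcaster` and `ClientHandler` names are made up, and this version assumes all child channels live on the same `EventLoop` so it can get away without a lock):

```swift
import NIOCore

// Hypothetical coordinator that per-connection handlers share a reference to.
// Assumes every child Channel (and every call into `add`) is on `eventLoop`.
final class Broadcaster {
    private let eventLoop: EventLoop
    private var channels: [Channel] = []

    init(eventLoop: EventLoop) {
        self.eventLoop = eventLoop
    }

    func add(_ channel: Channel) {
        self.eventLoop.preconditionInEventLoop()
        self.channels.append(channel)
        // Drop the channel again once it closes.
        channel.closeFuture.whenComplete { [weak self] _ in
            self?.channels.removeAll { $0 === channel }
        }
    }

    func broadcast(_ buffer: ByteBuffer) {
        // Hop to the event loop so this is safe to call from anywhere.
        self.eventLoop.execute {
            for channel in self.channels {
                channel.writeAndFlush(buffer, promise: nil)
            }
        }
    }
}

// One handler instance per connection, all pointing at the same Broadcaster.
final class ClientHandler: ChannelInboundHandler {
    typealias InboundIn = ByteBuffer

    private let broadcaster: Broadcaster

    init(broadcaster: Broadcaster) {
        self.broadcaster = broadcaster
    }

    func channelActive(context: ChannelHandlerContext) {
        self.broadcaster.add(context.channel)
    }
}
```

To actually get every child channel onto the same loop you could, for example, use a `MultiThreadedEventLoopGroup(numberOfThreads: 1)`, or pass a single `EventLoop` as the child group of your `ServerBootstrap`.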
`write` just enqueues some data to be written. `flush` makes SwiftNIO attempt to send all the previously written data. `writeAndFlush` simply calls `write` and then `flush`.
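In other words, conceptually (just an illustration of the relationship, not SwiftNIO's actual implementation):

```swift
import NIOCore

// Conceptually what writeAndFlush does for you.
func conceptualWriteAndFlush(_ channel: Channel, _ buffer: ByteBuffer) {
    channel.write(buffer, promise: nil)  // enqueue the data
    channel.flush()                      // ask NIO to try to send everything enqueued
}
```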
Why does NIO distinguish between `write` and `flush` at all? In high-performance networking applications, the biggest overhead might be the syscall overhead. And to send data over TCP, SwiftNIO has to do a syscall (`write`, `writev`, `send`, ...).
Any SwiftNIO program will work if you just ignore `write` and `flush` and always use `writeAndFlush`. But, if the network is keeping up, this will cost you one syscall per `writeAndFlush` call. In many cases however, a library/app that's using SwiftNIO already knows that it wants to enqueue multiple bits of data to be sent over the network. And in that case doing, say, three `writeAndFlush` calls in a row would be wasteful. It would be much better to accumulate the three bits of data and then send them all in one syscall using a "vector write" (e.g. the `writev` syscall). And that's exactly what SwiftNIO would do if you did say `write`, `write`, `write`, `flush`. So the three writes will all be sent using one `writev` system call. SwiftNIO will simply get the three pointers to the bits of data and hand them to the kernel, which then attempts to send them over the network.
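For example (the function and parameter names here are just illustrative, the point is the call pattern):

```swift
import NIOCore

// Enqueue three pieces of data, then flush once. If the socket is writable,
// SwiftNIO can hand all three buffers to the kernel in a single vectored write.
func sendResponseParts(on channel: Channel,
                       header: ByteBuffer,
                       body: ByteBuffer,
                       trailer: ByteBuffer) {
    channel.write(header, promise: nil)
    channel.write(body, promise: nil)
    channel.write(trailer, promise: nil)
    channel.flush()
}
```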
You can take this even a little further. Let's assume you're a high-performance server and you want to respond to a flood of incoming requests. You'll get your requests from the client over `channelRead`. If you're able to reply synchronously, you can just `write` the responses (which will enqueue them). And once you get `channelReadComplete` (which marks the end of a "read burst"), you can `flush`. That would allow you to respond to as many requests as you can get in a single read burst using just one `writev` syscall. This can be quite an important optimisation in certain scenarios.
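A minimal sketch of that pattern, with a simple echo standing in for whatever your server actually computes:

```swift
import NIOCore

final class BurstyEchoHandler: ChannelInboundHandler {
    typealias InboundIn = ByteBuffer
    typealias OutboundOut = ByteBuffer

    func channelRead(context: ChannelHandlerContext, data: NIOAny) {
        let request = self.unwrapInboundIn(data)
        // Reply synchronously: enqueue the response, but don't hit the socket yet.
        context.write(self.wrapOutboundOut(request), promise: nil)
    }

    func channelReadComplete(context: ChannelHandlerContext) {
        // End of the read burst: send everything enqueued above, ideally in
        // one (vectored) syscall.
        context.flush()
    }
}
```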