We have a network engine which currently supports both unicast and broadcast - we have been using unicast up to now, but I'm exploring the broadcast capability in an effort to reduce network traffic.
My tests seem to suggest that unicasting is far more reliable than broadcasting - this isn't something I was expecting to see (I don't have a strong networking background).
For example, in a simple test scenario where packets are sent from a host to a client across a gigabit Ethernet LAN, I can unicast 15,000 1000-byte packets per second with no packet loss. If I switch to broadcasting, I can only manage about 1,200 1000-byte packets per second before encountering roughly 20% packet loss.
Is this expected? I haven't been able to find a clear answer to this specific question while reading through info on the differences between unicasting and broadcasting.
If this is indeed expected behaviour, would multicasting be more akin to unicasting than to broadcasting in this respect (i.e. reliability)?
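For context, this is roughly what the three send modes look like at the socket level (a minimal Python sketch; the addresses, port and payload below are placeholders for illustration, not the engine's actual configuration):

```python
import socket

PORT = 9999                     # hypothetical test port
payload = b"x" * 1000           # 1000-byte payload, matching the test scenario

# Unicast: send to one specific host.
uni = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
uni.sendto(payload, ("192.168.1.20", PORT))

# Broadcast: enable SO_BROADCAST and send to the broadcast address;
# every node on the subnet sees the frame.
bcast = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
bcast.setsockopt(socket.SOL_SOCKET, socket.SO_BROADCAST, 1)
bcast.sendto(payload, ("255.255.255.255", PORT))

# Multicast: send to a group address; only hosts that have joined the group
# (IP_ADD_MEMBERSHIP on the receiver) get a copy delivered to the application.
mcast = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
mcast.setsockopt(socket.IPPROTO_IP, socket.IP_MULTICAST_TTL, 1)
mcast.sendto(payload, ("239.0.0.1", PORT))
```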
Test Details
I've whipped up an app that lets me adjust various settings (send rate, packet size, number of packets sent per 'iteration', etc.). I run several instances of this app locally, plus more instances on other PCs on the network (I have also tested with just a single instance on my PC and one on another PC).
Each packet contains the name of the app instance it was sent from, an integer ID (incremented each time the app sends a packet) and a payload (typically 0, 500, 1000 or 1400 bytes).
When a packet is received, the ID field is inspected and compared against the next ID I expect from that app instance. An ID later than expected indicates packets have been lost or are arriving out of sequence; an ID earlier than expected indicates the packet itself has arrived out of sequence.
The send rate can be modified; by default I use a 20ms delay. The number of packets sent each time can be modified too, so I can, for example, specify that I want to send 50 packets every 20ms (each packet having a unique ID). A rough sketch of this is shown below.
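Roughly, the packet layout and receive-side check look like this (a Python sketch; the 16-byte name field, struct layout and helper names are assumptions for illustration, not the app's actual wire format):

```python
import struct

# Assumed header: 16-byte instance name + 32-bit packet ID, followed by the payload.
HEADER = struct.Struct("!16sI")

def build_packet(instance_name: str, packet_id: int, payload_size: int) -> bytes:
    name = instance_name.encode("ascii")[:16]
    return HEADER.pack(name, packet_id) + b"\0" * payload_size

def send_burst(sock, dest, instance_name, start_id, count, payload_size):
    # e.g. 50 packets per 20ms iteration, each with a unique, incrementing ID
    for i in range(count):
        sock.sendto(build_packet(instance_name, start_id + i, payload_size), dest)
    return start_id + count

expected = {}  # next ID we expect to receive from each app instance

def check_packet(data: bytes) -> str:
    name_raw, packet_id = HEADER.unpack_from(data)
    name = name_raw.rstrip(b"\0").decode("ascii")
    want = expected.get(name, packet_id)
    if packet_id > want:
        expected[name] = packet_id + 1
        return f"{name}: {packet_id - want} packet(s) lost or reordered"
    if packet_id < want:
        return f"{name}: packet {packet_id} arrived out of sequence"
    expected[name] = want + 1
    return f"{name}: packet {packet_id} in order"
```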
When you send a UDP broadcast, the packet must be duplicated and delivered to every node on the network, creating significant work for the network devices. If your LAN contains a large number of devices that were not previously being unicast to, a full broadcast can create a lot more network activity and work for those devices.
However, if your network is made up entirely of nodes that want to participate, I would expect the broadcast option to be more reliable, since the network device receives less data from the source and can run optimized procedures for sending broadcast packets to the same number of hosts that would otherwise have been unicast to individually.
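To make the "every node sees it" point concrete, here is a minimal receive-side sketch (Python; the port is a placeholder). Any host on the subnet that binds the port receives the broadcast datagrams; hosts that have not bound it still receive the frame on the wire and discard it in their network stack.

```python
import socket

rx = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
rx.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)  # allow quick restarts of the listener
rx.bind(("", 9999))   # bind on all interfaces; broadcasts to this port arrive here too

while True:
    data, sender = rx.recvfrom(2048)
    print(f"{len(data)} bytes from {sender}")
```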
When doing network testing it's important to use a controlled environment where you can understand the variables involved and see how different types of scaling impact your performance.