While I was interning at a video conferencing company, there was much talk of packet loss. Wikipedia states the following:
Packet loss is typically caused by network congestion.
I understand that because video requires such massive amounts of data to be sent over the wire, packets are bound to be lost.
What I don't understand is why packet loss is not experienced in other cases such as HTTP requests and AJAX calls. If packet loss is truly due to congested networks, why have I never experienced it with my own HTTP requests?
Are HTTP connections invulnerable to packet loss, or are the requests I am sending simply too small to be affected? If HTTP is immune to packet loss, why is that the case?
Packet loss due to congestion can occur with any protocol that runs on top of IP. If there is congestion, routers along the path between the two machines can drop IP datagrams, because IP is a best-effort protocol.
The difference is that video is usually transmitted over UDP, while HTTP is transmitted over TCP. IP is a layer 3 protocol; TCP and UDP are two layer 4 protocols.
UDP is neither connection-oriented nor reliable. This means that if a datagram is dropped along the way, neither endpoint notices (unless a higher-layer protocol implements reliability on top of it). The datagram is simply lost, as in the sketch below.
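A minimal illustration, assuming a hypothetical destination address and port: `sendto()` hands the datagram to the network and returns immediately, so if a congested router drops it, nothing in this program will ever find out.

    import socket

    # Fire-and-forget UDP send: there is no acknowledgment and no
    # retransmission, so a dropped datagram goes unnoticed by the sender.
    sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    sock.sendto(b"video frame chunk", ("203.0.113.10", 5004))  # placeholder host/port
    sock.close()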
TCP is a connection-oriented, reliable protocol. Put simply, the node receiving a TCP segment sends an acknowledgment (ACK) for the data it has received. If a TCP segment is lost along the way, the receiver never sends an ACK, and the sender's retransmission timer eventually expires. On timeout, the sender retransmits the missing data. This is why the receiving node either gets the whole HTTP message or, in the extreme case, the application reports an error along the lines of "the connection is broken" (in other words, if there is a problem, both ends become aware of it).
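For contrast, here is a rough sketch of a plain TCP client making an HTTP request to example.com (any real host would do). The kernel's TCP stack handles ACKs and retransmission transparently, so the application either reads the complete response or gets an exception such as a timeout or connection reset; it never silently receives a response with bytes missing in the middle.

    import socket

    # TCP: the OS retransmits lost segments for us, so recv() either
    # delivers the full byte stream in order or the call fails with an error.
    sock = socket.create_connection(("example.com", 80), timeout=10)
    sock.sendall(b"GET / HTTP/1.1\r\nHost: example.com\r\nConnection: close\r\n\r\n")
    response = b""
    while chunk := sock.recv(4096):
        response += chunk
    sock.close()
    print(response[:200])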