network-programming network-protocols raw-sockets

How is the network header represented at the bit level?


To understand how a header structure looks in its bit representation, let's take the ICMP header as an example, because ICMP has the fewest fields.

| TYPE | CODE | CHECKSUM |
|        CONTENT         |


Just as an example, let's assign some values to the fields:

Type size = 1 byte
Code size = 1 byte
Checksum size = 2 bytes
Content size = 4 bytes

Would the header, at the bit level, look like this?

00010100 00001010 1111000000001111 00000000000000000000000000000000

As you can see, I formatted the header into bits following the header field sequence, where Type is the first field, Code is the second, Checksum is the third, and the content is the fourth.

00010100 = 20, which is the Type field
00001010 = 10, which is the Code field
1111000000001111 = the Checksum field
00000000000000000000000000000000 = the payload, i.e. the content
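To make this concrete, here is a rough C sketch of how I imagine those example values being packed into 8 bytes: fields written in the order listed above, with multi-byte fields placed in big-endian (network) order. The values are just the placeholders from this question, not a real ICMP message, and byte for byte it prints the same bit string I wrote above:

```c
#include <stdio.h>
#include <stdint.h>
#include <string.h>
#include <arpa/inet.h>  /* htons(), htonl() */

int main(void)
{
    /* Example values from the question (not a meaningful ICMP message). */
    uint8_t  type     = 20;      /* 00010100 */
    uint8_t  code     = 10;      /* 00001010 */
    uint16_t checksum = 0xF00F;  /* 1111000000001111 */
    uint32_t content  = 0;       /* 4 bytes of zeros */

    uint8_t packet[8];

    /* Fields go into the buffer in the order the header format defines them.
       Multi-byte fields are converted to network (big-endian) order first. */
    packet[0] = type;
    packet[1] = code;

    uint16_t csum_be = htons(checksum);
    memcpy(&packet[2], &csum_be, sizeof csum_be);

    uint32_t content_be = htonl(content);
    memcpy(&packet[4], &content_be, sizeof content_be);

    /* Print each byte in binary so it can be compared with the bit string above. */
    for (size_t i = 0; i < sizeof packet; i++) {
        for (int b = 7; b >= 0; b--)
            putchar((packet[i] >> b) & 1 ? '1' : '0');
        putchar(' ');
    }
    putchar('\n');
    return 0;
}
```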

But in real-world traffic, do network headers, when laid out as bits, actually follow the standard field sequence as I have shown in this example?

And do I also have to pay attention to the byte ordering used by my system in order to read the header correctly?


Solution

  • I'm concerned that your line of questioning won't work well on Stack Overflow, but here's what I can answer that might be helpful.

    1. Are the standards obeyed in the "real world"?

    Generally speaking, yes, the standards are actually followed on the internet (IPv4, IPv6, the TCP/IP family, etc.). That's why everything interoperates most of the time. (I'm not trying to sound snarky; there was a previous era where standards were not followed, sometimes even by the vendor that wrote them. You might need to find someone with 30+ years of networking experience, but they won't like it if you say "packet sniffer" and "IPX | AppleTalk | SMB | LMHost" in the same sentence.)

    So yeah, if you read a low-level packet trace of ICMP, it should show you the decoded fields, and if you switch the display to the byte level (I don't know whether every analyzer offers a bit-wise view), you'd see the contents match the spec. And the multi-byte fields will be in big-endian order.

    2. Byte ordering:

    Your API should clarify the usage, but the relevant, modern RFCs use big-endian (network byte order). Unless you are writing something that operates at the lowest levels, most APIs handle this for you.
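    For example, in C the socket API provides htons()/ntohs() and htonl()/ntohl() so you don't have to shuffle bytes by hand. A minimal sketch of pulling the 2-byte checksum out of a received header buffer (the buffer here is hand-filled with the question's example values rather than coming from a real recvfrom() call):

    ```c
    #include <stdio.h>
    #include <stdint.h>
    #include <string.h>
    #include <arpa/inet.h>  /* ntohs() */

    int main(void)
    {
        /* Pretend this buffer was filled by recvfrom() on a raw socket:
           type=20, code=10, checksum=0xF00F on the wire (big-endian). */
        uint8_t header[8] = { 0x14, 0x0A, 0xF0, 0x0F, 0, 0, 0, 0 };

        uint8_t type = header[0];  /* single bytes have no ordering issue */
        uint8_t code = header[1];

        uint16_t checksum_wire;
        memcpy(&checksum_wire, &header[2], sizeof checksum_wire);

        /* ntohs() converts network (big-endian) order to whatever the host uses. */
        uint16_t checksum = ntohs(checksum_wire);

        printf("type=%u code=%u checksum=0x%04X\n",
               (unsigned)type, (unsigned)code, (unsigned)checksum);
        return 0;
    }
    ```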

    Long ago, some architectures (Intel 32-bit?) were internally little-endian and had thin network/OS stacks that expected the programmer to convert little-endian to big-endian. This was in an era when that was non-trivial and/or expensive, so some early applications were written without this consideration. What resulted was pools of PCs that interoperated with each other, and everything else puzzled by what was going on.