p2p, dht, kademlia

Highly unbalanced Kademlia routing table


In the Kademlia paper, the last paragraph of section 2.4 states that in order to properly handle highly unbalanced trees...

Kademlia nodes keep all valid contacts in a subtree of size at least k nodes, even if this requires splitting buckets in which the node's own ID does not reside.

However, the previous section of the paper seems to state that if a k-bucket already has k elements, any further additions into that k-bucket require removing the stalest node (pinging it first to see if it's alive) or otherwise caching the addition until a slot becomes available in that k-bucket.

The paper seems to be contradicting itself with these two points.

Under what conditions should a k-bucket be split, and why? It does not seem practical to keep "all valid contacts" in the routing table, as the routing table would then get very large very quickly. The example talks about a tree that has many nodes starting with 001 and one node starting with 000. Would the node starting with 000 have to continually split its k-bucket for 001 in order to hold every valid node starting with 001? In a 160-bit address space, wouldn't that end up potentially storing 2^157 nodes in 000's routing table?

The wording in the quoted block is also very confusing...

"in a subtree" -- in which subtree of the routing table?

"of size atleast k nodes" -- what metric are we using to determine size of the subtree? nodes in this case refers to kademlia nodes or k-buckets or something else?


Solution

  • However, the previous section of the paper seems to state that if a k-bucket already has k elements, any further additions into that k-bucket require removing the stalest node (pinging it first to see if it's alive) or otherwise caching the addition until a slot becomes available in that k-bucket.

    That is how a bucket is maintained whenever there is a node contact to insert but the bucket is not eligible for splitting.
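
    As a rough illustration, here is a minimal Python sketch of that maintenance rule, assuming a simple Bucket type with an LRU-ordered contact list, a replacement cache and a ping() callback (these names are illustrative, not from the paper):

        from dataclasses import dataclass, field

        K = 20  # bucket capacity "k" from the paper

        @dataclass
        class Bucket:
            contacts: list = field(default_factory=list)           # ordered least- to most-recently seen
            replacement_cache: list = field(default_factory=list)  # candidates waiting for a free slot

        def maintain_full_bucket(bucket, new_contact, ping):
            """Insert into a full bucket that is NOT eligible for splitting."""
            stalest = bucket.contacts[0]                 # least-recently-seen contact
            if ping(stalest):
                # Still alive: keep it (move it to the most-recently-seen end) and
                # only remember the new contact as a replacement candidate.
                bucket.contacts.append(bucket.contacts.pop(0))
                bucket.replacement_cache.append(new_contact)
            else:
                # Unresponsive: evict it and admit the new contact in its place.
                bucket.contacts.pop(0)
                bucket.contacts.append(new_contact)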

    Under what conditions should a k-bucket be split and why?

    As a first approximation: Split a bucket whenever a new node cannot be inserted and the bucket's ID-space covers your node ID.

    This is necessary to maintain full awareness of your own neighborhood while having only vague awareness of remote portions of the keyspace, i.e. it provides locality.
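
    A minimal sketch of that first approximation, assuming each bucket tracks the ID range [lo, hi] it covers (the range fields and helper name are assumptions about the implementation, not something the paper prescribes):

        K = 20  # bucket capacity "k", as above

        def should_split_strict(bucket_contacts, bucket_lo, bucket_hi, own_id):
            """First approximation: split a full bucket only if the ID range
            it covers contains our own node ID (the "home" bucket)."""
            return len(bucket_contacts) >= K and bucket_lo <= own_id <= bucket_hi

        # Example: our own ID starts with 000, the full bucket covers the 001 prefix
        # of a 160-bit space, so under the strict rule it is never split.
        own_id = 0                                    # some ID with prefix 000...
        full_001_bucket = [object()] * K              # K dummy contacts
        print(should_split_strict(full_001_bucket, 1 << 157, (1 << 158) - 1, own_id))  # False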

    To cover the unbalanced-tree case - which may happen either if node IDs aren't (pseudo-)random or, at least in leaf buckets, due to statistical flukes when they are assigned at random - the approach has to be relaxed as follows:

    When

      • a new contact cannot be inserted because the bucket it belongs to is already full,
      • and that contact is closer to your own node ID than the k-th-closest node already in your routing table (i.e. it would be among your k closest known contacts),

    then split the bucket.

    In practice this has to be modified a little further: relaxed splitting should only be used for contacts learned from responses, while unsolicited requests should only trigger strict splitting; otherwise you could end up with a strangely distorted routing table when relaxed splitting happens early during startup, while the table isn't populated yet.
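
    A sketch of that relaxed check, with table_ids standing in for all node IDs currently in the routing table and from_response indicating whether the contact was learned from a response rather than an unsolicited request (both names are illustrative):

        K = 20  # bucket capacity "k", as above

        def should_split_relaxed(bucket_contacts, new_id, own_id, table_ids, from_response):
            """Relaxed rule: also split a full non-home bucket when the new contact
            would be among the k closest nodes to our own ID that we know of."""
            if not from_response:
                return False              # unsolicited requests only get strict splitting
            if len(bucket_contacts) < K:
                return False              # bucket not full, a plain insert suffices
            if len(table_ids) < K:
                return True               # fewer than k nodes known: everything is "close"
            by_distance = sorted(table_ids, key=lambda i: i ^ own_id)   # XOR metric
            kth_closest = by_distance[K - 1]
            return (new_id ^ own_id) < (kth_closest ^ own_id)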

    And of course this also needs to be taken into account when merging buckets, so that buckets created by relaxed splits don't get merged back into a single non-home bucket as long as there are no closer nodes in the routing table.
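
    One way to express that merge restriction (an illustrative interpretation, not wording from the paper): sibling buckets produced by a relaxed split may only be merged back once none of their contacts is still among the k closest nodes the routing table knows about.

        K = 20  # bucket capacity "k", as above

        def may_merge_relaxed_split(bucket_a_ids, bucket_b_ids, own_id, table_ids):
            """Allow merging two sibling buckets created by a relaxed split only when
            the routing table already holds k nodes closer to our own ID than any
            contact in either bucket."""
            by_distance = sorted(table_ids, key=lambda i: i ^ own_id)
            k_closest = set(by_distance[:K])
            return k_closest.isdisjoint(bucket_a_ids) and k_closest.isdisjoint(bucket_b_ids)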


    One of the improvements proposed in the S/Kademlia paper is using a sibling list instead, to keep track of a node's nearest neighbors, which may be easier to implement than non-local splitting.
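
    For comparison, a sibling list is essentially a separate, size-bounded list of the nodes closest to your own ID, maintained independently of the k-buckets. A minimal sketch (the capacity below is illustrative; S/Kademlia derives its own bound from its replication parameters):

        class SiblingList:
            """Tracks the node IDs closest to our own ID, independent of the k-buckets."""

            def __init__(self, own_id, capacity=100):      # capacity chosen for illustration
                self.own_id = own_id
                self.capacity = capacity
                self.ids = []

            def update(self, node_id):
                """Record a contact, keeping only the closest `capacity` IDs."""
                if node_id not in self.ids:
                    self.ids.append(node_id)
                    self.ids.sort(key=lambda i: i ^ self.own_id)   # XOR distance to us
                    del self.ids[self.capacity:]                   # drop the farthest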