Merkle trees (a.k.a. hash trees) are used for data synchronization in both Cassandra and Dynamo.
As with any hash function, there is some probability that two different inputs have the same hash value: there exist x and y where x != y but hash(x) == hash(y).
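To make that concrete, here is a toy demonstration (my own sketch, unrelated to either system): truncate a real hash like SHA-1 down to 8 bits and a collision shows up after only a handful of inputs, because any finite output space must eventually repeat.

```python
import hashlib

def tiny_hash(data: bytes) -> int:
    """SHA-1 truncated to its first byte: a deliberately weak 8-bit hash."""
    return hashlib.sha1(data).digest()[0]

seen = {}
for i in range(1000):
    x = str(i).encode()
    d = tiny_hash(x)
    if d in seen:  # two different inputs, same hash value
        print(f"collision: tiny_hash({seen[d]!r}) == tiny_hash({x!r}) == {d}")
        break
    seen[d] = x
```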
As the "big data" in NOSQL grows, the probability of encountering such data becomes higher.
This means that as data sets get bigger, it is almost certain that different nodes in the Merkle tree will yield the same parent hash.
When that happens, two machines in the cluster comparing their Merkle trees will get a false positive: they will conclude their data is consistent when it is not (sketched below). If no more data is written to that branch of the tree, the machines will remain unsynchronized forever.
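For illustration, here is a minimal sketch of the top-down comparison I mean (hypothetical code of my own, not Cassandra's actual repair implementation). The pruning step is the dangerous part: a matching hash causes the whole subtree to be skipped, which is exactly where a collision would silently hide a real difference.

```python
import hashlib

def sha1(data: bytes) -> bytes:
    return hashlib.sha1(data).digest()

class Node:
    """Merkle node: a leaf hashes its data, an inner node hashes its children."""
    def __init__(self, left=None, right=None, leaf=None):
        self.left, self.right = left, right
        self.hash = sha1(leaf) if leaf is not None else sha1(left.hash + right.hash)

def build(leaves):
    """Build a Merkle tree over a power-of-two number of leaf byte strings."""
    nodes = [Node(leaf=l) for l in leaves]
    while len(nodes) > 1:
        nodes = [Node(nodes[i], nodes[i + 1]) for i in range(0, len(nodes), 2)]
    return nodes[0]

def find_divergent(a, b, lo, hi, out):
    """Top-down comparison of two replicas' trees over leaf range [lo, hi)."""
    if a.hash == b.hash:
        return              # assumed consistent; a collision here would
                            # silently hide a real difference below
    if a.left is None:      # divergent leaf: this key range needs repair
        out.append((lo, hi))
        return
    mid = (lo + hi) // 2
    find_divergent(a.left, b.left, lo, mid, out)
    find_divergent(a.right, b.right, mid, hi, out)

# Two replicas that disagree on one key:
r1 = build([b"k0=v0", b"k1=v1", b"k2=v2", b"k3=v3"])
r2 = build([b"k0=v0", b"k1=STALE", b"k2=v2", b"k3=v3"])
ranges = []
find_divergent(r1, r2, 0, 4, ranges)
print(ranges)   # [(1, 2)] -- the leaf range the replicas would repair
```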
How is this handled?
Most systems don't handle this. Why? Because the probability of two specific inputs having the same hash value is very, very low. With a good hash function (which I assume you are using), it should approach 1/2^hash-bits. And since most hashes used for this purpose are at least 128 bits long, you get a collision probability of 1/2^128, which is about 2.9387359e-39 (0.{38 zeroes}29387359).
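You can sanity-check that arithmetic directly:

```python
from decimal import Decimal

# Probability that two specific inputs collide under an ideal 128-bit hash.
p = Decimal(1) / Decimal(2) ** 128
print(p)   # 2.938735877055718769921841343E-39
```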
A 160-bit hash (SHA-1, which most of these systems use) is good enough that even if you had as many objects in your database as there are grains of sand in the world, the probability of any collision at all would still be below 1/2, and far below it in practice. So I wouldn't worry about the case where there is a collision; the probability of it happening is simply way too low.
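If you want to check the grains-of-sand claim, the usual birthday bound approximates the probability of at least one collision among n random inputs under a b-bit hash as n^2 / 2^(b+1). A quick sketch (the grains-of-sand figure is a rough, commonly quoted estimate, not a precise number):

```python
# Birthday-bound estimate: P(at least one collision) ~ n**2 / 2**(b + 1).
n = 7.5e18   # rough estimate of the grains of sand on Earth's beaches
b = 160      # SHA-1 output size in bits
print(n ** 2 / 2 ** (b + 1))   # ~1.9e-11 -- nowhere near 1/2
```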