I saw this exception from Colt's OpenLongObjectHashMap:
java.lang.ArithmeticException: divide by zero
at cern.colt.map.OpenLongObjectHashMap.indexOfKey(Unknown Source)
at cern.colt.map.OpenLongObjectHashMap.get(Unknown Source)
It's not reproducible.
This is indexOfKey:
protected int indexOfKey(long key) {
    final long tab[] = table;
    final byte stat[] = state;
    final int length = tab.length;

    final int hash = HashFunctions.hash(key) & 0x7FFFFFFF;
    int i = hash % length;
    int decrement = hash % (length - 2); // double hashing, see http://www.eece.unm.edu/faculty/heileman/hash/node4.html
    //int decrement = (hash / length) % length;
    if (decrement == 0) decrement = 1;

    // stop if we find a free slot, or if we find the key itself.
    // do skip over removed slots (yes, open addressing is like that...)
    while (stat[i] != FREE && (stat[i] == REMOVED || tab[i] != key)) {
        i -= decrement;
        //hashCollisions++;
        if (i < 0) i += length;
    }

    if (stat[i] == FREE) return -1; // not found
    return i; // found, return index where key is contained
}
So the only divisors used are length and (length - 2), where length is table.length, table being an internal array.
However, table is only ever initialised to an array of minimum size 3 (the default capacity is 277, which is what I am using), so length - 2 is always at least 1. Integer wrap-around doesn't seem possible either, since the hash is masked with 0x7FFFFFFF and is therefore non-negative.
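A minimal sketch of that arithmetic, assuming the smallest table Colt ever allocates; the hash computation below is a stand-in I made up, not Colt's actual HashFunctions.hash:

public class DivisorCheck {
    public static void main(String[] args) {
        long key = 123456789L;
        int length = 3; // smallest table size Colt allocates; the default is 277

        // Stand-in for HashFunctions.hash(key); the 0x7FFFFFFF mask is the
        // same one indexOfKey uses, so hash is always non-negative.
        int hash = ((int) (key ^ (key >>> 32))) & 0x7FFFFFFF;

        int i = hash % length;               // length >= 3, never a zero divisor
        int decrement = hash % (length - 2); // length - 2 >= 1, never a zero divisor
        if (decrement == 0) decrement = 1;

        System.out.println("i=" + i + ", decrement=" + decrement);
    }
}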
So this would seem to be an impossible error.
Any ideas?
This turned out to be an optimisation bug in the JIT compiler of the IBM JDK that was in use.
See this bug report: IJ06000: UNEXPECTED DIVIDE BY ZERO EXCEPTION
The recommended fix is to disable the LoopVersioner optimisation on the problem methods.
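For reference, a sketch of what that workaround could look like, using the IBM/OpenJ9 -Xjit method-selection syntax. The exact option string here is an assumption on my part; verify it against the APAR before relying on it:

# Assumed syntax, based on the IBM/OpenJ9 -Xjit option format.
java -Xjit:'{cern/colt/map/OpenLongObjectHashMap.indexOfKey*}(disableLoopVersioner)' -jar myapp.jar

The quotes matter in most shells, since the braces and parentheses would otherwise be interpreted by the shell itself.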