I store every position with a 64-bit Zobrist key. I store these keys in a std::vector, and at the start I call std::vector::reserve(1'000'000). When a position is searched, it takes a long time to check whether its key is in the vector, and if it is, a long time to locate it.
This happens at later depths: the vector of transpositions becomes so long that it's faster just to recompute the position than to look for a transposition.
What I've tried:
- inserting the keys into the vector in sorted order (least to greatest) and using a binary search to locate them later;
- pushing each key onto the vector and, every time I want to check for a key, looping through the vector looking for a match.
Also, in case it helps: computing a key efficiently is not a problem. I have already implemented it so that the key is updated incrementally every time a move is made.
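(For readers, the incremental update usually looks something like the minimal sketch below; zobrist_table, apply_move, and the piece/square indexing are assumed placeholder names, not part of the question's code.)

#include <cstdint>

extern std::uint64_t zobrist_table[12][64];  // one random 64-bit value per (piece, square)

// XOR the moving piece out of its old square and into its new square.
void apply_move(std::uint64_t& key, int piece, int from, int to)
{
    key ^= zobrist_table[piece][from];  // remove the piece from its origin square
    key ^= zobrist_table[piece][to];    // place the piece on its destination square
}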
You can use a vector whose size is a power of 2, and mask off the corresponding part of the Zobrist hash to get an index into the vector. For example:
#include <cstdint>
#include <vector>

std::vector<whatever> x(0x100000);               // 2^20 entries, size is a power of two
std::int64_t hash = get_hash_from_somewhere();   // your 64-bit Zobrist key
whatever& value = x[hash & 0xFFFFF];             // low 20 bits of the key index the table
You might want to use a more sophisticated mask if that one leads to too many collisions.
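Independently of the mask, a common complement (a sketch of a standard approach, not something claimed by the answer above) is to store the full 64-bit key in each entry and verify it on lookup, so two positions that happen to map to the same index are treated as a miss rather than a false hit. The Entry fields and the probe/store names below are illustrative placeholders.

#include <cstdint>
#include <vector>

struct Entry {
    std::uint64_t key = 0;   // full Zobrist key of the stored position
    int score = 0;           // whatever data you cache for the position
};

std::vector<Entry> table(0x100000);  // 2^20 entries, power-of-two size

// Returns the entry if the same position is stored at this index, otherwise nullptr.
Entry* probe(std::uint64_t hash)
{
    Entry& e = table[hash & 0xFFFFF];
    return e.key == hash ? &e : nullptr;
}

void store(std::uint64_t hash, int score)
{
    Entry& e = table[hash & 0xFFFFF];
    e.key = hash;    // overwrite whatever was there before (simple always-replace scheme)
    e.score = score;
}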