Tags: java, multithreading, synchronized, thread-synchronization

What might thread-safety entail in a typical storage environment?


I'm learning multi-threading in Java. I wrote a simple generic storage class that always stores objects in ascending order. I now want to make it thread-safe. One way (and probably the easiest way) would be:

// linked (node-based) implementation of a typical BST
class MyStorage<T extends Comparable<T>> { // also implements some other interfaces not required by the MRE
    private BSTreeNode rootNode; // BSTreeNode definition omitted from the MRE

    // in case of duplicates, the occurrence property of the BSTreeNode is utilized
    public synchronized boolean add(T value);
    public synchronized boolean delete(T value);
    public synchronized boolean search(T value);
}

The storage currently holds: 1, 2, 3, 4, 5, 6, 7, 9
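
For reference, this is roughly how I picture two threads hitting the storage at the same time (just test scaffolding for illustration, assuming the class sketch above is filled in; it is not part of the storage class itself):

import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.TimeUnit;

class StorageDemo {
    public static void main(String[] args) throws InterruptedException {
        MyStorage<Integer> storage = new MyStorage<>();
        for (int i : new int[] {1, 2, 3, 4, 5, 6, 7, 9}) {
            storage.add(i);                       // the contents listed above
        }

        ExecutorService pool = Executors.newFixedThreadPool(2);
        pool.submit(() -> storage.delete(6));     // one thread erases 6...
        pool.submit(() -> storage.add(8));        // ...while another adds 8
        pool.shutdown();
        pool.awaitTermination(1, TimeUnit.SECONDS);
    }
}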

However, I feel this is not a good idea: it is completely acceptable for one thread to erase 6 while the other adds 8, and no matter in what order they are called or interrupted, the storage will give the correct result. What I feel is required instead is that two different operations on the same value be performed exclusively.

This is what I'm currently thinking of:

// same sync block for every function (or just add/delete?)
public boolean add(T value) {
    synchronized(/*some sort of mutex on the value object instead of on this*/) {

    }
}

How would I achieve this?


Solution

  • I'm assuming that "BST" means "binary search tree." Is that right?

    it is completely acceptable for one thread to erase 6 while the other adds 8, and no matter in what order they are called or interrupted, the storage will give the correct result.

    Could the delete(6) thread possibly update any link in the tree that the add(8) thread needs to update or follow? Or vice versa, could the add(8) thread possibly update any link in the tree that the delete(6) thread needs to update or follow? That's what you need to prevent from happening.

    There's a name for your "easiest way", in which you lock the entire tree: it's called "coarse-grained locking". What you're hoping to use instead is called "fine-grained locking" (see https://en.wikipedia.org/wiki/Lock_(computer_science)#Granularity). When fine-grained locking is done right, there's less contention for each individual lock, but there are more locks to acquire and release.

    You haven't shown any part of your algorithm, so I can't really comment on what you might do differently, but a more specific name for fine-grained locking in linked data structures is "hand-over-hand locking" (a minimal sketch follows at the end of this answer).

    I think you should search for that phrase in Google.
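
    To make that phrase a bit more concrete, here is a minimal sketch of hand-over-hand locking (also called lock coupling), shown on a sorted singly linked list rather than a BST because the list version is much shorter. The class and member names are made up for this illustration, and adapting the pattern to a tree, deletion in particular, involves more cases than shown here.

    import java.util.concurrent.locks.ReentrantLock;

    // Sorted singly linked list with one lock per node (illustration only).
    class FineGrainedList<T extends Comparable<T>> {

        private static final class Node<E> {
            final E value;                          // null only in the head sentinel
            Node<E> next;
            final ReentrantLock lock = new ReentrantLock();
            Node(E value, Node<E> next) { this.value = value; this.next = next; }
        }

        private final Node<T> head = new Node<>(null, null); // sentinel, never removed

        // Hand-over-hand: hold the lock of 'pred' while acquiring the lock of
        // 'curr', and only then release 'pred'. Threads traversing disjoint
        // regions of the list can proceed in parallel.
        public boolean add(T value) {
            Node<T> pred = head;
            pred.lock.lock();
            Node<T> curr = pred.next;
            while (curr != null) {
                curr.lock.lock();
                if (curr.value.compareTo(value) >= 0) {
                    break;                          // insert between pred and curr
                }
                pred.lock.unlock();                 // hand over: release pred, keep curr
                pred = curr;
                curr = curr.next;
            }
            try {
                if (curr != null && curr.value.compareTo(value) == 0) {
                    return false;                   // duplicate (an occurrence count could be bumped here instead)
                }
                pred.next = new Node<>(value, curr);
                return true;
            } finally {
                if (curr != null) {
                    curr.lock.unlock();
                }
                pred.lock.unlock();
            }
        }

        // delete(value) and search(value) follow the same locking pattern.
    }

    The key property is that a thread never releases a lock until it holds the next one along its path, so no other thread can slip in between pred and curr and change the links it is about to read or update, which is exactly the hazard described above.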