mysql spring-boot transactions innodb pessimistic-locking

Can we prevent reading of rows from a db table that are exclusively locked?


I have a table EMPLOYEE in my MySQL (InnoDB) database; among its columns is an exported flag that marks whether a row has already been picked up for processing.
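For this question only that one column really matters, so a minimal mapping looks like this (a sketch: the primary key and the remaining annotations are illustrative, only the exported flag is part of the actual problem):

    import javax.persistence.Column;
    import javax.persistence.Entity;
    import javax.persistence.GeneratedValue;
    import javax.persistence.GenerationType;
    import javax.persistence.Id;
    import javax.persistence.Table;
    // (use jakarta.persistence.* instead on Spring Boot 3 / Hibernate 6)

    @Entity
    @Table(name = "employee")
    public class Employee {

        @Id
        @GeneratedValue(strategy = GenerationType.IDENTITY) // assumed key
        private Long id;

        // the flag the competing processes select on; 0/false = not yet exported
        @Column(name = "exported", nullable = false)
        private boolean exported;

        public boolean isExported() { return exported; }
        public void setExported(boolean exported) { this.exported = exported; }
    }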

In a distributed system I want to ensure that each process in the cluster reads the top 100 rows for which the exported column is still false, distinct from the rows any other process is reading. The rows a process has read should remain locked while it runs its calculation, so that if process 1 reads rows 1-100, process 2 cannot see rows 1-100 and instead picks up the next available 100 rows.

For this, I tried using pessimistic write locks (JPA's PESSIMISTIC_WRITE), but they don't serve the purpose. They do block multiple processes from updating the rows at the same time, but other processes can still read the locked rows, because InnoDB's ordinary SELECTs are consistent non-locking reads, and a second locking read simply blocks on the locked rows instead of moving past them.
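A pessimistic write lock at the JPA level would normally be expressed like this (a sketch, not the code I ran: LockModeType.PESSIMISTIC_WRITE is the standard JPA constant, which Hibernate renders as SELECT ... FOR UPDATE on MySQL; the JPQL string itself is illustrative):

    import java.util.List;
    import javax.persistence.LockModeType;

    // entityManager is an injected javax.persistence.EntityManager
    List<Employee> batch = entityManager
            .createQuery("select e from Employee e where e.exported = false", Employee.class)
            .setLockMode(LockModeType.PESSIMISTIC_WRITE) // SELECT ... FOR UPDATE
            .setMaxResults(100)
            .getResultList();
    // A second process running the same statement does not skip these rows:
    // its locking read blocks on the locks, and a plain SELECT still sees
    // the rows through InnoDB's consistent (MVCC) snapshot.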

I tried using the following Java code:

    // the second argument maps the raw result set to the Employee entity;
    // without it, getResultList() returns Object[] rows, not Employee instances
    Query query = entityManager.createNativeQuery("select * from employee " +
        "where exported = 0 limit 100 for update", Employee.class);
    List<Employee> employeeListLocked = query.getResultList();
  

EDIT: Finally found the answer. What I needed was MySQL's SKIP LOCKED feature; the full updated code is in the solution below.


Solution

  • FOUND THE ANSWER:

    What I needed was the SKIP LOCKED feature (available in MySQL 8.0 and later). My updated code has become:

      Query query = entityManager.createNativeQuery("select * from employee " +
          "where exported = 0 limit 100 for update skip locked", Employee.class);
      List<Employee> employeeListLocked = query.getResultList();

    With SKIP LOCKED, any rows that another transaction currently holds locks on are simply skipped by the db engine when running the SELECT, so concurrent processes each receive a disjoint batch. Hope this helps you all.
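
    One thing worth stressing: the FOR UPDATE locks only live until the enclosing transaction commits, so the read and the marking of the claimed rows (setting exported to true) must happen inside one transaction. A minimal Spring sketch of that claim-and-mark step could look like the following; the service and method names are illustrative, not part of the original code:

      import java.util.List;
      import javax.persistence.EntityManager;
      import javax.persistence.PersistenceContext;
      import org.springframework.stereotype.Service;
      import org.springframework.transaction.annotation.Transactional;

      @Service
      public class EmployeeExportService {

          @PersistenceContext
          private EntityManager entityManager;

          // Claims up to 100 unexported rows. Rows locked by another worker are
          // skipped, so concurrent workers each receive a disjoint batch. The
          // row locks are held until this method's transaction commits.
          @Transactional
          public List<Employee> claimBatch() {
              @SuppressWarnings("unchecked")
              List<Employee> batch = entityManager
                      .createNativeQuery("select * from employee where exported = 0 "
                              + "limit 100 for update skip locked", Employee.class)
                      .getResultList();

              // Mark the rows as taken so later batches do not pick them up
              // again once the locks are released at commit.
              for (Employee e : batch) {
                  e.setExported(true);
              }
              return batch;
          }
      }

    After the commit, the claimed rows carry exported = 1 and their locks are gone, so no later batch will select them again.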