(Using Spring Boot 2.3.3 w/ MySQL 8.0.)
Let's say I have an Account entity that contains a total field, and one of those account entities represents some kind of master account. That is, the master account has its total field updated by almost every transaction, and it's important that any update to that total field is made against the most recent value.
Which is the better choice within such a transaction:

1. Fetch the master account using a PESSIMISTIC_WRITE lock, increment the total field, and commit the transaction; or
2. Have a dedicated query that essentially does something like UPDATE Account SET total = total + x as part of the transaction? I'm assuming I'd still need the same pessimistic lock in this case for the UPDATE query, e.g. via @Query and @Lock.
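For reference, here's roughly what I have in mind for the first option. The repository/service names (AccountRepository, AccountService, addToTotal) are just made up for illustration, and I'm assuming the Account entity has a Long id and a BigDecimal total with the usual getters and setters:

import java.math.BigDecimal;
import java.util.Optional;

import javax.persistence.LockModeType;

import org.springframework.data.jpa.repository.JpaRepository;
import org.springframework.data.jpa.repository.Lock;
import org.springframework.stereotype.Service;
import org.springframework.transaction.annotation.Transactional;

// Two types shown together for brevity; in practice they'd live in separate files.
public interface AccountRepository extends JpaRepository<Account, Long> {

    // Redeclare the finder and add a pessimistic write lock, so the row is
    // locked (SELECT ... FOR UPDATE on MySQL) until the transaction commits.
    @Lock(LockModeType.PESSIMISTIC_WRITE)
    Optional<Account> findById(Long id);
}

@Service
public class AccountService {

    private final AccountRepository accountRepository;

    public AccountService(AccountRepository accountRepository) {
        this.accountRepository = accountRepository;
    }

    @Transactional
    public void addToTotal(Long masterAccountId, BigDecimal amount) {
        // The lock is acquired here and held until commit, so the increment
        // is always based on the latest value of total.
        Account master = accountRepository.findById(masterAccountId)
                .orElseThrow(() -> new IllegalStateException("Master account not found"));
        master.setTotal(master.getTotal().add(amount));
        // Dirty checking flushes the change at commit; no explicit save() needed.
    }
}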
Also, is it an anti-pattern to retry a transaction that failed due to a lock-acquisition timeout (or some other lock-related exception) a set number of times? Or is it better to let it fail, report it to the client, and let the client try to call the transaction/service again?
Apologies for the basic question, but it's been some time since I've had to worry about doing such a thing in Spring.
Thanks in advance!
After exercising my Google-fu a bit more and digging even deeper, I found that variations of this question have already been asked, at least insofar as the 'locking' portion goes.
That is, while the Spring Data JPA docs mention redeclaring repository methods and adding the @Lock annotation, that appears to be meant strictly for queries that read data. This is what I'd originally thought, as it wouldn't make much sense to "lock" an UPDATE query unless there were some additional magic happening with the JPQL query.
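In other words, the dedicated update from the second option ends up being just a modifying query with no @Lock at all, since the UPDATE statement itself takes the row lock at the database level. Something along these lines (reusing the made-up repository from above; the method name is arbitrary):

import java.math.BigDecimal;

import org.springframework.data.jpa.repository.JpaRepository;
import org.springframework.data.jpa.repository.Modifying;
import org.springframework.data.jpa.repository.Query;
import org.springframework.data.repository.query.Param;

public interface AccountRepository extends JpaRepository<Account, Long> {

    // The database performs the read-increment-write atomically and locks the
    // row itself, so no @Lock is needed. Must still be called inside a transaction.
    @Modifying
    @Query("UPDATE Account a SET a.total = a.total + :amount WHERE a.id = :id")
    int addToTotal(@Param("id") Long id, @Param("amount") BigDecimal amount);
}

One thing to keep in mind with this approach: a @Modifying query bypasses the persistence context, so an Account instance already loaded in the same transaction won't reflect the new total unless it's refreshed.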
As for retrying, that does seem to be the way to go, but of course with a number of retries that makes sense for the situation.
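The retry itself doesn't need to be anything fancy: a bounded loop around the transactional service call works, as long as the loop sits outside the @Transactional method so each attempt gets a fresh transaction (Spring Retry's @Retryable is another option). A rough sketch, reusing the made-up AccountService from above; the wrapper name and attempt count are arbitrary:

import java.math.BigDecimal;

import org.springframework.dao.PessimisticLockingFailureException;
import org.springframework.stereotype.Component;

@Component
public class AccountTotalUpdater {

    private static final int MAX_ATTEMPTS = 3;

    private final AccountService accountService;

    public AccountTotalUpdater(AccountService accountService) {
        this.accountService = accountService;
    }

    // The retry loop lives outside the @Transactional service method so that
    // each attempt runs in its own, fresh transaction.
    public void addToTotalWithRetry(Long masterAccountId, BigDecimal amount) {
        PessimisticLockingFailureException lastFailure = null;
        for (int attempt = 1; attempt <= MAX_ATTEMPTS; attempt++) {
            try {
                accountService.addToTotal(masterAccountId, amount);
                return;
            } catch (PessimisticLockingFailureException e) {
                lastFailure = e;
            }
        }
        // All attempts failed; report back to the caller/client.
        throw lastFailure;
    }
}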
Hopefully this helps someone else in the future who has a brain cramp like I did.