I have two instances of a container running a scheduled task, and I would like to prevent the same task from being started in every Docker container. I decided to implement a distributed lock using SQL Server.
I added this configuration:
@Bean
public DefaultLockRepository defaultLockRepository(DataSource dataSource) {
    return new DefaultLockRepository(dataSource);
}

@Bean
public JdbcLockRegistry jdbcLockRegistry(LockRepository lockRepository) {
    return new JdbcLockRegistry(lockRepository);
}
And I'm trying to acquire the lock like this:
public void doSomething() throws InterruptedException {
    boolean lockAcquired;
    var lock = lockRegistry.obtain("main");
    try {
        lockAcquired = lock.tryLock();
        if (lockAcquired) {
            logger.info("Locked");
        } else {
            logger.info("Not locked");
        }
    } catch (Exception ex) {
        lockAcquired = false;
        logger.info("Not locked");
    }
    // simulate long-running work while (supposedly) holding the lock
    Thread.sleep(3600000);
    if (lockAcquired) {
        lock.unlock();
    }
}
I started two instances of the application with identical code. When I entered the method in the first instance, the lock was acquired and I got the "Locked" message, so that part is OK.
In the database I saw an entry like this:

LOCK_KEY                              REGION   CLIENT_ID                             CREATED_DATE
fad58de7-3664-35db-8650-cfefac2fcd61  DEFAULT  c0314b87-0bee-4bdd-8567-a939fd21f0bf  2024-03-16 13:20:43.2559575
When I started the second instance of the application I expected to get the "Not locked" message, because the lock had been acquired by the first app, which was still running. But no: I got "Locked" again, and the database entry was updated to:
LOCK_KEY                              REGION   CLIENT_ID                             CREATED_DATE
fad58de7-3664-35db-8650-cfefac2fcd61  DEFAULT  ad58971d-c991-4709-a03b-9acb39a38c54  2024-03-16 13:24:42.3002365
So my distributed lock doesn't work.
What am I doing wrong? How can I use a distributed lock to prevent a scheduled task from executing in multiple Docker containers at the same time?
//UPDATE
When I execute the method in both instances within a short time of each other (less than about 5 s), it works. But when I execute the method in the first instance and then try to execute it in the second instance 30 s later, it doesn't work (the second instance acquires the lock anyway).
//UPDATE2
As Alex suggested, it was a problem with the TTL in DefaultLockRepository. The lock TTL was set to the default of 10 seconds. After changing the code like this:
@Bean
public DefaultLockRepository defaultLockRepository(DataSource dataSource) {
    var repo = new DefaultLockRepository(dataSource);
    repo.setTimeToLive(1000 * 100); // TTL in milliseconds (100 s)
    return repo;
}
the lock TTL is now 100 s, and during those 100 s the other service instance cannot acquire the lock.
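For completeness, here is a minimal sketch of how a scheduled method can use the registry with this configuration (the class name, schedule, and logging are made up, not my exact code). The key points are releasing the lock in a finally block only when it was actually acquired, and keeping the TTL longer than the task is expected to run:

import java.util.concurrent.locks.Lock;

import org.slf4j.Logger;
import org.slf4j.LoggerFactory;
import org.springframework.integration.jdbc.lock.JdbcLockRegistry;
import org.springframework.scheduling.annotation.Scheduled;
import org.springframework.stereotype.Component;

@Component
public class ExclusiveTask {

    private static final Logger logger = LoggerFactory.getLogger(ExclusiveTask.class);

    private final JdbcLockRegistry lockRegistry;

    public ExclusiveTask(JdbcLockRegistry lockRegistry) {
        this.lockRegistry = lockRegistry;
    }

    @Scheduled(fixedDelay = 60000)
    public void runExclusiveTask() {
        Lock lock = lockRegistry.obtain("main");
        boolean lockAcquired = false;
        try {
            lockAcquired = lock.tryLock();
            if (!lockAcquired) {
                logger.info("Not locked - another instance is running the task");
                return;
            }
            logger.info("Locked - running the task");
            // do the actual work here; it should finish well within the 100 s TTL
        } finally {
            // release only if this instance actually acquired the lock
            if (lockAcquired) {
                lock.unlock();
            }
        }
    }
}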
I've used ShedLock successfully for this. Add the following to the pom.xml (or the corresponding Gradle configuration):
<dependency>
    <groupId>net.javacrumbs.shedlock</groupId>
    <artifactId>shedlock-spring</artifactId>
    <version>${shedlock.version}</version>
</dependency>
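Note that the JdbcTemplateLockProvider used below comes from a separate artifact, net.javacrumbs.shedlock:shedlock-provider-jdbc-template, so that dependency is needed as well.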
Then add the following to the main class or a configuration class (the example is in Kotlin, but the concept is the same for Java):
@EnableSchedulerLock(defaultLockAtMostFor = "10m")
class MainClass {
    //...

    @Bean
    fun lockProvider(dataSource: DataSource): LockProvider {
        return JdbcTemplateLockProvider(dataSource)
    }

    //...
}
Then add the following annotation to the method:
@Service
class NeedingLockService {
    //...

    @Scheduled(fixedDelay = 5000) // running on a schedule in my case, but not relevant to the lock itself
    @SchedulerLock(name = "nameOfTheLock")
    fun dispatch() { // the method name doesn't matter
        //...
    }
}
In this Kotlin example the service method runs every 5 seconds, but only in one of the containers/pods if you have multiple instances.
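Since the question uses Java, here is a rough Java equivalent of the same setup (class names like ShedLockConfig are illustrative; @EnableScheduling is assumed so that @Scheduled actually fires):

// ShedLockConfig.java
import javax.sql.DataSource;

import net.javacrumbs.shedlock.core.LockProvider;
import net.javacrumbs.shedlock.provider.jdbctemplate.JdbcTemplateLockProvider;
import net.javacrumbs.shedlock.spring.annotation.EnableSchedulerLock;

import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Configuration;
import org.springframework.scheduling.annotation.EnableScheduling;

@Configuration
@EnableScheduling
@EnableSchedulerLock(defaultLockAtMostFor = "10m")
public class ShedLockConfig {

    @Bean
    public LockProvider lockProvider(DataSource dataSource) {
        return new JdbcTemplateLockProvider(dataSource);
    }
}

// NeedingLockService.java
import net.javacrumbs.shedlock.spring.annotation.SchedulerLock;

import org.springframework.scheduling.annotation.Scheduled;
import org.springframework.stereotype.Service;

@Service
public class NeedingLockService {

    // runs every 5 seconds, but ShedLock lets only one instance execute it at a time
    @Scheduled(fixedDelay = 5000)
    @SchedulerLock(name = "nameOfTheLock")
    public void dispatch() {
        //...
    }
}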
Note: ShedLock needs a shedlock table in the database. This is the Liquibase changeSet I used to create it:
databaseChangeLog:
  - changeSet:
      id: 5
      author: Geoffrey
      changes:
        - createTable:
            tableName: shedlock
            columns:
              - column:
                  name: name
                  type: varchar(64)
                  constraints:
                    primaryKey: true
                    nullable: false
              - column:
                  name: lock_until
                  type: timestamp
                  constraints:
                    nullable: false
              - column:
                  name: locked_at
                  type: timestamp
                  defaultValueComputed: CURRENT_TIMESTAMP
                  constraints:
                    nullable: false
              - column:
                  name: locked_by
                  type: varchar(255)
                  constraints:
                    nullable: false
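This matches the table layout that ShedLock's JDBC provider expects: a name primary key plus lock_until, locked_at, and locked_by columns.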