Tags: java, tomcat, h2, jdbc-pool

H2 - Tomcat jdbc connection pool not reclaiming connections once it hits the max limit


Problem Statement

We have been using H2 in embedded mode for a while now, with a Tomcat JDBC connection pool configured on top of it. The current pool configuration is as follows:

h2.datasource.min-idle=10
h2.datasource.initial-size=10
h2.datasource.max-active=200
h2.datasource.max-age=600000
h2.datasource.max-wait=3000
h2.datasource.min-evictable-idle-time-millis=60000
h2.datasource.remove-abandoned=true
h2.datasource.remove-abandoned-timeout=60
h2.datasource.log-abandoned=true
h2.datasource.abandonWhenPercentageFull=100
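
For reference, the sketch below shows roughly how these properties map onto the Tomcat JDBC pool when it is configured programmatically. The class and method names are hypothetical (in our application the values are bound from the h2.datasource.* properties rather than hard-coded), but the setters correspond to the settings listed above:

import org.apache.tomcat.jdbc.pool.DataSource;
import org.apache.tomcat.jdbc.pool.PoolProperties;

public class H2PoolConfig {

    // Hypothetical factory method; values mirror the h2.datasource.* properties above.
    public DataSource h2DataSource() {
        PoolProperties p = new PoolProperties();
        p.setUrl("jdbc:h2:file:~/h2/cartdb");
        p.setDriverClassName("org.h2.Driver");
        p.setMinIdle(10);
        p.setInitialSize(10);
        p.setMaxActive(200);
        p.setMaxAge(600000);                  // ms: retire connections older than 10 minutes
        p.setMaxWait(3000);                   // ms: the 3-second timeout seen in the exception below
        p.setMinEvictableIdleTimeMillis(60000);
        p.setRemoveAbandoned(true);
        p.setRemoveAbandonedTimeout(60);      // seconds
        p.setLogAbandoned(true);
        p.setAbandonWhenPercentageFull(100);

        DataSource ds = new DataSource();
        ds.setPoolProperties(p);
        return ds;
    }
}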

H2 config:

spring.h2.console.enabled=true
spring.h2.console.path=/h2
h2.datasource.url=jdbc:h2:file:~/h2/cartdb
h2.server.properties=webAllowOthers
spring.h2.console.settings.web-allow-others=true
h2.datasource.driver-class-name=org.h2.Driver

* Username and password properties omitted.

We have verified that the above configuration takes effect by logging the pool properties.

The issue with this setup is that we observe regular (though intermittent) connection pool exhaustion: once the pool hits the max limit, it starts throwing the following exception for some queries.

SqlExceptionHelper.logExceptions(SqlExceptionHelper.java:129) - [http-apr-8080-exec-38] Timeout: Pool empty. Unable to fetch a connection in 3 seconds, none available[size:200; busy:200; idle:0; lastwait:3000].

Thereafter it fails to recover from this state, even after many hours, until we restart the web server (Tomcat in this case).
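
When the pool gets into this state, its counters (the same size/busy/idle numbers printed in the exception) can also be read programmatically. A minimal sketch, assuming the underlying DataSource is the Tomcat org.apache.tomcat.jdbc.pool.DataSource; the wrapper class is hypothetical:

import org.apache.tomcat.jdbc.pool.ConnectionPool;
import org.apache.tomcat.jdbc.pool.DataSource;

public class PoolStateLogger {

    private final DataSource dataSource; // the Tomcat JDBC pool behind the EntityManagerFactory

    public PoolStateLogger(DataSource dataSource) {
        this.dataSource = dataSource;
    }

    // Prints the same counters that appear in the "Pool empty" exception message.
    public void logPoolState() {
        ConnectionPool pool = dataSource.getPool();
        if (pool != null) {
            System.out.printf("size:%d; busy:%d; idle:%d; waiting:%d%n",
                    pool.getSize(), pool.getActive(), pool.getIdle(), pool.getWaitCount());
        }
    }
}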

H2 driver dependency:

<dependency>
    <groupId>com.h2database</groupId>
    <artifactId>h2</artifactId>
    <version>1.4.196</version>
    <scope>runtime</scope>
</dependency>

Query Pattern & Throughput

We use H2 to load up some data for every request, then execute a few (about 50) SELECT queries, and finally delete the data. This results in a consistent 30k-40k calls per minute on H2 (except during off hours), according to New Relic monitoring.

Every read operation acquires a new connection and releases it after execution:

EntityManager entityManager = null;
try {
    // Each EntityManager borrows a connection from the pool for its queries
    entityManager = entityManagerFactory.createEntityManager();
    Query query = entityManager.createNativeQuery(sqlQuery);
    query.setParameter("cartId", cartId);
    List<String> resultList = query.getResultList();
    return resultList;
} finally {
    // Always close the EntityManager so the connection is returned to the pool
    if (null != entityManager) {
        entityManager.close();
    }
}

Observations

Please guide us in the right direction to solve this issue.

UPDATE

Recently we discovered the following causes in our stack traces when one such incident occurred:

Caused by: org.h2.jdbc.JdbcSQLException: Database may be already in use: null. Possible solutions: close all other connection(s); use the server mode [90020-196]

Caused by: java.lang.IllegalStateException:The file is locked: nio:/root/h2/cartdb.mv.db [1.4.196/7]

Caused by: java.nio.channels.OverlappingFileLockException

After digging into this, we have decided to move to in-memory mode, as we don't need to persist the data beyond the application's lifetime. As a result, the file lock should not occur, thereby reducing or eliminating this issue.
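
For the switch itself, only the JDBC URL needs to change; something along these lines (the DB_CLOSE_DELAY=-1 flag is our assumption, to keep the in-memory database alive for the lifetime of the JVM instead of dropping it when the last connection closes):

h2.datasource.url=jdbc:h2:mem:cartdb;DB_CLOSE_DELAY=-1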

Will come back and update in either case.


Solution

  • Since the last update on the question:

    After observing the performance for quite some time, we have come to the conclusion that using H2 in file mode (embedded) was somehow leading to periodic (though irregular) file lock exceptions.

    Since our application does not need to persist the data beyond its lifetime, we decided to move to pure in-memory mode.

    The mystery of the file lock exception still remains to be unravelled, though.