java · oracle-database · apache-tomee

Performance Degradation and Connection Pooling Issues during Migration from GlassFish to TomEE with Oracle Database



It's an old project that I'm migrating from Java 7 to Java 8 and from GlassFish to TomEE, using an Oracle database. When saving data I ran into an issue: the time to save each row or batch of rows in the database is around 60 ms (the execution time is visible in the logs). However, the time it takes to return from

statement.executeBatch(); 

or

statement.executeQuery(); 

increases with each call, eventually reaching 5 minutes, which leads to a connection-pool error. There were no such issues on GlassFish.

Insert query:

        Connection con = null;
        PreparedStatement statement = null;
        CallableStatement call = null;
        try {
            con = HermesDS.getConnection();

            String extendedSql = "INSERT INTO hrm_wi_influence_for_ins(doc_type, doc_id, influence, cln_segment_type," +
                    "is_user_input, pw_from_date, pw_to_date, planned_stop, inform_client," +
                    " customer_number, service_type, address, customer_name, customer_segment, customer_sub_segment," +
                    "SLA_CODE) " +
                    "SELECT ?," +
                    "       ?," +
                    "       ?," +
                    "       ?," +
                    "       ?," +
                    "       ?," +
                    "       ?," +
                    "       ?," +
                    "       ?," +
                    "       ?," +
                    "       ?," +
                    "       ?," +
                    "       ?," +
                    "       ?," +
                    "       ?," +
                    "       ? " +
                    " FROM DUAL";

            statement = con.prepareStatement(extendedSql);
            int j = 0;
            final int maxBatchSize = 30;
            con.setAutoCommit(false);
            long time = System.currentTimeMillis();


            for (Map.Entry<String, InfluenceData> entry : influences.entrySet()) {
                String influence = entry.getKey().trim();
                InfluenceData influenceData = entry.getValue();

                statement.setString(1, docType);
                statement.setBigDecimal(2, docId);
                statement.setString(3, influence);
                statement.setString(4, influenceData.getSegment());
                statement.setBigDecimal(5,
                        (influenceData.getIsUserInput() != null && influenceData.getIsUserInput())
                                ? BigDecimal.ONE
                                : BigDecimal.ZERO);

                statement.setString(6, influenceData.getPlanStart());
                statement.setString(7, influenceData.getPlanEnd());
                statement.setString(8, influenceData.getPlanIdle());
                statement.setString(9, influenceData.getInformClient());
                if (influenceData.getCustomerNumber() != null) {
                    statement.setString(10, influenceData.getCustomerNumber().toString());
                } else {
                    statement.setString(10, "");
                }
                statement.setString(11, influenceData.getServiceType());
                statement.setString(12, influenceData.getAddress());
                statement.setString(13, influenceData.getCustomerName());
                statement.setString(14, influenceData.getSegment());
                statement.setString(15, influenceData.getUnicornSubSegment());
                statement.setString(16, influenceData.getClnSla());

                statement.addBatch();
                time = System.currentTimeMillis(); // reset just before a possible flush, so the printed time covers executeBatch + commit
                if (++j == maxBatchSize) {
                    statement.executeBatch();
                    con.commit();
                    statement.clearBatch();
                    j = 0;
                    System.out.println("executeBatch finished in: " + (System.currentTimeMillis() - time) + "ms");
                }

            }

            // Flush the final, partially filled batch
            if (j > 0) {
                statement.executeBatch();
                con.commit();
            }
            } finally {
                // Close resources so the connection is returned to the pool;
                // leaking them here would eventually exhaust the pool
                if (statement != null) try { statement.close(); } catch (SQLException ignore) { }
                if (call != null) try { call.close(); } catch (SQLException ignore) { }
                if (con != null) try { con.close(); } catch (SQLException ignore) { }
            }

Insert log:

First batch of 30: INSERT INTO hrm_wi_influence_for_ins --> 17ms, executeBatch finished in: 24ms

Next 30: --> 10ms, executeBatch finished in: 19ms

Several batches later, another 30: --> 6ms, executeBatch finished in: 2947ms

It seems that statement.clearBatch(); is not effectively clearing the batch, and the rows accumulate in the statement with each subsequent operation: if the first batch had 30 rows, after 10 batches the statement is executing 300.
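One way to test that hypothesis (a suggested check, not part of the original code): per the JDBC contract, executeBatch() returns one update count per batched command, so logging the length of the returned array on each flush shows whether the batch is really growing:

    int[] counts = statement.executeBatch();
    // If clearBatch() works, this prints 30 on every flush;
    // if rows accumulate, it grows to 60, 90, 120, ...
    System.out.println("commands executed in this batch: " + counts.length);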

TomEE config:

JdbcDriver  oracle.jdbc.OracleDriver
    JdbcUrl jdbc:oracle:thin:---------
    UserName    ---------
    Password    ---------
    accessToUnderlyingConnectionAllowed = true
    jtaManaged = false
    DataSourceCreator = tomcat
    initialSize=10
    maxActive=100
    maxIdle=20
    minIdle=10
    timeBetweenEvictionRunsMillis=34000
    minEvictableIdleTimeMillis=55000
    validationQuery=SELECT 1 from dual
    validationInterval=30000
    testOnBorrow=true
    removeAbandoned=true
    removeAbandonedTimeout=1200
    logAbandoned=true

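For context, these properties are normally the body of a <Resource> element in conf/tomee.xml; a minimal sketch (the id here is an assumption, chosen to match the HermesDS datasource used in the code above):

    <tomee>
        <Resource id="HermesDS" type="DataSource">
            JdbcDriver oracle.jdbc.OracleDriver
            JdbcUrl jdbc:oracle:thin:---------
            UserName ---------
            Password ---------
            DataSourceCreator tomcat
            jtaManaged false
            ... (remaining pool properties exactly as listed above)
        </Resource>
    </tomee>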
DB version is Oracle 19, JDBC driver version 8. Any ideas? If additional information is needed, please ask. Thank you.



Solution

  • Solution: the bound parameters need to be cleared as well:

    statement.clearParameters();
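A sketch of where that call fits in the batch loop from the question (placing it right after clearBatch() is an assumption; the fix itself only names the method):

    if (++j == maxBatchSize) {
        statement.executeBatch();
        con.commit();
        statement.clearBatch();
        // clearParameters() releases the currently bound values immediately,
        // so the next iteration starts from a fully reset statement
        statement.clearParameters();
        j = 0;
        System.out.println("executeBatch finished in: " + (System.currentTimeMillis() - time) + "ms");
    }

Per the JDBC Javadoc, parameter values normally remain in force across repeated executions of a statement; clearParameters() releases them immediately, which together with clearBatch() fully resets the PreparedStatement between batches.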