Tags: sql, oracle-database, performance, bulk-operations

What is the best way to execute 100k insert statements?


I have created a set of 100k INSERT statements to generate data in multiple Oracle tables for my performance testing. What is the best way to execute these?

In the past, I've tried tools like Oracle SQL Developer and Toad. However, I'm not sure they can handle this large a volume.

Simple INSERT statements like:

INSERT INTO SELLING_CODE (SELC_ID, VALC_ID, PROD_ID, SELC_CODE, SELC_MASK, VALC_ID_STATUS) 
VALUES (5000001, 63, 1, '91111111', 'N/A', 107);

Solution

  • Inserting 100,000 rows with SQL statements is fine. It's not a huge amount of data and there are a few simple tricks that can help you keep the run time down to a few seconds.

    First, make sure that your tool is not displaying output for each statement. Copying and pasting the statements into a worksheet window would be horribly slow, but saving the statements into a SQL*Plus script and running that script can be fast. Use the real SQL*Plus client if possible. That program is available on almost any system and is good at running small scripts.

    If you have to use SQL Developer, save the 100K statements in a text file, and then run this as a script (F5). This method took 45 seconds on my PC.

    set feedback off
    @C:\temp\test1.sql
    

    Second, batch the SQL statements to eliminate the per-statement overhead. You don't have to batch all of them; batching 100 statements at a time is enough to reduce 99% of the overhead. For example, generate one thousand statements like the one below, each inserting 100 rows:

    INSERT INTO SELLING_CODE (SELC_ID, VALC_ID, PROD_ID, SELC_CODE, SELC_MASK, VALC_ID_STATUS)
    select 5000001, 63, 1, '91111111', 'N/A', 107 from dual union all
    select 5000001, 63, 1, '91111111', 'N/A', 107 from dual union all
    ...
    select 5000001, 63, 1, '91111111', 'N/A', 107 from dual;
    

    Save that in a text file and run it the same way in SQL Developer (F5). This method took 4 seconds on my PC.

    set feedback off
    @C:\temp\test1.sql
    

    If you can't significantly change the format of the INSERT statements, you can simply wrap every 100 statements in an anonymous PL/SQL block: put BEGIN before them, and END; followed by / on its own line after them, as in the sketch below. That will pass 100 statements at a time to the server and significantly reduce the network overhead.
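
    As a minimal sketch of that wrapping (only two statements are shown per block, and the second row's values are made up purely for illustration):

    -- Anonymous PL/SQL block: the whole block is sent to the server in one call.
    -- In practice, put 100 INSERT statements between BEGIN and END;.
    -- The second row's values are invented just to show a second statement.
    BEGIN
        INSERT INTO SELLING_CODE (SELC_ID, VALC_ID, PROD_ID, SELC_CODE, SELC_MASK, VALC_ID_STATUS)
        VALUES (5000001, 63, 1, '91111111', 'N/A', 107);
        INSERT INTO SELLING_CODE (SELC_ID, VALC_ID, PROD_ID, SELC_CODE, SELC_MASK, VALC_ID_STATUS)
        VALUES (5000002, 63, 1, '91111112', 'N/A', 107);
    END;
    /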

    For even faster speeds, run the script in regular SQL*Plus. On my PC it only takes 2 seconds to load the 100,000 rows.
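
    For instance, a script like the ones above can be run from a command prompt; the user name, password, and connect string below are placeholders:

    rem my_user, my_password, and my_db are placeholders; the script path matches the earlier examples.
    sqlplus my_user/my_password@my_db @C:\temp\test1.sql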

    For medium-sized data like this it's helpful to keep the convenience of plain SQL statements, and with a few tricks you can get performance close to that of a binary load format.