Tags: python, sql-server-2012, pymssql, executemany

pymssql executemany inserts values very slowly


Python 2.7.15, pymssql 2.1.4, SQL Server 2018, Windows 10 Pro, MS Office 2016

import time
import csv
import pymssql

db_settings = {
    "host": "127.0.0.1",
    "port": "1433",
    "user": "sa",
    "password": "********",
    "database": "testdb",
    "charset": "utf8"
}
conn = pymssql.connect(**db_settings)
cursor = conn.cursor()
ff = csv.reader(open('base.csv', 'r'))
sql = """
    INSERT INTO Base([name], [year], [update], [status],
                     [timeline], [language], [pic])
    VALUES (%s, %s, %s, %s, %s, %s, %s)
    """
start = time.time()
t = []
for i in ff:
    i = i[1:]            # drop the leading index column
    if "year" in i:      # skip the header row
        continue
    t.append((i[0], i[1], i[3], i[4], i[6], i[5], i[8]))
cursor.executemany(sql, t)
conn.commit()

print("%.1f seconds elapsed" % (time.time() - start))

The base.csv file is 21.7 MB and has 30,374 rows. When I execute the above code, it takes 929 seconds to complete. That is only about 32.7 rows per second, which is far too slow. Can anyone help me find the reason? Thanks a lot. :-)



Solution

  • I reduced the time of an executemany insert in pymssql from 30 minutes to 30 seconds like this.

    In SQL you can create an INSERT statement that inserts multiple rows at once. It looks like this:

    INSERT INTO table_name (col_name1, col_name2)
    VALUES
    (row1_val1, row1_val2),
    (row2_val1, row2_val2),
    ...
    (row1000_val1, row1000_val2)
    
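    To see what the helper below builds, here is a two-row sketch of the string manipulation (illustrative only; users and its columns are made-up names, not from the original answer):

    # Illustrative sketch of the query assembly done by insert() below.
    insert_q = "insert into users (user_id, name) values"
    values_q = " (%s, %s)"
    chunk_query = insert_q + ','.join([values_q] * 2)
    # chunk_query == "insert into users (user_id, name) values (%s, %s), (%s, %s)"
    # pymssql then fills all four %s placeholders from one flat parameter tuple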

    I implemented an insert function that takes the data in chunks and modifies the query to insert multiple rows with a single execute.

    def insert(query, data, chunk=999):
        conn = get_connection()  # your own helper that returns a pymssql connection
        cursor = conn.cursor()
        query = query.lower()
        # split off the "(%s, %s, ...)" part; assumes VALUES appears exactly once
        insert_q, values_q = query.split('values', 1)
        insert_q += 'values'  # put the keyword back so the query is valid after the split
        for chunk_data in chunks(data, chunk):
            # chunk_data is a list of row-parameter tuples; flatten it so one
            # execute can replace many executemany round trips
            flat_list = [item for sublist in chunk_data for item in sublist]
            # repeat the placeholder group once per row in this chunk
            chunk_query = insert_q + ','.join([values_q] * len(chunk_data))
            cursor.execute(chunk_query, tuple(flat_list))
            conn.commit()
    
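    Applied to the question's code, the wrapper would replace the executemany call roughly like this (a sketch, not part of the original answer; it assumes a get_connection() helper exists and that the query is a bare INSERT with no BEGIN/END wrapper, since the function splits on the VALUES keyword):

    sql = ("INSERT INTO Base([name], [year], [update], [status], "
           "[timeline], [language], [pic]) "
           "VALUES (%s, %s, %s, %s, %s, %s, %s)")
    insert(sql, t)  # 30374 rows go in as ~31 multi-row INSERT statements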

    chunks can be implemented like this (thanks to one of the great replies on this site):

    def chunks(lst, n):
        for i in range(0, len(lst), n):
            yield lst[i:i + n]
    
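    For example, list(chunks([1, 2, 3, 4, 5], 2)) yields [[1, 2], [3, 4], [5]].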

    Example usage:

    insert('INSERT INTO users (user_id, name, surname) VALUES (%s, %s, %s)',
           [(1, 'Jack', 'Kcaj'), (2, 'Andrew', 'Golara')])
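
    Two details about the default chunk=999 are worth noting: SQL Server accepts at most 1000 rows in a single INSERT ... VALUES list, so the chunk size must stay below that limit; and because pymssql interpolates the parameters into the SQL string on the client side, the large number of %s placeholders per statement does not run into a server-side parameter cap.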