I'm trying to import a CSV file into SQL Server using Python. However, in the last part of the code something is not working as I'd like it to: every row looks the same after importing it into SQL Server, even though the rows in the DataFrame are different.
CODE:
import pypyodbc as pdb
import pandas as pd
import plotly.graph_objects as go
import numpy as np
server = r'LAPTOP-124CPEDE\SQLEXPRESS'
database = 'Datasets'
conn = pdb.connect('DRIVER={SQL Server};SERVER='+server+';DATABASE='+database+';Trusted_Connection=yes;')
cursor = conn.cursor()
data = pd.read_csv(r'C:\Users\aleks\Desktop\zadania python+sql\dataset_1.csv', sep=';')
df = pd.DataFrame(data, columns=['a_timestamp', 'any_date', 'some_money', 'weird_name', 'count_of_something'])
df
cursor.execute('CREATE TABLE dataset_1 (a_timestamp nvarchar(255), any_date date, some_money float, weird_name nvarchar(255), count_of_something float)')
tuple = (row.a_timestamp, row.any_date, row.some_money, row.weird_name, row.count_of_something)
for row in df.itertuples():
    cursor.execute('''
    INSERT INTO Datasets.dbo.dataset_1 (a_timestamp, any_date, some_money, weird_name, count_of_something)
    VALUES (?,?,?,?,?)
    ''',
    tuple)
conn.commit()
Instead of inserting the same DataFrame row every time (the tuple you assign once before the for loop), build the values tuple from the current row on each iteration inside the loop and insert that into the database table:
for row in df.itertuples():
    # build the parameter tuple from the current row on every iteration
    values = (row.a_timestamp, row.any_date, row.some_money, row.weird_name, row.count_of_something)
    cursor.execute('''
        INSERT INTO Datasets.dbo.dataset_1 (a_timestamp,
                                            any_date,
                                            some_money,
                                            weird_name,
                                            count_of_something)
        VALUES (?,?,?,?,?)
        ''',
        values)
conn.commit()  # commit once after all rows have been inserted
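As a side note, if the DataFrame is large, sending all rows in one executemany call is usually faster than calling execute once per row. Here is a minimal sketch under the same assumptions as above (same table, same column names, and a DB-API cursor like the one pypyodbc gives you):

# collect all parameter tuples first, then send them to the server in one batch
rows = [
    (row.a_timestamp, row.any_date, row.some_money, row.weird_name, row.count_of_something)
    for row in df.itertuples()
]
cursor.executemany('''
    INSERT INTO Datasets.dbo.dataset_1 (a_timestamp, any_date, some_money, weird_name, count_of_something)
    VALUES (?,?,?,?,?)
    ''',
    rows)
conn.commit()  # commit the whole batch at once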