I'm trying to get to the bottom of what I thought would be a simple problem: exporting a Pandas DataFrame into a MySQL database.
I have a scraper that collates data in pandas and saves it in CSV format:
```
title, summary, url    # header
abc, summary, some_url
```
But I'd like to send the DataFrame directly to a MySQL database with the same three-column format.
My code so far is:
```python
import mysql.connector

# Connect to the database
conn = mysql.connector.connect(user='root', password='somepassword',
                               host='localhost', port='3306', database='db')

# Write the DataFrame to the database
df.to_sql(name='table_name', con=conn, if_exists='replace', index=False)

# Close the connection
conn.close()
```
But this returns an error message of:

```
pandas.errors.DatabaseError: Execution failed on sql 'SELECT name FROM sqlite_master WHERE type='table' AND name=?;': Not all parameters were used in the SQL statement
```
How do I fix this?
UPDATE:
I have read that I might have to use SQLAlchemy, but I'd really like to stick with a pandas-based solution if possible.
You can still use the pandas solution, but you have to use `sqlalchemy.create_engine` instead of `mysql.connector.connect`, since `to_sql` expects a "sqlalchemy.engine.(Engine or Connection) or sqlite3.Connection" as its `con` argument. When it receives any other raw DBAPI connection, pandas falls back to treating it as SQLite, which is why your traceback shows a query against `sqlite_master`. See the `to_sql` reference and examples there. This should be working fine:
```python
import sqlalchemy

# Create an engine pandas can use to talk to MySQL
engine = sqlalchemy.create_engine(
    'mysql+mysqlconnector://root:somepassword@localhost:3306/db')

# Write the DataFrame to the database
df.to_sql(name='table_name', con=engine, if_exists='replace', index=False)

# Dispose of the engine's connection pool when done
# (an Engine has no close(); dispose() is the cleanup method)
engine.dispose()
```
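If you want to sanity-check the pattern without a running MySQL server, the same `to_sql`/`read_sql` round trip works against an in-memory SQLite engine. This is just an illustration: the DataFrame contents mirror your sample CSV, and `table_name` is a placeholder; swap the URL for your MySQL one in production:

```python
import pandas as pd
import sqlalchemy

# Sample frame matching the scraper's three-column CSV layout
df = pd.DataFrame(
    {"title": ["abc"], "summary": ["summary"], "url": ["some_url"]}
)

# In-memory SQLite engine; replace with the MySQL URL in real use
engine = sqlalchemy.create_engine("sqlite:///:memory:")

# Same call as with MySQL
df.to_sql(name="table_name", con=engine, if_exists="replace", index=False)

# Read the table back to confirm the write succeeded
result = pd.read_sql("SELECT title, summary, url FROM table_name", con=engine)
print(result)

engine.dispose()
```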