I'm fairly new to Python with SQLAlchemy and just want to insert a bunch of rows and receive some values back.
My code looks something like this:
statement = text("""
    with _user as (
        insert into user (first_name, last_name)
        values (:first, :last)
        returning id, first_name, last_name
    ),
    _student as (
        insert into student (user_id, name)
        select id, concat(last_name, ' ', first_name) from _user
        returning id, name
    )
    select * from _student;
""")
rows = conn.execute(statement, [{'first': 'foo1', 'last': 'bar1'}, {'first': 'foo2', 'last': 'bar2'}]).mappings()
for row in rows:
    print(f'{row}')
Now I get the following error: "This result object does not return rows. It has been closed automatically."
When I run the same query in a console, there are results. Why does SQLAlchemy close the cursor?
I don't think this is possible for multiple parameter sets using SQLAlchemy at present. The limitation may be by design or, less likely I suspect, a bug.
When the statement is executed with a list of more than one parameter dictionary, SQLAlchemy uses psycopg2's executemany method to execute it (if you are logging statements, you will see in the Postgres log that the statement is executed twice, sequentially). executemany discards its results, and so SQLAlchemy raises the ResourceClosedError observed by the OP.
SQLAlchemy does support psycopg2's fast execution helpers, but not for textual SQL statements; converting the raw SQL to SQLAlchemy Core statements results in the same error, however.
I suspect this is because the insert-within-a-CTE pattern is not something to which SQLAlchemy can apply its internal logic for using psycopg2's helpers or its native insertmanyvalues feature.
psycopg2's execute_values function handles this case correctly, so you could just use psycopg2 directly:

import psycopg2
from psycopg2.extras import execute_values

# sql is the OP's statement, with "values (:first, :last)" replaced by the
# single "values %s" placeholder that execute_values expands.
...
with psycopg2.connect(dbname='so') as conn, conn.cursor() as cur:
    rows = execute_values(
        cur,
        sql,
        [dict1, dict2, ...],
        template='(%(first)s, %(last)s)',
        fetch=True,  # return the rows from the final RETURNING clause
    )
    for row in rows:
        print(row)
Alternatively, you could perform the inserts and selects over multiple operations, at the cost of extra network round trips and time.
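A sketch of that multi-step approach, written against the stdlib sqlite3 module purely so the example is self-contained (the table layout is assumed from the question; the same pattern applies to a psycopg2 or SQLAlchemy connection):

```python
import sqlite3

# Stand-ins for the OP's tables.
conn = sqlite3.connect(":memory:")
conn.executescript("""
    create table user (id integer primary key, first_name text, last_name text);
    create table student (id integer primary key, user_id integer, name text);
""")

params = [{"first": "foo1", "last": "bar1"}, {"first": "foo2", "last": "bar2"}]

# Step 1: insert the users one at a time, collecting the generated rows.
users = [
    conn.execute(
        "insert into user (first_name, last_name) values (:first, :last) "
        "returning id, first_name, last_name",
        p,
    ).fetchone()
    for p in params
]

# Step 2: insert the students from the rows collected in step 1.
students = [
    conn.execute(
        "insert into student (user_id, name) values (?, ?) returning id, name",
        (uid, f"{last} {first}"),
    ).fetchone()
    for uid, first, last in users
]

for row in students:
    print(row)
```

Each round trip returns rows normally because every statement is executed on its own, never through executemany.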
I'm inclined to think that this is a limitation in the implementation of SQLAlchemy's bulk insert support rather than a bug. If you need clarification (for the core objects insert case, not for text) you could open a discussion on GitHub.
FWIW, the Core version would look like this:

import sqlalchemy as sa

engine = sa.create_engine(url)

# Reflect the existing tables.
metadata = sa.MetaData()
metadata.reflect(engine, only=['user', 'student'])
user = metadata.tables['user']
student = metadata.tables['student']

with engine.connect() as conn:
    _user = (
        user.insert()
        .returning(user.c.id, user.c.first_name, user.c.last_name)
    ).cte('_user')
    _student = (
        student.insert()
        .from_select(
            [student.c.user_id, student.c.name],
            sa.select(_user.c.id, _user.c.last_name + ' ' + _user.c.first_name),
        )
        .returning(student.c.id, student.c.name)
    ).cte('_student')
    rows = conn.execute(
        sa.select(_student),
        [dict1, dict2, ...],
    )
    for row in rows.mappings():
        print(row)