We have a process where we need to test a SQL script before it runs against a production database. The approach is to execute the script inside a transaction that ends with a rollback statement, capturing before/after output that shows the effects of the script.
select now();
start transaction;
select 'data before any changes', ...;
<insert / update / delete statements>;
select 'data after changes', ...;
rollback;
We've used this approach for many years with MSSQL; however, we're having trouble implementing it with MySQL. If everything goes to plan, things are great and it works exactly as expected. However, if we run into any errors (e.g. a typo in the SQL or a table constraint violation), the script aborts ... but COMMITS whatever ran before the error!
How is this remotely considered an acceptable outcome? It drastically reduces the effectiveness of transactions if an error condition results in committing partial changes. I'm gobsmacked at this behavior and simply cannot understand the rationale.
I've been searching all over for a resolution to this and am coming up basically empty-handed. The only option I've seen:
DECLARE ... HANDLER FOR ...;
to roll back on error. This looked promising, but it appears to be supported only within a stored procedure (or similar) and is therefore not suitable for one-off scripts. What am I missing? Thanks
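For reference, here's roughly what that would have to look like -- the whole thing has to be wrapped in a stored procedure (test_changes is just a placeholder name), which is exactly why it doesn't fit our one-off scripts:

delimiter //
create procedure test_changes()
begin
    -- roll back and re-raise if any statement inside the procedure fails
    declare exit handler for sqlexception
    begin
        rollback;
        resignal;
    end;

    start transaction;
    select 'data before any changes' /* , ... */;
    -- <insert / update / delete statements>
    select 'data after changes' /* , ... */;
    rollback;
end //
delimiter ;

call test_changes();
drop procedure test_changes;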
Finally figured out the answer to this -- apparently our MySQL instance is (somewhere) configured with autocommit=1.
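You can check what value a given session actually sees; both of these are standard MySQL:

select @@autocommit;                -- 1 = each statement commits as soon as it runs
show variables like 'autocommit';   -- same setting, reported as ON/OFF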
So now our scripts look like:
select now();
set autocommit=0;
start transaction;
select 'data before any changes', ...;
<insert / update / delete statements>;
select 'data after changes', ...;
rollback;
And if the script runs into an error (whether that's a syntax problem or something like a key violation in the update statements), the transaction is rolled back!
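One assumption worth spelling out: this relies on the client stopping at the first error. The stock mysql command-line client does that by default when it reads a script non-interactively (unless you pass --force), and the still-open, uncommitted transaction is discarded when the connection closes. A typical invocation (user, database, and file names are placeholders):

mysql -u app_user -p app_db < verify_changes.sql > verify_changes.log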