I have a simple table that serves as an email queue.
CREATE TABLE `emails_queue_batch` (
`eq_log_id` int(11) NOT NULL DEFAULT '0',
`eq_to` varchar(120) CHARACTER SET utf8 DEFAULT NULL,
`eq_bcc` varchar(80) CHARACTER SET utf8 DEFAULT '',
`eq_from` varchar(80) CHARACTER SET utf8 DEFAULT NULL,
`eq_title` varchar(100) COLLATE utf8_unicode_ci DEFAULT NULL,
`eq_headers` varchar(80) CHARACTER SET utf8 DEFAULT NULL,
`eq_content` longtext CHARACTER SET utf8,
`eq_sid` int(11) DEFAULT '0',
`eq_type` int(11) DEFAULT '0' COMMENT 'email type',
`eq_esp` int(11) DEFAULT '0',
PRIMARY KEY (`eq_log_id`),
KEY `email` (`eq_to`)
) ENGINE=InnoDB DEFAULT CHARSET=utf8 COLLATE=utf8_unicode_ci;
Several threads repeatedly read 50 rows at a time and then delete those rows.
To avoid two threads reading the same row, I used:
$db->query(" LOCK TABLE $table WRITE ");
$query= "SELECT * FROM $table LIMIT ".CHUNK_SIZE. " " ;
$emails2send=$db->get_results ($query);
if (!empty ($emails2send)){
// DELETE EMAIL
$eq_log_ids = array();
foreach ($emails2send as $email) $eq_log_ids[]= $email->eq_log_id ;
$query= "DELETE FROM $table WHERE eq_log_id IN ( ".implode(',', $eq_log_ids)." ) ";
$db->query ($query);
$db->query (" UNLOCK TABLES "); // unlock the table so other sessions can read next rows
........ code processing the read rows here .............
} else { // if !empty emails2send
// $emails2send is empty
$db->query (" UNLOCK TABLES; ");
$stop_running=true; // stop running
}
Other thread(s) write to the table at the same time. For reasons I don't understand, this configuration got deadlocked, with the table locked both for reads and for writes.
My question is: is this locking the right solution to make sure I read each row once and only once (and then delete it)?
Or is this better handled as a transaction, and if so, which kind? I'm not experienced with transactions.
Plan A:
This assumes you can process N rows in less than, say, 2 seconds. You have N=50 -- this may be too large.
BEGIN;
SELECT ... LIMIT 50 FOR UPDATE;
... process ...
... gather a list of ids to delete ...
DELETE ... WHERE id IN (...);
COMMIT;
The more you grab, the faster it goes, but also the more likely it is to get deadlocks. When a deadlock occurs, simply start the transaction over. Also keep track of how often deadlocks occur, in order to tune the "50".
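For your table, Plan A could look like the sketch below, written in the style of your PHP snippet. ORDER BY eq_log_id is my assumption (any deterministic order works), and $db->last_errno is a hypothetical stand-in for however your wrapper exposes the MySQL error number; a deadlock is error 1213, and InnoDB has already rolled the victim back, so the loop simply starts the transaction over:

do {
    $retry = false;
    $db->query("BEGIN");
    // FOR UPDATE locks just the selected rows, not the whole table
    $emails2send = $db->get_results(
        "SELECT * FROM emails_queue_batch ORDER BY eq_log_id LIMIT 50 FOR UPDATE");
    if ($db->last_errno == 1213) {        // hypothetical helper: deadlock victim?
        $db->query("ROLLBACK");           // InnoDB already rolled back; this is housekeeping
        $retry = true;
        continue;                         // start the transaction over
    }
    if (empty($emails2send)) {
        $db->query("COMMIT");
        break;                            // queue is drained
    }
    $eq_log_ids = array();
    foreach ($emails2send as $email) {
        // ... send this email ...
        $eq_log_ids[] = (int) $email->eq_log_id;
    }
    $db->query("DELETE FROM emails_queue_batch WHERE eq_log_id IN ("
               . implode(',', $eq_log_ids) . ")");
    $retry = ($db->last_errno == 1213);   // deadlock on the DELETE: retry
    $db->query($retry ? "ROLLBACK" : "COMMIT");
} while ($retry);

Because the SELECT and the DELETE sit in one transaction, a worker that dies before COMMIT leaves its rows untouched for another worker to pick up.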
Plan B:
This is useful when the processing of an item takes "too long" for a transaction. I say 2 seconds is probably "too long".
Grab a row to process:
with autocommit=ON ...
UPDATE ... SET who_is_processing = $me,
               when_grabbed = NOW(),
               id = LAST_INSERT_ID(id)
WHERE when_grabbed IS NULL
AND any-other-criteria
LIMIT 1;
$id = SELECT LAST_INSERT_ID();
... process $id ... (This may or may not involve transactions)
Release the row (or, in your case, delete it):
again, autocommit=ON suffices...
DELETE ... WHERE id = $id;
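Put together for your table, Plan B could look like the sketch below. The two helper columns and their names are my invention (add them with ALTER TABLE first), as are the rows_affected and get_var() helpers on your wrapper; $worker_id is whatever identifies the grabbing thread:

// one-time schema change (column names are illustrative):
//   ALTER TABLE emails_queue_batch
//       ADD COLUMN who_is_processing int DEFAULT NULL,
//       ADD COLUMN when_grabbed datetime DEFAULT NULL;

// grab one unclaimed row; LAST_INSERT_ID(expr) carries the grabbed id back
// to this session, so no two threads can grab the same row
$db->query("UPDATE emails_queue_batch
               SET who_is_processing = $worker_id,
                   when_grabbed = NOW(),
                   eq_log_id = LAST_INSERT_ID(eq_log_id)
             WHERE when_grabbed IS NULL
             LIMIT 1");
if ($db->rows_affected == 0) {        // hypothetical helper: no row matched
    $stop_running = true;             // queue is empty
} else {
    $id = (int) $db->get_var("SELECT LAST_INSERT_ID()"); // hypothetical single-value helper
    // ... process (send) email $id ...
    $db->query("DELETE FROM emails_queue_batch WHERE eq_log_id = $id");
}

A side benefit of recording when_grabbed: a row grabbed long ago but never deleted points at a worker that crashed mid-send, and it can be re-queued by setting when_grabbed back to NULL.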
"Never" use table locks with InnoDB. (There may be use cases, but this is not one.)