| Bug #54201 | "SET INSERT_ID" event must be ignored if corresponding event is ignored | | |
|---|---|---|---|
| Submitted: | 3 Jun 2010 12:18 | Modified: | 27 Apr 2012 14:23 |
| Reporter: | Andrii Nikitin | Email Updates: | |
| Status: | Closed | Impact on me: | |
| Category: | MySQL Server: Replication | Severity: | S2 (Serious) |
| Version: | 5.1.47 | OS: | Any |
| Assigned to: | Sven Sandberg | CPU Architecture: | Any |
[3 Jun 2010 12:18]
Andrii Nikitin
[3 Jun 2010 12:19]
Andrii Nikitin
Workaround: do an INSERT IGNORE into a dummy table (with an auto_increment column) at the beginning of the trigger body. This resets the stale INSERT_ID value.
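A minimal sketch of that workaround, under assumed names (insert_id_sink, audited_table, audited_table_bi, and audit_log are illustrative, not taken from the report): the throw-away INSERT IGNORE allocates an auto_increment value first, so any INSERT_ID left over from a filtered-out replication event is consumed before the trigger's real insert runs.

-- Hypothetical workaround sketch; all object names are illustrative.
CREATE TABLE insert_id_sink (
  id INT AUTO_INCREMENT PRIMARY KEY
) ENGINE=InnoDB;

DELIMITER //
CREATE TRIGGER audited_table_bi BEFORE INSERT ON audited_table
FOR EACH ROW
BEGIN
  -- Throw-away insert: allocating an auto_increment value here consumes any
  -- INSERT_ID forced by a filtered-out replication event, so the insert
  -- below gets a normally generated value again.
  INSERT IGNORE INTO insert_id_sink VALUES (NULL);
  -- The trigger's real work, e.g. writing to a log table that has its own
  -- auto_increment primary key.
  INSERT INTO audit_log (note) VALUES ('row inserted');
END//
DELIMITER ;

The sink table is never read; it exists only so that the first auto_increment allocation inside the trigger is harmless.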
[29 Jun 2010 13:44]
Andrei Elkin
Sven, I'd like to follow up on the discussion we held today. In my opinion, a solution should follow a natural principle: in the case of a filtered main query (2), the SQL thread execution context shall remain as it was before executing the {1,2} group, that is, as if the group had never been attempted (I withdrew the more extreme option of making the context look as if the group had executed successfully).

Now to the implementation. First of all, the immediate issue with the auto-inc Intvar arises because the filtered-out Query (2) does not perform the auto-inc context cleanup (the relevant part of Query_log_event::do_apply_event is never reached). That suggests deploying that part of the method onto the positive filtering branch. There are two more related subcases:

(1b) Rand event
(1c) User_var event

While there should be no difficulty with (1b), resetting the random seeds at the end of the positive filtering branch, (1c) might require storing the old value of the user variable so it can be restored once the main query (2) is found to be filtered out.
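For illustration, a hedged sketch (database, table, and column names are made up) of the three kinds of {context event, Query event} groups mentioned above, as a master with statement-based logging would write them; if the Query half is filtered out on the slave, the leading context event has to be discarded with it.

-- Assume the slave filters these tables out, e.g. with
-- --replicate-wild-ignore-table=filtered_db.%
-- Executed on the master (t1 has an AUTO_INCREMENT primary key):

INSERT INTO filtered_db.t1 (v) VALUES (1);        -- logged as Intvar (SET INSERT_ID) + Query
INSERT INTO filtered_db.t2 (v) VALUES (RAND());   -- logged as Rand (seeds) + Query
SET @x = 10;
INSERT INTO filtered_db.t3 (v) VALUES (@x);       -- logged as User_var (@x) + Query

-- If the slave skips each Query but keeps the context it carried, the stale
-- INSERT_ID, random seeds, or user-variable value is applied to the next
-- statement the slave does execute, which is the bug reported here.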
[29 Jun 2010 15:43]
Andrei Elkin
More explicitly, `as if ... was not tried executing' means: as if the group had been skipped via @@global.sql_slave_skip_counter.
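For comparison, this is how a DBA skips one event group by hand; skipping this way drops the group's Intvar/Rand/User_var context events together with the query, which is the behaviour the filtering path should match.

STOP SLAVE SQL_THREAD;
SET GLOBAL sql_slave_skip_counter = 1;  -- skip the next whole event group
START SLAVE SQL_THREAD;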
[27 Apr 2012 14:23]
Jon Stephens
Fixed in 5.1+. Documented as follows in the 5.1.64, 5.5.25, and 5.6.6 changelogs: Statements using AUTO_INCREMENT, LAST_INSERT_ID(), RAND(), or user variables could be applied in the wrong context on the slave when using statement-based replication and replication filtering server options (see the replication filtering rules section of the manual). Closed.
[14 Aug 2013 9:02]
Atul Mohindru
Hi Jonathan,

We are facing a similar kind of issue on MySQL 5.1.70. We have a setup with one master and two slaves (the second slave is a slave of the first slave). We are replicating the auto_increment id using the LAST_INSERT_ID() function. The table structures are as follows:

Table: log
+------------+------------+------+-----+---------------------+----------------+
| Field      | Type       | Null | Key | Default             | Extra          |
+------------+------------+------+-----+---------------------+----------------+
| log_id     | bigint(20) | NO   | PRI | NULL                | auto_increment |
| session_id | bigint(20) | NO   | MUL | 0                   |                |
| agent_id   | int(11)    | NO   | MUL | 0                   |                |
| logtime    | datetime   | NO   | MUL | 0000-00-00 00:00:00 |                |
| postdate   | datetime   | NO   |     | 0000-00-00 00:00:00 |                |
+------------+------------+------+-----+---------------------+----------------+

Table: away_reason
+----------------+------------+------+-----+---------+-------+
| Field          | Type       | Null | Key | Default | Extra |
+----------------+------------+------+-----+---------+-------+
| log_id         | bigint(20) | NO   | PRI | 0       |       |
| session_id     | int(11)    | NO   | MUL | 0       |       |
| away_reason_id | int(11)    | NO   |     | 0       |       |
+----------------+------------+------+-----+---------+-------+

INSERT INTO log (agent_id, logtime) VALUES (1, NOW());
INSERT INTO calllog_away_reason (calllog_id, away_reason_id, vcc_session_id) VALUES (LAST_INSERT_ID(), 1, 1);

Replication of the second query fails on the second slave with the following error: "Error 'Duplicate entry '0' for key 'PRIMARY'' on query".

Please suggest a solution.

Thanks,
Atul Mohindru
[14 Aug 2013 9:04]
Atul Mohindru
Please ignore the second query in the comment above. The correct query is:

INSERT INTO away_reason (log_id, away_reason_id, vcc_session_id) VALUES (LAST_INSERT_ID(), 1, 1);
[26 Aug 2015 20:25]
Carroll Miller
Rather than documenting this as "don't do this", could you put in the following fix? In log_event.cc there are if statements that check whether a transaction should be applied, based on the filter criteria:

if (rpl_filter->db_ok(thd->db().str))
{
  ...
  if (rpl_filter->is_on() && !rpl_filter->tables_ok(thd->db().str, &tables))
  {
    // TODO: this is a bug - this needs to be moved to the I/O thread
    if (net)
      skip_load_data_infile(net);
  }
  else
  {
    ...
  }
}
else
{
  /*
    We will just ask the master to send us /dev/null if we do not want to
    load the data.
    TODO: this a bug - needs to be done in I/O thread
  */
  if (net)
    skip_load_data_infile(net);
}

The transaction prior to this might have been a SET INSERT_ID = xxxx. If so, that value is stored and ready to hit anything that uses an identity column in a trigger on the slave for a different table. All that needs to be done is to add the following at the places marked with the TODO comments:

thd->auto_inc_intervals_forced.empty();

That clears the override of any auto_increment value, and it should be cleared because we have decided to skip the next transaction, which was the only reason it was set in the first place.