Bug #73170 | Duplicates in Unique Secondary Index Because of Fix of Bug#68021 | |
---|---|---|---
Submitted: | 2 Jul 2014 7:26 | Modified: | 4 Jul 2014 18:03 |
Reporter: | Annamalai Gurusami | Email Updates: | |
Status: | Closed | Impact on me: | |
Category: | MySQL Server: InnoDB storage engine | Severity: | S2 (Serious) |
Version: | 5.5 | OS: | Any |
Assigned to: | Annamalai Gurusami | CPU Architecture: | Any |
Tags: | locking |
[2 Jul 2014 7:26]
Annamalai Gurusami
[2 Jul 2014 7:50]
Annamalai Gurusami
Verified as described.
[4 Jul 2014 18:03]
Daniel Price
Fixed as of the upcoming 5.5.40, 5.6.21, and 5.7.5 releases, and here's the changelog entry: With a transaction isolation level of READ COMMITTED or weaker, gap locks were not taken when scanning a unique secondary index to check for duplicates. As a result, the duplicate check logic failed, allowing duplicate key values in the unique secondary index. Thank you for the bug report.
[30 Jul 2014 8:23]
Annamalai Gurusami
The mysql-5.5 patch that fixes this problem is:

```diff
=== modified file 'storage/innobase/row/row0ins.c'
--- storage/innobase/row/row0ins.c	revid:arun.kuruvila@oracle.com-20140702092252-09wqe9btf1chsvwy
+++ storage/innobase/row/row0ins.c	revid:annamalai.gurusami@oracle.com-20140703044329-227l9eqy63xhiosc
@@ -1702,7 +1702,7 @@
 	do {
 		const rec_t*		rec	= btr_pcur_get_rec(&pcur);
 		const buf_block_t*	block	= btr_pcur_get_block(&pcur);
-		ulint			lock_type;
+		const ulint		lock_type = LOCK_ORDINARY;
 
 		if (page_rec_is_infimum(rec)) {
@@ -1712,16 +1712,6 @@
 		offsets = rec_get_offsets(rec, index, offsets,
 					  ULINT_UNDEFINED, &heap);
 
-		/* If the transaction isolation level is no stronger than
-		READ COMMITTED, then avoid gap locks. */
-		if (!page_rec_is_supremum(rec)
-		    && thr_get_trx(thr)->isolation_level
-		       <= TRX_ISO_READ_COMMITTED) {
-			lock_type = LOCK_REC_NOT_GAP;
-		} else {
-			lock_type = LOCK_ORDINARY;
-		}
-
 		if (allow_duplicates) {
 
 			/* If the SQL-query will update or replace
```
[30 Jul 2014 8:48]
Annamalai Gurusami
The mysql-5.6 patch is:

```diff
=== modified file 'storage/innobase/row/row0ins.cc'
--- storage/innobase/row/row0ins.cc	revid:thirunarayanan.balathandayuth@oracle.com-20140702121529-daxjoj1o9r0sbhot
+++ storage/innobase/row/row0ins.cc	revid:annamalai.gurusami@oracle.com-20140703044801-iw41o3yr6wd0eogi
@@ -1946,7 +1946,7 @@
 	do {
 		const rec_t*		rec	= btr_pcur_get_rec(&pcur);
 		const buf_block_t*	block	= btr_pcur_get_block(&pcur);
-		ulint			lock_type;
+		const ulint		lock_type = LOCK_ORDINARY;
 
 		if (page_rec_is_infimum(rec)) {
@@ -1956,16 +1956,6 @@
 		offsets = rec_get_offsets(rec, index, offsets,
 					  ULINT_UNDEFINED, &offsets_heap);
 
-		/* If the transaction isolation level is no stronger than
-		READ COMMITTED, then avoid gap locks. */
-		if (!page_rec_is_supremum(rec)
-		    && thr_get_trx(thr)->isolation_level
-		       <= TRX_ISO_READ_COMMITTED) {
-			lock_type = LOCK_REC_NOT_GAP;
-		} else {
-			lock_type = LOCK_ORDINARY;
-		}
-
 		if (flags & BTR_NO_LOCKING_FLAG) {
 			/* Set no locks when applying log
 			in online table rebuild. */
```
[24 Sep 2014 7:27]
Laurynas Biveinis
revno: 4680
committer: Annamalai Gurusami <annamalai.gurusami@oracle.com>
branch nick: mysql-5.5
timestamp: Thu 2014-07-03 10:13:29 +0530
message:

Bug #19140907 DUPLICATES IN UNIQUE SECONDARY INDEX BECAUSE OF FIX OF BUG#68021

Problem: When a unique secondary index is scanned for duplicate checking, gap locks were not taken if the transaction had isolation level <= READ COMMITTED. This change was done while fixing Bug #16133801 UNEXPLAINABLE INNODB UNIQUE INDEX LOCKS ON DELETE + INSERT WITH SAME VALUES (rb#2035). Because of this the duplicate check logic failed, resulting in duplicate values in the unique secondary index.

Solution: When a unique secondary index is scanned for duplicate checking, gap locks must be taken irrespective of the transaction isolation level. This is achieved by reverting rb#2035.

rb#5910 approved by Jimmy
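The failure mode described above can be illustrated with a toy model of the duplicate check (a minimal sketch; all class and function names below are illustrative and are not the real InnoDB API). With a record-only lock, two sessions checking the same absent key both pass the check and can both insert; with a next-key (gap) lock, the second session must wait on the gap:

```python
# Toy model of duplicate checking on a unique secondary index.
# Hypothetical names; real InnoDB uses lock_sec_rec_read_check_and_lock().
LOCK_REC_NOT_GAP = "rec"      # locks an existing record only (buggy path)
LOCK_ORDINARY = "next-key"    # locks the record AND the gap before it (fix)

class ToyIndex:
    def __init__(self):
        self.keys = []          # committed unique key values, sorted
        self.gap_locks = set()  # gaps currently locked: (lower, upper)

    def duplicate_check(self, key, lock_type):
        """True = insert may proceed, False = duplicate, None = must wait."""
        if key in self.keys:
            return False                      # real duplicate: reject
        if lock_type == LOCK_ORDINARY:
            # Lock the gap the new key falls into, so no concurrent
            # transaction can insert the same value into that gap.
            gap = self._gap_for(key)
            if gap in self.gap_locks:
                return None                   # another session holds the gap
            self.gap_locks.add(gap)
        return True

    def _gap_for(self, key):
        lo = max((k for k in self.keys if k < key), default=None)
        hi = min((k for k in self.keys if k > key), default=None)
        return (lo, hi)

# Buggy behavior (no gap lock under READ COMMITTED): both sessions pass
# the check for the same absent key, so both insert -> duplicates.
buggy = ToyIndex()
assert buggy.duplicate_check(5, LOCK_REC_NOT_GAP) is True   # session A
assert buggy.duplicate_check(5, LOCK_REC_NOT_GAP) is True   # session B too!

# Fixed behavior (always take the next-key lock): session B waits.
fixed = ToyIndex()
assert fixed.duplicate_check(5, LOCK_ORDINARY) is True      # session A
assert fixed.duplicate_check(5, LOCK_ORDINARY) is None      # session B blocks
```

This mirrors the patch above: `lock_type` is made an unconditional `LOCK_ORDINARY` so the gap is always protected during the scan, regardless of isolation level.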
[8 Jun 2015 11:28]
Amit Garg
Can you please confirm in which MySQL version this bug is fixed? We are on Enterprise edition 5.6.24 and are facing the exact same issue. If required, we can provide more logs; please advise. Thanks, AG
[8 Jun 2015 11:34]
MySQL Verification Team
Hi Amit, this is fixed in your version. Maybe you are encountering http://bugs.mysql.com/bug.php?id=76927 ?
[20 Sep 2016 8:55]
MySQL Verification Team
Bug #83045 marked as duplicate of this