Bug #35322: Falcon duplicate primary keys on updateable views
Submitted: 16 Mar 2008 17:27    Modified: 30 Sep 2008 19:15
Reporter: Philip Stoev          Status: Closed
Category: MySQL Server: Falcon storage engine
Severity: S1 (Critical)         Version: 6.0-falcon-team
OS: Any                         CPU Architecture: Any
Assigned to: Vladislav Vaintroub
Tags: triaged

[16 Mar 2008 17:27] Philip Stoev
Description:
When running a workload consisting of updates on views, at some point the base table ends up with a duplicate primary key.

How to repeat:
Test case will be attached shortly.
[16 Mar 2008 17:35] Philip Stoev
Test case for bug #35322

Attachment: bug35322.zip (application/x-zip-compressed, text), 938 bytes.

[16 Mar 2008 17:38] Philip Stoev
Please place the .txt file in mysql-test and the .test file in mysql-test/t. Then run:

$ perl ./mysql-test-run.pl --stress  --stress-test-file=bug35322_run.txt \
--stress-test-duration=7200 --stress-threads=10    --skip-ndb \
--mysqld=--falcon_consistent_read=off

Let it run for a few minutes and disregard the output of the test. In a separate mysql client, issue:

select * from t2_eng2;

and you will see two records with primary key value of 1. To confirm that the table is broken, issue:

mysql> alter table t2_eng2 order by f2;
ERROR 1062 (23000): ALTER TABLE causes auto_increment resequencing, resulting in duplicate entry '1' for key 'PRIMARY'

This issue has also been observed without the falcon-repeatable-read option.

gcc version 4.1.2 20070925 (Red Hat 4.1.2-33)
built with BUILD/compile-pentium-debug-max-no-ndb
[21 Mar 2008 17:10] Hakan Küçükyılmaz
Verified using latest mysql-6.0-falcon-team tree on Linux 64-bit debug build:

|   1 | XXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXX |
|   1 | XXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXX |
|   2 | YYYYYYYYYYYYYYYYYYYYYYYYYYYYYYYY |
|   3 | ZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZ |
+-----+----------------------------------+
25 rows in set (0.04 sec)
[22 Mar 2008 19:49] Philip Stoev
Simpler test case for bug 35322

Attachment: bug35322_new.zip (application/x-zip-compressed, text), 698 bytes.

[22 Mar 2008 20:50] Hakan Küçükyılmaz
Re-verified with the new test case:

select * from t2;

|    2 |
|    2 |
|    3 |
+------+
190 rows in set (0.01 sec)

[21:49] root@test>show create table t2;
+-------+---------------------------------------------------------------------------------------------------------------------------------------------+
| Table | Create Table                                                                                                                                |
+-------+---------------------------------------------------------------------------------------------------------------------------------------------+
| t2    | CREATE TABLE `t2` (
  `f2` int(11) NOT NULL AUTO_INCREMENT,
  PRIMARY KEY (`f2`)
) ENGINE=Falcon AUTO_INCREMENT=1504 DEFAULT CHARSET=latin1 |
+-------+---------------------------------------------------------------------------------------------------------------------------------------------+
[2 Apr 2008 18:28] Kevin Lewis
Vlad, please investigate this bug soon. We need to know whether the impact rating of I3 (Substantial) is correct. We should also fix this as soon as we can.
[10 Apr 2008 13:15] Philip Stoev
Perl standalone version of test for bug 35322

Attachment: bug35322.pl (text/plain), 2.15 KiB.

[10 Apr 2008 21:22] Vladislav Vaintroub
Somehow, with the latest pushes today, the bug is gone.

mysql> alter table t2_eng2 order by f2;
Query OK, 1377 rows affected (7.73 sec)
Records: 1377  Duplicates: 0  Warnings: 0
mysql> select count(f2),f2 from t2_eng2  group by f2 having count(f2) > 1;
Empty set (0.02 sec)

Philip, can you try to reproduce this with the latest falcon-team tree?
[11 Apr 2008 10:11] Philip Stoev
New test case for bug 35322 in Perl form

Attachment: bug35322_new.pl (text/plain), 1.50 KiB.

[11 Apr 2008 10:13] Philip Stoev
I have attached bug35322_new.zip in Perl form as bug35322_new.pl. It reproduces the problem on every run. At the moment the script reports

Duplicate entry '21' for key 'PRIMARY'

you will see one or three duplicate records in t2. Furthermore, I once also hit an assertion in Table::checkUniqueRecordVersion (Table.cpp:2525).

Please edit the top of the script to match your mysql server port.
[11 Apr 2008 10:39] Philip Stoev
Simplified test case for bug 35322

Attachment: bug35322_new2.pl (text/plain), 932 bytes.

[11 Apr 2008 10:41] Philip Stoev
Please find attached a simplified version of the test case:

* A multiple-record insert is not required to reproduce the bug;
* Nested views are not required;
* A single UPDATE referencing a single view is sufficient; at the same time, without the view the bug is not reproducible;
* No extra table columns are required.
[11 Apr 2008 14:03] Kevin Lewis
It is possible that the fix for Bug#35692 also fixed this one. That change added a syncObject to the priorRecord chain to protect against taking an intermediate record out of the chain while other threads are traversing it. However, it seems that checkUniqueRecordVersion() does not use this new syncObject, while fetchForUpdate() does. If the bug is truly fixed, syncPrior may be what affected it.
[11 Apr 2008 14:20] Kevin Lewis
My comment was late, since Philip has re-verified the bug with a new test case. However, syncPrior may still be the solution, since it is not used in checkUniqueRecordVersion(). It should be locked (shared) after the fetch and before the for loop of getPriorVersion() calls. Vlad, please see if this has an effect.
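As a standalone sketch of the locking discipline described above (std::shared_mutex stands in for Falcon's SyncObject, and the RecordVersion type is invented for illustration; this is not Falcon source):

#include <shared_mutex>

// Toy model of a prior-version chain guarded the way the Bug#35692 fix
// guards it: readers walk the chain under a shared lock, while the thread
// that unlinks intermediate versions takes an exclusive lock.
struct RecordVersion
{
    int value;
    RecordVersion *priorVersion = nullptr;
};

std::shared_mutex syncPrior;    // stands in for the chain's syncObject

// What checkUniqueRecordVersion() should do per the comment above:
// hold syncPrior shared across the whole loop of prior-version lookups.
bool chainContains(RecordVersion *newest, int value)
{
    std::shared_lock<std::shared_mutex> guard(syncPrior);

    for (RecordVersion *v = newest; v; v = v->priorVersion)
        if (v->value == value)
            return true;        // no version can be unlinked while we look

    return false;
}

// What the unlinking side (e.g. garbage collection) does: exclusive lock.
void unlinkPrior(RecordVersion *v)
{
    std::unique_lock<std::shared_mutex> guard(syncPrior);

    if (v->priorVersion)
        v->priorVersion = v->priorVersion->priorVersion;
}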
[11 Apr 2008 17:37] Bugs System
A patch for this bug has been committed. After review, it may
be pushed to the relevant source trees for release in the next
version. You can access the patch from:

  http://lists.mysql.com/commits/45293

ChangeSet@1.2638, 2008-04-11 12:34:18-05:00, cpowers@xeno.mysql.com +1 -0
  Bug#35322, "Falcon duplicate primary keys on updateable views"
  
  Table::checkUniqueRecordVersion() locks record version chain before scanning.
  
  
  Bug#35982, "Falcon crashes on concurrent load data infile"
  
  Table::garbageCollect() checks for null record pointers before passing to Sync constructor.
[14 Apr 2008 11:14] Vladislav Vaintroub
Test case simplified further

Attachment: bug35322_simplified_test_case.pl (text/plain), 738 bytes.

[14 Apr 2008 18:42] Bugs System
Pushed into 6.0.5-alpha
[15 Apr 2008 14:10] Vladislav Vaintroub
The root cause is that the check for duplicates plus the insert/update must be an atomic operation, and currently it is not.
Thus it is possible that two threads running in parallel and inserting the same value into a unique index enter Table::insert at the same time; both pass the uniqueness check and both insert the new record.
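The window described above is the classic check-then-act race. A standalone C++ sketch of the same pattern (a std::multiset stands in for the unique index contents; none of this is Falcon code) shows both threads passing the check:

#include <chrono>
#include <iostream>
#include <mutex>
#include <set>
#include <thread>

std::multiset<int> uniqueIndex;  // stands in for the unique index contents
std::mutex indexLock;            // protects individual operations only

void insertValue(int value)
{
    bool exists;

    {
        std::lock_guard<std::mutex> g(indexLock);
        exists = uniqueIndex.count(value) > 0;   // the uniqueness check
    }

    // The lock is dropped here, so a second thread can run its own check
    // before this thread inserts -- and both checks see no duplicate.
    std::this_thread::sleep_for(std::chrono::milliseconds(10));

    if (!exists)
    {
        std::lock_guard<std::mutex> g(indexLock);
        uniqueIndex.insert(value);               // the insert
    }
}

int main()
{
    std::thread a(insertValue, 1), b(insertValue, 1);
    a.join();
    b.join();

    // Typically prints 2: both threads passed the check and inserted.
    std::cout << "rows with key 1: " << uniqueIndex.count(1) << "\n";
}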

This is easily reproducible in a debugger: set a breakpoint in Table::insert after checkUniqueIndexes, start two clients, and insert the same value from both.
When the breakpoint is reached the first time, suspend the currently active thread and continue. When the breakpoint is reached the second time, resume the suspended thread and continue.

To fix the situation, locking is required, either at the table level or, perhaps better, at the index level for each unique index. I believe something like checkAndLockUniqueIndexes() before the record is inserted and unlockUniqueIndexes() after it could be a good idea.
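Continuing the same standalone sketch, the proposal amounts to making the check and the insert one critical section; checkAndLockUniqueIndexes() and unlockUniqueIndexes() are only the names floated above, not existing Falcon functions:

#include <mutex>
#include <set>

std::multiset<int> uniqueIndex;
std::mutex indexLock;            // the proposed index-level exclusive lock

// Hypothetical checkAndLockUniqueIndexes(): take the index lock and run
// the uniqueness check while holding it.
bool checkAndLockUniqueIndexes(std::unique_lock<std::mutex> &lock, int value)
{
    lock = std::unique_lock<std::mutex>(indexLock);
    return uniqueIndex.count(value) == 0;
}

// Hypothetical unlockUniqueIndexes(): released only after the insert.
void unlockUniqueIndexes(std::unique_lock<std::mutex> &lock)
{
    lock.unlock();
}

bool insertValue(int value)
{
    std::unique_lock<std::mutex> lock;

    if (!checkAndLockUniqueIndexes(lock, value))
        return false;            // duplicate; lock released by destructor

    uniqueIndex.insert(value);   // no other thread can slip in between
    unlockUniqueIndexes(lock);
    return true;
}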
[16 Apr 2008 12:15] Bugs System
A patch for this bug has been committed. After review, it may
be pushed to the relevant source trees for release in the next
version. You can access the patch from:

  http://lists.mysql.com/commits/45489

ChangeSet@1.2642, 2008-04-16 14:14:49+02:00, vvaintroub@wva. +3 -0
  Bug#35322 : Duplicate entries in unique index.
  Make unique-check and insert/update atomic 
  (exclusive lock on index level)
[21 Apr 2008 18:24] Kevin Lewis
The code is reviewed and some performance measurements have been taken. As long as each thread traverses the various unique indexes in the same order, it is OK that, after a wait on a duplicate conflict, the retry is done only on the current index. I think this code should allow an atomic Table::checkUniqueIndex() and Index::insert().
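A minimal sketch of the ordering argument (all names hypothetical; one lock per unique index):

#include <mutex>
#include <vector>

struct UniqueIndex
{
    std::mutex lock;                                 // index-level lock
    bool hasPendingDuplicate(int) { return false; }  // stub for the check
};

// Lock and check every unique index in one fixed, global order. Because
// all threads use the same order, no two threads can ever hold these
// locks in opposite orders, so the scheme cannot deadlock.
void checkAndLockAll(std::vector<UniqueIndex> &indexes, int value)
{
    for (auto &idx : indexes)
    {
        idx.lock.lock();

        if (idx.hasPendingDuplicate(value))
        {
            // Real engine: release this lock, wait for the conflicting
            // transaction to end, then retry the check for this index
            // only; indexes already passed stay locked and stay valid.
        }
    }
    // The caller inserts the record, then unlocks all the index locks.
}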
[21 Apr 2008 19:06] Bugs System
A patch for this bug has been committed. After review, it may
be pushed to the relevant source trees for release in the next
version. You can access the patch from:

  http://lists.mysql.com/commits/45782

ChangeSet@1.2634, 2008-04-21 21:05:48+02:00, vvaintroub@wva. +3 -0
  Bug#35322 : Duplicate entries in unique index
  make check for uniqueness and insert atomic (exclusive lock on index 
  level)
  
  No test case for this bug: it is hard to make a mysqltest-compliant one,
  since the duplicates occurred only under high concurrency in a stress test.
[22 Apr 2008 7:52] Hakan Küçükyılmaz
Wlad, after your push, falcon_bug_22183.test started to fail with:

falcon.falcon_bug_22183        [ fail ]

mysqltest: At line 35: query 'ALTER TABLE t ADD PRIMARY KEY (s1)' failed with wrong errno 1015: 'Can't lock file (errno: 212)', instead of 1068...

Please also check pushbuild!
[22 Aug 2008 17:42] Kevin Lewis
Fix is in version 6.0.6
[30 Sep 2008 19:15] Jon Stephens
Documented as follows in the 6.0.6 changelog:

        Simultaneous inserts and updates on an updateable view referencing a
        Falcon table could sometimes cause duplicate key errors.