Bug #34351 "Record has changed since last read" error on non-overlapping transactions
Submitted: 6 Feb 2008 15:10 Modified: 1 Oct 2008 11:55
Reporter: Philip Stoev
Status: Closed
Category: MySQL Server: Falcon storage engine Severity: S1 (Critical)
Version: 6.0.4 OS: Any
Assigned to: Kevin Lewis CPU Architecture: Any

[6 Feb 2008 15:10] Philip Stoev
Description:
When one inserts a unique record into a Falcon table, followed by a DELETE on that same record, the "Record has changed since last read" error occurs repeatedly on some of the DELETE statements.

InnoDB does not exhibit this behavior.

How to repeat:
Test case will be attached shortly.
[6 Feb 2008 15:16] Philip Stoev
Test case for bug #34351

Attachment: bug34351.zip (application/x-zip-compressed, text), 839 bytes.

[6 Feb 2008 15:20] Philip Stoev
Steps to reproduce:

1. Please unpack the attached archive so that the *.txt files are placed in mysql-test and the *.test files are placed in mysql-test/t

2. Run the following command:

perl ./mysql-test-run.pl --stress --stress-init-file=bug34351_init.txt \
  --stress-test-file=bug34351_run.txt --skip-ndb \
  --stress-threads=10 --stress-test-duration=120

3. You will observe a steady stream of "Errors: No Errors. Test Passed OK"; however, there will also be a few lines reading "Errors: Severity S3: 1"

4. In mysql-test/var/stress/error*.txt you can see the output of the individual runs of the test case. Apart from the error complaining about a missing .result file, you will see errors like:

STDERR:mysqltest: At line 3: query 'DELETE FROM inter1 WHERE t1_uuid IN (@t1_uuid)' failed: 1020: Record has changed since last read in table 'inter1'

The test case itself is very simple:
SET @t1_uuid = UUID();
INSERT INTO inter1 (t1_uuid) VALUES (@t1_uuid);
DELETE FROM inter1 WHERE t1_uuid IN (@t1_uuid);
[6 Feb 2008 15:32] Hakan Küçükyılmaz
Related to Bug#29151
[6 Feb 2008 15:34] Ann Harrison
There's a very simple workaround: rollback and rerun the transaction.
This is a difference in behavior from InnoDB.  It doesn't lose committed
data.  It doesn't corrupt tables.  It doesn't cause the machine to catch
fire.  It doesn't leave the database in an inconsistent state.  Yes, it
would be good to figure out why it's happening and, if possible, change
it.

By the way, what are the settings for transaction isolation for this
test?
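
A sketch of the rollback-and-rerun workaround (plain SQL cannot loop, so the retry on error 1020 has to live in client code; the statements below only show the shape of one attempt, using the table from the attached test case):

  -- One attempt at the failing statement, wrapped in an explicit
  -- transaction.  If it fails with error 1020 ("Record has changed
  -- since last read"), the client should ROLLBACK and re-run the
  -- whole transaction from the top.
  START TRANSACTION;
  DELETE FROM inter1 WHERE t1_uuid IN (@t1_uuid);
  COMMIT;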
[6 Feb 2008 15:56] Philip Stoev
A few more observations:

* falcon_debug_mask prints for this bug:

Table::fetchForUpdate: update conflict in table TEST.INTER1
Exception: update conflict in table TEST.INTER1

* No assertions or crashes were observed with this bug.

* Bug #29151 involves a different error message and a different debug printout.

* Even if it can be argued that the sysbench workload is unrealistic, I believe that inserting and then deleting completely unique records is a valid workload scenario, so this bug should probably get a higher priority than bug #29151.
[6 Feb 2008 16:09] Philip Stoev
Correctly named files for bug 34351

Attachment: bug34351.zip (application/x-zip-compressed, text), 839 bytes.

[20 Feb 2008 23:06] Kevin Lewis
The test case for this bug uses a unique GUID to insert a record and then deletes that record based on the GUID field, but that field is not indexed.  If the field is indexed, the update conflicts do not happen, since only the correct records are locked for update.  Without the index, the server must read and lock each record in the table so that it can be checked for the GUID value.  Falcon starts reading a changing record for a lock, waits, and after the wait tries again; by then the record is no longer a consistent read.  So the workaround is to either use falcon_consistent_read=OFF or add an index to the GUID field.
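
A minimal sketch of the two workarounds described above (the ALTER TABLE form follows the test case schema; whether falcon_consistent_read can be changed at runtime with SET GLOBAL, rather than only as a startup option, is an assumption):

  -- Workaround 1: index the GUID column so that only the matching
  -- records have to be locked for update.
  ALTER TABLE inter1 ADD INDEX (t1_uuid);

  -- Workaround 2: disable consistent read, so the re-read after a
  -- lock wait no longer raises an update conflict.
  SET GLOBAL falcon_consistent_read = OFF;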
[28 Feb 2008 18:21] Bugs System
A patch for this bug has been committed. After review, it may
be pushed to the relevant source trees for release in the next
version. You can access the patch from:

  http://lists.mysql.com/commits/43169

ChangeSet@1.2580, 2008-02-28 12:19:19-06:00, klewis@klewis-mysql. +5 -0
  Bug#34351 - A new function, Transaction::needToUpdate, is used to
  determine whether Table::fetchForUpdate() should attempt to lock the
  record.  This function looks for ANY visible record version.  It
  is able to avoid a lock attempt for ConsistentRead transactions
  that start before a transaction that inserts a record.  It also
  avoids locks when the committed version of the record is deleted.
  
  Also, this changeset deletes the Table::syncUpdate object, which
  had no effect.  In the process, Table::checkUniqueIndexes and
  its two subfunctions are redesigned to loop when a wait occurs
  instead of looping in the caller.
[28 Feb 2008 21:12] Bugs System
A patch for this bug has been committed. After review, it may
be pushed to the relevant source trees for release in the next
version. You can access the patch from:

  http://lists.mysql.com/commits/43180

ChangeSet@1.2581, 2008-02-28 15:10:51-06:00, klewis@klewis-mysql. +1 -0
  Bug #34351 - Use true/false, not TRUE/FALSE.
[12 Mar 2008 23:02] Bugs System
Pushed into 6.0.4-alpha
[13 Mar 2008 15:56] Hakan Küçükyılmaz
Fix is in 6.0.5-alpha.
[16 Mar 2008 15:34] Philip Stoev
With today's 6.0-falcon-team BitKeeper tree, the issue still occurs even with a unique key on t1_uuid:

CREATE TABLE inter1 (
  t1_autoinc INTEGER NOT NULL AUTO_INCREMENT,
  t1_uuid CHAR(36) NOT NULL,
  PRIMARY KEY (t1_autoinc),
  UNIQUE (t1_uuid)
) ENGINE = Falcon;
[17 Mar 2008 15:56] Kevin Lewis
Philip, I cannot get Falcon to avoid an update conflict when falcon_consistent_read=ON and the insert and delete in this test are not in the same transaction.  Even if you use a unique key on the UUID field, the optimizer chooses to use sequential access for this test.  When sequential access is used to locate the record with the correct UUID, each recently inserted record must be locked for update so that the server can check it.  If it is a consistent-read transaction and the inserted record is being deleted, the deleted record will not be seen after the wait, so it must be an UPDATE_CONFLICT.

As shown in falcon_bug_34351_A.test and falcon_bug_34351_C.test, you can make this work (avoid UPDATE_CONFLICTs) by either setting falcon_consistent_read=OFF or putting the insert into the same transaction as the delete; a sketch of the latter follows below.
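
A sketch of the same-transaction workaround (the statements follow the original test case; the exact contents of falcon_bug_34351_C.test are not reproduced here, so the test file may differ):

  -- Keeping the INSERT and the DELETE in one transaction avoids the
  -- UPDATE_CONFLICT: the DELETE sees its own transaction's insert
  -- rather than a concurrently changing record version.
  START TRANSACTION;
  SET @t1_uuid = UUID();
  INSERT INTO inter1 (t1_uuid) VALUES (@t1_uuid);
  DELETE FROM inter1 WHERE t1_uuid IN (@t1_uuid);
  COMMIT;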
[23 Apr 2008 7:37] Hakan Küçükyılmaz
Philip,

what is the current status about this bug?
[23 Apr 2008 14:29] Philip Stoev
Hakan, please have Kevin decide on this bug. If the work inside Falcon is done, then the Optimizer portion of it is being tracked as bug #35321.
[23 Apr 2008 15:01] Kevin Lewis
Once again, Falcon has done all it can with this, as I described in my previous post.
[29 Aug 2008 2:07] Kevin Lewis
This fix is in version 6.0.5
[1 Oct 2008 11:55] Jon Stephens
Documented in the 6.0.5 changelog as follows:

        Inserting a unique record into a Falcon table, then performing a DELETE
        on the same record, resulted in the error Record has changed since last
        read.