Bug #54330 Broken fast index creation
Submitted: 8 Jun 2010 9:38 Modified: 15 Oct 2010 10:38
Reporter: 1 2
Status: Closed
Category:MySQL Server: InnoDB Plugin storage engine Severity:S1 (Critical)
Version:5.1.47 OS:Any
Assigned to: Jimmy Yang CPU Architecture:Any
Triage: Triaged: D1 (Critical)

[8 Jun 2010 9:38] 1 2

#password       = your_password
port            = 3306
socket          = /tmp/mysql.sock

port                      = 3306
socket                    = /tmp/mysql.sock
user                      = mysql
datadir                   = /var/db/mysql
basedir                   = /usr/local
tmpdir                    = /var/db/tmp
max_connect_errors        = 10




mysql> show create table Files2 \G

*************************** 1. row ***************************
       Table: Files2
Create Table: CREATE TABLE `Files2` (
  `ID` int(10) unsigned NOT NULL AUTO_INCREMENT,
  `MD5Hash` char(32) NOT NULL DEFAULT '',
  `SHA256Hash` char(64) NOT NULL DEFAULT '',
  `FileSize` int(11) NOT NULL DEFAULT '0',
  `Ext` char(4) NOT NULL DEFAULT '',
  KEY `File_k_2` (`MD5Hash`(4),`SHA256Hash`(4),`FileSize`)
1 row in set (0.00 sec)

mysql> check table Files2;

| Table       | Op    | Msg_type | Msg_text |
| test.Files2 | check | status   | OK       |
1 row in set (1.23 sec)

recreating index:

mysql> alter table Files2 drop KEY `File_k_2`;

Query OK, 0 rows affected (0.02 sec)
Records: 0  Duplicates: 0  Warnings: 0

mysql> alter table Files2 add KEY `File_k_2` (`MD5Hash`(4),`SHA256Hash`(4),`FileSize`);

Query OK, 0 rows affected (1.23 sec)
Records: 0  Duplicates: 0  Warnings: 0

mysql> check table Files2;
| Table       | Op    | Msg_type | Msg_text                                                           |
| test.Files2 | check | Warning  | InnoDB: Index 'File_k_2' contains 20164 entries, should be 375000. |
| test.Files2 | check | error    | Corrupt                                                            |
2 rows in set (2.60 sec)

How to repeat:
I reproduced this bug on FreeBSD amd64 with MySQL 5.1.47 and on Gentoo Linux i386 with MySQL 5.1.46, but for some reason the bug does not occur with a random dataset. Please try my dataset from the dump in the private comment.
[8 Jun 2010 12:45] Sveta Smirnova
Thank you for the report.

Verified as described.
[8 Jun 2010 12:53] Sveta Smirnova
Regular InnoDB is not affected.
[9 Jun 2010 16:18] Jimmy Yang
Sveta, thanks, I am able to reproduce it.

Initial analysis shows that in row_merge_sort(), row_merge() is called for just one round. We have about 39 blocks of records, each block containing 10082 records. Since we did only one round of merging in row_merge() before exiting, the subsequent index build with row_merge_insert_index_tuples() inserted only 2 x 10082 = 20164 records.
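A quick arithmetic check of these figures, sketched in Python (using only the counts quoted above):

```python
records_per_block = 10082   # records in one sorted block, per the analysis above
blocks_merged = 2           # only one merge round actually combined two blocks
merged = blocks_merged * records_per_block
print(merged)  # 20164 -- the entry count CHECK TABLE reported, instead of 375000
```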

In row_merge(), ohalf is supposed to be half the size of the output file; however, in our case it is not:

Breakpoint 47, row_merge (trx=0x9015a28, index=0x9012f18, file=0x901f420, 
    half=0xa99c40fc, block=0xa7961000, tmpfd=0xa99c4138, table=0x9013e78)
    at row/row0merge.c:1670
1670		*half = ohalf;
8: *half = 19
(gdb) p file->offset 
$230 = 19

This shows that ohalf is equal to the size of the whole output file, not half of it.

This directly results in breaking out of the merge loop in row_merge_sort() after a single pass.

More investigation under way.
[10 Jun 2010 10:02] Jimmy Yang
More information for this bug:

The row_merge_block_t buffer is 1048576 bytes; we have to write out the sort result after a run once the data exceeds this size. This means that as merging generates longer runs, the number of offsets (one offset being a merge-sort write unit) can stay the same while each run spans multiple offsets, and the merge needs to make sure the right offsets are paired together for the next merge pass.

In our case, the index record size is 19 bytes. Initially we have 38 offsets, each holding approximately 10082 records (which amounts to 191558 bytes, less than the 1048576-byte block).

After the first merge, we generate 19 offsets, each with 20164 records occupying 383116 bytes, still less than 1048576.

After the second merge, it generates 10 offsets, each with 40328 records taking 766232 bytes.
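This progression can be sketched numerically (a Python sketch using the figures quoted here; it assumes uniform runs, whereas the real trailing run is shorter):

```python
BLOCK_SIZE = 1048576    # bytes in one row_merge_block_t (one write unit / offset)
REC_SIZE = 19           # index record size observed for this dataset
runs, recs = 38, 10082  # initial sorted runs and records per run

progression = []
while recs * REC_SIZE <= BLOCK_SIZE:    # a run still fits in one offset
    runs = (runs + 1) // 2              # each pass merges runs pairwise
    recs *= 2
    progression.append((runs, recs, recs * REC_SIZE))

for p in progression:
    print(p)
# (19, 20164, 383116)
# (10, 40328, 766232)
# (5, 80656, 1532464)  <- a run now exceeds one block, so it spans two offsets
```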

In the third merge, each result run requires two offsets, since one block cannot hold 2 x 766232 bytes of data. So we have the following runs:
run        0     1        2      3     4
offsets (0, 1), (2, 3), (4, 5) (6, 7) (8, 9)

Each run has two offsets. The next merge is done with run 0 and run 2, run 1 and run 3, with run 4 left over. So from an offset standpoint, we need to pair offset 0 with offset 4, offset 2 with offset 6, etc.

In the fourth merge, we generate three runs, the first two with three offsets each:
run        0         1         2     
offsets (0, 1, 2), (3, 4, 5), (6)

Again we need to pair offset 0 with offset 3 for the next merge.
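The pairing rule in the two examples above can be sketched with a hypothetical helper (not InnoDB code): given the first offset of each run, run i is merged with run i + n//2, and an odd leftover run is carried to the next pass:

```python
def pair_run_offsets(run_starts):
    """Pair the start offset of run i with that of run i + n//2;
    an odd leftover run is carried to the next pass unmerged."""
    half = len(run_starts) // 2
    pairs = [(run_starts[i], run_starts[i + half]) for i in range(half)]
    if len(run_starts) % 2:
        pairs.append((run_starts[-1], None))  # carried over, no partner
    return pairs

# Five runs of two offsets each (the third merge above):
print(pair_run_offsets([0, 2, 4, 6, 8]))  # [(0, 4), (2, 6), (8, None)]
# Three runs after the fourth merge:
print(pair_run_offsets([0, 3, 6]))        # [(0, 3), (6, None)]
```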

So to fix this, we add a counter num_runs and an array run_offset[] that records the first offset of each run. Merging continues until num_runs becomes 1.
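A language-neutral sketch of that control flow (in Python, not the actual C patch): merge run i with run i + num_runs//2 on each pass, carry an odd run over, and loop until a single run remains:

```python
def merge_two(a, b):
    """Standard two-way merge of sorted lists."""
    out, i, j = [], 0, 0
    while i < len(a) and j < len(b):
        if a[i] <= b[j]:
            out.append(a[i]); i += 1
        else:
            out.append(b[j]); j += 1
    return out + a[i:] + b[j:]

def merge_sort_runs(runs):
    """Merge sorted runs pairwise until num_runs == 1, mirroring the
    fixed loop structure described above (sketch, not InnoDB code)."""
    num_runs = len(runs)
    while num_runs > 1:
        half = num_runs // 2
        nxt = [merge_two(runs[i], runs[i + half]) for i in range(half)]
        if num_runs % 2:            # odd run is carried to the next pass
            nxt.append(runs[-1])
        runs = nxt
        num_runs = len(runs)
    return runs[0]

print(merge_sort_runs([[1, 4], [2, 5], [0, 3], [6], [7, 8]]))
# [0, 1, 2, 3, 4, 5, 6, 7, 8]
```

The old code stopped after a fixed notion of "half the file"; looping on num_runs instead guarantees every record reaches the final run.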
[17 Jun 2010 9:46] Jimmy Yang
Fix checked in mysql-5.1-innodb:
3512 Jimmy Yang        2010-06-16
      Fix Bug #54330 Broken fast index creation. Add additional array to
      account for each merge run's start offset, so correct offsets are
      paired up for multiple merge runs.
[21 Jun 2010 2:24] Jimmy Yang
Paul, this is pushed to the mysql-5.1-innodb codeline, and will be in the next 5.1 plugin release.
[23 Jun 2010 9:51] Shane Bester
Bug #54721 is a duplicate of this.
[19 Jul 2010 14:34] Bugs System
Pushed into 5.1.49 (revid:build@mysql.com-20100719143034-omcma40sblwmay3x) (version source revid:vasil.dimov@oracle.com-20100704071244-3lo4okzels3kvy1p) (merge vers: 5.1.49) (pib:16)
[20 Jul 2010 23:05] John Russell
Added to the 5.1.49 change log:

Fast index creation could fail, leaving the new secondary index corrupted.
[23 Jul 2010 12:24] Bugs System
Pushed into mysql-trunk 5.5.6-m3 (revid:alik@sun.com-20100723121820-jryu2fuw3pc53q9w) (version source revid:alik@sun.com-20100723121820-jryu2fuw3pc53q9w) (merge vers: 5.5.6-m3) (pib:18)
[23 Jul 2010 12:31] Bugs System
Pushed into mysql-next-mr (revid:alik@sun.com-20100723121929-90e9zemk3jkr2ocy) (version source revid:alik@sun.com-20100723121827-3bsh51m5sj6g4oma) (pib:18)
[4 Aug 2010 23:01] Paul Dubois
Noted in 5.5.6 changelog.
[26 Aug 2010 21:42] James Day
Please update the changelog text to say that this applies to the "InnoDB plugin". This is to help those who may or may not be affected quickly determine whether it applies to them.
[14 Sep 2010 18:24] John Russell
Noted that the fast index creation fix applies to the InnoDB Plugin.
[14 Oct 2010 8:30] Bugs System
Pushed into mysql-5.1-telco-7.0 5.1.51-ndb-7.0.20 (revid:martin.skold@mysql.com-20101014082627-jrmy9xbfbtrebw3c) (version source revid:martin.skold@mysql.com-20101014082627-jrmy9xbfbtrebw3c) (merge vers: 5.1.51-ndb-7.0.20) (pib:21)
[14 Oct 2010 8:45] Bugs System
Pushed into mysql-5.1-telco-6.3 5.1.51-ndb-6.3.39 (revid:martin.skold@mysql.com-20101014083757-5qo48b86d69zjvzj) (version source revid:martin.skold@mysql.com-20101014083757-5qo48b86d69zjvzj) (merge vers: 5.1.51-ndb-6.3.39) (pib:21)
[14 Oct 2010 9:00] Bugs System
Pushed into mysql-5.1-telco-6.2 5.1.51-ndb-6.2.19 (revid:martin.skold@mysql.com-20101014084420-y54ecj85j5we27oa) (version source revid:martin.skold@mysql.com-20101014084420-y54ecj85j5we27oa) (merge vers: 5.1.51-ndb-6.2.19) (pib:21)
[15 Oct 2010 10:38] Jon Stephens
Already documented in the 5.1.49 changelog. No new changelog entries required. Setting back to Closed.
[25 Feb 2012 8:07] Laurynas Biveinis
Just for the record, if anyone needs a short testcase for this:


       bar BIGINT(20)

SET @old_autocommit=@@AUTOCOMMIT;
let $1= 515641;
while ($1)
{
  eval INSERT INTO t1 (bar) VALUES (NULL);
  dec $1;
}
let $1= 2031;
while ($1)
{
  eval INSERT INTO t1 (bar) VALUES ($1);
  dec $1;
}
SET AUTOCOMMIT=@old_autocommit;



# With the bug present this will differ from the SELECT above!

[1 Apr 2014 16:24] Daniel Price
Due to a null merge, the patch was not applied to 5.5.6 as previously documented. For 5.5, the patch first appears in the 5.5.23 release. The following changelog entry now appears in the 5.1.49 and 5.5.23 release notes:

"Fast index creation in the InnoDB Plugin could fail, leaving the new secondary index corrupted."