Bug #44657 Got error 124 from storage engine when load concurrent data infile working
Submitted: 5 May 2009 7:14 Modified: 21 Aug 2009 14:46
Reporter: Igor Simonov Email Updates:
Status: Duplicate Impact on me: None
Category: MySQL Server: Partitions Severity: S1 (Critical)
Version: 5.1.30, 5.1.34, 5.1.36 OS: Any (Linux, Windows)
Assigned to: Assigned Account CPU Architecture: Any
Tags: corruption

[5 May 2009 7:14] Igor Simonov
Description:
Any SQL query returns error 124 while a "load data concurrent infile" statement is running

How to repeat:
creating the table:

create table TABLE (
  `time` int(4) unsigned not null,
  `describe` varchar(18) not null,
  addit char(12) default null,
  key ix_describe(`describe`),
  key ix_addit(addit)
) engine=MyISAM
partition by range (`time`)
(partition p0 values less than (600) engine = MyISAM,
 partition p1 values less than (1200) engine = MyISAM,
 partition p2 values less than (1800) engine = MyISAM,
 partition p3 values less than (2400) engine = MyISAM);

loading the data:

load data concurrent infile '/home/igor/data.txt' into table database.TABLE;

While the data is loading, any simple query such as "select * from database.TABLE" will return error 124.
[5 May 2009 8:13] Igor Simonov
the query must be something like

select * from TABLE where describe='something';
the error is returned when the "load concurrent" starts creating indexes

a query like "select * from table limit 10;" works fine.
[5 May 2009 8:36] Sveta Smirnova
Thank you for the report.

I cannot repeat the described behavior with test data. Please provide your error log file.
[5 May 2009 8:38] Sveta Smirnova
Please also specify your operating system.
[6 May 2009 12:41] Igor Simonov
create table and load data

Attachment: create.sql (application/octet-stream, text), 706 bytes.

[6 May 2009 12:42] Igor Simonov
error file

Attachment: mysql.local.err (application/octet-stream, text), 1.44 KiB.

[6 May 2009 12:43] Igor Simonov
mysqladmin variables

Attachment: mysqladmin.vari (application/octet-stream, text), 19.73 KiB.

[6 May 2009 12:43] Igor Simonov
configuration file

Attachment: my.cnf (application/octet-stream, text), 632 bytes.

[6 May 2009 12:44] Igor Simonov
mysql console

Attachment: screen.txt (text/plain), 4.65 KiB.

[6 May 2009 13:02] Igor Simonov
Tested on Linux RHEL AS 4.6 x64 and CentOS 5.3 x64.
I cannot upload the data file because it is over 5 MB in size.
The file is a tcpdump log converted by a Perl script.

#!/usr/bin/perl
# parse tcpdump output lines and print HHMM#srcip#srcport#dstip#dstport#usec
while (<STDIN>) {
    $stri = $_;
    $stri =~ m/^(\d{2}):(\d{2}):\d{2}\.(\d{6}) IP (\d{1,3}\.\d{1,3}\.\d{1,3}\.\d{1,3})\.?(\d{1,5})? \> (\d{1,3}\.\d{1,3}\.\d{1,3}\.\d{1,3})\.?(\d{1,5})?:.*/;
    print "$1$2#$4#$5#$6#$7#$3\n";
}

# tcpdump -ieth0 -n | ./tcpconv > rez.txt

The total data should be a minimum of 2 million records, depending on server performance.
[9 Jun 2009 11:55] MySQL Verification Team
I can repeat this quite easily:

090609 13:54:34 [ERROR] Got error 124 when reading table '.\test\ip20090101'
090609 13:54:34 [ERROR] Got error 124 when reading table '.\test\ip20090101'
090609 13:54:35 [ERROR] Got error 124 when reading table '.\test\ip20090101'
090609 13:54:35 [ERROR] Got error 124 when reading table '.\test\ip20090101'
090609 13:54:36 [ERROR] Got error 124 when reading table '.\test\ip20090101'
090609 13:54:36 [ERROR] Got error 124 when reading table '.\test\ip20090101'

I will make a testcase shortly.
[9 Jun 2009 13:14] MySQL Verification Team
.c testcase. Set user/host/password at the top of the .c file. First place the file "t1.txt" into the <datadir>/data/test directory.

Attachment: bug44657.c (text/x-csrc), 6.79 KiB.
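
A minimal sketch of what such a check can look like (this is not the attached bug44657.c; connection parameters and the file name are placeholders, and the table/query follow the verification steps further down, i.e. table test.t1 with an index on addr): one session runs the load data concurrent infile, while this client keeps issuing an index-using select and prints every failure, e.g. "Got error 124 from storage engine".

/* hypothetical poller, not the attached testcase; build against
   libmysqlclient, e.g.: gcc poll124.c $(mysql_config --cflags --libs) */
#include <stdio.h>
#include <mysql.h>

int main(void)
{
  MYSQL *con;
  int i;

  con= mysql_init(NULL);
  if (!con || !mysql_real_connect(con, "localhost", "root", "", "test", 0, NULL, 0))
  {
    fprintf(stderr, "connect failed: %s\n", con ? mysql_error(con) : "init failed");
    return 1;
  }
  for (i= 0; i < 100000; i++)
  {
    /* an index-using query; a plain full scan with LIMIT does not hit the bug */
    if (mysql_query(con, "select * from t1 where addr = '127.0.0.1' limit 10"))
      fprintf(stderr, "query failed (%u): %s\n", mysql_errno(con), mysql_error(con));
    else
    {
      MYSQL_RES *res= mysql_store_result(con);
      if (res)
        mysql_free_result(res);
    }
  }
  mysql_close(con);
  return 0;
}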

[9 Jun 2009 13:15] MySQL Verification Team
t1.txt for load data infile to use.

Attachment: t1.zip (application/x-zip-compressed, text), 243.49 KiB.

[10 Jun 2009 6:24] MySQL Verification Team
Mattias, can you tell if this bug also affects normal concurrent inserts (concurrent_insert=1 or 2)? I couldn't repeat it that way yet, but I just want to know if it's limited only to load data infile ...
[10 Aug 2009 22:56] Roel Van de Paar
Also see bug #46639
[18 Aug 2009 7:54] Mattias Jonsson
May be a duplicate of bug#46639, which has a simpler testcase.
[19 Aug 2009 3:57] Roel Van de Paar
Customer indicates that they see both these bugs (bug #44657 and bug #46639) in very different scenarios. Hence, this bug should not be marked as a duplicate, or, if there is indeed one fix for both bugs, both should be verified as working correctly.
[21 Aug 2009 14:46] Mattias Jonsson
Was finally able to verify this by:

preparing a file with this Perl script:

# emit random rows: an HHMM time value, an IP address and a port number
while (1)
{
  printf("%2d%02d\t%d.%d.%d.%d\t%d\n", rand(24), rand(60), rand(255) + 1, rand(256), rand(256), rand(256), rand(65000));
}

writing about 1-2 million rows into the file 't1.txt' in the data/test directory.
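
Purely as an illustration, a roughly equivalent generator sketched in C (hypothetical, not part of the original verification; it writes the same tab-separated time/addr/addit layout to 't1.txt'):

/* hypothetical generator, mirrors the Perl snippet above */
#include <stdio.h>
#include <stdlib.h>
#include <time.h>

int main(void)
{
  long i;
  FILE *f= fopen("t1.txt", "w");

  if (!f)
    return 1;
  srand((unsigned) time(NULL));
  for (i= 0; i < 2000000; i++)               /* roughly 2 million rows */
    fprintf(f, "%2d%02d\t%d.%d.%d.%d\t%d\n",
            rand() % 24, rand() % 60,
            rand() % 255 + 1, rand() % 256, rand() % 256, rand() % 256,
            rand() % 65000);
  fclose(f);
  return 0;
}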

running on client 1:
create table t1 (
  time int(4) unsigned not null,
  addr varchar(18) not null,
  addit char(12) default null,
  key ix_addr(addr),
  key ix_addit(addit)
) engine=MyISAM
partition by range (time)
(partition p0 values less than (600) engine = MyISAM,
 partition p1 values less than (1200) engine = MyISAM,
 partition p2 values less than (1800) engine = MyISAM,
 partition p3 values less than (2400) engine = MyISAM);

load data concurrent infile 't1.txt' into table t1;

While the load data statement is running, repeat the following query in another client:
select * from t1 where addr = '127.0.0.1' limit 10;

and it will fail as described.

This was done in the latest mysql-5.1-bugteam tree.

The patch for bug#46639 solves this too.

=== modified file 'storage/myisam/mi_search.c'
--- storage/myisam/mi_search.c	2009-02-13 16:41:47 +0000
+++ storage/myisam/mi_search.c	2009-08-21 14:39:33 +0000
@@ -28,11 +28,18 @@
 {
   if (inx == -1)                        /* Use last index */
     inx=info->lastinx;
-  if (inx < 0 || ! mi_is_key_active(info->s->state.key_map, inx))
+  if (inx < 0)
   {
     my_errno=HA_ERR_WRONG_INDEX;
     return -1;
   }
+  if (!mi_is_key_active(info->s->state.key_map, inx))
+  {
+    my_errno= info->s->state.state.records ? HA_ERR_WRONG_INDEX :
+                                             HA_ERR_END_OF_FILE;
+    return -1;
+  }
+
   if (info->lastinx != inx)             /* Index changed */
   {
     info->lastinx = inx;

Closing as a duplicate of bug#46639.