Bug #1094 Empty MYI file after insert records
Submitted: 19 Aug 2003 6:21 Modified: 22 Aug 2003 4:05
Reporter: Seokhee Kim Email Updates:
Status: Can't repeat Impact on me:
Category:MySQL Server: MyISAM storage engine Severity:S1 (Critical)
Version:4.0.14 OS:Linux (Linux)
Assigned to: CPU Architecture:Any

[19 Aug 2003 6:21] Seokhee Kim
After creating tables that have indexes, I insert a large amount of data (roughly
30M records, less than 1G in size) and repeat this.
Sometimes the mysql server does not write anything into the index file,
so the tables appear to have lost data. With myisamchk I could recover the index,
but that is not an acceptable approach for online applications. Finally, I added
"flush tables" after "create table"; so far so good.
I am using SMP servers with different Linux kernel versions; the problem is the
same on both.


How to repeat:
After creating tables that have indexes, insert a large amount of data (roughly
30M records, less than 1G in size).
Repeat this.
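The reported sequence could be sketched as follows (the table name, column, and
load mechanism are illustrative, not taken from the report):

```sql
-- Hedged sketch of the reported sequence; names are illustrative.
CREATE TABLE t (id INT NOT NULL, KEY (id)) TYPE=MyISAM;

-- Bulk-insert a large number of rows; LOAD DATA stands in for the
-- submitter's actual insert path.
LOAD DATA INFILE '/tmp/rows.txt' INTO TABLE t;

-- Submitter's workaround: force cached index blocks out to the .MYI file.
FLUSH TABLES;
```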
[20 Aug 2003 5:25] Alexander Keremidarski
Not enough information was provided for us to be able
to handle this bug. Please re-read the instructions at

If you can provide more information, feel free to add it
to this bug and change the status back to 'Open'.

Thank you for your interest in MySQL.

Please provide repeatable test case.
[22 Aug 2003 4:05] Michael Widenius
Please send us the output for the following commands in the mysql client:

show variables like "delay_key_write";
show create table table_name_that_causes_you_problems;

Are you running a MySQL binary that you got from www.mysql.com?
If not, please verify whether you can get the same error with that binary.

As this is a repeatable thing, can you produce a short perl/php/C program that reproduces this problem?

[22 Aug 2003 6:46] Seokhee Kim
Here is more information.

* show variables like "delay_key_write";
+-----------------+-------+
| Variable_name   | Value |
+-----------------+-------+
| delay_key_write | ON    |
+-----------------+-------+
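With delay_key_write ON, MyISAM buffers index updates in the key cache and writes
them to the .MYI file lazily, which is consistent with the index file looking
empty until a flush. A hedged sketch of ways to force index blocks to disk
(the table name is taken from the report; this is a workaround sketch, not the
developers' recommendation):

```sql
-- Disable delayed key writes for this table only, so index blocks
-- are written out as statements complete:
ALTER TABLE sales_data_2002_1 DELAY_KEY_WRITE = 0;

-- Or flush the key cache for just this table explicitly:
FLUSH TABLES sales_data_2002_1;
```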

* The create query is (I renamed the field names for security reasons):
CREATE TABLE sales_data_2002_1 (
  store_id int(11) NOT NULL default '0',
  month_year date NOT NULL default '2002-01-01',
  din bigint(20) unsigned NOT NULL default '0',
  aaa mediumint(8) unsigned default NULL,
  bbb mediumint(8) unsigned default NULL,
  ccc float default NULL,
  ddd mediumint(8) unsigned default NULL,
  eee mediumint(8) unsigned default NULL,
  fff float default NULL,
  ggg int(11) default NULL,
  hhh int(11) default NULL,
  iii float default NULL,
  jjj mediumint(8) unsigned default NULL,
  kkk mediumint(8) unsigned default NULL,
  lll float default NULL,
  mmm mediumint(8) unsigned default NULL,
  nnn mediumint(8) unsigned default NULL,
  ooo float default NULL,
  ppp int(11) default NULL,
  qqq int(11) default NULL,
  rrr float default NULL,
  KEY store_id (store_id),
  KEY din (din)
) TYPE=MyISAM data directory="/DATA/sales";

* The index size is 62M, the data is 244M, and the number of records is 3.2M.
I created 17 more tables with different table names and the same schema,
sequentially; the same problem occurred with them.
I am really sorry that I can't provide the data, which is very confidential.
(My boss said so.)

* I used my own compiled version on two different Linux kernel versions, but
both machines are SMP. I haven't tested the binary version yet. Do I have to
do that?
Can't I compile the stable server on my own machine with the proper compile
options? If so, even if there is some overhead on my server, I will try it.

*Configuration info
CFLAGS="-O3 -mcpu=pentium4" CXX=gcc CXXFLAGS="-O3 -mcpu=pentium4 -felide-constructors -fno-exceptions -fno-rtti" ./configure --prefix=/www/mysql --enable-assembler --with-mysqld-ldflags=-all-static --without-innodb --with-embedded-server

*compiler info
* server 1 => gcc -v
Reading specs from /usr/lib/gcc-lib/i386-redhat-linux/3.2.2/specs
Configured with: ../configure --prefix=/usr --mandir=/usr/share/man --infodir=/usr/share/info --enable-shared --enable-threads=posix --disable-checking --with-system-zlib --enable-__cxa_atexit --host=i386-redhat-linux
Thread model: posix
gcc version 3.2.2 20030222 (Red Hat Linux 3.2.2-5)

*server 2 => gcc -v
Reading specs from /usr/lib/gcc-lib/i386-redhat-linux/2.96/specs
gcc version 2.96 20000731 (Red Hat Linux 7.3 2.96-110)

* I first had this problem in PHP code, so I tested it manually at the mysql
prompt. It showed the same problem.

** To repeat this situation, I have to stop the servers for a minimum of 2
hours, because the data is big, and I also have to keep the servers alive.
Anyway, I solved(?) this problem with "flush tables".

If you need more information, please let me know.