Bug #10161 Error 136 (Index file is full) when the index file does not actually appear to be full
Submitted: 25 Apr 2005 21:12    Modified: 9 Sep 2005 14:42
Reporter: Scott Nebor
Status: Can't repeat            Impact on me: None
Category: MySQL Server          Severity: S1 (Critical)
Version: 4.1.11                 OS: Linux (Red Hat 9)
Assigned to:                    CPU Architecture: Any

[25 Apr 2005 21:12] Scott Nebor
Description:
I have the following table definition
CREATE TABLE TransactionDetail (
  ID bigint(20) unsigned NOT NULL auto_increment,
  TransactionID bigint(20) unsigned NOT NULL default '0',
  CorrectedTransactionDetailID bigint(20) unsigned default NULL,
  State enum('Visible','Hidden') NOT NULL default 'Visible',
  OfferID int(10) unsigned NOT NULL default '0',
  PointsIssued int(10) NOT NULL default '0',
  OfferOccurrence int(10) NOT NULL default '0',
  PRIMARY KEY  (ID),
  KEY TransactionIDIndex (TransactionID),
  KEY OfferIDIndex (OfferID)
) TYPE=MyISAM;

I find that once the table reaches around 1.4 to 1.5 million rows, I get the following error when trying to issue inserts or updates:
Errno=1030 -> Got error 136 from storage engine

How to repeat:
Create the above table
Run the following query many times:
INSERT INTO TransactionDetail SET TransactionID=(rand()*1000000), OfferID=(rand()*10),PointsIssued=1, OfferOccurrence=1;
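
To reach that many rows more quickly, a doubling INSERT ... SELECT should also work (this is only a sketch of the idea): after the INSERT above has been run once, each pass of the statement below doubles the row count, so roughly 21 passes exceed two million rows.

-- Doubles the number of rows on each execution; ID is auto_increment
-- and the remaining columns take their defaults.
INSERT INTO TransactionDetail (TransactionID, OfferID, PointsIssued, OfferOccurrence)
SELECT (rand()*1000000), (rand()*10), 1, 1 FROM TransactionDetail;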

I have seen this problem in two instances: once on our production server and once on a test server.  In the first case, we started seeing errors when we hit about 1.45 million records; in the second case, at around 1.32 million records.  For the moment, we have two workarounds:
1) Changing the bigint columns to ints
2) Running ALTER TABLE TransactionDetail MAX_ROWS=1000000000 AVG_ROW_LENGTH=38 (see the sketch below)
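
The sketch below is just one way to see what the second workaround changes, assuming the limit comes from the table's internal pointer size; SHOW TABLE STATUS reports the resulting ceiling in its Max_data_length column.

-- Before the ALTER: note Row_format and Max_data_length.
SHOW TABLE STATUS LIKE 'TransactionDetail';

-- Workaround 2: rebuild the table with larger internal pointers,
-- sized from MAX_ROWS * AVG_ROW_LENGTH.
ALTER TABLE TransactionDetail MAX_ROWS=1000000000 AVG_ROW_LENGTH=38;

-- After the ALTER, Max_data_length should be much larger.
SHOW TABLE STATUS LIKE 'TransactionDetail';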

Suggested fix:
Both the index file and the data file were around 50-60 MB at the time of failure, so there is no reason the storage engine should report that the index file is full.  Our system has many other tables that are much larger and have more rows than this one, and we don't see the same behavior there.  The only difference between those tables and this one is that this table uses the fixed row format while the rest use dynamic.
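
If the fixed row format is the relevant difference, one way to test that theory would be the sketch below; it assumes MyISAM honors a requested ROW_FORMAT=DYNAMIC even though all columns here are fixed-length, and that rebuilding the table this way is acceptable.

-- Row_format should read 'Fixed' for this table and 'Dynamic' for the
-- larger tables that do not show the problem.
SHOW TABLE STATUS LIKE 'TransactionDetail';

-- Rebuild with variable-length rows; if error 136 no longer appears
-- around 1.4 million rows, the fixed format is implicated.
ALTER TABLE TransactionDetail ROW_FORMAT=DYNAMIC;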
[25 Apr 2005 21:40] Scott Nebor
Correction: my first workaround (changing the bigint columns to ints) does not seem to fix the problem.  It only delays the problem until we reach a couple million rows.
[18 Jun 2005 17:17] David Sparks
I ran into the same bug with a very small table:

-rw-rw----  1 mysql mysql  19M Jun 18 10:16 employee_domains.MYD
-rw-rw----  1 mysql mysql  41M Jun 18 10:16 employee_domains.MYI
-rw-rw----  1 mysql mysql 8.5K Jun 18 10:14 employee_domains.frm
[9 Sep 2005 14:42] MySQL Verification Team
I was not able to repeat the behavior reported with the current
source server. I inserted 2 million records with the query you
reported:

Record No. 1999998
Record No. 1999999
Record No. 2000000
miguel@hegel:~/dbs/4.1> 

mysql> select count(*) from  TransactionDetail;
+----------+
| count(*) |
+----------+
|  2000000 |
+----------+
1 row in set (0.00 sec)

mysql> select version();
+------------------+
| version()        |
+------------------+
| 4.1.15-debug-log |
+------------------+
1 row in set (0.03 sec)

mysql>