Description:
This may be a documentation bug, a feature request, or a real bug; I'm not sure which.
On an insert, I got: "ERROR 1030: Got error 136 from table handler." (perror 136 reports "No more room in index file".)
Research indicated that the insert would exceed the Max keyfile length:
"/usr/local/mysql/bin/myisamchk -divv /data/mysql/irt/inv0" yields, among other things:
Data records:              408234135   Deleted blocks:               0
Datafile parts:            408234135   Deleted data:                 0
Datafile pointer (bytes):          4   Keyfile pointer (bytes):      3
Datafile length:          7756448565   Keyfile length:     17179835392
Max datafile length:     81604378622   Max keyfile length: 17179868159
Recordlength:                     19
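If I'm reading these numbers right, the limits look like they fall straight out of the pointer sizes (my own back-of-the-envelope arithmetic, assuming MyISAM's 1024-byte index blocks; none of this is from the manual):

    Keyfile pointer = 3 bytes, addressing 1024-byte index blocks:
        2^(3*8) * 1024 = 17179869184 ~= Max keyfile length (17179868159)
    Datafile pointer = 4 bytes, addressing 19-byte fixed-length records:
        2^(4*8) * 19   = 81604378624 ~= Max datafile length (81604378622)

Note also that Keyfile length is only 32KB shy of the max (17179868159 - 17179835392 = 32767), which squares with this particular insert being the one to push it over.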
In my research, I saw people reporting myisamchk output with substantially different numbers for Max keyfile length (smaller by one or two orders of magnitude), so it seems to be at least somewhat dependent on configuration or installation.
The questions are:
1. What determines Max keyfile length?
2. Can it be set? How?
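The closest thing I could find is the MAX_ROWS / AVG_ROW_LENGTH pair of table options, which the manual documents as influencing the datafile pointer size; whether they have any effect on the keyfile side is exactly what I can't determine. Untested speculation on my part:

    alter table inv0 max_rows=1000000000 avg_row_length=19;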
I'd recommend that, no matter what the answer is, something about Max keyfile length be added to the manual under "1.2.4 How Big Can MySQL Tables Be?"
How to repeat:
This is 4.0.15 on a dual-Xeon box with 12GB of RAM and mysql's ./data on a TB RAID. The kernel is RedHat's "2.4.20-20.9bigmem #1 SMP"; large-file support is enabled. Built from source:
CXX=gcc ./configure --prefix=/usr/local/mysql --with-gnu-ld --with-extra-charsets=complex --enable-thread-safe-client --enable-local-infile
my.cnf was modified from the "huge" example to add a lot more headroom for sorting etc.
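For concreteness, the changed settings were of this general shape (illustrative values only, not my exact file):

    [mysqld]
    key_buffer = 1024M              # my-huge.cnf ships with 384M; illustrative
    sort_buffer = 64M               # illustrative
    myisam_sort_buffer_size = 1024M # illustrative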
The database is irt; it has one table, inv0:
create table inv0 (
    docid            int unsigned not null,
    offset           smallint unsigned not null,
    termid           int unsigned not null,
    taglistid        smallint unsigned,
    whichpara        smallint unsigned,
    weight_in_subdoc float,
    primary key (docid, offset)
);
create index termid_docid_para_index on inv0(termid,docid,whichpara);
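For scale, here's my rough per-row key arithmetic from the column types (assuming I have MyISAM's entry layout roughly right: key bytes, plus a null flag for nullable columns, plus the 4-byte datafile pointer per entry):

    primary key entry:  4 (docid) + 2 (offset) + 4 (pointer)               = 10 bytes
    secondary entry:    4 (termid) + 4 (docid) + 2+1 (whichpara) + 4 (ptr) = 15 bytes
    per row:                                                                ~25 bytes
    408234135 rows * 25 bytes                                              ~= 10.2GB

of raw key material, before B-tree fill factor and per-block overhead -- so a 17GB keyfile for this table is at least in a plausible range.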
Insertions look like this:
"insert into inv0 (docid, offset, termid, whichpara, taglistid, weight_in_subdoc) values(1009823, 93, 102635, 21, 1373, 0.544729)"
Interaction is via MySQL++: just insert 408M or so records and the error starts (it took about a day of wall-clock time to break). There was no other load on the server. This is for a text-retrieval application.
Suggested fix:
Although I don't know the source of the limitation, it's clearly related to table size, so a documentation fix is in order.
If it's settable, it could show up in "show global variables" as max_keyfile_length or some such, or as a template item in /etc/my.cnf. Maybe it's settable per table. I looked extensively, even doing some "grep"s on the source code, but couldn't find any way of setting it.
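For what it's worth, queries along these lines turn up the various buffer sizes but nothing resembling a keyfile limit:

    show global variables like '%key%';
    show global variables like '%myisam%';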
BTW, I previously tried to create the index AFTER the table was populated, but the "create index" never finished (after days it looked like it was thrashing). Maybe this limitation is related, just without producing the error message. Dunno; that's for another bug report.