Bug #4479 | all nodes in cluster crash from 'load data infile' with large insert statement | ||
---|---|---|---|
Submitted: | 9 Jul 2004 0:38 | Modified: | 30 Jul 2004 16:14 |
Reporter: | Devananda v | Email Updates: | |
Status: | Closed | Impact on me: | |
Category: | MySQL Cluster: Cluster (NDB) storage engine | Severity: | S2 (Serious) |
Version: | 4.1.3 | OS: | Linux (Mandrake 9.2) |
Assigned to: | Magnus Blåudd | CPU Architecture: | Any |
[9 Jul 2004 0:38]
Devananda v
[9 Jul 2004 0:43]
Devananda v
NDB_Trace
Attachment: NDB_TraceFile_3_cropped.trace (application/octet-stream, text), 142.42 KiB.
[16 Jul 2004 11:25]
Magnus Blåudd
Hi, that's a really large insert. :) As a workaround I would suggest dividing the INSERT into smaller pieces; the recommended number of VALUES per statement is approximately 1024. If that does not work for you, could you please upload a test script to our upload area (ftp://ftp.mysql.com/pub/mysql/upload/) so that we can reproduce the problem? Please include the table creation script as well as the data you are trying to insert. There is no need to upload the trace file, since we know from your bug report where the crash occurred. Thanks!
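The batching workaround suggested here could be sketched as follows; the helper names and row format are hypothetical, only the batch size of roughly 1024 VALUES per statement comes from the comment above.

```python
def chunked(rows, batch_size=1024):
    """Yield successive batches of at most batch_size rows."""
    for i in range(0, len(rows), batch_size):
        yield rows[i:i + batch_size]

def batched_inserts(table, rows, batch_size=1024):
    """Build one moderate-sized INSERT per batch instead of a single huge one."""
    for batch in chunked(rows, batch_size):
        values = ", ".join(str(row) for row in batch)
        yield f"INSERT INTO {table} VALUES {values};"
```

Each generated statement can then be sent to the server separately, keeping every individual INSERT well under the size that triggered the crash.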
[16 Jul 2004 17:26]
Devananda v
Hi! :) I have written a little script to extract the values from our dump file into a '1 value per line' file, and another script to create arbitrary-length insert statements from that file. So far this has worked fine with 1,000 values per insert statement. I have also been successful in splitting the dump file into as many pieces as I have API nodes and running the script simultaneously on all servers. The limit at this point is simply how much RAM I have (and the DataMemory and IndexMemory settings)! This is a rather long process to have to go through, and I'd hate to have had to learn all this while trying to restore lost data in a timely way! I'd be happy to post those scripts if they would be helpful to anyone else. If I may make a suggestion, perhaps one solution is to add an option to mysqldump that specifies the number of values per insert statement generated ;) Devananda
[17 Jul 2004 19:50]
Mikael Ronström
I have now committed a test case that reproduces this bug (mikael:1.1981).
[30 Jul 2004 16:14]
Mikael Ronström
Bug fixed and patch pushed to 4.1, together with a test program. Later, an improvement will be made to the fix that decreases the amount of log information.