Bug #10694 LOAD DATA FROM INFILE fails with 'Out of operation records'
Submitted: 17 May 2005 20:29 Modified: 24 Sep 2005 3:38
Reporter: Jonathan Miller Email Updates:
Status: Closed Impact on me:
Category:MySQL Cluster: Cluster (NDB) storage engine Severity:S2 (Serious)
Version:4.1 OS:Linux (Linux)
Assigned to: Tomas Ulin CPU Architecture:Any

[17 May 2005 20:29] Jonathan Miller
I had created a file with over 100,000 records and a table to load them into using the NDB engine. I tried to load them using "load data infile '/home/ndbdev/jmiller/test.txt' into table t6;" and got:
ERROR 1297 (HY000): Got temporary error 233 'Out of operation records in transaction coordinator (increase MaxNoOfConcurrentOperations)' from ndbcluster

I had the bank test running, so I stopped it and retried the load, and got the same message. According to our documents, I don't need to increase this value because we break this up into smaller transactions.
"In previous versions ALTER TABLE, TRUNCATE TABLE, and LOAD DATA were performed as one big transaction. In this version, all of these statements are automatically separated into several distinct transactions.
This removes the limitation that one could not change very large tables due to the MaxNoOfConcurrentOperations parameter." 

I increased it anyway for each data node, and still received the same error.
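For reference, this parameter is set in the [ndbd default] section of the cluster configuration file (config.ini) on the management server, and the data nodes must be restarted for a change to take effect. The value below is only an illustration of the kind of increase that was tried, not a recommended setting:

```ini
# config.ini (management server) -- illustrative value only
[ndbd default]
# Raising this from its default did not resolve the bug reported here.
MaxNoOfConcurrentOperations=262144
```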

How to repeat:
run this script to create the data file (the original omitted the loop initialization, the do/done keywords, and the definition of $filler; the filler value below is an arbitrary 5-character string matching the char(5) column):

i=0
filler="aaaaa"
while [ "$i" -lt 100000 ]
do
        let "i += 1"
        j=`expr $i + 1`

        echo $i $j $filler >> ./test.txt
done
create the table (the opening CREATE TABLE line was missing; the table name t6 is taken from the load statement below)

CREATE TABLE `t6` (
  `c1` int(11) NOT NULL default '0',
  `c2` int(11) default NULL,
  `c3` char(5) default NULL,
  PRIMARY KEY  (`c1`)
) ENGINE=ndbcluster DEFAULT CHARSET=latin1;
try to load
load data infile '/home/ndbdev/jmiller/test.txt' into table t6;
[17 May 2005 20:39] Jonathan Miller
I forgot to add that this is setup as a cluster with replication. I was trying to load the master cluster to ensure replication was done correctly on a LOAD DATA FROM INFILE.
[22 Aug 2005 10:39] Tomas Ulin
The bug has been present in 4.1 since late 2004.

One workaround is to issue "set ndb_use_transactions=0;"
prior to the "big" load statement and then reset the value to 1 afterward.

Another workaround is to first load the data into a MyISAM table and then alter the table to NDB.
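The two workarounds can be sketched as follows (table and file names are taken from the report above; the staging table name is hypothetical):

```sql
-- Workaround 1: disable NDB transactional batching for this session
SET ndb_use_transactions = 0;
LOAD DATA INFILE '/home/ndbdev/jmiller/test.txt' INTO TABLE t6;
SET ndb_use_transactions = 1;

-- Workaround 2: load into a MyISAM table first, then convert it to NDB
CREATE TABLE t6_staging (          -- hypothetical staging table,
  `c1` int(11) NOT NULL default '0',  -- same columns as t6 above
  `c2` int(11) default NULL,
  `c3` char(5) default NULL,
  PRIMARY KEY (`c1`)
) ENGINE=MyISAM;
LOAD DATA INFILE '/home/ndbdev/jmiller/test.txt' INTO TABLE t6_staging;
ALTER TABLE t6_staging ENGINE=ndbcluster;
```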
[20 Sep 2005 10:22] Bugs System
A patch for this bug has been committed. After review, it may
be pushed to the relevant source trees for release in the next
version. You can access the patch from:

[20 Sep 2005 11:10] Tomas Ulin
fixed in 4.1.15, 5.0.14
[24 Sep 2005 3:38] Paul DuBois
Noted in 4.1.15, 5.0.14 changelogs.