Bug #42818 Batching too much kills data nodes with temporary error
Submitted: 13 Feb 2009 9:15 Modified: 13 Feb 2009 9:20
Reporter: Geert Vanderkelen Email Updates:
Status: Verified Impact on me: None
Category: MySQL Cluster: NDB API Severity: S3 (Non-critical)
Version: mysql-5.1-telco-6.3 OS: Linux
Assigned to: CPU Architecture: Any
Tags: mysql-5.1.31-telco-6.3.22
Triage: Triaged: D2 (Serious) / R6 (Needs Assessment) / E6 (Needs Assessment)

[13 Feb 2009 9:15] Geert Vanderkelen
Description:
Using hugoLoad I'm inserting data, with the batch size set to 1999. It kills a data node with a temporary error. If the failed data node is not started again, the load eventually kills the other data node(s) as well.

(FYI, hugoLoad is a tool in the MySQL source: cd storage/ndb/test/ ; make)
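
For reference, the failing pattern boils down to defining many insert operations on one transaction and sending them in a single execute() call. Below is a minimal NDB API sketch of that pattern, not hugoLoad itself; the connect retry counts, column values, and error handling are illustrative, and it assumes the t1 table from "How to repeat" already exists in database test.

#include <NdbApi.hpp>
#include <cstdio>
#include <cstdlib>

int main()
{
  ndb_init();

  /* Connect string matches the repro below; retry counts are illustrative. */
  Ndb_cluster_connection conn("localhost:1406");
  if (conn.connect(4, 5, 1) != 0 || conn.wait_until_ready(30, 0) < 0)
    return EXIT_FAILURE;

  Ndb ndb(&conn, "test");
  if (ndb.init() != 0)
    return EXIT_FAILURE;

  const int batch = 1999;                 /* the batch size from the report */
  NdbTransaction *trans = ndb.startTransaction();
  if (trans == NULL)
    return EXIT_FAILURE;

  for (int i = 0; i < batch; i++) {
    /* One insert operation per row; nothing reaches the data nodes yet. */
    NdbOperation *op = trans->getNdbOperation("t1");
    if (op == NULL || op->insertTuple() != 0)
      return EXIT_FAILURE;
    op->equal("id", (Uint32)(i + 1));     /* primary key */
    op->setValue("i1", (Uint32)(i + 1));  /* unique key, keep values distinct */
    op->setValue("i3", (Uint32)(i + 1));
    op->setValue("hugo_counter", (Uint32)0);
  }

  /* All 1999 inserts travel to the data nodes in one round trip here;
     this is the step after which a data node died with error 4031. */
  if (trans->execute(NdbTransaction::Commit) != 0)
    fprintf(stderr, "ERROR: %d %s\n",
            trans->getNdbError().code, trans->getNdbError().message);

  ndb.closeTransaction(trans);
  ndb_end(0);
  return 0;
}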

How to repeat:
CREATE TABLE t1 (
 id INT UNSIGNED NOT NULL AUTO_INCREMENT,
 i1 INT,
 i2 INT,
 i3 INT,
 txt VARCHAR(60),
 txt2 VARCHAR(60),
 hugo_counter INT UNSIGNED NOT NULL,
 PRIMARY KEY (id),
 UNIQUE KEY (i1),
 UNIQUE KEY (i3,txt)
) ENGINE=ndbcluster;

shell> NDB_CONNECTSTRING=localhost:1406 time ./tools/hugoLoad -r 4000000 -b 1999 -d test t1
batch = 1999 rowsize = 188 -> rows/commit = 2722
|- Inserting records...
ERROR: 4031 Node failure caused abort of transaction
           Status: Temporary error, Classification: Node Recovery error
           File: HugoTransactions.cpp (Line: 665)
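
Since error 4031 is classified as temporary, an NDB API client can at least detect that the failure is retryable. A small sketch of that check; should_retry() is a hypothetical helper, only NdbError and its status field come from the API:

#include <NdbApi.hpp>

/* Returns true when a failed transaction reported a temporary error,
   e.g. 4031 "Node failure caused abort of transaction". */
static bool should_retry(const NdbTransaction *trans)
{
  const NdbError &err = trans->getNdbError();
  return err.status == NdbError::TemporaryError;
}

The caller would then close the failed transaction, back off briefly, and re-apply the whole batch on a fresh transaction. That only helps the application survive, though; it doesn't excuse the node dying in the first place.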

Suggested fix:
* Don't batch like I did.
* Might be a configuration issue? But then again, it would be great if an NDB API application couldn't kill data nodes (see the configuration sketch below).
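
On the configuration angle: the data node parameters that bound how many operation records a batch may occupy are MaxNoOfConcurrentOperations and MaxNoOfLocalOperations in config.ini. A sketch with illustrative values only, not a verified fix for this bug:

[ndbd default]
# Every row in an uncommitted batch holds an operation record on the
# data nodes; the default for MaxNoOfConcurrentOperations is 32768.
MaxNoOfConcurrentOperations=131072
MaxNoOfLocalOperations=144672
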
[13 Feb 2009 9:18] Geert Vanderkelen
Removed ndb_3_signal.log to make the package smaller

Attachment: ndb_error_report_20090213100711.tar.bz2 (application/x-bzip, text), 411.07 KiB.

[13 Feb 2009 9:20] Geert Vanderkelen
Verified using MySQL Cluster 6.3.20, 6.3.22 and 6.4bzr