Bug #63118 Mysql-cluster cannot ingest many rows
Submitted: 6 Nov 2011 16:01
Reporter: Philip Orleans
Status: Open
Category: MySQL Cluster: Disk Data
Severity: S1 (Critical)
Version: 7.1
OS: Any
Assigned to:
CPU Architecture: Any
Tags: cluster error 410 redo log overload

[6 Nov 2011 16:01] Philip Orleans
I have been trying for several days to load a 350-million-row table with only 3 fields: 2 BIGINT and one VARCHAR(4). No matter how you configure the MySQL Cluster NDB storage engine, it never ingests more than 3.x million rows at a time before breaking. The issue is impossible to overcome by any trick: even if you break the large file into smaller files, it loads a few of them and then breaks again with the same error: 410 redo log overloaded.
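For reference, error 410 is NDB's signal that the redo log fills faster than local checkpoints can trim it. A sketch of the config.ini parameters that size the redo log per data node (the values below are illustrative assumptions, not a verified fix; total redo capacity is 4 × NoOfFragmentLogFiles × FragmentLogFileSize):

```ini
# Sketch: enlarge the redo log in the [ndbd default] section.
# Defaults in this era were 16 files x 16M; values below are examples only.
[ndbd default]
NoOfFragmentLogFiles=300    # number of redo log files per log part
FragmentLogFileSize=256M    # size of each redo log file
RedoBuffer=64M              # in-memory buffer feeding the redo log
```

Enlarging the redo log only buys headroom; if inserts outrun checkpointing indefinitely, the load still needs to be throttled.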

I even tried loading the table as MyISAM and then converting the storage engine, but it fails with the same error after a minute or so.
I conclude that there is something seriously wrong in this software. My table is mostly read-only, and I need predictable response times. The server has 128 GB of RAM and the best SAS drives that Dell sells. I need a way to load the table into the NDB storage engine with disk persistence. It loads fine with diskless=1 and no logging=1, but the table is so large that I cannot afford to reload it every time I restart the service.
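The chunk-by-chunk loading attempt described above can be sketched as a dry-run script. All file, database, and table names here are illustrative assumptions, and the pause between chunks is a guess at easing redo-log pressure, not a confirmed workaround:

```shell
#!/bin/sh
# Dry-run sketch of a chunked, throttled load. Names are assumptions.
set -e
# Stand-in for the real dump: 25 rows of "bigint,bigint,varchar(4)" data.
seq 1 25 | awk '{print $1 "," $1 ",abcd"}' > table_dump.csv
# Split into fixed-size chunks (10 rows here; ~1M rows in practice).
split -l 10 -d table_dump.csv chunk_
for f in chunk_*; do
  # A real run would replace the echo with something like:
  #   mysql mydb -e "LOAD DATA INFILE '$PWD/$f' INTO TABLE big_table
  #                  FIELDS TERMINATED BY ','"
  echo "would load $f"
  # Pause so the redo log can drain before the next chunk.
  sleep 1
done
```

In a real run, the `echo` would be replaced by the commented `mysql` invocation, and the sleep tuned to however long the data nodes need to checkpoint between chunks.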
If a developer wants to test loading my file, I can supply the data; let's see if somebody can actually load it into MySQL Cluster. I even downloaded the beta version, but every version fails with the same error. I have tried all possible combinations of parameters and the error persists.

How to repeat:
Please contact me at venefax at gmail if you want to try using my data file. I also have a 5-million-row subset that generates the same error.