Bug #43320 Option To Control Commit Size For LOAD DATA Command
Submitted: 3 Mar 2009 5:55 Modified: 25 Sep 2009 3:40
Reporter: Mikiya Okuno
Status: Duplicate
Category:MySQL Server: DML Severity:S4 (Feature request)
Version:any OS:Any
Assigned to: CPU Architecture:Any

[3 Mar 2009 5:55] Mikiya Okuno
Description:
When loading data into a transactional table with the LOAD DATA command, a single commit is done after the whole file has been loaded. This is not efficient when the data file is very large, for example 100M rows. Loading 100M rows within a single transaction is not practical, because it generates a large amount of undo log in the rollback segment for InnoDB, for example. MySQL Cluster cannot load more than (MaxNoOfConcurrentOperations * # of data nodes) rows at a time.

How to repeat:
n/a

Suggested fix:
If the LOAD DATA command had an option like the one below, it would be appreciated by many people.

mysql> LOAD DATA INFILE 'filename' INTO TABLE tbl COMMITS EVERY 10000 ROWS;

Or, add a session variable to control the commit size during the LOAD DATA command.
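Until such an option exists, one common workaround is to give up LOAD DATA's single transaction and insert the file in batches, committing every N rows. A minimal sketch, assuming a PEP 249 (DB-API) connection such as mysqlclient or mysql-connector-python, a tab-separated file, and a hypothetical two-column table `tbl` (the insert statement is passed in because the placeholder style varies between drivers):

```python
def load_in_batches(conn, path, insert_sql, batch_size=10000):
    """Insert rows from a tab-separated file, committing every batch_size rows.

    conn       -- any PEP 249 connection (assumption: autocommit is off)
    insert_sql -- e.g. "INSERT INTO tbl VALUES (%s, %s)" for MySQL drivers
    """
    cur = conn.cursor()
    batch = []
    with open(path) as f:
        for line in f:
            batch.append(tuple(line.rstrip("\n").split("\t")))
            if len(batch) >= batch_size:
                cur.executemany(insert_sql, batch)
                conn.commit()  # keep the undo log / operation count bounded
                batch = []
    if batch:  # flush the final partial batch
        cur.executemany(insert_sql, batch)
        conn.commit()
```

This trades LOAD DATA's bulk-load speed for bounded transaction size, which is exactly the trade-off the requested COMMITS EVERY option would make server-side.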
[25 Sep 2009 2:14] Trent Lloyd
Pretty sure this is a duplicate of:
http://bugs.mysql.com/bug.php?id=24313
[25 Sep 2009 3:40] Valeriy Kravchuk
Duplicate of Bug #24313.