Bug #43882 | ndb GCP stop(3) while importing tables with big (up to 470KB) longtext columns | ||
---|---|---|---|
Submitted: | 26 Mar 2009 13:25 | Modified: | 6 Oct 2009 13:18 |
Reporter: | José Luis Gordo Romero | Email Updates: | |
Status: | Verified | Impact on me: | |
Category: | MySQL Cluster: Cluster (NDB) storage engine | Severity: | S1 (Critical) |
Version: | mysql-5.1-telco-6.3 | OS: | Linux (ubuntu server 7.04) |
Assigned to: | Assigned Account | CPU Architecture: | Any |
Tags: | hang, longtext, mysql-5.1-telco-6.3.20, ndb |
[26 Mar 2009 13:25]
José Luis Gordo Romero
[8 Apr 2009 16:03]
Magnus Blåudd
We would be very happy if you could provide some additional information: your table schema, how many rows you loaded, etc. Has it occurred more than once? Please also use the ndb_error_reporter script to create a tar file and upload it to this bug. http://dev.mysql.com/doc/refman/5.1/en/mysql-cluster-utilities-ndb-error-reporter.html
[16 Apr 2009 12:56]
Jonathan Miller
Please respond to Magnus's request. Also, how is the table imported (mysql load, restore), and are transactions on or off? There may be a possible workaround.
[16 Apr 2009 13:28]
Jonathan Miller
http://bugs.mysql.com/bug.php?id=39498 http://bugs.mysql.com/bug.php?id=37227
[22 Apr 2009 8:48]
José Luis Gordo Romero
First, sorry for the delay. We have the system in production (we were able to import the data with a custom script that adds minimal delays between inserts), and the development machine was needed for other tests. I reinstalled the test environment, but I'm having other, non-NDB problems with it and am not able to reproduce the problem, sorry. Some info: the table has 11,000 records with a longtext column of up to 470 KB (200 KB average):

```sql
CREATE TABLE `x` (
  `id` int(11) NOT NULL AUTO_INCREMENT STORAGE MEMORY,
  `document_id` int(11) DEFAULT NULL STORAGE MEMORY,
  `version` int(11) DEFAULT NULL STORAGE MEMORY,
  `name` varchar(255) DEFAULT '' STORAGE MEMORY,
  `creator_id` int(11) DEFAULT NULL STORAGE MEMORY,
  `content` longtext STORAGE DISK,
  `created_at` datetime DEFAULT NULL STORAGE MEMORY,
  `updated_at` datetime DEFAULT NULL STORAGE MEMORY,
  `editor_id` int(11) DEFAULT NULL STORAGE MEMORY,
  `reference_id` int(11) DEFAULT NULL STORAGE MEMORY,
  `digest` varchar(40) DEFAULT NULL STORAGE MEMORY,
  `locked` tinyint(1) DEFAULT NULL STORAGE MEMORY,
  PRIMARY KEY (`id`)
);
```
[28 Apr 2009 8:55]
Magnus Blåudd
Thanks for that info. Unfortunately it is currently possible to cause a "GCP stop" (i.e. it takes too long for the global checkpoint protocol to complete) by committing a large number of rows in one transaction; see BUG#43069. This is especially likely when the blob is stored on disk, as in your case. Behind the scenes, each INSERT into the blob column is split into 8 KB INSERTs into a blob table, so it all adds up to quite a large transaction just because of that. We have some ideas about how to fix it, but for now I would suggest you either configure the system differently to cope with the load or rewrite the INSERTs slightly. / Magnus
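The workaround Magnus suggests (and what the reporter's custom import script effectively did) is to keep each transaction small, so that the blob-part inserts generated behind the scenes never pile up into one huge commit. A minimal sketch of the batching logic, in Python; the names `chunked`, `all_rows`, and the DB-API usage shown in comments are illustrative assumptions, not part of the bug report:

```python
def chunked(rows, batch_size):
    """Yield successive lists of at most batch_size rows,
    so each list can be committed as one small transaction."""
    batch = []
    for row in rows:
        batch.append(row)
        if len(batch) == batch_size:
            yield batch
            batch = []
    if batch:  # commit any leftover rows in a final, smaller batch
        yield batch

# Illustrative usage only (assumes a DB-API connection `conn` exists):
# for batch in chunked(all_rows, 50):
#     cur = conn.cursor()
#     cur.executemany(
#         "INSERT INTO x (document_id, content) VALUES (%s, %s)", batch)
#     conn.commit()  # one small transaction per batch, instead of one
#                    # giant transaction over all 11,000 blob rows
```

With ~200 KB average blobs split into 8 KB parts, every user row turns into dozens of internal blob-table inserts, so even a modest batch size keeps the per-transaction operation count bounded.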