Bug #29127 Truncation of tables with large data leads to DN failure using DISK DATA
Submitted: 15 Jun 2007 3:28 Modified: 21 Jun 2007 9:22
Reporter: Jonathan Miller
Status: Duplicate
Category: MySQL Cluster: Disk Data    Severity: S1 (Critical)
Version: mysql-5.1.19 ndb-6.2.3    OS: Linux
Assigned to:    CPU Architecture: Any

[15 Jun 2007 3:28] Jonathan Miller
Description:
Hi,

I was trying to reproduce csc 17273.

I first started by taking most of the customer's config.ini, with the exception of the data and index memory sizes and the data path.

I added logfile groups and tablespaces.

Then I used the customer schema for the tables using disk data, after adding two INT fields at the end of each table for hugoLoad.

I then used hugoLoad to load each table: half with 60,000 rows and the other half with 600,000 rows.

Afterwards I started truncating all the tables. On the very last table we had a DN (data node) failure.

I tried to just restart the DN. It was stuck in phase 2.

I then tried to restart the DN again using --initial, and again it got stuck in phase 2.

I then deleted the FS and tried again to restart the DN. This time it got stuck in phase 3.

I then took the cluster all the way down and did a restart with --initial on the affected DN and a regular start on the good DN. The cluster then came back, but the mysqld was having trouble accessing tables and could no longer see the tablespaces or logfile groups.

I then restarted the mysqld and it crashed.

To be continued ....

How to repeat:
1) Start the cluster from the config.ini file

2) Create LG
CREATE LOGFILE GROUP lg1 ADD UNDOFILE './lg1/undofile.dat' INITIAL_SIZE 2500M UNDO_BUFFER_SIZE = 8M ENGINE=NDB;
ALTER LOGFILE GROUP lg1 ADD UNDOFILE './lg1/undofile1.dat' INITIAL_SIZE 2500M ENGINE=NDB;

3) Create ts
CREATE TABLESPACE ts_subsdata ADD DATAFILE './ts_subsdata/datafile.dat' USE LOGFILE GROUP lg1 INITIAL_SIZE 8000M ENGINE=NDB;
ALTER TABLESPACE ts_subsdata ADD DATAFILE './ts_subsdata/datafile2.dat' INITIAL_SIZE 8000M ENGINE=NDB;

4) Load the disk data tables schema
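The customer schema isn't attached here; a minimal disk data table along these lines (table and column names are placeholders), including the two trailing INT columns added for hugoLoad, should be enough to illustrate the layout:
CREATE TABLE t1 (
  pk INT NOT NULL PRIMARY KEY,
  payload VARCHAR(255),
  hugo1 INT,
  hugo2 INT
) TABLESPACE ts_subsdata STORAGE DISK ENGINE=NDB;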

5) Using hugoLoad, load half of the tables with 60,000 rows and the other half with 600,000 rows

6) Start truncating the tables
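For example (table names are placeholders matching whatever schema was loaded in step 4):
TRUNCATE TABLE t1;
TRUNCATE TABLE t2;
-- ... and so on for each disk data table; the DN failed on the very last one
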
[21 Jun 2007 9:22] Jonas Oreland
Duplicate of bug#29229 (or the other way around).