Bug #25598 Data node crashed during replication ndb_dd tables (dbtup/DbtupDiskAlloc.cpp)
Submitted: 12 Jan 2007 22:26 Modified: 23 Mar 2007 6:18
Reporter: Serge Kozlov Email Updates:
Status: Closed Impact on me:
Category:MySQL Cluster: Replication Severity:S2 (Serious)
Version:5.1.17-bk OS:Linux (Linux FC4 64bit)
Assigned to: Guangbao Ni CPU Architecture:Any

[12 Jan 2007 22:26] Serge Kozlov
Testing of disaster recovery for cluster replication.
The error described below does not appear if both data nodes on the slave are up during replication.

A non-master data node of the slave cluster was stopped during replication of large ndb_dd tables (millions of rows) and then started again while replication was still running. It could not start successfully and crashed with the error:

Current byte-offset of file-pointer is: 568

Time: Friday 12 January 2007 - 22:52:21
Status: Temporary error, restart node
Message: Internal program error (failed ndbrequire) (Internal error, programming
 error or missing error message, please report a bug)
Error: 2341
Error data: dbtup/DbtupDiskAlloc.cpp
Error object: DBTUP (Line: 804) 0x0000000a
Program: ./builds/libexec/ndbd
Pid: 6881
Trace: /space/run/ndb_3_trace.log.1
Version: Version 5.1.15 (beta)

How to repeat:
1. Start cluster M (master).
2. Start cluster S (Slave).
3. Run script from attached file on machine M:
./sqe.pl -q aa.txt -p:
4. Run mysql on machine S and wait until some data has been inserted into t1 and t2.
5. Run ndb_mgm on machine S and stop the non-master data node.
6. Wait until the data node has shut down completely.
7. Now start this data node again.
8. Look at error.log on machine S.
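Steps 5-7 above can be sketched with the ndb_mgm management client. This is only a sketch of the repeat procedure: the node ID 3 is an assumption taken from the trace filename (ndb_3_trace.log.1), and the actual non-master node ID must be read from the SHOW output on machine S.

```shell
#!/bin/sh
# Sketch of the stop/restart sequence from the "How to repeat" steps,
# run on machine S (the slave cluster's management host).
# NODE_ID=3 is assumed from the trace file name; verify it with SHOW first.
NODE_ID=3

# List all nodes and their status to identify the non-master data node.
ndb_mgm -e "SHOW"

# Stop the chosen non-master data node while replication is still running.
ndb_mgm -e "$NODE_ID STOP"

# Confirm the node has fully shut down before restarting it.
ndb_mgm -e "$NODE_ID STATUS"

# Restart the node; in the reported scenario this restart crashes with
# error 2341 in dbtup/DbtupDiskAlloc.cpp.
ndb_mgm -e "$NODE_ID START"
```

The crash surfaces in the cluster log and ndb_error_reporter output on machine S, as shown in the error excerpt above.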
[22 Feb 2007 0:00] Bugs System
No feedback was provided for this bug for over a month, so it is
being suspended automatically. If you are able to provide the
information that was originally requested, please do so and change
the status of the bug back to "Open".
[2 Mar 2007 17:20] Steve Edwards
We seem to have identified the same bug: http://bugs.mysql.com/bug.php?id=26758
[20 Mar 2007 10:47] Jonathan Miller
Do we know why it takes so long? An hour for 5 million records seems like an awfully long time. I would like to know where all this time is being spent. How long does it take to write the 5 million records?