Bug #10711 More than 4GB datamemory kills cluster
Submitted: 18 May 2005 14:27 Modified: 14 Jun 2005 3:27
Reporter: Petr Steckovic
Status: Closed
Category: MySQL Cluster: Cluster (NDB) storage engine Severity: S1 (Critical)
Version: 4.1.11 OS: Linux (Fedora 64-bit)
Assigned to: Jonas Oreland CPU Architecture: Any

[18 May 2005 14:27] Petr Steckovic
Database contains one table
create table test (x integer) engine=ndbcluster;

DataMemory = 4000MB
IndexMemory = 1024MB

NDB nodes: 2x Dell 64-bit, running 64-bit Fedora
Management node: P4 32-bit, running Fedora
SQL clients: 2x, on 64-bit Dells with 64-bit Fedora

This configuration is working.

If DataMemory is changed to more than 4GB (I tried 6GB), the cluster cannot start and goes down with the following errors:

Date/Time: Wednesday 18 May 2005 - 15:34:39
Type of error: error
Message: Pointer too large
Fault ID: 2306
Problem data: DbtupExecQuery.cpp
Object of reference: DBTUP (Line: 604) 0x0000000a
ProgramName: ndbd
ProcessID: 1271
TraceFile: /home/mysql-cluster/ndb_3_trace.log.3

How to repeat:
Create any table with DataMemory configured to more than 4000MB, then restart the cluster.
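A minimal config.ini fragment matching the reproduction (a sketch only; node IDs and hostnames omitted, sizes taken from the report):

```ini
[ndbd default]
# Any value above 4096M triggers the failure on restart
DataMemory=6144M
IndexMemory=1024M
```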

Suggested fix:
[9 Jun 2005 5:25] Jonas Oreland
Pushed to 4.1.13 and 5.0.8
[14 Jun 2005 3:27] Jon Stephens
Thank you for your bug report. This issue has been committed to our
source repository of that product and will be incorporated into the
next release.

If necessary, you can access the source repository and build the latest
available version, including the bugfix, yourself. More information 
about accessing the source trees is available at

Additional info:

Documented bugfix in change history for 4.1.13 and 5.0.8. Marked closed.