Bug #20473 Too low DataMemory ?
Submitted: 14 Jun 2006 23:12 Modified: 16 Jun 2006 16:19
Reporter: Pavol Luptak Email Updates:
Status: Not a Bug Impact on me: None
Category:MySQL Cluster: Cluster (NDB) storage engine Severity:S3 (Non-critical)
Version:5.0.22 OS:Linux (Debian/Sarge)
Assigned to: CPU Architecture:Any

[14 Jun 2006 23:12] Pavol Luptak
Description:
Hello,
Maybe this is not a bug, but only strange behaviour.
I have configured 4 nodes (2 NDBDs, 1 MGM and 1 API SQL node) on 4 servers (each server has 5 GB of RAM on a 32-bit Intel platform; I use the CONFIG_HIGHMEM64G option in the kernel).

I have the following configuration on the MGM node:
-------------------------------------------------------------------------------------------------------
# Options affecting ndbd processes on all data nodes:
[NDBD DEFAULT]    
NoOfReplicas=2    # Number of replicas
DataMemory=1500M    # How much memory to allocate for data storage
IndexMemory=512M   # How much memory to allocate for index storage
                  # DataMemory and IndexMemory have been raised from
                  # the defaults to make room for our database.
MaxNoOfAttributes = 1000
MaxNoOfOrderedIndexes = 1000
MaxNoOfUniqueHashIndexes = 5000
NoOfFragmentLogFiles = 64

# TCP/IP options:
[TCP DEFAULT]     
portnumber=2202   # This is the default; however, you can use any
                  # port that is free for all the hosts in the cluster.
                  # Note: It is recommended beginning with MySQL 5.0 that
                  # you do not specify the portnumber at all and simply allow
                  # the default value to be used instead

# Management process options:
[NDB_MGMD]                      
hostname=192.168.1.8           # Hostname or IP address of MGM node
datadir=/var/lib/mysql-cluster  # Directory for MGM node logfiles

# Options for data node "A":
[NDBD]                          
                                # (one [NDBD] section per data node)
hostname=192.168.1.10           # Hostname or IP address
datadir=/var/lib/mysql-cluster   # Directory for this data node's datafiles

# Options for data node "B":
[NDBD]                          
hostname=192.168.1.11           # Hostname or IP address
datadir=/var/lib/mysql-cluster   # Directory for this data node's datafiles

# SQL node options:
[MYSQLD]                        
hostname=192.168.1.9           # Hostname or IP address
                                # (additional mysqld connections can be
                                # specified for this node for various
                                # purposes such as running ndb_restore)
------------------------------------------------------------------------------------------------------
I use 1500 MB for DataMemory and 512 MB for IndexMemory. 
When I try to increase this limit (e.g. to 1600 MB or 2000 MB), NDBD nodes fail with error:

Connected to Management Server at: localhost:1186
Node 2: Forced node shutdown completed. Occured during startphase 0. Initiated by signal 0. Caused by error 6050: 'WatchDog terminate, internal error or massive overload on the machine running this node(Internal error, programming error or missing error m
Node 3: Forced node shutdown completed. Occured during startphase 0. Initiated by signal 0. Caused by error 6050: 'WatchDog terminate, internal error or massive overload on the machine running this node(Internal error, programming error or missing error m

When I use 1500 MB for DataMemory (or less), restoring some tables fails with
error:

ERROR 1114 (HY000) at line 998: The table 'tblXYZ' is full

according to:

"If you get the "table is full" error and you are using the NDB cluster engine, you need to increase your settings for "DataMemory" and "IndexMemory" in the config.ini for your cluster."

but I cannot increase DataMemory and IndexMemory because of the above-mentioned error.

How is memory managed by MySQL 5.0.22?
Is it normal behaviour that only 1500 MB can be used for the DataMemory variable on a server with 5 GB of physical memory?

The size of my database is approximately 4.6 GB (slightly less than the physical memory of each server).
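As a back-of-the-envelope check (a sketch using only the figures from this report; the per-row and per-page overhead that NDB adds on top of raw data size is ignored):

```python
# Rough check of whether the data set fits in the configured DataMemory.
# With NoOfReplicas=2 and 2 data nodes, there is a single node group,
# so every data node must store a full copy of the data set.
MB = 1024 * 1024
GB = 1024 * MB

data_set = 4.6 * GB          # approximate size of the database (from the report)
data_memory = 1500 * MB      # DataMemory per data node (from config.ini)
index_memory = 512 * MB      # IndexMemory per data node (from config.ini)

data_nodes = 2
replicas = 2
node_groups = data_nodes // replicas     # 1 node group: no partitioning gain

per_node_share = data_set / node_groups  # data each node must hold
print(f"per-node data: {per_node_share / GB:.1f} GB, "
      f"DataMemory: {data_memory / GB:.2f} GB, "
      f"fits: {per_node_share <= data_memory}")
```

With two data nodes and two replicas there is only one node group, so each node holds the entire data set; spreading the data over more node groups, rather than a larger DataMemory alone, is what reduces the per-node requirement.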

Do I need more memory, or should I wait for the stable MySQL 5.1.x?

Pavol

How to repeat:
When I try to increase DataMemory (from 1500 MB to 1600 MB), the NDBD nodes crash with this error:

Node 2: Forced node shutdown completed. Occured during startphase 0. Initiated by signal 0. Caused by error 6050: 'WatchDog terminate, internal error or massive overload on the machine running this node(Internal error, programming error or missing error m
Node 3: Forced node shutdown completed. Occured during startphase 0. Initiated by signal 0. Caused by error 6050: 'WatchDog terminate, internal error or massive overload on the machine running this node(Internal error, programming error or missing error m

All servers have 5 GB physical RAM.
[16 Jun 2006 7:17] Hartmut Holzgraefe
Each process on a 32-bit system can only address 4 GB (2^32 bytes),
which is further divided into user and kernel space. Depending on
kernel version and configuration, only 2 or 3 GB are actually available
as user address space for processes.
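The explanation above can be made concrete with a rough sketch (assumption: the 2 GB and 3 GB figures are common generic kernel splits, not values measured on the reporter's machines):

```python
# Why DataMemory=1600M fails while 1500M starts: DataMemory and
# IndexMemory must both fit inside one ndbd process's user address space,
# along with the process's code, stacks, heap, and buffers.
MB = 1024 * 1024
GB = 1024 * MB

data_memory = 1600 * MB    # the value that made the ndbd nodes crash
index_memory = 512 * MB
core = data_memory + index_memory   # before any other per-process overhead

for user_space in (2 * GB, 3 * GB):      # common 2G/2G and 3G/1G splits
    print(f"{core / GB:.2f} GB needed, {user_space / GB:.0f} GB user "
          f"space -> fits: {core <= user_space}")
```

Note that 1500M + 512M is just under 2 GB while 1600M + 512M is just over it, which matches the observed behaviour on a kernel with a 2 GB user address space, long before physical RAM is exhausted.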
[16 Jun 2006 16:19] Pavol Luptak
Is it possible to handle large databases with multiple processes?
It is quite strange that only one MySQL process handles the whole database and consumes all available memory.

Do you have any experience with the "splitting patch" (split-RAM functionality is now integrated in Linux kernel 2.6.16)? How does it impact MySQL?
With this patch one process can use up to 4 GB of RAM.

http://kerneltrap.org/node/2891