Bug #64501 every data node belongs to "Nodegroup: 0"
Submitted: 1 Mar 2012 2:55
Modified: 1 Mar 2012 5:10
Reporter: Yoshiaki Tajika (Basic Quality Contributor)
Status: Not a Bug
Category: MySQL Cluster: Cluster (NDB) storage engine
Severity: S3 (Non-critical)
Version: 7.2.4
OS: Linux (RHEL 5.7, x86_64)
Assigned to:
CPU Architecture: Any

[1 Mar 2012 2:55] Yoshiaki Tajika
Description:
Every data node belongs to "Nodegroup: 0", though I expected that 
DATA5 and DATA6 would form node group #1, and DATA7 and DATA8 node group #2,
in the environment below.

192.168.39.1   MGM
192.168.39.5   DATA5
192.168.39.6   DATA6
192.168.39.7   DATA7
192.168.39.8   DATA8

How to repeat:
Login to MGM.
# cd /usr/local/mysql/data
# cat config.ini
[ndb_mgmd]
NodeId: 1
hostname: 192.168.39.1
datadir: /usr/local/mysql/data

[ndbd default]
NoOfReplicas: 2
DataDir: /usr/local/mysql/data
DataMemory: 300M
IndexMemory: 100M

[ndbd]
NodeId: 25
hostname: 192.168.39.5
[ndbd]
NodeId: 26
hostname: 192.168.39.6
[ndbd]
NodeId: 27
hostname: 192.168.39.7
[ndbd]
NodeId: 28
hostname: 192.168.39.8

[api]
NodeId: 31
hostname: 192.168.39.1
[api]
NodeId: 32
hostname: 192.168.39.1
[api]
NodeId: 33
hostname: 192.168.39.1
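
For context on the expected grouping: with NoOfReplicas = 2 and four [ndbd] sections, the management server forms node groups from the data nodes in the order they are defined, so nodes 25/26 and 27/28 should pair up. Node group membership can also be pinned explicitly with the NodeGroup parameter; a minimal sketch (the explicit assignments are illustrative and were not part of the original config):

```ini
; Sketch only: same hosts as above, with node groups assigned explicitly
; instead of relying on definition order.
[ndbd]
NodeId: 25
hostname: 192.168.39.5
NodeGroup: 0

[ndbd]
NodeId: 26
hostname: 192.168.39.6
NodeGroup: 0

[ndbd]
NodeId: 27
hostname: 192.168.39.7
NodeGroup: 1

[ndbd]
NodeId: 28
hostname: 192.168.39.8
NodeGroup: 1
```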

# \rm ndb_1*.log
# ndb_mgmd -f ./config.ini --initial --verbose
MySQL Cluster Management Server mysql-5.5.19 ndb-7.2.4
2012-03-01 10:24:03 [MgmtSrvr] DEBUG    -- Got nodeid: 1 from searching in configdir
2012-03-01 10:24:03 [MgmtSrvr] DEBUG    -- Deleting binary config file '/usr/local/mysql/mysql-cluster/ndb_1_config.bin.1'

Then, log in to each of the four data nodes in turn.
# cat /etc/my.cnf
[mysql_cluster]
ndb_connectstring = 192.168.39.1
# ndbd --initial

As shown in the attached ndb_1_cluster.log, I started ndbd 
on 192.168.39.5 at 10:25, 
on 192.168.39.6 at 10:26, 
on 192.168.39.7 at 10:27, 
on 192.168.39.8 at 10:28.

Then, go back to MGM.
# ndb_mgm -e show
Connected to Management Server at: localhost:1186
Cluster Configuration
---------------------
[ndbd(NDB)]     4 node(s)
id=25   @192.168.39.5  (mysql-5.5.19 ndb-7.2.4, starting, Nodegroup: 0)
id=26   @192.168.39.6  (mysql-5.5.19 ndb-7.2.4, starting, Nodegroup: 0)
id=27   @192.168.39.7  (mysql-5.5.19 ndb-7.2.4, starting, Nodegroup: 0)
id=28   @192.168.39.8  (mysql-5.5.19 ndb-7.2.4, starting, Nodegroup: 0)

[ndb_mgmd(MGM)] 1 node(s)
id=1    @192.168.39.1  (mysql-5.5.19 ndb-7.2.4)

[mysqld(API)]   3 node(s)
id=31 (not connected, accepting connect from 192.168.39.1)
id=32 (not connected, accepting connect from 192.168.39.1)
id=33 (not connected, accepting connect from 192.168.39.1)

Here you can see that every data node belongs to "Nodegroup: 0",
and I cannot understand why.

# cat ndb_1_out.log
==INITIAL==
==CONFIRMED==

# cat ndb_1_cluster.log
See attached.
[1 Mar 2012 2:59] Yoshiaki Tajika
ndb_1_cluster.log with verbose mode.

Attachment: ndb_1_cluster.log (application/octet-stream, text), 39.70 KiB.

[1 Mar 2012 3:37] Jon Stephens
I don't believe that this is a bug.

It can take some time for a MySQL Cluster to start. From the output of your SHOW command it can be seen that the data nodes are still starting, and have reached only start phase 0. Data nodes are not assigned to node groups until start phase 6 (see http://dev.mysql.com/doc/refman/5.5/en/mysql-cluster-start-phases.html). Note that it is also possible for all nodes to belong to a single node group even in a running cluster, depending on the configuration.

You can also use the ALL STATUS command in the ndb_mgm client to get the current status of the data nodes and verify that they have started completely.
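
As a rough sketch of what to look for: the [ndbd(NDB)] lines of the SHOW output in this report can be checked mechanically for nodes that have not finished starting (the filename is hypothetical; the status lines are copied from the output above):

```shell
# Save the data-node lines from SHOW to a file, then count nodes that
# still report "starting". A fully started node omits that status field.
cat > show_ndbd.txt <<'EOF'
id=25   @192.168.39.5  (mysql-5.5.19 ndb-7.2.4, starting, Nodegroup: 0)
id=26   @192.168.39.6  (mysql-5.5.19 ndb-7.2.4, starting, Nodegroup: 0)
id=27   @192.168.39.7  (mysql-5.5.19 ndb-7.2.4, starting, Nodegroup: 0)
id=28   @192.168.39.8  (mysql-5.5.19 ndb-7.2.4, starting, Nodegroup: 0)
EOF

# All four nodes are still in the start phases, so the node-group
# assignments shown are not yet meaningful.
grep -c ', starting,' show_ndbd.txt
```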

You are also welcome to join the discussion in the MySQL Cluster Forum at http://forums.mysql.com/list.php?25.

Thanks!
[1 Mar 2012 5:10] Yoshiaki Tajika
Jon Stephens, thank you very much. I just figured it out. 
The data nodes could not communicate with each other because of a firewall, 
and got stuck. After disabling the firewalls:

ndb_mgm> show
Cluster Configuration
---------------------
[ndbd(NDB)]     4 node(s)
id=25   @192.168.39.5  (mysql-5.5.19 ndb-7.2.4, Nodegroup: 0, Master)
id=26   @192.168.39.6  (mysql-5.5.19 ndb-7.2.4, Nodegroup: 0)
id=27   @192.168.39.7  (mysql-5.5.19 ndb-7.2.4, Nodegroup: 1)
id=28   @192.168.39.8  (mysql-5.5.19 ndb-7.2.4, Nodegroup: 1)

Sorry to bother you, and thanks a lot!!
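
A closing note for anyone who hits the same symptom but cannot simply disable the firewall: by default the transporter connections between data nodes use dynamically allocated ports, which are awkward to open selectively, while the management server listens on 1186. The ServerPort parameter pins each data node to a fixed listening port so that explicit firewall rules can be written; a hedged sketch (the port number is an arbitrary free port, not taken from this report):

```ini
; Sketch only: fix each data node's transporter port so the firewall
; can allow it explicitly. The nodes run on different hosts, so the
; same port number can be reused.
[ndbd]
NodeId: 25
hostname: 192.168.39.5
ServerPort: 2202

[ndbd]
NodeId: 26
hostname: 192.168.39.6
ServerPort: 2202
```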