Bug #50417 Unable to Remote Login into MySQL Cluster
Submitted: 18 Jan 2010 14:52 Modified: 15 May 2012 14:00
Reporter: Kriss Parker Email Updates:
Status: No Feedback Impact on me:
None 
Category:MySQL Cluster: Cluster (NDB) storage engine Severity:S1 (Critical)
Version:mysql-5.1-telco-7.0 OS:Windows (Windows XP Professional)
Assigned to: CPU Architecture:Any
Tags: 7.0.9-win32., API Node, cluster, MySQL Cluster, MySQL Node, windows

[18 Jan 2010 14:52] Kriss Parker
Description:
Hi, I am having an issue where I am not able to enable remote access to MySQL Cluster on Windows.

I am using the binaries from the MySQL website, put them in place, and set up the server correctly (or at least what I believe is correct).

I have 1 Management Node, 2 NDB Nodes, and 3 MySQL Nodes. These are all running fine, and if I make a change to the database from one MySQL Node, the change appears on all of them.

What I can't seem to do is remotely access the MySQL Cluster databases. The 3 MySQL Nodes are running on 10.0.10.1, 10.0.10.2, and 10.0.10.3, and when I try connecting to them from a remote machine I get the following error:

MySQL Error Number 1045
Access denied for user 'root'@'localhost' (using password: NO)
If you want to check the network connection, please click the Ping button.

I have tried with and without a password, still no luck. I have added the user and run the following queries at the command prompt:

1. Open a DOS command prompt on the server.
2. Run the following command from the mysql\bin directory:
mysql -u root --password=
3. A mysql> prompt should be displayed.
4. To create a remote user account with root privileges, run the following commands:
GRANT ALL PRIVILEGES ON *.* TO 'USERNAME'@'IP' IDENTIFIED BY 'PASSWORD';
5. mysql> FLUSH PRIVILEGES;
6. mysql> exit; 

I have also tried:

3. mysql -u root -p
4. use mysql;
5. select * from user \G
GRANT ALL PRIVILEGES ON *.* TO 'USERNAME'@'IP' IDENTIFIED BY 'PASSWORD';
mysql> flush privileges;

I have tried adding a user both with '%' and with the actual IP address I am trying to connect from, and still no luck.

I have added the ports to the firewall, and have also disabled the firewall, and am still unable to connect from the remote computer.

(Reminder: I am using the binaries.)

Next Step:

I then installed MySQL Cluster using the installer "mysql-cluster-gpl-7.0.6-win32.msi" and ticked the option to allow remote access during the installation, and THIS WORKS: I can access the database fine. But I want to use the binaries, as the server is already set up using them.

Is there an issue with the binaries and remote access, or am I missing a file or something not quite right in my setup?

An urgent response would be much appreciated.

How to repeat:
Management Node: copy the binaries onto the Management Node.

The config.ini file needs to be created with the following:

[ndbd default]
noofreplicas=2
datadir=C:\MySQL_Cluster\My_Cluster\data

[ndbd]
hostname=10.0.10.4
id=2

[ndbd]
hostname=10.0.10.5
id=3

[ndb_mgmd]
id=1
hostname=10.0.10.4

[mysqld]
id=101
hostname=10.0.10.1

[mysqld]
id=102
hostname=10.0.10.2

[mysqld]
id=103
hostname=10.0.10.3

Each machine needs the binaries installed, but for the MySQL Nodes you need to create a .cnf file for each MySQL Node (in this case 101.cnf, 102.cnf, and 103.cnf), and these are placed into the conf folder on the MySQL Nodes.

For example, for IP address 10.0.10.1 there should be a 101.cnf within the conf folder.

When you launch the MySQL Node, use this command on 10.0.10.1: mysqld --defaults-file=conf\my.101.cnf, and do the same for the second and third MySQL Nodes. (Do this after starting the Management Node.)
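The report does not show the contents of these per-node .cnf files. For illustration only, a typical SQL-node configuration for this layout might look like the fragment below; ndbcluster and ndb-connectstring are standard NDB options, but the node id and datadir values here are assumptions, not taken from the report:

```ini
# conf\101.cnf (hypothetical sketch for the SQL node on 10.0.10.1)
[mysqld]
ndbcluster                          # enable the NDB storage engine
ndb-connectstring=10.0.10.4:1186    # management node address from config.ini
ndb-nodeid=101                      # matches [mysqld] id=101 in config.ini
```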

Command to start the Management Node:
ndb_mgmd --initial -f conf/config.ini --configdir=./conf

Second NDB Node:
10.0.10.5: ndbd -c 10.0.10.4:1186 --initial

I run the 'SHOW' command on the Management Node and get the following:

ndb_mgm> show
Cluster Configuration
---------------------
[ndbd(NDB)]     2 node(s)
id=2    @10.0.10.4  (mysql-5.1.39 ndb-7.0.9, Nodegroup: 0, Master)
id=3    @10.0.10.5  (mysql-5.1.39 ndb-7.0.9, Nodegroup: 0)

[ndb_mgmd(MGM)] 1 node(s)
id=1    @10.0.10.4  (mysql-5.1.39 ndb-7.0.9)

[mysqld(API)]   3 node(s)
id=101  @10.0.10.1  (mysql-5.1.39 ndb-7.0.9)
id=102  @10.0.10.2  (mysql-5.1.39 ndb-7.0.9)
id=103  @10.0.10.3  (mysql-5.1.39 ndb-7.0.9)

So as you can see, it does all work; I am just unable to remotely access MySQL.

Any ideas?

Many thanks in advance.

Suggested fix:
Using the MySQL Cluster installer, remote access works.
[18 Jan 2010 20:22] Sveta Smirnova
Thank you for the report.

Please provide command line you use to connect from the remote server.
[19 Jan 2010 9:36] Kriss Parker
Commands I've used to try and connect to MySQL API Nodes:

mysql -u root -p

mysql -h 10.0.10.1 -u root -p 

I have also tried connecting using the MySQL Query Browser from the remote computer, but this doesn't seem to be working with the binaries.

As far as I can see I have added 'root'@'localhost', 'root'@'%', and 'root'@'IP_OF_REMOTE_COMPUTER'.
[19 Jan 2010 10:18] Sveta Smirnova
Thank you for the feedback.

The error you get:

> Access denied for user 'root'@'localhost' (using password: NO)
> If you want to check the network connection, please click the Ping button.

indicates you are connecting to a MySQL server located at localhost. This is expected for "mysql -u root -p", as by default this connects locally, but it is not expected for "mysql -h 10.0.10.1 -u root -p" unless you issue this command on the server with IP address 10.0.10.1. Please check whether you are trying to connect to the same server. If this is not the case, please provide the output of "mysql -h 10.0.10.1 -u root -p".
[19 Jan 2010 10:24] Kriss Parker
Error received when using:

mysql -h 10.0.10.1 -u root -p

ERROR 1042 (HY000): Can't get hostname for your address

I am able to ping all servers in the MySQL Cluster from the remote server with no problems.
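For context: ERROR 1042 is raised when mysqld tries to reverse-resolve the connecting client's IP address and the lookup fails, which pinging does not exercise. A minimal standalone sketch (no MySQL required; the addresses are just examples) of the same reverse-DNS check:

```python
# Reverse-DNS check: the same kind of lookup mysqld performs on a
# connecting client's address before "Can't get hostname for your address".
import socket

def reverse_lookup(ip):
    """Return the hostname for an IP address, or None if reverse DNS fails."""
    try:
        return socket.gethostbyaddr(ip)[0]
    except OSError:  # socket.herror / socket.gaierror both derive from OSError
        return None

# 127.0.0.1 normally resolves via the hosts file; a LAN address such as
# 10.0.10.1 resolves only if DNS or the hosts file has an entry for it.
print(reverse_lookup("127.0.0.1"))
```

Running this on the MySQL host with the remote client's IP shows whether the server can name that client; None here corresponds to the 1042 failure.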
[19 Jan 2010 11:22] Kriss Parker
Just a guess, but is there anything I need to add within the 101.cnf file to allow remote access to the MySQL Node for IP 10.0.10.1?

Of course if there is I would need to do the same for all the MySQL Nodes.
[19 Jan 2010 11:30] Kriss Parker
I have just tried to connect to one of the MySQL Nodes from the Management Node (IP 10.0.10.4), and I get the following error message:

ERROR 2003 (HY000): Can't connect to MySQL server on '10.0.10.1' (10060)

Now of course the MySQL Nodes and the Management Node are connected in the cluster, but I am still unable to connect to the MySQL database using a command-line query or the MySQL Query Browser.

The cluster is still working, and if I make a change to the database from one MySQL Node, it makes that change to all of them.

ndb_mgm> show
Connected to Management Server at: localhost:1186
Cluster Configuration
---------------------
[ndbd(NDB)]     2 node(s)
id=2    @10.0.10.4  (mysql-5.1.39 ndb-7.0.9, Nodegroup: 0, Master)
id=3    @10.0.10.5  (mysql-5.1.39 ndb-7.0.9, Nodegroup: 0)

[ndb_mgmd(MGM)] 1 node(s)
id=1    @10.0.10.4  (mysql-5.1.39 ndb-7.0.9)

[mysqld(API)]   3 node(s)
id=101  @10.0.10.1  (mysql-5.1.39 ndb-7.0.9)
id=102  @10.0.10.2  (mysql-5.1.39 ndb-7.0.9)
id=103  @10.0.10.3  (mysql-5.1.39 ndb-7.0.9)
[20 Jan 2010 15:16] Kriss Parker
Has anyone got an update regarding this issue?

Remote access to MySQL Cluster using binaries.

Many thanks in advance,

Kriss
[25 Jan 2010 9:41] Bernd Ocklin
Somehow your description reads to me as if you are successful with *installed* binaries and not successful with just *unpacked and copied* files? In the latter case you missed running the database setup script, which also sets up the basic user grants. You should then run the setup tool separately.
[25 Jan 2010 9:50] Kriss Parker
Which setup tool are you referring to, and do I run this after the binaries are in place?

I have successfully set up MySQL Cluster with the binaries, no problems.

The only problem is that I am unable to remotely access my database using the binaries, from either a query browser or a command prompt on any remote computer, even after I have added the user in the table to allow remote access.

Many thanks for your reply.
[25 Jan 2010 14:00] Andrew Morgan
I tried this with a fresh Windows VM and got the same error (even with the mysql database created)...

ERROR 1042 (HY000): Can't get hostname for your address

I resolved it by adding the hostnames/IP addresses to C:\Windows\System32\drivers\etc\hosts

Of course, if DNS were set up for these hosts then that wouldn't have been needed.
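A sketch of what such hosts-file entries might look like on the machine doing the lookup (the hostnames below are hypothetical placeholders; what matters is that every connecting machine's IP maps to some name):

```
# C:\Windows\System32\drivers\etc\hosts
10.0.10.1    sqlnode1
10.0.10.2    sqlnode2
10.0.10.3    sqlnode3
10.0.10.4    mgmnode
10.0.10.5    datanode2
192.0.2.50   remoteclient    # the machine you connect from (example IP)
```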
[25 Jan 2010 14:04] Andrew Morgan
Forgot to mention that I set up (as I expect Kriss did) by copying over the mysql db from the zip file rather than going to the trouble of installing Perl and then running the mysql_install_db.pl script.
[25 Jan 2010 15:37] Kriss Parker
Hi Andrew, thank you very much for your comment.

This has resolved the issue, and I can now remote access the database.

I understand that this is a workaround, and it does resolve the issue, so thank you very much. But are you aware of any update to resolve this? Surely updating the table should be enough to allow remote access, without having to edit the 'hosts' file.

A fix from MySQL to resolve this would be nice  :)

But again many thanks Andrew.
[26 Jan 2010 12:04] Kriss Parker
When I shut down the Management Node, my Data Node also shuts down.

I run one computer with both a Management Node and a Data Node, and a second computer with just a Data Node.

If I shut down the Management computer completely, my secondary Data Node also shuts down.

When I run the second Data Node I use the command:
ndbd -c 10.0.10.4:1186

If I shut down the Management Node, then both Data Nodes go offline, and because of this I can't access the MySQL Nodes (like a chain reaction).

Any reason for this? Or am I pointing them all at the wrong Data Node? Or are they able to point to both NDB Nodes?
[26 Jan 2010 12:44] Hartmut Holzgraefe
Adding skip-name-resolve to the [mysqld] section of my.cnf might help to resolve the DNS lookup problem?
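For reference, a minimal my.cnf fragment for that suggestion (note that with this option the server skips reverse DNS entirely, so account host values must be IP addresses or '%', not hostnames):

```ini
[mysqld]
# Do not reverse-resolve client addresses; grants must use IPs or %.
skip-name-resolve
```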

That the 2nd data node goes down when shutting down the machine running both the mgm node and the 1st data node is expected behavior. Since the management node, which acts as arbitrator in this case, goes down at the same time as the 1st data node (half of the data nodes), the 2nd node can't know whether a network split has occurred or whether it has just lost its connection to the other nodes, which may still be running. In the absence of an arbitrator vote in its favor it plays it safe and shuts itself down to prevent any inconsistencies. See also:

   http://dev.mysql.com/doc/refman/5.1/en/faqs-mysql-cluster.html#qandaitem-23-10-1-3

When starting up the management node you should even see a warning that running ndb_mgmd and ndbd on the same host is problematic...
[27 Jan 2010 14:55] Kriss Parker
Adding the IP address of the remote computer to the Hosts file fixed the major issue.

But surely MySQL Cluster should be updated so that editing the table to allow remote access is enough? (Maybe that could be an enhancement.)

The DNS suggestion didn't work as well as I'd thought it might; there were some issues in doing this.

And to answer you: yes, running a Data Node and a Management Node on the same host isn't a good idea, as if this machine fails the cluster can fail.

Are you aware of any way to keep the cluster running using only one Data Node and one MySQL Node? (From my work with Cluster I can't find any way around this.)

As far as I can see you always need two Data Nodes running, otherwise the cluster will shut down, as one Data Node is unable to keep the cluster alive.

At present I am using 5 hosts with replication (not Cluster), and if any machine fails there is no action needed; I can just bring it back online when I see it has failed. My replication setup can go down to one server, meaning four servers can go down and the service will still be alive and working.

Whereas with Cluster it looks like you always need two or three computers running to keep it alive, for example one MySQL Node and two Data Nodes each on its own host, or a MySQL Node and a Data Node on the same host plus another Data Node.

But again, if any of those hosts fails then the cluster fails, which doesn't offer me a better system than what I am using now.

Is there any way to allow Cluster to run with a single MySQL Node and Data Node?

Many thanks,
[8 Feb 2010 20:35] Matthew Bilek
I have the same problem with the mysql-cluster-gpl-7.0.9.tar.gz version. Previous versions (7.0.6) could handle IP configurations without relying on DNS (domain name resolution). Why is this?
[15 Feb 2010 14:16] Hartmut Holzgraefe
Hello Kriss,

please take your questions regarding node failures and keeping the cluster alive to either the cluster mailing list (subscribe at http://lists.mysql.com/cluster )
or the cluster forum at http://forums.mysql.com/list.php?25 .

The bug system is not the right place to discuss these kinds of questions...

-- 
hartmut
[15 Feb 2010 14:16] Hartmut Holzgraefe
To Matthew:

are you experiencing this on Windows, too, or on another platform?
[16 Mar 2010 0:00] Bugs System
No feedback was provided for this bug for over a month, so it is
being suspended automatically. If you are able to provide the
information that was originally requested, please do so and change
the status of the bug back to "Open".
[6 Nov 2010 18:52] Wagner Bianchi
Hi folks,

I am getting the same error when I try to access another server remotely. After setting the server privileges, I am trying to connect from 192.168.1.101 to 192.168.1.200 and I am getting:

[root@localhost ~]# mysql -u root -p -h 192.168.1.101
Enter password: 
ERROR 1042 (HY000): Can't get hostname for your address

This error was commented on in this bug report, but what I want to make users aware of is this: I downloaded the MySQL 5.5.6 RPM packages and installed them. Even after putting a new [server] section with the option skip-name-resolve into the configuration file and restarting mysqld, I am getting the same error:

[root@localhost ~]# mysql -u root -p -h 192.168.1.101
Enter password: 
ERROR 1042 (HY000): Can't get hostname for your address

I am using MySQL 5.5.6 on a CentOS box and I am trying to access remotely a Windows XP box with the same MySQL version installed. I am sure that there are privileges to connect remotely on both servers.

If anyone here has a solution for this problem, I'll be very thankful.

Best Regards.
[6 Nov 2010 19:07] Wagner Bianchi
I am glad to inform you that the problem was solved in my environment. I added skip-name-resolve to the target MySQL's configuration file, and now remote connections between both servers work as usual.

Best regards.
[9 Nov 2010 16:31] Sveta Smirnova
Status set back to "Need Feedback" as we still need feedback from earlier reporters.
[10 Dec 2010 0:00] Bugs System
No feedback was provided for this bug for over a month, so it is
being suspended automatically. If you are able to provide the
information that was originally requested, please do so and change
the status of the bug back to "Open".
[13 Apr 2012 6:12] Rajnesh Kumar
Hi folks,

I am getting the same error when I try to access another server remotely. After setting the server privileges, I am trying to connect from 192.168.16.251 to 192.168.16.130 and I am getting:

[root@localhost ~]# mysql -u root -p -h 192.168.16.130
Enter password: 
ERROR 1042 (HY000): Can't get hostname for your address

This error was commented on in this bug report, but what I want to make users aware of is this: I downloaded the MySQL 5.5.6 RPM packages and installed them. Even after putting a new [server] section with the option skip-name-resolve into the configuration file and restarting mysqld, I am getting the same error:

[root@localhost ~]# mysql -u root -p -h 192.168.16.130
Enter password: 
ERROR 1042 (HY000): Can't get hostname for your address

I am using MySQL 5.5.6 on a CentOS box and I am trying to access remotely a Windows XP box with the same MySQL version installed. I am sure that there are privileges to connect remotely on both servers.

If anyone here has a solution for this problem, I'll be very thankful.

Best Regards.

Rajnesh Kumar
[15 Apr 2012 14:00] Valeriy Kravchuk
Rajnesh,

Please report a new bug (and make sure you use a recent server version, 5.5.23). Your problem has nothing to do with Cluster in any case.
[16 May 2012 1:00] Bugs System
No feedback was provided for this bug for over a month, so it is
being suspended automatically. If you are able to provide the
information that was originally requested, please do so and change
the status of the bug back to "Open".