Bug #58662 | File descriptor leak in mysql_real_connect? (replication related?) | |
---|---|---|---
Submitted: | 2 Dec 2010 15:18 | Modified: | 30 Dec 2012 10:12
Reporter: | Hartmut Holzgraefe | Email Updates: |
Status: | Can't repeat | Impact on me: |
Category: | MySQL Server: C API (client library) | Severity: | S3 (Non-critical)
Version: | mysql-5.1 | OS: | Any
Assigned to: | | CPU Architecture: | Any
[2 Dec 2010 15:18]
Hartmut Holzgraefe
[2 Dec 2010 15:40]
Hartmut Holzgraefe
I have tried to find a code path in mysql_real_connect() that could explain this, but everything looks OK to me in there. The fact that "error connection ... Error_code: 2004" is logged also indicates that everything is fine on the mysqld side. As far as I can tell, error code 2004 (CR_IPSOCK_ERROR) is only raised if the socket() system call returns -1, so it looks as if it's actually the system call or libc that is already leaking the file descriptor ...?
[30 Dec 2012 10:12]
MySQL Verification Team
Setting to "can't repeat" for now; I have never seen this happen myself, nor on other servers. Please comment if the problem is seen again, and include exact OS details too.
[11 Feb 2013 11:51]
Muzaffer Peynirci
I think I have hit an issue similar to the one explained here. I have two NDB clusters (server1 and server2) and master-master replication between them. A few days ago I started to get "too many open files" errors from server2. At the same time server1 wasn't able to connect to server2 for replication; it was displaying Error_code: 2004 as the reason when I ran 'SHOW SLAVE STATUS\G'.

I thought replication was down because the mysql server on server2 was not accepting any connections, so I restarted server2. After the restart it was accepting connections, but replication was still down. When I tried to connect to server2 from server1 using the replication user, I was able to connect, like so:

# mysql -h<IP_OF_SERVER2> -u<REP_USER> -p<REP_PWD> -P<3306>

But even though I stopped and started the slave on server1 several times, it didn't connect; it was always giving the 2004 error. I checked the number of files opened by mysqld on server2 using the pfiles command and the pid of mysqld; it was increasing gradually each time I stopped and started the slave on server2.

Now there are two things I don't understand here: first of all, why do I get error code 2004 while I am able to connect from server1 to server2 using the same user/password/IP/port? Secondly, why does getting error 2004 cause mysql to hit the "max open files" limit? Is it possible that it doesn't close some files when it gets error 2004?

My mysql version is 5.1.30-ndb-6.3.20-cluster-gpl-log MySQL Cluster Server (GPL)