Bug #51954 | Connection attempts alternately fail | |
---|---|---|---
Submitted: | 11 Mar 2010 14:50 | Modified: | 17 Oct 2010 11:35
Reporter: | Sam Bashton | Email Updates: |
Status: | No Feedback | Impact on me: |
Category: | MySQL Proxy: Core | Severity: | S2 (Serious)
Version: | 0.8 | OS: | Linux
Assigned to: | | CPU Architecture: | Any
[11 Mar 2010 14:50]
Sam Bashton
[2 Apr 2010 6:33]
Sveta Smirnova
Thank you for the report. I cannot repeat the described behavior with the loop `while mysql -h127.0.0.1 -P4040 -e "select 1"; do echo 1; done` running in 10 separate terminals. Please provide more details on how to repeat the problem behavior: are you perhaps using particular queries?
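The repro attempt above can be sketched as a small script that hammers the proxy from several parallel clients until one of them fails. This is a sketch, not part of the original report: `HOST`, `PORT`, and `CLIENTS` are placeholder assumptions, and it presumes the `mysql` client is installed and the proxy is listening on the given address.

```shell
#!/bin/sh
# Repro sketch: run N parallel clients, each looping "select 1" through
# the proxy until a connection attempt fails. HOST, PORT, and CLIENTS
# are placeholders; adjust them for your proxy setup.
HOST=127.0.0.1
PORT=4040
CLIENTS=10
LOG=$(mktemp)

for i in $(seq 1 "$CLIENTS"); do
  (
    # Keep querying until the client fails to get an answer via the proxy.
    while mysql -h"$HOST" -P"$PORT" -e 'select 1' >/dev/null 2>&1; do
      :
    done
    # Record which client hit a failure (appends are atomic for short writes).
    echo "client $i: connection attempt failed" >>"$LOG"
  ) &
done
wait
cat "$LOG"
```

With the bug present, failure lines should start appearing even though the backends are healthy; without it, the loops run indefinitely.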
[2 May 2010 23:00]
Bugs System
No feedback was provided for this bug for over a month, so it is being suspended automatically. If you are able to provide the information that was originally requested, please do so and change the status of the bug back to "Open".
[4 Aug 2010 0:10]
Reliam Engineer
We have a similar issue.

```
/opt/mysql-proxy-0.8.0/bin/mysql-proxy --proxy-address=:3309 \
    --proxy-backend-addresses=x.x.x.212:3309 \
    --proxy-read-only-backend-addresses=HOSTSLAVE:3309 \
    --proxy-lua-script=/opt/mysql-proxy-0.8.0/share/doc/mysql-proxy/rw-splitting.lua \
    --log-level=debug
```

Client side, the first four attempts succeed and the fifth fails:

```
[root@XXXXX ~]# mysql -u XXXXX -P3309 -pXXXXX -h 127.0.0.1 XXXXX -e 'select 1'
+---+
| 1 |
+---+
| 1 |
+---+
[root@XXXXX ~]# mysql -u XXXXX -P3309 -pXXXXX -h 127.0.0.1 XXXXX -e 'select 1'
+---+
| 1 |
+---+
| 1 |
+---+
[root@XXXXX ~]# mysql -u XXXXX -P3309 -pXXXXX -h 127.0.0.1 XXXXX -e 'select 1'
+---+
| 1 |
+---+
| 1 |
+---+
[root@XXXXX ~]# mysql -u XXXXX -P3309 -pXXXXX -h 127.0.0.1 XXXXX -e 'select 1'
+---+
| 1 |
+---+
| 1 |
+---+
[root@XXXXX ~]# mysql -u XXXXX -P3309 -pXXXXX -h 127.0.0.1 XXXXX -e 'select 1'
ERROR 1047 (08S01): Unknown command
```

Proxy debug log:

```
2010-08-03 17:06:38: (message) mysql-proxy 0.8.0 started
2010-08-03 17:06:38: (debug) chassis-limits.c:75: current RLIMIT_NOFILE = 1024 (hard: 25000)
2010-08-03 17:06:38: (debug) chassis-limits.c:79: trying to set new RLIMIT_NOFILE = 8192 (hard: 25000)
2010-08-03 17:06:38: (debug) chassis-limits.c:87: set new RLIMIT_NOFILE = 8192 (hard: 25000)
2010-08-03 17:06:38: (message) proxy listening on port :3309
2010-08-03 17:06:38: (message) added read/write backend: X.X.X.212:3309
2010-08-03 17:06:38: (message) added read-only backend: HOSTSLAVE:3309
[connect_server] 127.0.0.1:57651
  [1].connected_clients = 0
  [1].pool.cur_idle = 0
  [1].pool.max_idle = 1
  [1].pool.min_idle = 1
  [1].type = 1
  [1].state = 0
  [1] idle-conns below min-idle
[read_query] 127.0.0.1:57651
  current backend = 0
  client default db = XXXXX
  client username = XXXXX
  query = select @@version_comment limit 1
  sending to backend : X.X.X.212:3309
    is_slave : false
    server default db: XXXXX
    server username : XXXXX
    in_trans : true
    in_calc_found : false
    COM_QUERY : true
[read_query] 127.0.0.1:57651
  current backend = 0
  client default db = XXXXX
  client username = XXXXX
  query = select 1
  sending to backend : X.X.X.212:3309
    is_slave : false
    server default db: XXXXX
    server username : XXXXX
    in_trans : false
    in_calc_found : false
    COM_QUERY : true
[read_query] 127.0.0.1:57651
  current backend = 0
  client default db = XXXXX
  client username = XXXXX
  (QUIT) current backend = 0
[disconnect_client] 127.0.0.1:57651
2010-08-03 17:06:43: (debug) abs wait-for-event::done usec= 0
2010-08-03 17:06:43: (debug) abs lua-exec::done usec= 0
[connect_server] 127.0.0.1:57653
  [1].connected_clients = 0
  [1].pool.cur_idle = 0
  [1].pool.max_idle = 1
  [1].pool.min_idle = 1
  [1].type = 1
  [1].state = 1
  [1] idle-conns below min-idle
[read_query] 127.0.0.1:57653
  current backend = 0
  client default db = XXXXX
  client username = XXXXX
  query = select @@version_comment limit 1
  sending to backend : X.X.X.212:3309
    is_slave : false
    server default db: XXXXX
    server username : XXXXX
    in_trans : true
    in_calc_found : false
    COM_QUERY : true
[read_query] 127.0.0.1:57653
  current backend = 0
  client default db = XXXXX
  client username = XXXXX
  query = select 1
  sending to backend : X.X.X.212:3309
    is_slave : false
    server default db: XXXXX
    server username : XXXXX
    in_trans : false
    in_calc_found : false
    COM_QUERY : true
[read_query] 127.0.0.1:57653
  current backend = 0
  client default db = XXXXX
  client username = XXXXX
  (QUIT) current backend = 0
[disconnect_client] 127.0.0.1:57653
2010-08-03 17:06:45: (debug) abs wait-for-event::done usec= 0
2010-08-03 17:06:45: (debug) abs lua-exec::done usec= 0
[connect_server] 127.0.0.1:57655
  [1].connected_clients = 0
  [1].pool.cur_idle = 2
  [1].pool.max_idle = 1
  [1].pool.min_idle = 1
  [1].type = 1
  [1].state = 1
  [2].connected_clients = 0
  [2].pool.cur_idle = 0
  [2].pool.max_idle = 1
  [2].pool.min_idle = 1
  [2].type = 2
  [2].state = 0
  [2] idle-conns below min-idle
[read_query] 127.0.0.1:57655
  current backend = 0
  client default db = XXXXX
  client username = XXXXX
  query = select @@version_comment limit 1
  sending to backend : X.X.X.212:3309
    is_slave : false
    server default db: XXXXX
    server username : XXXXX
    in_trans : true
    in_calc_found : false
    COM_QUERY : true
[read_query] 127.0.0.1:57655
  current backend = 0
  client default db = XXXXX
  client username = XXXXX
  query = select 1
  sending to backend : HOSTSLAVE_IP:3309
    is_slave : true
    server default db: XXXXX
    server username : XXXXX
    in_trans : false
    in_calc_found : false
    COM_QUERY : true
[read_query] 127.0.0.1:57655
  current backend = 0
  client default db = XXXXX
  client username = XXXXX
  (QUIT) current backend = 0
[disconnect_client] 127.0.0.1:57655
2010-08-03 17:06:46: (debug) abs wait-for-event::done usec= 0
2010-08-03 17:06:46: (debug) abs lua-exec::done usec= 0
[connect_server] 127.0.0.1:57657
  [1].connected_clients = 0
  [1].pool.cur_idle = 2
  [1].pool.max_idle = 1
  [1].pool.min_idle = 1
  [1].type = 1
  [1].state = 1
  [2].connected_clients = 0
  [2].pool.cur_idle = 0
  [2].pool.max_idle = 1
  [2].pool.min_idle = 1
  [2].type = 2
  [2].state = 1
  [2] idle-conns below min-idle
[read_query] 127.0.0.1:57657
  current backend = 0
  client default db = XXXXX
  client username = XXXXX
  query = select @@version_comment limit 1
  sending to backend : X.X.X.212:3309
    is_slave : false
    server default db: XXXXX
    server username : XXXXX
    in_trans : true
    in_calc_found : false
    COM_QUERY : true
[read_query] 127.0.0.1:57657
  current backend = 0
  client default db = XXXXX
  client username = XXXXX
  query = select 1
  sending to backend : HOSTSLAVE_IP:3309
    is_slave : true
    server default db: XXXXX
    server username : XXXXX
    in_trans : false
    in_calc_found : false
    COM_QUERY : true
[read_query] 127.0.0.1:57657
  current backend = 0
  client default db = XXXXX
  client username = XXXXX
  (QUIT) current backend = 0
[disconnect_client] 127.0.0.1:57657
2010-08-03 17:06:47: (debug) abs wait-for-event::done usec= 0
2010-08-03 17:06:47: (debug) abs lua-exec::done usec= 0
[connect_server] 127.0.0.1:57659
  [1].connected_clients = 0
  [1].pool.cur_idle = 2
  [1].pool.max_idle = 1
  [1].pool.min_idle = 1
  [1].type = 1
  [1].state = 1
  [2].connected_clients = 0
  [2].pool.cur_idle = 2
  [2].pool.max_idle = 1
  [2].pool.min_idle = 1
  [2].type = 2
  [2].state = 1
  [1] taking master as default
  using pooled connection from: 1
2010-08-03 17:06:47: (debug) [network-mysqld.c:817]: error on a connection (fd: -1 event: 0). closing client connection.
[disconnect_client] 127.0.0.1:57659
```
[15 Sep 2010 6:30]
Zhang Guangpei
Me too; I hit this problem recently. When we use a connection from the pool, errors occur:

```
mysql-proxy --daemon --admin-address=:4401 --proxy-address=:4040 --log-level=debug \
    --proxy-backend-addresses=192.168.0.2:3306 \
    --proxy-backend-addresses=192.168.0.3:3306 \
    --proxy-backend-addresses=192.168.0.4:3306 \
    --proxy-lua-script=/home/zhanggp/mysql-proxy-0.8.0/examples/tutorial-keepalive.lua
```

The debug info:

```
[connect_server]
  [1].connected_clients = 0
  [1].idling_connections = 3
  [1].type = 1
  [1].state = 1
  [2].connected_clients = 0
  [2].idling_connections = 2
  [2].type = 1
  [2].state = 1
  [3].connected_clients = 0
  [3].idling_connections = 2
  [3].type = 1
  [3].state = 1
  using pooled connection from: 1
2010-09-15 14:21:09: (debug) [network-mysqld.c:817]: error on a connection (fd: -1 event: 0). closing client connection.
[disconnect_client]
```
[15 Sep 2010 17:10]
Reliam Engineer
Our problem was that we were using old passwords, i.e.:

```
mysql> show variables where Variable_name = 'old_passwords';
+---------------+-------+
| Variable_name | Value |
+---------------+-------+
| old_passwords | ON    |
+---------------+-------+
```

versus:

```
mysql> show variables where Variable_name = 'old_passwords';
+---------------+-------+
| Variable_name | Value |
+---------------+-------+
| old_passwords | OFF   |
+---------------+-------+
```

```
# grep old /etc/my.cnf
old_passwords=0
```
[17 Sep 2010 3:01]
Zhang Guangpei
I set old_passwords to OFF, but the same thing happened:

```
[rhel542]:/home/zhanggp# mysql -uroot -pabc789 -P4040 -h192.168.0.1 -D test -e "use test"
ERROR 1047 (08S01): Unknown command
```

```
[connect_server]
  [1].connected_clients = 0
  [1].idling_connections = 2
  [1].type = 1
  [1].state = 1
  [2].connected_clients = 0
  [2].idling_connections = 1
  [2].type = 2
  [2].state = 1
  [3].connected_clients = 0
  [3].idling_connections = 1
  [3].type = 2
  [3].state = 1
  using pooled connection from: 1
2010-09-17 10:56:40: (debug) [network-mysqld.c:937]: error on a connection (fd: -1 event: 0). closing client connection.
[disconnect_client]
```
[17 Sep 2010 4:15]
Reliam Engineer
You may now need to update the password itself: if the stored hash is about 16 characters long, it is still an old-format password. If it starts with `*` and is 41 characters long, it is the new format, and in that case I don't know what is wrong.

```
SET PASSWORD FOR 'some_user'@'some_host' = PASSWORD('newpwd');
```
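The two hash formats mentioned above can be told apart purely by shape: the pre-4.1 ("old_passwords") hash is 16 hex characters, while the 4.1+ `mysql_native_password` hash is 41 characters, a `*` followed by 40 hex digits. A small sketch of that check (the `classify_hash` helper and the example hashes are illustrative, not part of the bug report; real hashes come from the `Password` column of `mysql.user`):

```shell
#!/bin/sh
# Classify a MySQL password hash by its shape.
# Pre-4.1 ("old_passwords") hashes are 16 hex characters;
# 4.1+ mysql_native_password hashes are '*' + 40 hex digits (41 total).
classify_hash() {
  h=$1
  len=${#h}
  case $h in
    \**) [ "$len" -eq 41 ] && echo "new (4.1+) format" || echo "unknown" ;;
    *)   [ "$len" -eq 16 ] && echo "old (pre-4.1) format" || echo "unknown" ;;
  esac
}

# Example hashes for illustration (not real credentials):
classify_hash '2470c0c06dee42fd'                            # old-style hash
classify_hash '*6BB4837EB74329105EE4568DDA7DC67ED2CA2AD9'   # new-style hash
```

If an account still has an old-style hash after `old_passwords` is turned off, re-running `SET PASSWORD` (as above) regenerates it in the new format.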
[17 Sep 2010 5:45]
Zhang Guangpei
You are right, thank you! But now I want to ask one more question about the connection pool. When clients open many connections at the same time, say 1000, but I only have 100 connections in the pool and none of them are idle, can I use mysql-proxy to make the connection wait rather than open a new one?
[17 Sep 2010 11:35]
Sveta Smirnova
Thank you for the feedback. If old-passwords is the cause of the problem, this is a duplicate of bug #31574. Sam, please confirm whether you use the option old-passwords.
[17 Oct 2010 23:00]
Bugs System
No feedback was provided for this bug for over a month, so it is being suspended automatically. If you are able to provide the information that was originally requested, please do so and change the status of the bug back to "Open".