Bug #54975 MySQL Proxy Memory Leaking
Submitted: 3 Jul 2010 3:37 Modified: 15 Jan 2015 23:57
Reporter: Wan Tao LUO
Status: No Feedback
Category: MySQL Proxy    Severity: S2 (Serious)
Version: 0.8.0    OS: Linux (RHEL AS 5)
Assigned to: Jan Kneschke    CPU Architecture: Any
Tags: memory leaks

[3 Jul 2010 3:37] Wan Tao LUO
Description:
Apologies in advance for my English.

I have deployed MySQL Proxy.
PHP scripts connect to the MySQL server through the proxy.
It has been stable for about two weeks now, but it appears to be leaking memory.

User traffic on my production application server keeps about 100 MySQL connections open from MySQL Proxy to the MySQL server. Those connections are never closed by the server's wait_timeout, because the visit frequency is high.
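
(For reference only, not part of the original report: one way to verify this from the server side is to compare wait_timeout with the idle time of each proxied connection. A minimal check, assuming the proxy listens on its default port 4040 and <proxy-ip>/<user> are placeholders:)

# The Time column of SHOW PROCESSLIST shows how long each connection has been
# idle; for the scenario above it should stay well below wait_timeout.
mysql -h <proxy-ip> -P 4040 -u <user> -p \
    -e "SHOW VARIABLES LIKE 'wait_timeout'; SHOW PROCESSLIST;"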

MySQL Proxy's memory usage was about 100 MB at startup, but it is now about 800 MB, even though the number of MySQL connections has not increased.

The PHP scripts are not responsible for the leak, because they close their MySQL connections automatically when they finish.

Thank you!

How to repeat:
Deploy a MySQL Proxy.
Run test PHP scripts through it repeatedly for a long time.
Record memory usage at the beginning and at the end of the test.
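
(A minimal sketch of such a test driver, assuming the proxy runs as a process named mysql-proxy and "test.php" is a placeholder for your own test script:)

#!/bin/sh
# Total resident set size of the proxy (angel + worker processes), in kB.
rss() { ps -C mysql-proxy -o rss= | awk '{s += $1} END {print s}'; }

echo "RSS at start: $(rss) kB"
i=0
while [ "$i" -lt 100000 ]; do
    php test.php        # connects through the proxy, runs queries, then exits
    i=$((i + 1))
done
echo "RSS at end:   $(rss) kB"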
[3 Jul 2010 10:30] Wan Tao LUO
I switched the application to a backup proxy and waited for mysqld to disconnect the connections coming from the production proxy (about 120 s). Even after all connections had been idle long enough (120 s) and were disconnected by mysqld, the proxy did not release the memory, judging by the system command "free".

So I restarted the production proxy, and the memory was finally released. Then I switched the application back to it.

This is only a temporary workaround for the memory leak; I will have to repeat it every few weeks. It is confusing.
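
(Side note, not from the original report: "free" only shows system-wide memory, so a per-process view is more telling. A rough sampler, assuming the process is named mysql-proxy and writing to a hypothetical proxy-mem.log:)

# Append a timestamped RSS reading for the proxy every 5 minutes, so growth
# over days can be charted and compared against the connection count.
while true; do
    echo "$(date '+%F %T') $(ps -C mysql-proxy -o rss= | awk '{s += $1} END {print s}') kB" >> proxy-mem.log
    sleep 300
done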
[3 Jul 2010 15:35] Enterprise Tools JIRA Robot
Mark Leith writes: 
Do you have any custom scripts deployed that you are using within Proxy, or is all traffic just being passed through?

What kind of traffic does the Proxy handle - i.e. does it handle large result sets, or large query statements (such as large multi-row INSERT statements, or statements with very large IN (...) lists)?
[4 Jul 2010 1:42] Wan Tao LUO
Thanks for the reply, Mark Leith.

##########################################################################

I have no custom Lua scripts; I use the default rw-splitting.lua (copied into my mysql-proxy install path). The contents of my start.sh are:

./mysql-proxy --defaults-file=mysql-proxy.ini &

and the contents of mysql-proxy.ini are:

[mysql-proxy]
keepalive = true
event-threads = 4
max-open-files = 65535
log-level = message 
log-file = mysql-proxy.log
proxy-backend-addresses = <master-ip>:3306
proxy-read-only-backend-addresses = <slave-ip>:3306
proxy-lua-script = rw-splitting.lua
proxy-pool-no-change-user = true
proxy-fix-bug-25371 = true

###################################################################

I am sure that all traffic is being passed through the proxy, based on the error log generated by the custom MySQL base class used in our PHP scripts: if there were any MySQL errors, the log would record them.

###################################################################

Yes, the proxy often handles large result sets (according to MySQL Administrator, the average traffic is about 500 KB/s), and there are sometimes statements with very large IN (...) lists, containing hundreds or even thousands of values.
[5 Jul 2010 8:01] Sveta Smirnova
Thank you for the feedback.

I have had trouble repeating the described problem.

Please check whether the problem is still repeatable without the options

proxy-pool-no-change-user = true
proxy-fix-bug-25371 = true

in your environment.
[7 Jul 2010 14:16] Wan Tao LUO
Thank you, Sveta.
Sorry for replying so late; I was tied up with other matters.

The problem has recurred. I deleted:
    proxy-pool-no-change-user = true
    proxy-fix-bug-25371 = true
from my config file and restarted the proxy; now I am waiting for good news, hoho.

I will report back within this weekend or early next week.

Thanks to all of you again.
[12 Jul 2010 2:30] Wan Tao LUO
There is still a memory leak.

I have two proxies, one on each server; both are started with these options:

[mysql-proxy]
keepalive = true
event-threads = 4
max-open-files = 65535
log-level = message 
log-file = mysql-proxy.log
proxy-backend-addresses = <master-ip>:3306
proxy-read-only-backend-addresses = <slave-ip>:3306
proxy-lua-script = rw-splitting.lua

One proxy died twice according to the log, so its memory was released each time it died:

2010-07-07 21:50:08: (message) mysql-proxy 0.8.0 started
2010-07-07 21:50:08: (message) proxy listening on port :4040
2010-07-07 21:50:08: (message) added read/write backend: <master-ip>:3306
2010-07-07 21:50:08: (message) added read-only backend: <slave-ip>:3306
2010-07-07 21:50:08: (message) chassis-event-thread.c:373: starting 3 threads
2010-07-07 21:50:08: (message) chassis.c:178: [angel] we try to keep PID=5331 alive
2010-07-08 16:57:49: (message) chassis.c:223: [angel] PID=5331 died on signal=11 (it used 0 kBytes max) ... waiting 3min before restart
2010-07-08 16:57:51: (message) mysql-proxy 0.8.0 started
2010-07-08 16:57:51: (message) proxy listening on port :4040
2010-07-08 16:57:51: (message) added read/write backend: <master-ip>:3306
2010-07-08 16:57:51: (message) added read-only backend: <slave-ip>:3306
2010-07-08 16:57:51: (message) chassis-event-thread.c:373: starting 3 threads
2010-07-08 16:57:51: (message) chassis.c:178: [angel] we try to keep PID=30297 alive
2010-07-09 04:58:41: (critical) g_ptr_array_remove_index_fast: assertion `index < array->len' failed
2010-07-09 18:13:01: (critical) g_ptr_array_remove_index_fast: assertion `index < array->len' failed
2010-07-10 05:52:02: (critical) g_ptr_array_remove_index_fast: assertion `index < array->len' failed
2010-07-11 05:03:23: (critical) g_ptr_array_remove_index_fast: assertion `index < array->len' failed
2010-07-11 09:35:08: (critical) g_ptr_array_remove_index_fast: assertion `index < array->len' failed
2010-07-11 10:24:30: (message) chassis.c:223: [angel] PID=30297 died on signal=6 (it used 0 kBytes max) ... waiting 3min before restart
2010-07-11 10:24:32: (message) mysql-proxy 0.8.0 started
2010-07-11 10:24:32: (message) proxy listening on port :4040
2010-07-11 10:24:32: (message) added read/write backend: <master-ip>:3306
2010-07-11 10:24:32: (message) added read-only backend: <slave-ip>:3306
2010-07-11 10:24:32: (message) chassis-event-thread.c:373: starting 3 threads
2010-07-11 10:24:32: (message) chassis.c:178: [angel] we try to keep PID=25379 alive

The other proxy held up better, but it kept eating memory, growing from 40 MB to 545 MB within 4.5 days:
2010-07-07 22:08:03: (message) mysql-proxy 0.8.0 started
2010-07-07 22:08:03: (message) proxy listening on port :4040
2010-07-07 22:08:03: (message) added read/write backend: <master-ip>:3306
2010-07-07 22:08:03: (message) added read-only backend: <slave-ip>:3306
2010-07-07 22:08:03: (message) chassis-event-thread.c:373: starting 3 threads
2010-07-07 22:08:03: (message) chassis.c:178: [angel] we try to keep PID=16024 alive
2010-07-08 13:09:08: (critical) g_ptr_array_remove_index_fast: assertion `index < array->len' failed
2010-07-08 15:09:20: (critical) g_ptr_array_remove_index_fast: assertion `index < array->len' failed
2010-07-09 02:15:10: (critical) g_ptr_array_remove_index_fast: assertion `index < array->len' failed
2010-07-09 04:21:58: (critical) g_ptr_array_remove_index_fast: assertion `index < array->len' failed
2010-07-10 05:54:33: (critical) g_ptr_array_remove_index_fast: assertion `index < array->len' failed
2010-07-11 14:16:16: (critical) g_ptr_array_remove_index_fast: assertion `index < array->len' failed
2010-07-11 15:04:06: (critical) g_ptr_array_remove_index_fast: assertion `index < array->len' failed

I think those assertion failures in the logs may be a clue.
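
(One way to follow up on that hunch, using only the log file named in the configuration above: count the assertion failures per day and compare them with the memory samples. A rough sketch:)

# Count g_ptr_array_remove_index_fast assertion failures per day in the proxy
# log; the first field of each log line is the date.
grep 'g_ptr_array_remove_index_fast' mysql-proxy.log \
    | awk '{count[$1]++} END {for (d in count) print d, count[d]}' \
    | sort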
[15 Dec 2014 23:57] Sveta Smirnova
Thank you for the feedback.

Please try with the current version, 0.8.5, and let us know whether the bug still exists: I could not repeat it after a 4-hour run.
[16 Jan 2015 1:00] Bugs System
No feedback was provided for this bug for over a month, so it is
being suspended automatically. If you are able to provide the
information that was originally requested, please do so and change
the status of the bug back to "Open".