Bug #64563 Memory leaks in Solaris 10
Submitted: 6 Mar 2012 11:30 Modified: 27 Jul 2012 22:14
Reporter: Bartosz Bubak Email Updates:
Status: Not a Bug Impact on me:
Category:MySQL Server Severity:S2 (Serious)
Version:5.1.61 OS:Solaris (v. 10 x86_64)
Assigned to: CPU Architecture:Any
Tags: Leak, Memory, solaris

[6 Mar 2012 11:30] Bartosz Bubak
We have a small but heavily loaded database (a few thousand simple queries per minute). Unfortunately, the engine consumes more and more memory during operation, and this memory is never freed.
After some time MySQL reserves all available memory in the system, making it difficult or impossible for other processes to work.
Unfortunately, I am not able to determine exactly what causes this problem, and the only thing that helps is restarting the database engine.

Output from prstat -a:
At start:
NPROC USERNAME  SWAP   RSS MEMORY      TIME  CPU                             
4 mysql     326M   77M   3.7%   0:00:00 0.4%

After 5 minutes:
4 mysql     394M  146M   7.1%   0:07:00 0.5%

After couple hours:
4 mysql    8963M 1580M    77%  15:53:53 0.0%

The same problem exists in version 5.5.21.

How to repeat:
Send 5000 queries/sec for several hours.

I wrote a script that constantly sends simple queries from 100 threads; after a couple of hours the RAM in use rose from 400 MB to 8 GB. The memory remains occupied even after the script terminates.
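The reporter's script is not attached to this bug, so the following is only a minimal sketch of a multi-threaded load generator of the kind described above. The connection details and the body of run_query are placeholders; with a real server, each thread would open its own connection through a MySQL client library and execute a simple SELECT in the loop.

```python
# Hypothetical sketch of a 100-thread query loop (not the reporter's script).
import threading

NUM_THREADS = 100          # the reporter's script used 100 threads
QUERIES_PER_THREAD = 50    # the real test ran "for several hours"

counts = [0] * NUM_THREADS

def run_query(thread_id):
    # Placeholder for something like: cursor.execute("SELECT ...")
    counts[thread_id] += 1

def worker(thread_id):
    for _ in range(QUERIES_PER_THREAD):
        run_query(thread_id)

threads = [threading.Thread(target=worker, args=(i,))
           for i in range(NUM_THREADS)]
for t in threads:
    t.start()
for t in threads:
    t.join()

print(sum(counts))  # total queries issued across all threads
```

Each list slot is touched by exactly one thread, so no locking is needed in this sketch; a real client would also need per-thread connections, since MySQL connections are not thread-safe.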

Suggested fix:
Release used memory.
[6 Mar 2012 12:34] Valeriy Kravchuk
Please check whether running FLUSH TABLES reduces memory usage in any way. Also send the contents of your my.cnf file.
[6 Mar 2012 13:12] Bartosz Bubak
I tried the FLUSH TABLES command, without any effect.

P.S. I have added my.cnf to the case files.
[9 Mar 2012 18:52] Sveta Smirnova
Thank you for the feedback.

You have a high value for:

open-files-limit = 64000

But if open-files-limit were the cause, FLUSH TABLES would most likely have solved the problem.

Please send us the output of SHOW PROCESSLIST taken while memory usage is high.
[14 Mar 2012 10:33] Bartosz Bubak
After 1 hour of testing, the memory remains in use...

prstat -a:
1 mysql    1018M  493M    12%   1:14:06 0.0%

...but the process list is empty:

mysql> show processlist;
| Id  | User | Host      | db   | Command | Time | State | Info             |
| 305 | root | localhost | NULL | Query   |    0 | NULL  | show processlist |
1 row in set (0.00 sec)

P.S. MySQL 5.0.95 on Solaris 10 has the same problem.
[15 Mar 2012 20:16] Sveta Smirnova
Thank you for the feedback.

Very interesting.

Please also send us the output of SHOW ENGINE INNODB STATUS and SHOW GLOBAL STATUS, taken twice, 10 minutes apart, during the high memory usage period.
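The point of taking two snapshots is to diff the counters. As an illustration, the comparison can be sketched as below; it assumes each snapshot is the tab-separated batch output of `mysql -B -e "SHOW GLOBAL STATUS"` (a header line followed by "Variable_name&lt;TAB&gt;Value" rows). The sample data here stands in for the real attachments.

```python
# Sketch: diff two SHOW GLOBAL STATUS snapshots taken 10 minutes apart.
# The two sample strings below are illustrative, not the real attachments.

def parse_status(text):
    """Parse tab-separated SHOW GLOBAL STATUS output into a dict."""
    rows = [line.split("\t") for line in text.strip().splitlines()[1:]]
    return {name: value for name, value in rows}

def diff_counters(first, second):
    """Return numeric counters that changed between the two snapshots."""
    changes = {}
    for name, new in second.items():
        old = first.get(name)
        if old is not None and old != new and old.isdigit() and new.isdigit():
            changes[name] = int(new) - int(old)
    return changes

snapshot_1 = "Variable_name\tValue\nBytes_sent\t1000\nThreads_running\t2\n"
snapshot_2 = "Variable_name\tValue\nBytes_sent\t5000\nThreads_running\t30\n"

print(diff_counters(parse_status(snapshot_1), parse_status(snapshot_2)))
# {'Bytes_sent': 4000, 'Threads_running': 28}
```

This is the same kind of analysis applied to the attachments later in the thread, where only Bytes_received, Bytes_sent, and Threads_running show notable growth.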
[16 Mar 2012 10:44] Bartosz Bubak

Attachment: 28.txt (text/plain), 6.64 KiB.

[16 Mar 2012 10:45] Bartosz Bubak

Attachment: 01.txt (text/plain), 6.65 KiB.

[16 Mar 2012 10:46] Bartosz Bubak

Attachment: 27.txt (text/plain), 31.32 KiB.

[16 Mar 2012 10:47] Bartosz Bubak

Attachment: 00.txt (text/plain), 32.41 KiB.

[16 Mar 2012 10:51] Bartosz Bubak
Files added.
Memory usage:
1 mysql     375M  121M   5.9%   0:01:09  94%
1 mysql     461M  164M   8.0%   0:11:55  98%
[21 Mar 2012 20:25] Sveta Smirnova
Thank you for the feedback.

I see a noticeable increase only in Bytes_received, Bytes_sent, and Threads_running. It looks like the memory is occupied by the queries and result sets you send through the connections. This is probably correct behavior.

The only problem I see is that the memory is not released when the load stops and all connections are closed.

You wrote that you have a script which sends queries. Is it some kind of benchmarking script? If so, could you send it to us?

Or send us the output of SHOW CREATE TABLE user_campaign, SHOW CREATE TABLE campaign, and SHOW TABLE STATUS for both tables. Do you query only these two tables, or are there others?
[22 Mar 2012 7:47] Desmond Coertzen
I would like to report the same issue, but on SUSE Linux rather than Solaris.

Running 5.1.61 x86_64 on an Intel architecture with 32 GB physical RAM + 32 GB swap.

For an unknown reason, mysqld starts consuming heap non-stop until the inevitable happens:
Mar 22 08:41:09 bbrk-db-srv kernel: [1362267.437368] Out of memory: Kill process 9566 (mysqld) score 1000 or sacrifice child

Four pointers:
* Using InnoDB tables only
* Over-allocation of heap appeared after a few database dumps during backup script testing
* Backup was done with mysqldump --single-transaction --all-databases
* After the heap over-allocation, the queries performing particularly badly before the kernel sacrificed the process were queries with joins to FEDERATED tables
[22 Mar 2012 17:23] Sveta Smirnova

Thank you for the feedback. But we need a repeatable test case to reproduce the problem. Also, are you sure you are experiencing the same issue? Does FLUSH TABLES help you? What is the output of SHOW GLOBAL STATUS LIKE 'Innodb_buffer_pool_%' and SHOW GLOBAL STATUS LIKE 'Innodb_page_%'?
[27 Mar 2012 9:07] Desmond Coertzen
It is difficult to accomplish anything under the circumstances right before the OOM killer acts.

I did find another problem with query planning where FEDERATED tables are involved, and it seems to be the visible cause. The server may crash during backup because the over-caching of virtual memory is already aggravated by the time mysqldump starts.

I will log a separate bug under FEDERATED for this.
[2 Apr 2012 18:18] Sveta Smirnova

Thank you for the update.

Set to "Need feedback", because I am still waiting for an answer from Bartosz about their script.
[16 Apr 2012 19:53] Sveta Smirnova
Thank you for the feedback.

Please also send the output of SHOW TABLE STATUS LIKE 'user_campaign'.
[24 Apr 2012 14:12] Bartosz Bubak
mysql> SHOW TABLE STATUS LIKE 'user_campaign'\G
*************************** 1. row ***************************
           Name: user_campaign
         Engine: InnoDB
        Version: 10
     Row_format: Compact
           Rows: 64804
 Avg_row_length: 89
    Data_length: 5783552
Max_data_length: 0
   Index_length: 9502720
      Data_free: 4194304
 Auto_increment: NULL
    Create_time: 2012-03-01 16:37:26
    Update_time: NULL
     Check_time: NULL
      Collation: latin1_swedish_ci
       Checksum: NULL
1 row in set (0.09 sec)
[19 Jun 2012 6:40] Mukesh Palavalasa
my.cnf file

Attachment: my.cnf.txt (text/plain), 1.70 KiB.

[27 Jul 2012 22:14] Sveta Smirnova
Thank you for the feedback.

I repeated the issue with your configuration, but I tend to think this is not a bug.

The reason is:

> After couple hours:
> 4 mysql    8963M 1580M    77%  15:53:53 0.0%

1580M of physical memory, with innodb_buffer_pool_size=1217M, is roughly the 326M at start plus an InnoDB buffer pool filled by the results of your queries.

So we can only consider this a bug if the same memory increase is repeatable with a much smaller InnoDB buffer pool. Closing as "Not a Bug".

If you can repeat it with a much smaller InnoDB buffer pool, or on a machine where 1580M is much less than 77% of RAM (the numbers should be much higher), feel free to reopen the report.
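The arithmetic behind this verdict is straightforward and worth making explicit: the resident size at the end of the test is roughly the startup footprint plus a fully populated InnoDB buffer pool, so the growth is expected caching rather than a leak. The figures below are taken directly from this thread (the prstat output and the innodb_buffer_pool_size quoted above).

```python
# Memory accounting for the "not a bug" conclusion, using numbers from
# this report: RSS at start, the configured buffer pool, and the RSS
# observed after the test.
startup_rss_mb  = 326    # RSS at server start (prstat output above)
buffer_pool_mb  = 1217   # innodb_buffer_pool_size from my.cnf
observed_rss_mb = 1580   # RSS reported after a couple of hours

expected_mb = startup_rss_mb + buffer_pool_mb
print(expected_mb)                    # 1543
print(observed_rss_mb - expected_mb)  # 37 MB unaccounted for -- small
```

The ~37 MB gap is well within what per-connection buffers and thread stacks would explain, which is why the growth pattern alone does not indicate a leak.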
[23 Jun 2015 19:30] Liping Gao
This exact same symptom has occurred on every one of our MySQL servers running versions 5.6.22 and 5.6.23, and one of the busy servers quickly filled the swap space /tmp to 100%. If this is not a bug, what should we do to prevent /tmp from filling up?
Our problem server now has 12 GB of physical memory with 16 GB of swap space. We just rebooted to avoid a server crash. If mysqld never releases memory, then we will have to schedule a restart every month.
Please advise what we should do to avoid a full /tmp and a potential MySQL server crash.
[26 Jun 2015 22:15] Liping Gao
Is any developer monitoring this bug? Could you reply to me with any suggestions?