Bug #77403 MySQL server 5.6 consumes all memory on server when running query
Submitted: 18 Jun 2015 12:27 Modified: 5 Mar 2020 12:52
Reporter: Bart Verwilst Email Updates:
Status: Can't repeat Impact on me:
Category:MySQL Server: Optimizer Severity:S2 (Serious)
Version:5.6.25 OS:Linux
Assigned to: CPU Architecture:Any

[18 Jun 2015 12:27] Bart Verwilst
Tested on 5.6.19, 5.6.24 and 5.6.25: PROBLEM

Tested on 5.5.4x: No memory increase!

Default installs, only settings that are changed in my.cnf:

innodb-file-per-table          = 1
innodb-buffer-pool-size        = 1G

The following query drains all memory from the system, growing mysqld's memory usage from 1.2G to 5+G in a matter of seconds, after which the system starts swapping and the OOM killer kills mysqld.

SELECT SQL_NO_CACHE
    `messaging_conversations`.`id` * CAST(7 AS SIGNED) + 3 AS `id`,
    `messaging_conversations`.`subject` AS `subject`,
    GROUP_CONCAT(DISTINCT IFNULL(`messaging_messages`.`body`, '0') SEPARATOR ' ') AS `messages`,
    GROUP_CONCAT(DISTINCT IFNULL(`profiles`.`displayname`, '0') SEPARATOR ' ') AS `participants`,
    `messaging_conversations`.`last_message_at` AS `last_message_at`,
    `messaging_conversations`.`id` AS `sphinx_internal_id`,
    0 AS `sphinx_deleted`,
    328567051 AS `class_crc`,
    IFNULL(`messaging_conversations`.`last_message_at`, '') AS `last_message_at_sort`,
    `messaging_conversations`.`network_id` AS `network_id`,
    GROUP_CONCAT(DISTINCT IFNULL(`contents`.`space_id`, '0') SEPARATOR ',') AS `space_id`
FROM `messaging_conversations`
LEFT OUTER JOIN `messaging_messages`
    ON `messaging_messages`.`conversation_id` = `messaging_conversations`.`id`
LEFT OUTER JOIN `messaging_conversation_trackers`
    ON `messaging_conversation_trackers`.`conversation_id` = `messaging_conversations`.`id`
LEFT OUTER JOIN `profiles`
    ON `profiles`.`id` = `messaging_conversation_trackers`.`profile_id`
LEFT OUTER JOIN `contents`
    ON `contents`.`content_uuid` = `messaging_conversations`.`content_uuid`
WHERE (`messaging_conversations`.`id` >= 42117
   AND `messaging_conversations`.`id` <= 43140
   AND `messaging_conversations`.`delta` = 1)
GROUP BY `messaging_conversations`.`id`
ORDER BY NULL;


| id | select_type | table                           | type   | possible_keys                                                                                                      | key                                                      | key_len | ref                                                        | rows | Extra       |
|  1 | SIMPLE      | messaging_conversations         | range  | PRIMARY,index_messaging_conversations_on_content_uuid,index_messaging_conversations_on_network_id_and_content_uuid | PRIMARY                                                  | 4       | NULL                                                       |  479 | Using where |
|  1 | SIMPLE      | messaging_messages              | ref    | index_messaging_messages_on_conversation_id                                                                        | index_messaging_messages_on_conversation_id              | 4       | trustedplatform.messaging_conversations.id                 |    1 | NULL        |
|  1 | SIMPLE      | messaging_conversation_trackers | ref    | index_messaging_conversation_trackers_on_conversation_id                                                           | index_messaging_conversation_trackers_on_conversation_id | 4       | trustedplatform.messaging_conversations.id                 |    4 | NULL        |
|  1 | SIMPLE      | profiles                        | eq_ref | PRIMARY                                                                                                            | PRIMARY                                                  | 4       | trustedplatform.messaging_conversation_trackers.profile_id |    1 | NULL        |
|  1 | SIMPLE      | contents                        | ref    | index_contents_on_content_uuid                                                                                     | index_contents_on_content_uuid                           | 16      | trustedplatform.messaging_conversations.content_uuid       |    1 | NULL        |

How to repeat:
100% reproducible by running the query on my dataset.
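To observe the growth while the query runs, a minimal shell sketch (not part of the original report; `rss_watch` is a hypothetical helper, and the PID must be supplied by the caller, e.g. from `pgrep -x mysqld`) that samples a process's resident set size once per second:

```shell
# Print the RSS (in KiB) of a given PID once per second for N samples.
# Usage: rss_watch <pid> [samples]
rss_watch() {
    pid=$1
    n=${2:-10}
    i=0
    while [ "$i" -lt "$n" ]; do
        # POSIX ps: print only the resident set size, no header.
        ps -o rss= -p "$pid" || break
        i=$((i + 1))
        [ "$i" -lt "$n" ] && sleep 1
    done
}
```

Running `rss_watch "$(pgrep -x mysqld)" 30` in a second terminal while issuing the query should show RSS climbing from roughly the buffer-pool footprint toward several gigabytes.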
[18 Jun 2015 12:28] Bart Verwilst
table definitions + show variables

Attachment: mysql-stats.txt (text/plain), 184.43 KiB.

[19 Jun 2015 23:57] MySQL Verification Team
Thank you for the bug report. Are you able to attach a dump file with enough data to test? Thanks.
[22 Jun 2015 19:27] Liping Gao
We found that MySQL 5.6.21 does not release memory at all. Memory usage increases faster with a large join_buffer_size, large max_package_size, and large innodb_buffer_pool_size.
[23 Jun 2015 7:37] MySQL Verification Team
Thank you for the feedback.
Could you please provide sample data (an SQL dump of the tables involved in the query) so we can reproduce this issue at our end? You may want to mark it as private when attaching it here.
[23 Jun 2015 13:32] Liping Gao
I can't upload the dump data because of data security, but you don't need my data to reproduce the issue. I am on Sun Solaris 5.11 sun4v sparc, and I've noticed that memory usage is climbing on every one of our MySQL servers; it is just most obvious on this one because of the activity and the big innodb_buffer_pool_size (3G), max_package_size (256M), and join_buffer_size (128M). We have to bounce the server in order to release the memory.
You can open and close any connection on your own MySQL 5.6.22/5.6.23 server and watch for the memory-release issue.
[23 Jun 2015 13:39] Bart Verwilst
I've uploaded mysql-dumpdata-77403.sql.gz to sftp.oracle.com, under /support/incoming

This contains all tables ( with scrambled data ) that are needed by my query.

My test MySQL instance is a clean install with these 2 params added:

innodb-file-per-table          = 1
innodb-buffer-pool-size        = 1G

Running the query increases mysqld's memory usage to around 4G, at which point the query completes (my Vagrant machine has enough RAM to allow the query to finish). After the query, memory is freed and usage is back to roughly 500M.

I hope you guys can trigger the same behaviour!
[23 Jun 2015 13:42] Bart Verwilst
I think Liping Gao's issue is different from mine. My server runs fine without increased memory usage; the problem only occurs while running the actual query, after which memory is released again (if enough memory is available on the server; otherwise the OOM killer kicks in and kills mysqld).
[23 Jun 2015 13:48] Liping Gao
Sorry, Bart. My issue is different from yours.

Umesh, Should I open another bug for my issue?

Bart, do you think my issue will disappear after upgrading to 5.6.25, the version you have now?

[23 Jun 2015 17:10] Liping Gao
Just tested 5.6.25 on the test server with only the same two parameters as Bart: mysqld's memory usage in 'prstat 1' stays the same after the query completes and the connections are closed. So the same issue exists on 5.6.25 as well. Please advise, thanks.
[24 Jun 2015 8:56] MySQL Verification Team
@Bart: The mysql-dumpdata-77403.sql.gz file seems to be corrupted (we tried to extract it on Windows and Linux with no luck). Please re-upload it (and make sure the file is named mysql-bug-data-77403.zip).


Please file a separate report with a repeatable test case if you think your issue is different from this one.
[24 Jun 2015 9:01] Bart Verwilst

af3684e9a6a2be09e6003d9fe79c46cb  mysql-bug-data-77403.zip
[24 Jun 2015 13:17] Liping Gao
Yes, the issue I have is different from this one. I saw that Bug #64563 has exactly the same symptoms. Should I follow up on that one?
[4 Aug 2015 11:17] Bart Verwilst

Any progress on this issue?


[4 Aug 2015 13:19] Liping Gao
Hi Bart,
Are you asking about my issue or the issue in this Bug #77403? I haven't received any comments since I posted a new bug regarding the MySQL memory-release issue. Could you check that bug as well?
[4 Aug 2015 13:23] Liping Gao
The new bug I filed is Bug #77616.
[4 Feb 2020 15:55] MySQL Verification Team
Hi Mr. Verwilst,

Your data file is not on our SFTP site any more.

Do you still see the same problem?

If you do, please upload your data again.
[5 Mar 2020 1:00] Bugs System
No feedback was provided for this bug for over a month, so it is
being suspended automatically. If you are able to provide the
information that was originally requested, please do so and change
the status of the bug back to "Open".