| Bug #72322 | Query to I_S.tables and I_S.columns leads to huge memory usage | | |
|---|---|---|---|
| Submitted: | 11 Apr 2014 23:52 | Modified: | 12 May 2015 12:41 |
| Reporter: | Przemyslaw Malkowski | Email Updates: | |
| Status: | Closed | Impact on me: | |
| Category: | MySQL Server: Information schema | Severity: | S2 (Serious) |
| Version: | 5.5 5.6 5.7 | OS: | Any |
| Assigned to: | | CPU Architecture: | Any |
| Tags: | information_schema, memory leak | | |
[11 Apr 2014 23:53]
Przemyslaw Malkowski
simple script to create tables for test case
Attachment: create_tables.php (application/x-php, text), 937 bytes.
[11 Apr 2014 23:54]
Przemyslaw Malkowski
Massif mem profile visualized
Attachment: I_S_mem_leak_massif_visual.png (image/png, text), 330.54 KiB.
[11 Apr 2014 23:55]
Przemyslaw Malkowski
raw massif file
Attachment: massif_5.7.4.out (application/octet-stream, text), 376.15 KiB.
[14 Apr 2014 10:19]
MySQL Verification Team
Hello Przemyslaw, Thank you for the bug report and test case. Verified as described. Thanks, Umesh
[14 Apr 2014 10:21]
MySQL Verification Team
// on a 6G instance, RES reaches ~6G in no time...

  PID USER      PR  NI  VIRT   RES   SHR S  %CPU %MEM    TIME+  COMMAND
16833 root      20   0 14.5g  5.6g  4092 S  93.7 92.1  15:41.38 mysqld

OOM killer invoked when it reached ~6G
[14 Apr 2014 10:23]
MySQL Verification Team
// used 5.6.18 tar binaries
[13 Sep 2014 5:15]
Paul Namuag
Any update on this ticket? Przemyslaw has stated the following in his suggested fix: "Fix if this is a memory leak. Or explain why this happens otherwise." Can you please comment on this? Thanks!
[12 May 2015 12:41]
Paul DuBois
Noted in 5.5.44, 5.6.25, 5.7.8, 5.8.0 changelogs. Certain queries for the INFORMATION_SCHEMA TABLES and COLUMNS tables could lead to excessive memory use when there were large numbers of empty InnoDB tables.
[23 Jun 2015 14:09]
Laurynas Biveinis
commit c3870e089a0f9ba50adcf17c8795871132e81697
Author: V S Murthy Sidagam <venkata.sidagam@oracle.com>
Date: Mon Apr 27 23:50:13 2015 +0530
Bug #18592390 QUERY TO I_S.TABLES AND I_S.COLUMNS LEADS TO HUGE MEMORY USAGE
Description: On an example MySQL instance with 28k empty
InnoDB tables, a specific query to information_schema.tables
and information_schema.columns leads to memory consumption
over 38GB RSS.
Analysis: In the get_all_tables() call, we fill the I_S tables
from frm files and the storage engine. As part of that process
we call make_table_name_list() and allocate memory for all
28k frm file names in the THD mem_root through
make_lex_string_root(). Because the full list is rebuilt for
each of the 28k tables, make_lex_string_root() ends up being
called around 28k * 28k times, and a huge amount of memory
piles up in the THD mem_root. This causes the RSS to grow
to 38GB.
Fix: As part of the fix we create a temporary mem_root
in get_all_tables() and pass it to find_files(). There we
replace the THD mem_root with the temporary mem_root,
allocate the file names in the temporary mem_root, free it
once we have filled the I_S tables in get_all_tables(), and
re-assign the original mem_root back to the THD.
Note: Checked the massif output with the fix; memory growth
now peaks at only around 580MB.
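[Editor's note] The arithmetic behind the blow-up: 28,000 file names rebuilt once for each of 28,000 tables is roughly 784 million small allocations; at, say, 50 bytes apiece (a rough assumption for a LEX_STRING plus the name itself) that is on the order of the ~38GB RSS observed. Below is a self-contained C++ sketch of the technique the commit describes — routing per-iteration allocations into a short-lived arena instead of the statement-lived one. It uses std::pmr arenas as a stand-in for MySQL's MEM_ROOT; the names (list_table_names, n_tables) are illustrative, not taken from the patch.

```cpp
#include <cstddef>
#include <memory_resource>
#include <string>
#include <vector>

// Stand-in for make_table_name_list(): builds the full file-name list,
// allocating every name from the supplied arena.
std::pmr::vector<std::pmr::string> list_table_names(
    std::pmr::memory_resource *arena, std::size_t n_tables) {
  std::pmr::vector<std::pmr::string> names{arena};
  for (std::size_t i = 0; i < n_tables; ++i)
    names.emplace_back(("db/t" + std::to_string(i)).c_str());
  return names;
}

int main() {
  // The report used 28,000 tables; scaled down so the demo runs quickly.
  constexpr std::size_t n_tables = 2000;

  // Buggy shape: every iteration's name list lands in one long-lived,
  // statement-scoped arena (like THD::mem_root); a monotonic arena never
  // reuses freed space, so n_tables * n_tables strings accumulate.
  std::pmr::monotonic_buffer_resource thd_mem_root;
  for (std::size_t i = 0; i < n_tables; ++i) {
    auto names = list_table_names(&thd_mem_root, n_tables);
    (void)names;  // vector dies here, but its memory stays pinned in the arena
  }

  // Fixed shape: a temporary arena per iteration, destroyed on scope exit,
  // so peak usage is one name list rather than n_tables of them.
  for (std::size_t i = 0; i < n_tables; ++i) {
    std::pmr::monotonic_buffer_resource tmp_mem_root;
    auto names = list_table_names(&tmp_mem_root, n_tables);
    (void)names;  // real code would fill the I_S row from `names` here
  }  // tmp_mem_root releases this iteration's names
}
```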
commit 31c803e8d0543b330aa8e61ef878da43fe1f68f7
Author: V S Murthy Sidagam <venkata.sidagam@oracle.com>
Date: Wed Apr 29 13:51:29 2015 +0530
Bug #18592390 QUERY TO I_S.TABLES AND I_S.COLUMNS LEADS TO HUGE MEMORY USAGE
As part of the fix, the find_files() prototype was modified,
and mysql-cluster uses the find_files() function. Hence the
find_files() call in ha_ndbcluster_binlog.cc was modified to
keep the mysql-cluster build successful.
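[Editor's note] A hedged reconstruction of the shape of that prototype change, based only on the two commit messages above (not on the actual diff): the added trailing MEM_ROOT parameter and its name are assumptions, and the stub declarations stand in for the real server headers.

```cpp
// Stubs standing in for the real server headers (THD, MEM_ROOT, List,
// LEX_STRING are all defined elsewhere in the MySQL source tree).
struct THD;
struct MEM_ROOT;
struct LEX_STRING;
template <class T> class List;

enum find_files_result { FIND_FILES_OK, FIND_FILES_OOM, FIND_FILES_DIR };

// Before: the file names it collects are allocated on thd->mem_root.
find_files_result find_files(THD *thd, List<LEX_STRING> *files,
                             const char *db, const char *path,
                             const char *wild, bool dir);

// After (assumed shape of the new prototype): callers -- including the
// find_files() call in ha_ndbcluster_binlog.cc mentioned above -- supply
// a disposable mem_root for the file-name allocations.
find_files_result find_files(THD *thd, List<LEX_STRING> *files,
                             const char *db, const char *path,
                             const char *wild, bool dir,
                             MEM_ROOT *tmp_mem_root);
```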

[11 Apr 2014 23:52]
Przemyslaw Malkowski

Description:
On an example MySQL instance with 28k empty InnoDB tables, a specific query to information_schema.tables and information_schema.columns leads to memory consumption over 38GB RSS.

How to repeat:
Create 28000 tables with the create_tables.php script attached to the issue. Then run the following query and watch memory utilization:

SELECT concat('Select ''', a.TABLE_SCHEMA,',',a.TABLE_NAME,',', DATA_LENGTH, ',', INDEX_LENGTH, ',', DATA_FREE,
       ','', count(*),', ''','', ifnull(max(ifnull(length(',b.COLUMN_NAME,'),0)),0),', ''','',' '
       ifnull(avg(ifnull(length(', b.COLUMN_NAME,'),0)),0) From ', a.TABLE_SCHEMA,'.',a.TABLE_NAME,';')
FROM information_schema.tables a, information_schema.columns b
WHERE (a.TABLE_NAME=b.TABLE_NAME AND a.TABLE_SCHEMA=b.TABLE_SCHEMA)
  AND a.TABLE_SCHEMA NOT IN ('mysql', 'wdmon', 'information_schema', 'performance_schema')
  AND b.DATA_TYPE IN ('blob', 'longblob', 'longtext', 'mediumblob', 'mediumtext', 'text')
UNION
SELECT concat('Select ''', TABLE_SCHEMA,',',TABLE_NAME,',', DATA_LENGTH, ',', INDEX_LENGTH, ',', DATA_FREE,
       ','', count(*),', ''','', 0,', ''','',' ' 0 From ', TABLE_SCHEMA,'.',TABLE_NAME,';')
FROM information_schema.tables a
WHERE concat(TABLE_SCHEMA,TABLE_NAME) NOT IN
      (SELECT concat(TABLE_SCHEMA,TABLE_NAME) FROM information_schema.columns
       WHERE DATA_TYPE IN ('blob', 'longblob', 'longtext', 'mediumblob', 'mediumtext', 'text'))
  AND TABLE_SCHEMA NOT IN ('mysql', 'wdmon', 'information_schema', 'performance_schema')
ORDER BY 1;

At the peak of the run, just before the query finishes, mysqld RSS gets to 38.5032 GB. Notable memory usage growth starts when the query state changes from "Opening tables" to "checking permissions". The same behaviour occurs on 5.5.36, 5.6.17 and 5.7.4.

MySQL settings used:

query-cache-type = 0
query-cache-size = 0
innodb_buffer_pool_size = 1G
innodb_log_file_size = 2G
innodb_log_buffer_size = 64M
innodb_flush_log_at_trx_commit = 0
innodb_file_per_table
innodb_open_files = 50000
innodb_flush_method = O_DIRECT

Tables present on this instance:

mysql> SELECT engine, count(*) tables, concat(round(sum(table_rows)/1000000,2),'M') rows,
       concat(round(sum(data_length)/(1024*1024*1024),2),'G') data,
       concat(round(sum(index_length)/(1024*1024*1024),2),'G') idx,
       concat(round(sum(data_length+index_length)/(1024*1024*1024),2),'G') total_size,
       round(sum(index_length)/sum(data_length),2) idxfrac
       FROM information_schema.TABLES
       GROUP BY engine ORDER BY sum(data_length+index_length);
+--------------------+--------+-------+-------+-------+------------+---------+
| engine             | tables | rows  | data  | idx   | total_size | idxfrac |
+--------------------+--------+-------+-------+-------+------------+---------+
| NULL               |     59 | NULL  | NULL  | NULL  | NULL       |    NULL |
| CSV                |      2 | 0.00M | 0.00G | 0.00G | 0.00G      |    NULL |
| PERFORMANCE_SCHEMA |     75 | 2.17M | 0.00G | 0.00G | 0.00G      |    NULL |
| MEMORY             |     50 | NULL  | 0.00G | 0.00G | 0.00G      |    NULL |
| MyISAM             |     31 | 0.00M | 0.00G | 0.00G | 0.00G      |    0.16 |
| InnoDB             |  28007 | 0.34M | 0.46G | 0.85G | 1.31G      |    1.87 |
+--------------------+--------+-------+-------+-------+------------+---------+
6 rows in set (1 min 4.41 sec)

Valgrind massif output attached.
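[Editor's note] The attached create_tables.php is not reproduced in this report. As a rough stand-in, a generator along these lines (hypothetical names: bugtest, t0..tN, payload) should produce an equivalent set of empty InnoDB tables, each with a TEXT column so the repro query's DATA_TYPE filter matches them.

```cpp
// Hypothetical substitute for the attached create_tables.php: prints
// CREATE TABLE statements for 28000 empty InnoDB tables.
// Usage: ./gen_tables | mysql -uroot
#include <cstdio>

int main() {
  std::printf("CREATE DATABASE IF NOT EXISTS bugtest;\nUSE bugtest;\n");
  for (int i = 0; i < 28000; ++i)
    std::printf(
        "CREATE TABLE t%d (id INT PRIMARY KEY, payload TEXT)"
        " ENGINE=InnoDB;\n",
        i);
  return 0;
}
```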
In 5.7's ps_helper output we can see:

mysql [localhost] {root} (performance_schema) > select * from ps_helper.memory_global_by_current_allocated limit 10;
+--------------------------------------------+---------------+---------------+-------------------+------------+------------+----------------+
| event_name                                 | current_count | current_alloc | current_avg_alloc | high_count | high_alloc | high_avg_alloc |
+--------------------------------------------+---------------+---------------+-------------------+------------+------------+----------------+
| memory/sql/thd::main_mem_root              |          6140 | 35.75 GiB     | 5.96 MiB          |       6449 | 35.80 GiB  | 5.68 MiB       |
| memory/performance_schema/internal_buffers |            60 | 497.00 MiB    | 8.28 MiB          |         60 | 497.00 MiB | 8.28 MiB       |
| memory/sql/TABLE                           |         10063 | 21.58 MiB     | 2.20 KiB          |      10138 | 21.98 MiB  | 2.22 KiB       |
| memory/sql/TABLE_SHARE::mem_root           |          8012 | 18.83 MiB     | 2.41 KiB          |       8339 | 19.55 MiB  | 2.40 KiB       |
| memory/mysys/KEY_CACHE                     |             3 | 8.00 MiB      | 2.67 MiB          |          3 | 8.00 MiB   | 2.67 MiB       |
| memory/mysys/lf_node                       |         21219 | 1.36 MiB      | 67 bytes          |      21219 | 1.36 MiB   | 67 bytes       |
| memory/mysys/lf_slist                      |         20375 | 636.72 KiB    | 32 bytes          |      20375 | 636.72 KiB | 32 bytes       |
| memory/mysys/IO_CACHE                      |             6 | 448.05 KiB    | 74.67 KiB         |         17 | 736.15 KiB | 43.30 KiB      |
| memory/sql/sp_head::main_mem_root          |            40 | 422.67 KiB    | 10.57 KiB         |         40 | 422.67 KiB | 10.57 KiB      |
| memory/mysys/array_buffer                  |            84 | 372.12 KiB    | 4.43 KiB          |         99 | 389.35 KiB | 3.93 KiB       |
+--------------------------------------------+---------------+---------------+-------------------+------------+------------+----------------+
10 rows in set (0.00 sec)

On a machine with 32GB of RAM this query triggers the OOM killer, so I had to test on a bigger one.

Suggested fix:
Fix if this is a memory leak. Or explain why this happens otherwise.