Bug #70641 5.6 partitions use much more memory than 5.1
Submitted: 16 Oct 2013 23:56 Modified: 29 Jul 2014 16:04
Reporter: Yoshinori Matsunobu (OCA)
Status: Closed
Category:MySQL Server: Partitions Severity:S3 (Non-critical)
Version:5.6.14 OS:Any
Assigned to: CPU Architecture:Any

[16 Oct 2013 23:56] Yoshinori Matsunobu
See how to repeat

How to repeat:
Run 5.1.59 and 5.6.14 instances with a large enough table cache (I set table_open_cache=50000), and make sure to set innodb_file_per_table=1.

Create the following databases/tables:
100 partitions per table
100 tables per database
30 databases

for i in `seq 1 100`; do
  echo "create table p$i (a int primary key auto_increment, b varchar(1000)) engine=innodb partition by key (a) partitions 100;" >> table.sql
done

for i in `seq 1 30`; do
  mysql -e "create database db$i"
  mysql db$i < table.sql
done

After creating the tables, cleanly shut down mysqld, then restart it with a small buffer pool size.
mysqladmin shutdown
mysqld_safe --defaults-file=my.cnf --innodb_buffer_pool_size=1G &

Then run the query below:
select count(*) from information_schema.partitions;

5.1.59:
creating tables: 8.8 sec per database
RES after create: 24g
RES after restart and select: 4.8g

5.6.14:
creating tables: 14.2 sec per database
RES after create: 23g
RES after restart and select: 7.7g
[12 Nov 2013 9:26] MySQL Verification Team
Hello Yoshinori,

Thank you for the bug report and test case.
Verified as described.

[27 Mar 2014 11:21] Mattias Jonsson
This looks like an issue introduced by the fix for bug#57480.

The ha_innobase::srch_key_val* arrays were added, taking ~6 kB, which accounts for the majority of the memory increase for tables in the table cache.

Notice that each partition has these 6 kB allocated, so having many partitions in the table cache increases memory usage accordingly.
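To get a feel for the numbers, here is a back-of-the-envelope estimate (a hypothetical helper, not server code) of the buffer overhead once everything is in the table cache:

```cpp
#include <cstdint>

// Hypothetical estimate (not server code): the pair of srch_key_val
// buffers costs ~6 kB per open partition handler, so the overhead
// scales with the number of partitions held in the table cache.
constexpr uint64_t kSrchKeyValBytes = 6 * 1024;

constexpr uint64_t srch_key_val_overhead(uint64_t databases,
                                         uint64_t tables_per_db,
                                         uint64_t partitions_per_table) {
  return databases * tables_per_db * partitions_per_table * kSrchKeyValBytes;
}
```

For the repro above (30 databases x 100 tables x 100 partitions) this alone comes to roughly 1.8 GB, a large share of the observed RES difference.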

Possible fixes:
- remove srch_key_val2 and use the stack in records_in_range instead.
- replace srch_key_val1 with a pointer and allocate the memory in ha_innobase::index_init(), freeing it in ha_innobase::index_end(). That way it will not stay allocated in the table cache, and it can also be smaller. The downside is an extra alloc() for each handler in each query that uses indexes.
- dynamically allocate them in one alloc call when the statement starts (external_lock(!UNLOCK) or start_stmt) and free them at end of statement (unlock). Needs investigation because of SQL HANDLER statements.
- use the prebuilt->heap for allocating the search key in ha_innobase::index_read().

Fixing srch_key_val2 is lower risk than fixing srch_key_val1, since the former is only used within records_in_range(), while srch_key_val1 is used between handler:: calls.
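The second option could look roughly like this (a simplified sketch with invented names, not the actual handler code): the buffer becomes a pointer that only lives between index_init() and index_end(), so it does not stay resident while the handler sits in the table cache.

```cpp
#include <cstddef>
#include <memory>

// Sketch only: a stand-in for ha_innobase showing the lifetime change.
class HandlerSketch {
 public:
  bool index_init(std::size_t max_key_len) {
    // Allocate the conversion buffer only for the duration of index use.
    srch_key_val1 = std::make_unique<unsigned char[]>(max_key_len);
    srch_key_len = max_key_len;
    return srch_key_val1 != nullptr;
  }
  void index_end() {
    srch_key_val1.reset();  // freed at end of index use, not at close()
    srch_key_len = 0;
  }
  bool has_buffer() const { return srch_key_val1 != nullptr; }

 private:
  std::unique_ptr<unsigned char[]> srch_key_val1;  // was a fixed ~kB array member
  std::size_t srch_key_len = 0;
};
```

The trade-off noted above is visible here: one extra allocation per handler per query that touches an index, in exchange for no per-partition buffer lingering in the table cache.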
[27 Mar 2014 11:32] Mattias Jonsson
Profile from google perftools; heap profile for one db with 100 tables with 100 partitions each in mysql-5.1

Attachment: heapprofile.db2.100t100p.51.4.pdf (application/pdf, text), 12.70 KiB.

[27 Mar 2014 11:33] Mattias Jonsson
Profile from google perftools; heap profile for one db with 100 tables with 100 partitions each in mysql-5.5

Attachment: heapprofile.db2.100t100p.55.2.pdf (application/pdf, text), 11.16 KiB.

[27 Mar 2014 11:35] Mattias Jonsson
Profile from google perftools; heap profile for one db with 100 tables with 100 partitions each in mysql-5.6

Attachment: heapprofile.db2.100t100p.56.2.pdf (application/pdf, text), 11.62 KiB.

[27 Mar 2014 11:37] Mattias Jonsson
mysql-5.6 with ha_innobase::srch_key_val allocated in index_init() and freed in index_end()

Attachment: heapprofile.db2.100t100p.56.4.pdf (application/pdf, text), 13.29 KiB.

[27 Mar 2014 11:47] Mattias Jonsson
Object sizes:
Reading symbols from /export/home2/tmp/mjonsson/mysql-5.1/sql/mysqld...done.
(gdb) p sizeof(handler)
$1 = 352
(gdb) p sizeof(ha_partition)
$2 = 784
(gdb) p sizeof(ha_innobase)
$3 = 496
(gdb) p sizeof(TABLE)
$4 = 2200
(gdb) p sizeof(TABLE_SHARE)
$5 = 856
(gdb) p sizeof(partition_info)
$6 = 560
(gdb) p sizeof(partition_element)
$7 = 152
(gdb) p sizeof(row_prebuilt_t)
$8 = 440

Reading symbols from /export/home2/tmp/mjonsson/test-5.5-rel/sql/mysqld...done.
(gdb) p sizeof(handler)
$1 = 360
(gdb) p sizeof(ha_partition)
$2 = 856
(gdb) p sizeof(ha_innobase)
$3 = 6712
(gdb) p sizeof(TABLE)
$4 = 2240
(gdb) p sizeof(TABLE_SHARE)
$5 = 824
(gdb) p sizeof(partition_info)
$6 = 536
(gdb) p sizeof(partition_element)
$7 = 152
(gdb) p sizeof(row_prebuilt_t)
$8 = 800

Reading symbols from /export/home2/tmp/mjonsson/test-5.6-rel/sql/mysqld...done.
(gdb) p sizeof(handler)
$1 = 504
(gdb) p sizeof(ha_partition)
$2 = 1104
(gdb) p sizeof(ha_innobase)
$3 = 6920
(gdb) p sizeof(TABLE)
$4 = 2256
(gdb) p sizeof(TABLE_SHARE)
$5 = 800
(gdb) p sizeof(partition_info)
$6 = 584
(gdb) p sizeof(partition_element)
$7 = 152
(gdb) p sizeof(row_prebuilt_t)
$8 = 856
[9 Jun 2014 10:12] Mattias Jonsson
We are currently working on this bug, and the current plan is to:
- Change the srch_key_val* ha_innobase member variables to pointers (initialized to 0).
- Probably add srch_key_val_length to the ha_innobase class.
- On ha_innobase::open(), evaluate the maximum total length of all INT fields in all indexes.
- Allocate that much memory to both srch_key_val1 and srch_key_val2 (probably OK to allocate from prebuilt->heap).
- Add debug asserts that the conversion never uses more memory than allocated.

This way we will only allocate as much memory as is actually used, while still not changing the code too much, and we only allocate it once on open. So normal operations will work as before (only an extra length calculation plus a prebuilt->heap allocation in open()), but with less memory used.
I.e. if a table has at most 8 BIGINT columns in an index, it will allocate only 128 extra bytes (8 x 8 x 2 bytes) instead of twice the maximum key length (6 kB in total).
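The sizing step of that plan can be sketched like this (hypothetical types, not the server's internal structures): sum the widths of the integer columns per index, take the maximum, and size each of the two buffers to that.

```cpp
#include <cstddef>
#include <vector>

// Sketch of the planned sizing, with invented types.
struct IndexDef {
  std::vector<std::size_t> int_col_bytes;  // width of each INT-family column
};

// Total bytes to allocate for srch_key_val1 + srch_key_val2:
// twice the largest per-index sum of integer column widths.
std::size_t srch_key_val_alloc_bytes(const std::vector<IndexDef>& indexes) {
  std::size_t max_len = 0;
  for (const IndexDef& idx : indexes) {
    std::size_t len = 0;
    for (std::size_t w : idx.int_col_bytes) len += w;
    if (len > max_len) max_len = len;
  }
  return 2 * max_len;  // two buffers, as in the plan above
}
```

For the worked example above, an index of 8 BIGINT (8-byte) columns yields 8 x 8 x 2 = 128 bytes, versus ~6 kB for the fixed-size arrays.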
[29 Jul 2014 16:04] Jon Stephens
Thank you for your bug report. This issue has been committed to our source repository of that product and will be incorporated into the next release.

Documented fix in the MySQL 5.6.21 and 5.7.5 changelogs, as follows:

    Large numbers of partitioned InnoDB tables could consume much
    more memory when used in MySQL 5.6 or 5.7 than the memory used
    by the same tables in previous releases of the MySQL Server.

    See also BUG#11764622 and BUG#57480.


If necessary, you can access the source repository and build the latest available version, including the bug fix. More information about accessing the source trees is available at

[30 Jul 2014 12:34] Mattias Jonsson
Patch for mysql-5.6.19

Attachment: bug70641-5.6.19.diff (application/octet-stream, text), 12.12 KiB.

[14 Aug 2014 8:56] Florian Rampp

When 50 tables are opened in parallel by 100 connections, each table having 480 partitions with 8 subpartitions each, MySQL 5.6.17 runs out of memory with 7.5 GB of RAM available (on an Amazon RDS db.m1.large instance).
Does this relate to this bug?

Are the 6 kB of data used per partition, or even per subpartition?
[14 Aug 2014 11:14] Mattias Jonsson
Hi Florian,

Every subpartition takes 6 kB of memory before this fix.

Having 50 open tables with 480 partitions * 8 subpartitions each would take at least 1 GB of extra RAM without this fix (and if you have 100 concurrent connections all trying to use the same 50 tables, it is likely that several instances of each table are open, which increases the memory usage further). So yes, I think you are affected by this bug.

May I ask you a bit more about your use case? (Or perhaps we can discuss it at the MySQL Partitioning forum: http://forums.mysql.com/list.php?106)

Mattias, MySQL Partitioning developer
[14 Aug 2014 11:45] Florian Rampp
Hi Mattias,

Thanks for your super quick response and for confirming my suspicions.
I described our setup and problem here: http://serverfault.com/questions/619756/mysql-5-6-17-running-out-of-memory-when-opening-ta...

I also created a new topic in the partitioning forum.

[13 Sep 2014 16:20] James Day
This bug was introduced by the fix for bug #57480 "Memory leak when having 256+ tables".
[25 Sep 2014 11:45] Laurynas Biveinis
revno: 6075
committer: Aditya A <aditya.a@oracle.com>
branch nick: mysql-5.6
timestamp: Tue 2014-07-15 10:23:36 +0530
  More memory usage was seen due to the increase in size of
  the ha_innobase object. The two character arrays
  srch_key_val* in the ha_innobase class each contribute
  approximately 6 KB, so when large databases with many
  partitions are used, memory usage increases.
  MySQL stores INT datatypes in little-endian format,
  whereas InnoDB stores them in big-endian format. The
  srch_key_val* arrays are only needed when converting
  INT datatypes in the search key from little-endian to
  big-endian format. So we allocate memory equivalent to
  the number of INTs in each index of the table, which
  reduces the memory footprint. We have also moved the
  allocation of the memory to ha_innobase::open() so that
  the memory is only allocated when the table handler is
  opened.
  [Approved by Mattiasj and Jimmy #rb5565]
[25 Sep 2014 11:46] Laurynas Biveinis
revno: 6077
committer: Aditya A <aditya.a@oracle.com>
branch nick: mysql-5.6
timestamp: Tue 2014-07-15 14:30:32 +0530
  Post push fix. Compilation warning in gcov build.