Bug #36613 Creating many tables with many columns in innodb leads to out of memory
Submitted: 9 May 2008 5:33
Modified: 27 May 2008 13:05
Reporter: Markus Schindler
Status: Duplicate
Impact on me: None
Category: MySQL Server: InnoDB storage engine
Severity: S2 (Serious)
Version: 5.1.24
OS: Any
Assigned to: Assigned Account
CPU Architecture: Any
Tags: innodb

[9 May 2008 5:33] Markus Schindler
Description:
Creating big InnoDB tables (with a large number of columns) together with partitions seems to leak memory.

My actual application creates a table every night to store a new day's data in it. This table has > 500 columns. I noticed the bug when I found out that, after switching to partitions, mysql was suddenly killed by the kernel's oom-killer.

The server itself runs with 4 GB of physical RAM and 1 GB of swap. The "production" server is configured to use up to 3.5 GB of the physical RAM. After running for around 5 days, the server was killed by the oom-killer.

I was able to trace this down to the time of the partitioned table creation.

To reproduce, I switched to my "play" server, which is SuSE Linux x86_32 with 2 GB of RAM and kernel 2.6.16.13-4, whereas production is 64-bit with kernel 2.6.18.2-34.

I noticed that each table creation (script below) increased the mysql process Data usage by around 200 MB. So on the play server, after around 10 new table creations, the swap was nearly completely used, as was the RAM. The 11th table led to mysqld being killed by the oom-killer.

After switching to MyISAM I did not notice such memory consumption, nor when using the ARCHIVE engine. I have not tried PBXT yet. I was also not yet able to get the InnoDB plugin running, so no test result for that yet.

On the play server I used the standard my-huge.cnf for test purposes, with these 3 lines added to fit my data layout:
innodb_data_home_dir=/var/lib/mysql/innoDB/
innodb_log_group_home_dir=/var/lib/mysql/innoDB/log/data
innodb_file_per_table

How to repeat:
Below is a function for a Perl script. Just put a wrapper around it and create some tables for different "dates"... Monitor the "Data" usage with "top" while doing so (in top, press "f" and select "DATA = Data+Stack size (kb)"), or keep going until the oom-killer kills mysqld...

sub createTable
{
	my $date = shift;

	# $dbh is assumed to be a connected DBI handle from the surrounding script
	my $stat = "CREATE TABLE `PM_VALUES_${date}` (
             `PM_Sub4_id` int(10) unsigned default NULL,
             `PM_Sub3_id` int(10) unsigned default NULL,
             `PM_Sub2_id` int(10) unsigned NOT NULL default '0',
             `PM_Sub1_id` int(10) unsigned NOT NULL default '0',
             `PM_TIME_id` int(10) unsigned NOT NULL default '0'";

	for (my $i = 0; $i < 500; $i++) {
		$stat .= ", `VS.Column$i` float default NULL";
	}

	$stat .= "
           ) ENGINE=innodb
	   PARTITION BY list(PM_Sub1_id)
		SUBPARTITION BY hash(PM_Sub2_id)
			SUBPARTITIONS 10 (
				PARTITION p0 VALUES IN(1),
				PARTITION p1 VALUES IN(2),
				PARTITION p2 VALUES IN(3),
				PARTITION p3 VALUES IN(4),
				PARTITION p4 VALUES IN(5),
				PARTITION p5 VALUES IN(6),
				PARTITION p6 VALUES IN(7),
				PARTITION p7 VALUES IN(8),
				PARTITION p8 VALUES IN(9),
				PARTITION p9 VALUES IN(10)
			)";
	print $stat."\n";
	my $sth = $dbh->prepare($stat);
	$sth->execute();
}
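Alongside watching the DATA column in top, the same figure can be sampled directly from /proc on Linux. A minimal sketch (the `vmdata` helper name is made up here; it assumes a Linux /proc filesystem):

```shell
#!/bin/sh
# Print a process's data-segment size (VmData) from /proc/<pid>/status;
# this is the figure top shows in its DATA column. Linux-only sketch.
vmdata() {
    grep '^VmData:' "/proc/$1/status"
}

# Demo on the current shell; replace $$ with mysqld's PID, e.g. "$(pidof mysqld)".
vmdata $$
```

Calling this in a loop while the Perl script runs should show the reported growth of roughly 200 MB per table creation.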

Suggested fix:
fix memory leak ;)
[9 May 2008 5:34] Markus Schindler
fixed tag
[10 May 2008 2:44] Markus Schindler
Just verified today: with the latest InnoDB plugin 1.01 the same problem is present.

addition to my.cnf:
innodb_file_format="Barracuda"
[11 May 2008 10:39] Sveta Smirnova
Thank you for the report.

Verified as described, with the exception that I used my-huge.cnf both with innodb_file_per_table and without this setting, and got the same results.

The bug is not repeatable with the MyISAM storage engine.
[27 May 2008 10:35] Mattias Jonsson
This is not a partitioning bug, but a bug within the InnoDB engine. (It is just easier to find when using partitioning, since more 'table definition/column definition' objects are created than with non-partitioned InnoDB tables.) It is repeatable with non-partitioned InnoDB, but on my machine (a 4 GB MacBook) it takes 3000 tables instead of 30 tables with 100 partitions each.

As I understand it, CREATE TABLE only pushes table definitions into dict_sys.table_LRU and does not check how many tables are already in it (and therefore never removes any). Eventually it uses all memory on the machine and dies of out-of-memory (which is also reported in the logs).

It did not occur when the tables were already created (after a restart of the server and a select from each and every table).

FLUSH TABLES does not help to release dict_sys.table_LRU and recover the used memory.

I will add a test case without partitioning.
[27 May 2008 11:25] Mattias Jonsson
Test cases (and include files) for verifying the bug. The test oom_500_col_innodb_2.test can be used for testing without partitioning.

Attachment: bug36613_oom-tests.tgz (application/x-gzip, text), 2.61 KiB.

[27 May 2008 11:28] MySQL Verification Team
Related to bug #20877?