Bug #22017 Huge memory usage after mysqldump.
Submitted: 5 Sep 2006 12:39 Modified: 3 Jul 2013 5:52
Reporter: Dimitrij HIlt
Status: Can't repeat
Impact on me: None
Category: MySQL Server: InnoDB storage engine  Severity: S4 (Feature request)
Version: 5.1.11, 5.0.26, 5.0.24-standard-log  OS: Linux (Linux/Debian ia32/2.6)
Assigned to: Heikki Tuuri  CPU Architecture: Any

[5 Sep 2006 12:39] Dimitrij HIlt
Description:
On a database server (1500 databases with the same schema; see private attachment), after running mysqldump on all databases, the memory usage of the DBMS exceeds the server RAM. Memory usage grows in 1-2 MB steps after each database is dumped.

The allocated memory is never released. Only a restart of the DBMS (our workaround) frees it.

How to repeat:
1) Note the memory usage.
2) Create 1500 databases with the schema from the private attachment.
3) On bash: for i in `seq 1 1500`; do mysqldump --opt DB$i >/dev/null; done
4) Look at the memory usage again (a cleaned-up script for these steps is sketched below).
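A minimal end-to-end sketch of these steps (the schema file path and passwordless root access are assumptions; adjust to your setup):

#!/bin/bash
# Sketch of the reproduction: create the databases, dump them all,
# and show memory usage before and after.
free                                       # memory usage before
for i in `seq 1 1500`; do
    mysql -uroot -e "CREATE DATABASE DB$i"
    mysql -uroot DB$i < /tmp/schema.sql    # schema from the private attachment (path assumed)
done
for i in `seq 1 1500`; do
    mysqldump -uroot --opt DB$i > /dev/null
done
free                                       # memory usage after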
[5 Sep 2006 12:59] Valeriy Kravchuk
Thank you for the problem report. Please send your my.cnf content, for completeness.
[5 Sep 2006 13:09] Dimitrij HIlt
my.cnf is now attached.

Dimi
[5 Sep 2006 13:37] Valeriy Kravchuk
Sorry, I do not see it among files in this bug report.
[5 Sep 2006 13:45] Dimitrij HIlt
my.cnf

Attachment: my.cnf (application/octet-stream, text), 2.59 KiB.

[5 Sep 2006 13:46] Dimitrij HIlt
Ok, my bad. my.cnf is online now.

Dimi
[22 Oct 2006 13:01] Valeriy Kravchuk
Please try to repeat this with a newer version, 5.0.26, and report the results. Have you checked memory usage with the "free" utility? (One way to log this is sketched below.)
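For example (a sketch; the log path and passwordless root access are assumptions):

#!/bin/bash
# Sketch: record the "used" column of free's -/+ buffers/cache line
# after each dump, to correlate memory growth with databases dumped.
LOG=/tmp/mysqld-memory.log
for i in `seq 1 1500`; do
    mysqldump -uroot --opt DB$i > /dev/null
    echo "after DB$i: $(free | awk '/buffers\/cache/ {print $3}') KB used" >> $LOG
done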
[23 Oct 2006 9:06] Dimitrij HIlt
Hi,

I installed MySQL 5.0.26 last week, but the problem still exists.

Dimi
[20 Nov 2006 15:19] D. Pegram
I don't know if this is relevant, but we're experiencing similar problems using "Ver 14.12 Distrib 5.0.18, for pc-linux-gnu (i686) using readline 5.0" running on Linux kernel 2.4.21-4.ELsmp. We have two schemas that are backed up nightly from a remote machine. The backups are mysqldump commands in a bash script kicked off via cron. Here's the command:

/usr/bin/mysqldump -i -l -r $BACKUPDIR/$SCHEMA_PREFIX-$DOW.sql $SCHEMA --host=$HOST --port=$PORT --triggers -u $USER -p$PASS

The dump file for the second schema is about 3.6 GB. After a couple of nights the remote machine goes down with "out of memory" errors. This machine is a dedicated backup DB server in case the primary fails. It's on a private network and isn't hosting any other services.

I hope this helps!

 - David
[26 Nov 2006 14:36] Valeriy Kravchuk
Sorry, but I was not able to repeat the behaviour described, using 5.0.32-BK on 32-bit Linux with your scripts. I did the following:

openxs@suse:~/dbs/5.0> for i in `seq 1 100`; do bin/mysql -uroot -e "create database dbt$i"; done
openxs@suse:~/dbs/5.0> for i in `seq 1 100`; do bin/mysql -uroot dbt$i </tmp/create_customer.sql; done
openxs@suse:~/dbs/5.0> free
             total       used       free     shared    buffers     cached
Mem:        256300     252792       3508          0       7636      48004
-/+ buffers/cache:     197152      59148
Swap:       514040      20728     493312
openxs@suse:~/dbs/5.0> for i in `seq 1 100`; do bin/mysqldump -uroot --opt dbt$i >/dev/null; echo -n "$i "; done
1 2 3 4 5 6 7 8 9 10 11 12 13 14 15 16 17 18 19 20 21 22 23 24 25 26 27 28 29 30 31 32 33 34 35 36 37 38 39 40 41 42 43 44 45 46 47 48 49 50 51 52 53 54 55 56 57 58 59 60 61 62 63 64 65 66 67 68 69 70 71 72 73 74 75 76 77 78 79 80 81 82 83 84 85 86 87 88 89 90 91 92 93 94 95 96 97 98 99 100 openxs@suse:~/dbs/5.0>
openxs@suse:~/dbs/5.0> free
             total       used       free     shared    buffers     cached
Mem:        256300     251332       4968          0       2392      48028
-/+ buffers/cache:     200912      55388
Swap:       514040      20728     493312
openxs@suse:~/dbs/5.0> for i in `seq 1 100`; do bin/mysql -uroot dbt$i </tmp/create_procedure.sql; done
openxs@suse:~/dbs/5.0> free
             total       used       free     shared    buffers     cached
Mem:        256300     252012       4288          0       3056      47936
-/+ buffers/cache:     201020      55280
Swap:       514040      20728     493312
openxs@suse:~/dbs/5.0> for i in `seq 1 100`; do bin/mysqldump -uroot --opt dbt$i >/dev/null; echo -n "$i "; done; echo
1 2 3 4 5 6 7 8 9 10 11 12 13 14 15 16 17 18 19 20 21 22 23 24 25 26 27 28 29 30 31 32 33 34 35 36 37 38 39 40 41 42 43 44 45 46 47 48 49 50 51 52 53 54 55 56 57 58 59 60 61 62 63 64 65 66 67 68 69 70 71 72 73 74 75 76 77 78 79 80 81 82 83 84 85 86 87 88 89 90 91 92 93 94 95 96 97 98 99 100
openxs@suse:~/dbs/5.0> free
             total       used       free     shared    buffers     cached
Mem:        256300     251556       4744          0       2328      48428
-/+ buffers/cache:     200800      55500
Swap:       514040      20728     493312

So I do not see any big difference (no 1-2 MB growth per database) here, although I used 100, not 1500, databases. Please check whether I missed something obvious.
[27 Nov 2006 8:16] Dimitrij HIlt
Hi,

I have not tried 5.0.32, because 5.0.26 is the stable release. Please try again with 5.0.26 from the mysql.com downloads (statically compiled!).

Dimi
[20 Dec 2006 13:23] Dimitrij HIlt
Hi,

do you need an account on one of our servers to reproduce and analyze this bug? Let me know; I can set it up quickly for you. At the moment this bug is a performance killer on servers with many databases.

Regards,

Dimi
[25 Dec 2006 2:03] Dimitrij HIlt
Hi,

I have tested with all builds (icc/static/libc-2.3) and can always reproduce this issue. mysql-5.1.11 (beta) has this problem too.

I checked the memory of mysqld with memfetch (www.freshports.org/sysutils/memfetch) and saw lots of strings with XML data from text fields after the dump was done.

Also, I have tested this with valgrind, but it did not find any memory leak in mysql.

What about a test server from us, with real data, for you?
Because of this issue our shops suffer performance problems!

Dimi
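A similar inspection is possible with stock tools instead of memfetch (a sketch, assuming gdb's gcore is available; the grep pattern is an example and must be adjusted to your data):

# Sketch: snapshot the mysqld process memory and scan it for leftover
# row data, similar to the memfetch check described above.
gcore -o /tmp/mysqld.core `pidof mysqld`       # writes /tmp/mysqld.core.<pid>
strings /tmp/mysqld.core.* | grep -c '<?xml'   # rough count of XML-looking strings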
[30 Jan 2007 14:45] Valeriy Kravchuk
Please try to repeat this with 5.1.14 (the latest release) and report the results. Please also send the output of:

uname -a
getconf GNU_LIBPTHREAD_VERSION
getconf GNU_LIBC_VERSION

[27 Feb 2007 15:07] Dimitrij HIlt
Hi,

I will test it with 5.1.14 ASAP, when I get some free time. Here is the other information:
rdb221:~# uname -a
Linux rdb221 2.6.17.11 #1 SMP Fri Aug 25 11:15:19 CEST 2006 i686 GNU/Linux
rdb221:~# getconf GNU_LIBPTHREAD_VERSION
NPTL 0.60
rdb221:~# getconf GNU_LIBC_VERSION
glibc 2.3.2
rdb221:~# 

Dimi
[12 Mar 2007 4:07] Anders Henke
For the past two weeks we have run with the InnoDB tables migrated from the shared tablespace to file-per-table tablespaces, but this didn't change anything regarding the dump issue (though it did create a new bug :-).

A few days ago I migrated back to the shared InnoDB tablespace, and last night I updated from 5.0.26 ("standard" = static glibc-2.2) to 5.0.37 (glibc-2.3, dynamic libs). The excessive memory leakage during and after mysqldump is still there.
[12 Mar 2007 4:08] Anders Henke
Memory usage with 5.0.37 during dump (starting at 2am)

Attachment: rdb225-mem.gif (image/gif, text), 16.89 KiB.

[19 Mar 2007 14:14] Heikki Tuuri
Dimitrij,

do you have 1500 databases, each containing a hundred tables? InnoDB typically uses a few kilobytes of memory in its internal data dictionary cache for each table definition. If you dump 150,000 tables, the data dictionary cache can eat up several hundred megabytes of memory.

In InnoDB-5.1.1x this memory consumption has been reduced by about 50 %.

Why do you create so many individual tables? Why not create only a hundred tables, and add the customer id as a column to each table?

Regards,

Heikki
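The dictionary cache growth described above can be observed directly while dumping (a sketch; PROCESS privilege is required, and on older 5.0 servers the status output may not include this line):

# Sketch: watch InnoDB's "Dictionary memory allocated" figure
# (BUFFER POOL AND MEMORY section of the InnoDB status) grow per dump.
for i in `seq 1 1500`; do
    mysqldump -uroot --opt DB$i > /dev/null
    mysql -uroot -e 'SHOW ENGINE INNODB STATUS\G' | grep 'Dictionary memory allocated'
done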
[19 Mar 2007 14:42] Dimitrij HIlt
Hi Heikki,

we (or rather, our developers) do indeed have 1500 databases with 180-200 tables per database. That's right: we dump up to 373,918 tables.

I have tested this issue with mysql-5.1.11 and ran into the same trouble.

Does InnoDB keep all dictionary entries in the buffer pool? Our buffer pool is just 1 GB.

We have a lot of tables because the shop schema (the system uses these databases) needs all of them, and every database belongs to one customer who uses the shop.

I did not know about any problems with MySQL+InnoDB and large numbers of tables, so we designed all databases with many tables.

If this problem cannot be resolved with MySQL, then we must do a redesign or use another DBMS.

Do you need any other information (or access to one of our test servers) to verify this theory?

Regards,

Dimitrij
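For illustration, the consolidation Heikki suggests would look roughly like this (a hypothetical sketch; the database, table, and column names are invented, not taken from the actual shop schema):

# Hypothetical sketch: one shared table keyed by customer_id instead of
# one table per customer database, cutting the table count dramatically.
mysql -uroot shop <<'SQL'
CREATE TABLE orders (
    customer_id INT NOT NULL,          -- replaces the per-customer database
    order_id    INT NOT NULL,
    created_at  DATETIME,
    PRIMARY KEY (customer_id, order_id)
) ENGINE=InnoDB;
SQL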
[19 Mar 2007 15:17] Heikki Tuuri
Dimitrij,

the data dictionary memory consumption was about halved in:

http://dev.mysql.com/doc/refman/5.1/en/news-5-1-12.html

It has been discussed that we could use an LRU list to trim the least-used table definitions from the InnoDB data dictionary cache, but there is no decision yet to implement that.

64-bit computers partially solve this problem by allowing virtual memory to grow far beyond the 2 GB that is typical on a 32-bit computer.

I am marking this bug report as a feature request.

Regards,

Heikki
[19 Mar 2007 15:54] Dimitrij HIlt
Hi Heikki,

does any feature request have a chance of being implemented in mysql-5.0? We cannot use MySQL with such high memory consumption, but we can't use the 5.1 beta release either...

Very strange.

Regards,

Dimi
[19 Mar 2007 16:20] Heikki Tuuri
Dimitrij,

5.0 is the 'production' release. No new features will be implemented in it.

5.1 is in the 'beta' phase, which means that only minor new features are allowed into it.

Regards,

Heikki
[21 Dec 2007 17:01] john danilson
we are seeing this on 
 5.0.44-enterprise-gpl-log MySQL Enterprise Server (GPL)

but interestingly, not on all our servers. We are doing further investigation to determine what makes our one QA server different, such that it hits this bug and runs out of memory every third day. All our servers (dev, QA, and prod) are at the same version and run on virtually identical RHEL boxes.
[6 Apr 2009 22:05] Dee Smith
I'm seeing the same problem on 5.0.51a.  Has this been fixed in a later 5.0.x release?
[19 Aug 2012 3:37] Theresia Tre
I am also experiencing this on 5.5.

Server:
8 GB RAM, no swap
total database size (MyISAM and InnoDB) around 50 GB
[3 Jul 2013 5:52] MySQL Verification Team
Probably a duplicate of http://bugs.mysql.com/bug.php?id=38002
If anybody sees this again then:

1. try 5.5.32 or newer, or 5.6.13 or newer.
2. open a new bug report.
3. provide useful memory profiling information (a minimal sketch follows), e.g.:
http://www.mysqlperformanceblog.com/2013/01/09/profiling-mysql-memory-usage-with-valgrind-...
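For instance, a massif run along those lines might look like this (a sketch; the mysqld path and options are examples and must be adapted, and running under valgrind is slow, so use a test system):

# Sketch: profile mysqld heap/page usage with valgrind's massif tool.
valgrind --tool=massif --pages-as-heap=yes --massif-out-file=/tmp/massif.out \
    /usr/sbin/mysqld --user=mysql
# after reproducing the memory growth and shutting mysqld down:
ms_print /tmp/massif.out | head -40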
[11 May 2017 6:12] Rahul Thakral
I am having the same issue: 8 GB total RAM, trying to dump a database (31 GB, 190 tables) directly into another database. I got error 2013 (connection lost). Looking at memory usage during a large table, all memory is consumed by cache, and as soon as free memory is exhausted the server returns Error 2013 - Connection Lost.
[11 May 2017 6:17] Rahul Thakral
Database Server:

Server: Localhost via UNIX socket
Server type: MySQL
Server version: 5.6.35 - MySQL Community Server (GPL)
Protocol version: 10
Server charset: UTF-8 Unicode (utf8) 

Webserver: 

Cpsrvd 11.64.0.20
Database client version: libmysql - 5.1.73
PHP extension: mysqli, curl, mbstring
PHP version: 5.6.30

Using PuTTY to run the dump with this command:

mysqldump --user=db_user_1 --password=db_user_1_pswd db_1 | mysql --user=db-user_2 --password=db_user_2_pswd db_2;
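For what it's worth, mysqldump's --quick option (implied by --opt, which is on by default) streams rows instead of buffering whole tables on the client, and --single-transaction avoids long locks on InnoDB. A variant of the command above with these options spelled out and a larger packet limit (a sketch; the values are examples, not tuned recommendations):

# Sketch: same transfer with reduced client-side buffering and a
# larger packet limit to lessen timeout-related disconnects.
mysqldump --user=db_user_1 --password=db_user_1_pswd \
    --quick --single-transaction --max_allowed_packet=256M db_1 \
  | mysql --user=db-user_2 --password=db_user_2_pswd db_2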