Bug #8396 mysqldump crashes while working with a remote server
Submitted: 9 Feb 2005 15:12  Modified: 9 Feb 2005 20:54
Reporter: Olivier NEPOMIACHTY
Status: Not a Bug
Category: MySQL Server: mysqldump Command-line Client  Severity: S2 (Serious)
Version: 4.0.18  OS: Linux (Linux)
Assigned to:  CPU Architecture: Any

[9 Feb 2005 15:12] Olivier NEPOMIACHTY
Description:
I have two servers: S1 and S2. Both have MySQL 4.0.18 installed from RPM on Fedora Core 2. 
S2 can dump its local databases. 
S1 cannot dump one particular database located on S2; it just crashes.

I am using this simple command line on S1:
/usr/bin/mysqldump -u username -p'password' -h S2 --add-drop-table databasename > /tmp/databasename.sql
The file /tmp/databasename.sql grows for a while and then stops growing. The swap (on S1) fills up until it is 100% used, the server S1 becomes unresponsive, and finally mysqldump exits with an error (connection lost) and the swap is freed.

The SQL dump shows that mysqldump works fine until it reaches one of the biggest tables: 
records: 13,616,112 
size: 1.9 GB

Here is some size information about the database:
tables: 52 
records: 18,064,960 
size: 3.6 GB

I am able to reproduce the same bug with Red Hat 9 (Shrike) on S1.

How to repeat:
Try with big tables; the size has to be bigger than your free memory plus your free swap.
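
For reference, a minimal sketch of how such a table could be built; the table name big_t is a placeholder (not part of this report), the user, password, and database names are the same placeholders as above, and it assumes MySQL 4.0.14 or later so that INSERT ... SELECT can read from its own target table:

mysql -u username -p'password' -h S2 databasename -e "CREATE TABLE big_t (id INT NOT NULL, filler CHAR(255)) TYPE=MyISAM"
mysql -u username -p'password' -h S2 databasename -e "INSERT INTO big_t VALUES (1, REPEAT('x', 255))"
# Each pass doubles the row count; about 24 doublings of a ~260-byte row gives roughly 4 GB of data.
for i in $(seq 1 24); do
    mysql -u username -p'password' -h S2 databasename -e "INSERT INTO big_t SELECT * FROM big_t"
done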
[9 Feb 2005 20:54] Aleksey Kishkin
Please use the --opt or --quick option.

According to the documentation (http://dev.mysql.com/doc/mysql/en/mysqldump.html):

" If you run mysqldump without the --quick or --opt option, mysqldump loads the whole result set into memory before dumping the result. This will probably be a problem if you are dumping a big database."