Bug #46103 mysqldump: Error 2013: Lost connection to MySQL server during query when dumping
Submitted: 9 Jul 2009 22:47 Modified: 15 Nov 2016 19:26
Reporter: John Miller
Status: Closed
Category: MySQL Server: mysqldump Command-line Client    Severity: S4 (Feature request)
Version: 5.1.34-community via TCP/IP    OS: Windows (2k3 SP2)
Assigned to:    CPU Architecture: Any

[9 Jul 2009 22:47] John Miller
Description:
This command gets run as part of our backup:

D:\util\mantis_backup_test>mysqldump -q -e --max_allowed_packet=500m --net_buffer_length=16m -u XXXX -pXXXXXXXXX rpcbugtracker    1>test.backup
mysqldump: Error 2013: Lost connection to MySQL server during query when dumping table `mantis_bug_file_table` at row: 743

Row 743 does not seem "special":
mysql> select id, length(content), bug_id,filename from mantis_bug_file_table where id=743;
+-----+-----------------+--------+-------------+
| id  | length(content) | bug_id | filename    |
+-----+-----------------+--------+-------------+
| 743 |           76288 |   2313 | P0130rs.doc |
+-----+-----------------+--------+-------------+
1 row in set (0.00 sec)

If I go into my application (mantis) and remove that attachment - removing that row from the table:
mysql> select id, length(content), bug_id,filename from mantis_bug_file_table where id=743;
Empty set (0.00 sec)

I just get the same error with a different row:
mysqldump: Error 2013: Lost connection to MySQL server during query when dumping table `mantis_bug_file_table` at row: 429

The table itself is big:
D:\Program Files\MySQL\MySQL Server 5.0\data\rpcbugtracker>dir /os
 Volume in drive D is Applications
 Volume Serial Number is 1099-7FDC

 Directory of D:\Program Files\MySQL\MySQL Server 5.0\data\rpcbugtracker

01/21/2009  05:03 AM    <DIR>          .
01/21/2009  05:03 AM    <DIR>          ..
12/18/2006  09:14 AM                 0 mantis_project_file_table.MYD
...
06/02/2009  05:16 AM       212,981,800 mantis_bug_file_table.MYD
              79 File(s)    221,295,146 bytes
               2 Dir(s)   6,323,625,984 bytes free

D:\Program Files\MySQL\MySQL Server 5.0\data\rpcbugtracker>

How to repeat:
The table in question is about 200 MB, so I am not sure that a test case will be possible. The command I am trying is:
mysqldump -q -e --max_allowed_packet=500m --net_buffer_length=16m -u XXXX -pXXXXXXXXX rpcbugtracker    1>test.backup

though simpler versions result in pretty much the same thing:
D:\util\mantis_backup_test>mysqldump --opt -e -u root -pRedpL@net rpcbugtracker    1>test.backup
mysqldump: Error 2013: Lost connection to MySQL server during query when dumping table `mantis_bug_file_table` at row: 429

D:\util\mantis_backup_test>mysqldump -q -u root -pRedpL@net rpcbugtracker    1>test.backup
mysqldump: Error 2013: Lost connection to MySQL server during query when dumping table `mantis_bug_file_table` at row: 429

D:\util\mantis_backup_test>

I am happy to run any commands to test this.
[9 Jul 2009 23:33] MySQL Verification Team
Probably related to bug: http://bugs.mysql.com/bug.php?id=30809.
[10 Jul 2009 5:32] Sveta Smirnova
Thank you for the report.

But version 5.1.11 is very old and many bugs have been fixed since. Please upgrade to the current version, 5.1.36, try with it, and let us know if the problem still exists. Please also upload the full server error log file.
[14 Jul 2009 22:55] John Miller
I uploaded the log file. I will upgrade and try again. From reading elsewhere, I have seen this error pop up when people have attempted to upgrade, so I hope we do not run into that.
[15 Jul 2009 8:57] Sveta Smirnova
Thank you for the feedback.

We will wait for the results of your upgrade.

But the error log you uploaded is either not a MySQL server error log or it is compressed, so we cannot read it as a text file. Please upload the MySQL server error log next time, or indicate the archive format.
[15 Jul 2009 16:18] John Miller
I misread the version page. The client is 5.1.11, the server is at 5.1.34. All testing in the original bug report was done at 5.1.34.
[15 Jul 2009 16:21] John Miller
mysql version image

Attachment: sqlserver.png (image/png, text), 114.70 KiB.

[15 Jul 2009 16:29] John Miller
Hi Sveta,
I see our server version is 5.1.34, not 5.1.11 as I originally posted. The client is listed as 5.1.11, which was the source of my confusion; please see http://bugs.mysql.com/file.php?id=12489 for details. Do you still think an upgrade (to 5.1.36) will make a difference?

The log file is plain text, uncompressed, and I see a link to it as:

Attachment: MysqlErrorLog.txt (text/plain), 40.11 KiB.

on the bug page, and I can click on:

http://bugs.mysql.com/file.php?id=12477

and read it directly in my browser. I got this file by clicking on "server logs" in MySQL Administrator and cutting and pasting into Notepad. Is there another location I should be looking for the log?

Thanks for all your help,
John
[16 Jul 2009 5:02] Sveta Smirnova
Thank you for the feedback.

The error log is correct now. It shows the MySQL server crashed once, but that seems unrelated to the problem described here.

Please note this record at http://dev.mysql.com/doc/refman/5.1/en/server-system-variables.html#sysvar_max_allowed_pac...: "As of MySQL 5.1.31, the session value of this variable is read only. Before 5.1.31, setting the session value is allowed but has no effect." So --max_allowed_packet=500m will not work if the server has a different value. Which value of max_allowed_packet do you have in the configuration file?
[17 Jul 2009 7:01] John Miller
Hi,

    The max_allowed_packet size is set to 100m in the my.ini file. Thanks for your help.
[17 Jul 2009 7:03] Sveta Smirnova
Thank you for the feedback.

Specifying --max_allowed_packet=500m as mysqldump option would not help. Please try with --max_allowed_packet=500m in the configuration file.
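For illustration only, a server-side setting of this kind goes in the [mysqld] section of my.ini and takes effect after a server restart (the value shown is just the one discussed above, not a recommendation):

```ini
# my.ini (server side) -- as of 5.1.31 the session value is read-only,
# so client-side options cannot raise this; it must be set here
[mysqld]
max_allowed_packet=500M
```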
[17 Jul 2009 7:31] John Miller
I set max_allowed_packet to 500M and restarted the MySQL server, but the same "lost connection" error still occurs while performing the backup.
[17 Jul 2009 7:36] John Miller
This is the latest configuration file that I am using for MySQL. Do you think I need to alter any more variables?

Attachment: my.ini.txt (text/plain), 10.90 KiB.

[22 Jul 2009 6:19] Sveta Smirnova
Thank you for the feedback.

Try option --skip-extended-insert also.
[22 Aug 2009 23:00] Bugs System
No feedback was provided for this bug for over a month, so it is
being suspended automatically. If you are able to provide the
information that was originally requested, please do so and change
the status of the bug back to "Open".
[26 Aug 2009 16:52] MySQL Verification Team
I am able to repeat this behavior with artificially low values for net_write_timeout and max_allowed_packet:

mysql> SHOW CREATE TABLE T;
+-------+--------------------------------------------------------------------------+
| Table | Create Table |
+-------+--------------------------------------------------------------------------+
| T | CREATE TABLE `t` (
`d` longtext
) ENGINE=MyISAM DEFAULT CHARSET=latin1 |
+-------+--------------------------------------------------------------------------+
1 row in set (0.00 sec)

mysql> SELECT LENGTH(d) FROM t;
+-----------+
| LENGTH(d) |
+-----------+
|  41943040 |
|  41943040 |
|  41943040 |
+-----------+
3 rows in set (0.22 sec)

mysql> SET GLOBAL max_allowed_packet=4096;
Query OK, 0 rows affected (0.02 sec)

mysql> SET GLOBAL net_write_timeout=1;
Query OK, 0 rows affected (0.00 sec)

mysql> exit
Bye

D:\>mysqldump --max-allowed-packet=1G --extended-insert bigTest2 t > test.sql
mysqldump: Error 2013: Lost connection to MySQL server during query when dumping table `t` at row: 1

Increasing net_write_timeout and max_allowed_packet solves it:

D:\>mysql -e "SET GLOBAL net_write_timeout=30; SET GLOBAL max_allowed_packet=1024*1024*1024;"
D:\>mysqldump --compact -t --skip-comments --max-allowed-packet=41M --extended-insert bigTest2 t > test.sql

-- No error, the file is written correctly.

Depending on the speed of the network over which mysqldump has connected and the size of the rows being dumped, it may be necessary to increase net_write_timeout much further: at least as long as it takes to transmit max_allowed_packet bytes over the network plus the time to write them to the output.
[26 Aug 2009 17:29] MySQL Verification Team
Not sure if this is a client tools issue or an options-and-variables issue: the server disconnects the client because the write timeout is exceeded (I think).
[8 Sep 2009 11:55] MySQL Verification Team
Setting as Not a Bug, because increasing max_allowed_packet and net_write_timeout to appropriate levels solves it.
[8 Sep 2009 12:08] Mark Leith
As people generally don't want their dumps to fail, even for a net write timeout like this, I'm re-opening and verifying this as a feature request:

mysqldump should set its session net_write_timeout to a larger value (600 seconds should do it) by default on first connection.
[12 Dec 2009 21:56] MySQL Verification Team
Bug: http://bugs.mysql.com/bug.php?id=49641 marked as duplicate of this one.
[29 Dec 2009 10:32] Cyril SCETBON
Hi,

mysqldump should complain when these variables are not set correctly.
[7 Jun 2016 8:07] Harald Auer
I have the same problem with MySQL 5.6.28 on openSUSE Leap 42.1 (x86_64).

It did not happen with the same table layout using InnoDB.
We recently switched to MyISAM, as this table contains blob fields (file contents), and since then our backups sometimes fail.

Even after raising the timeouts, mysqldump fails nearly every night.
Current values:
net_read_timeout            | 120
net_write_timeout           | 1200
[8 Aug 2016 14:07] teo teo
How can this still be an issue after 7 years??

Is it that difficult to fix, or is there anybody who still doesn't see this as a bug??

1 - mysqldump should OBVIOUSLY set whatever limits there are on time and size to infinite, or to whatever is needed to avoid a timeout or a failure due to excessive size. It's a tool specifically meant to dump entire databases, for god's sake. One shouldn't have to tinker with server settings and figure out what mysqldump needs in order not to fail. And if there are maximums on the server side that cannot be overridden by mysqldump, then there is a design flaw in the server. The server must be designed taking into account that tools such as mysqldump must exist.

2 - the error message itself is ridiculously uninformative. It MUST tell you exactly why the connection was closed. I understand mysqldump is a client tool. If the server is telling it the reason the connection is being closed, then mysqldump failing to report it is a mysqldump bug. If the server is not giving the client this information, then it's a bug in the server. Either way, if a dump fails, the error message must tell you exactly why.
[7 Sep 2016 13:08] Georgi Kodinov
We can max out the timeouts, but that comes at a price: if the client fails to communicate for some reason (a flaky or congested network, etc.), it is going to wait forever instead of stopping with a timeout.
How about making this configurable instead? I.e., add an option to mysqldump telling it to max out the timeouts (both on the server and on the client).
As for the error message: on a timeout there is nothing that can be received or sent, so the server's only reliable choice is to cut the connection, and this is what it does. Thus you get the relevant (admittedly generic) error message.
The server could be made to record the fact that it cut the connection in its error log, but I doubt the practicality of that.
[15 Nov 2016 19:26] Paul DuBois
Posted by developer:
 
Noted in 8.0.1 changelog.

A new mysqldump option, --network-timeout, enables large tables to be
dumped by setting max_allowed_packet to its maximum value and network
read and write timeouts to a large value. This option is enabled by
default. To disable it, use --skip-network-timeout.
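Based on that changelog entry, usage on 8.0.1+ would look something like this (the option names come from the changelog; the user, database, and file names are placeholders). These commands need a running server, so treat them as a sketch rather than a tested recipe:

```shell
# --network-timeout is enabled by default, so a plain dump already
# maxes out max_allowed_packet and the network read/write timeouts:
mysqldump -u backup_user -p mydb > mydb.sql

# To opt out and fall back to the server's configured timeouts:
mysqldump --skip-network-timeout -u backup_user -p mydb > mydb.sql
```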
[17 Mar 2017 10:13] james wang
I am having the same issue.

Please advise how large net_write_timeout should be, and the full mysqldump syntax or a sample script, for example:

#!/bin/bash
mysql -e "set global net_write_timeout=1200"
mysqldump --max_allowed_packet=1G $mydb > $mydb.sql

....

BTW, my database tables can be >1 billion rows.
Thanks
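A corrected sketch of the script above, for pre-8.0 servers (the database name, timeout, and packet size are placeholders carried over from the question, not recommendations; this needs a live server to run):

```shell
#!/bin/bash
mydb=mydb   # placeholder database name
# Raise the server-side limits for the duration of the dump:
mysql -e "SET GLOBAL net_write_timeout=1200; SET GLOBAL max_allowed_packet=1024*1024*1024;"
# Note the leading dashes: mysqldump options must be written as --option=value
mysqldump --max_allowed_packet=1G "$mydb" > "$mydb.sql"
```

On 8.0.1 and later the SET GLOBAL step is unnecessary, since --network-timeout is on by default.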
[14 Aug 2017 10:56] MySQL Verification Team
We need this --network-timeout option for mysqlbinlog too, when used with the --read-from-remote-server option. Will file a new FR.