Bug #30809 Lost Connection error (error 2013) with mysqldump
Submitted: 4 Sep 2007 20:38     Modified: 26 May 2011 23:21
Reporter: aiton goldman     Email Updates:
Status: No Feedback     Impact on me: None
Category: MySQL Server     Severity: S3 (Non-critical)
Version: 5.1.20     OS: Linux
Assigned to:     CPU Architecture: Any

[4 Sep 2007 20:38] aiton goldman
Description:
While running mysqldump, I get the following error intermittently:

Error 2013: Lost connection to MySQL server during query
when dumping table....

It doesn't happen consistently on any one table (when I have gotten the error, it has been on tables ranging in size from 1 GB to 5 GB), nor does it happen on any specific row.  I have set the interactive_timeout setting to a ridiculously high number (172800), and I have increased the max_allowed_packet setting to 30M, but the lost connections still occur.  Even with --log-warnings set to 2, I don't see anything about the lost connections reported in the server error log.
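
For reference, those two settings were applied roughly like this (an illustrative sketch; the same values could just as well go in my.cnf):

SET GLOBAL interactive_timeout = 172800;
SET GLOBAL max_allowed_packet  = 30 * 1024 * 1024;  -- 30M

Only connections opened after the SET GLOBAL statements pick up the new values.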

How to repeat:
The full command I use : 

mysqldump -S /tmp/mysql.sock --single-transaction -C -q --allow-keywords --hex-blob --quote-names  -u backup -p<removed> --databases DurandLab2
[4 Sep 2007 20:46] aiton goldman
Attached is a file with the schema for the database in which I've seen the problem.  There are two tables we've seen the problem in so far: blast_hit_nc_tmp and prot_seq_str.
[4 Sep 2007 20:51] aiton goldman
table schema for database where I have seen the problem

Attachment: table.out (application/octet-stream, text), 7.01 KiB.

[4 Sep 2007 20:58] aiton goldman
The file I attached to the bug is actually a dump of the relevant information_schema rows - the following are the actual schemas of the tables I have seen the problem occur on:

CREATE TABLE `blast_hit_nc_tmp` (
  `nc_id` int(11) NOT NULL,
  `seq_id_0` int(11) NOT NULL,
  `seq_id_1` int(11) NOT NULL,
  `num_match_seq_0` mediumint(9) NOT NULL,
  `num_match_seq_1` mediumint(9) NOT NULL,
  `num_match_seq_both` mediumint(9) NOT NULL,
  `nc_score` double NOT NULL
) ENGINE=MyISAM DEFAULT CHARSET=latin1

CREATE TABLE `prot_seq_str` (
  `seq_str_id` int(10) unsigned NOT NULL AUTO_INCREMENT,
  `sequence` text NOT NULL,
  `crc` varchar(255) NOT NULL,
  `length` mediumint(8) unsigned NOT NULL,
  `molecular_weight` mediumint(8) unsigned NOT NULL,
  PRIMARY KEY (`seq_str_id`),
  UNIQUE KEY `seq_str_id` (`seq_str_id`),
  UNIQUE KEY `crc` (`crc`,`length`,`molecular_weight`)
) ENGINE=InnoDB AUTO_INCREMENT=3216365 DEFAULT CHARSET=latin1
[4 Sep 2007 21:01] aiton goldman
Changing to the MySQL server category
[6 Sep 2007 6:08] Valeriy Kravchuk
Thank you for the problem report. Can you please upload the entire error log, compressed?
[6 Sep 2007 21:23] aiton goldman
mysql error log

Attachment: goby.compbio.cs.cmu.edu.err (application/octet-stream, text), 36.08 KiB.

[6 Sep 2007 21:38] aiton goldman
the error log for 5.1.20

Attachment: goby.err (application/octet-stream, text), 26.81 KiB.

[6 Sep 2007 21:39] aiton goldman
I uploaded two log files - the first was for the 5.1.18 version of mysql server we were running (and seeing the connection lost error) and the second is for the 5.1.20 version (in which we were also seeing the connection lost error)
[6 Sep 2007 21:49] aiton goldman
Here are the three most recent times we saw the error:

Sep 3, around 6:47am

Sep 2, around 6:38am

Sep 1, around 9:19am
[18 Oct 2007 22:22] aiton goldman
We are now seeing this Lost Connection error in code we have written.  Our code is written in Python, and we are using the MySQLdb module.  We see this error on both selects and inserts, and we are unable to get any query to reliably produce the error.  Is this bug being worked on?  Since some of our queries take hours to run, getting this error towards the end of a query is becoming a common, and serious, problem for us.

This bug looks similar to bug #30641.
[26 Oct 2007 12:10] Magnus Blåudd
Could be a problem with network timeout settings. Look at net_read_timeout and net_write_timeout
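
For example (illustrative values; pick something comfortably longer than the slowest part of the dump):

show global variables like 'net_%timeout';
SET GLOBAL net_read_timeout  = 3600;
SET GLOBAL net_write_timeout = 3600;

Connections opened by mysqldump after the SET GLOBAL statements use the new values; my.cnf is the place to make them permanent.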
[1 Feb 2008 9:16] Valeriy Kravchuk
Please, try to repeat with a newer version, 5.1.22. In case of the same problem, please, send the results of:

show global variables like '%timeout%';

in particular the values of net_read_timeout and net_write_timeout.
[2 Mar 2008 0:00] Bugs System
No feedback was provided for this bug for over a month, so it is
being suspended automatically. If you are able to provide the
information that was originally requested, please do so and change
the status of the bug back to "Open".
[17 Mar 2008 10:15] Susanne Ebrecht
We still need to know whether you still have this problem with newer versions. Our newest version at the moment is 5.1.23.
[17 Apr 2008 23:00] Bugs System
No feedback was provided for this bug for over a month, so it is
being suspended automatically. If you are able to provide the
information that was originally requested, please do so and change
the status of the bug back to "Open".
[15 Oct 2008 12:46] Andreas Müller
Hello,

it seems the problem is still there. I've got this kind of error on

5.1.23
5.1.25
5.1.28

on Win32 (XP, 2003 Server). Each time, the dump is done from the local machine - server and mysqldump run on the same machine - and each time the storage engine is MyISAM.

It seems this problem occurs only on tables with large columns (blob fields with several megabytes of data), so I guess there is a buffer problem that results in the connection being aborted.

Changing net_read_timeout and net_write_timeout doesn't solve the problem. Testing with or without compression gives the same result. Also interesting is that it is not always the same row.

I hope this can be fixed very soon, because without a fix it's impossible to back up some databases without backing up the binary files.

Regards,
Andreas
[16 Oct 2008 9:13] Sveta Smirnova
Thank you for the feedback.

Please provide the output of show global variables like '%timeout%'; as requested, and try changing max_allowed_packet to its largest value to see if that helps.
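
For example (an illustrative sketch; 1073741824 bytes, i.e. 1G, is the upper limit for this variable):

show global variables like '%timeout%';
show global variables like 'max_allowed_packet';
SET GLOBAL max_allowed_packet = 1073741824;

mysqldump has its own max_allowed_packet setting as well, so it may also be worth passing --max_allowed_packet=1G on the client side.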
[16 Oct 2008 16:02] Andreas Müller
As described, changing net_read_timeout and net_write_timeout doesn't solve the problem.

I tried (and verified) multiple values, from 60 through 3600 up to 86400 - every time with the same result.

Current configuration:

mysql> show global variables like '%timeout%';
+-------------------------+-------+
| Variable_name           | Value |
+-------------------------+-------+
| connect_timeout         | 10    |
| delayed_insert_timeout  | 300   |
| interactive_timeout     | 28800 |
| net_read_timeout        | 3600  |
| net_write_timeout       | 3600  |
| slave_net_timeout       | 3600  |
| table_lock_wait_timeout | 50    |
| wait_timeout            | 28800 |
+-------------------------+-------+
8 rows in set (0.00 sec)
[16 Oct 2008 16:03] Andreas Müller
Today I got the same error on a Linux system with MySQL 4.1.10a.
[17 Nov 2008 0:00] Bugs System
No feedback was provided for this bug for over a month, so it is
being suspended automatically. If you are able to provide the
information that was originally requested, please do so and change
the status of the bug back to "Open".
[9 Apr 2009 9:08] Ronan McNulty
I have the same problem (5.1.22).

+----------------------------+-------+
| Variable_name              | Value |
+----------------------------+-------+
| connect_timeout            | 5     |
| delayed_insert_timeout     | 300   |
| innodb_lock_wait_timeout   | 50    |
| innodb_rollback_on_timeout | OFF   |
| interactive_timeout        | 28800 |
| net_read_timeout           | 30    |
| net_write_timeout          | 60    |
| slave_net_timeout          | 3600  |
| table_lock_wait_timeout    | 50    |
| wait_timeout               | 28800 |
+----------------------------+-------+
[13 Apr 2009 21:43] Sveta Smirnova
Thank you for the feedback.

Please try with the current version, 5.1.33: this may have been fixed along with fixes for other bug reports.

Ronan, if you have some information in the error log please attach it to the report.
[14 Apr 2009 9:51] Andreas Müller
With 5.1.33 I'm still getting the same error on the same table, around the same row.

What can we do to get this solved? Such a long time and still no solution!
[14 Apr 2009 10:08] Sveta Smirnova
Andreas,

> With 5.1.33 still getting the same error on the same table around the same row.

Please provide the problematic part of your dump. We cannot solve the problem if we are not able to repeat it.
[14 Apr 2009 14:24] Andreas Müller
I can't provide a 10 GB table of data here. Also, this kind of data I can't publish publicly.

So if you want to reproduce this error, write a little application that loads (large) binary data (e.g. some directories from the filesystem, but with some files larger than 10 MB or so) into a longblob field. After you have stored a few thousand records, with a table size of several gigabytes, try to dump it ...

Note: Only MyISAM seems to have this problem.
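
A rough sketch of such a loader, done purely in SQL (table name and sizes here are made up for illustration; a real client program loading files from disk would work just as well):

-- MyISAM table with one large blob column
CREATE TABLE blob_repro (
  id   INT UNSIGNED NOT NULL AUTO_INCREMENT PRIMARY KEY,
  data LONGBLOB NOT NULL
) ENGINE=MyISAM;

-- max_allowed_packet must be large enough for the generated value;
-- reconnect after changing the global value so the session picks it up
SET GLOBAL max_allowed_packet = 64 * 1024 * 1024;

-- one row of roughly 16 MB, then keep re-running the INSERT ... SELECT
-- to double the table until it reaches a few gigabytes
INSERT INTO blob_repro (data) VALUES (REPEAT('x', 16 * 1024 * 1024));
INSERT INTO blob_repro (data) SELECT data FROM blob_repro;

-- then try: mysqldump --hex-blob <database> blob_repro > blob_repro.sql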
[14 Apr 2009 15:26] Sveta Smirnova
Thank you for the feedback.

We tried to load large files into blob columns and everything worked fine. Also, your problem looks like a wrong max_allowed_packet.  At the least, SHOW CREATE TABLE output and the full error log would help.
[13 May 2009 23:00] Bugs System
No feedback was provided for this bug for over a month, so it is
being suspended automatically. If you are able to provide the
information that was originally requested, please do so and change
the status of the bug back to "Open".
[21 Jul 2009 23:51] MySQL Verification Team
See bug: http://bugs.mysql.com/bug.php?id=46103.
[4 Mar 2011 10:50] Nico Lachmann
I have a similar problem with the current (5.1.49sp1-enterprise-gpl-pro-log) version when dumping a larger InnoDB database (~12 GB), with max_allowed_packet=16M. The problem does not appear during every dump, however.
[26 Mar 2011 1:56] Sveta Smirnova
Nico,

thank you for the feedback.

Your problem looks very similar to bug #46103. Please dramatically increase net_write_timeout and net_read_timeout, then try mysqldump again to confirm this.
[26 Apr 2011 23:00] Bugs System
No feedback was provided for this bug for over a month, so it is
being suspended automatically. If you are able to provide the
information that was originally requested, please do so and change
the status of the bug back to "Open".
[27 May 2011 23:00] Bugs System
No feedback was provided for this bug for over a month, so it is
being suspended automatically. If you are able to provide the
information that was originally requested, please do so and change
the status of the bug back to "Open".
[27 May 2018 23:59] Vincent Wansink
I'm getting this same error with 5.5.45.  

My current settings:

connect_timeout	40
delayed_insert_timeout	300
innodb_lock_wait_timeout	50
innodb_rollback_on_timeout	OFF
interactive_timeout	28800
lock_wait_timeout	31536000
net_read_timeout	6000
net_write_timeout	6000
slave_net_timeout	3600
wait_timeout	60
max_allowed_packet = 1073741824

I can back up every table except this one big one.  It always loses the connection somewhere around the half million row mark, but never on the same row.

My table create statement:

CREATE TABLE `childattendance` (
  `id` int(11) NOT NULL AUTO_INCREMENT,
  `childid` int(11) NOT NULL DEFAULT '0',
  `starttime` time DEFAULT NULL,
  `endtime` time DEFAULT NULL,
  `subtotal` decimal(11,2) DEFAULT NULL,
  `date` date NOT NULL DEFAULT '0000-00-00',
  `needsattention` tinyint(1) NOT NULL DEFAULT '0',
  `createddate` datetime DEFAULT NULL,
  `modifieddate` timestamp NOT NULL DEFAULT CURRENT_TIMESTAMP ON UPDATE CURRENT_TIMESTAMP,
  `groupid` int(11) NOT NULL DEFAULT '0',
  `absent` varchar(1) NOT NULL DEFAULT 'N',
  `pickupperson` varchar(250) DEFAULT NULL,
  `companyid` int(11) DEFAULT NULL,
  `username` varchar(255) DEFAULT NULL,
  PRIMARY KEY (`id`),
  KEY `Date` (`date`),
  KEY `Room Id` (`groupid`),
  KEY `Start Time` (`starttime`),
  KEY `End Time` (`endtime`),
  KEY `Child Id` (`childid`),
  KEY `needsattention` (`needsattention`),
  KEY `companyid` (`companyid`)
) ENGINE=InnoDB AUTO_INCREMENT=10775253 DEFAULT CHARSET=latin1;

It currently has just over 7 million rows in 646 MB.
[28 May 2018 0:02] Vincent Wansink
EDIT: Sorry, not around the half million row mark, rather around 500 million rows.  Usually just before 500 million rows.