Bug #62656 mysqldump losing connection on Big tables: Lost connection to MySQL server
Submitted: 7 Oct 2011 18:07 Modified: 26 Jan 2012 18:48
Reporter: Imran Ahmed
Status: No Feedback
Category: MySQL Server: DML    Severity: S2 (Serious)
Version: 5.5.16                OS: Windows (Server 2003, 32-bit)
Assigned to:                   CPU Architecture: Any
Tags: Lost connection on row xx

[7 Oct 2011 18:07] Imran Ahmed
Description:
mysqldump: Error 2013: Lost connection to MySQL server during query when dumping table `doc` at row: 345

I am trying to back up a big table, but MySQL always loses the connection at row 345.
The same thing happens when I try to select the whole table.
So I thought max_allowed_packet was too low. I bumped it up to 1G, but I am still getting the same error. The database size is around 2.5 GB.
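To double-check which limit is actually in effect on both sides, something like the following can be used (standard MySQL 5.5 / mysqldump usage; the schema name and path are the ones from this report):

On the server, the value currently in effect:

SHOW VARIABLES LIKE 'max_allowed_packet';

On the client, pass the limit explicitly instead of relying on the [client] section:

mysqldump --max_allowed_packet=1G production_ram > c:\backups\doc.sql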
Here is my MySQL config:
[client]
max_allowed_packet = 1G
port=3306

[mysql]

default-character-set=latin1

# SERVER SECTION
# ----------------------------------------------------------------------
#
# The following options will be read by the MySQL Server. Make sure that
# you have installed the server correctly (see above) so it reads this 
# file.
#
[mysqld]

# The TCP/IP Port the MySQL Server will listen on
port=3306

#Path to installation directory. All paths are usually resolved relative to this.
basedir="C:/Program Files/MySQL/MySQL Server 5.5/"

#Path to the database root
datadir="C:/Documents and Settings/All Users/Application Data/MySQL/MySQL Server 5.5/Data/"

# The default character set that will be used when a new schema or table is
# created and no character set is defined
character-set-server=latin1

# The default storage engine that will be used when creating new tables
default-storage-engine=INNODB

# Set the SQL mode to strict
sql-mode="STRICT_TRANS_TABLES,NO_AUTO_CREATE_USER,NO_ENGINE_SUBSTITUTION"

# The maximum amount of concurrent sessions the MySQL server will
# allow. One of these connections will be reserved for a user with
# SUPER privileges to allow the administrator to login even if the
# connection limit has been reached.
max_connections=100

# Query cache is used to cache SELECT results and later return them
# without actually executing the same query again. Having the query
# cache enabled may result in significant speed improvements if you
# have a lot of identical queries and rarely changing tables. See the
# "Qcache_lowmem_prunes" status variable to check if the current value
# is high enough for your load.
# Note: In case your tables change very often or if your queries are
# textually different every time, the query cache may result in a
# slowdown instead of a performance improvement.
query_cache_size=1M

# The number of open tables for all threads. Increasing this value
# increases the number of file descriptors that mysqld requires.
# Therefore you have to make sure to set the amount of open files
# allowed to at least 4096 in the variable "open-files-limit" in
# section [mysqld_safe]
table_cache=256

# Maximum size for internal (in-memory) temporary tables. If a table
# grows larger than this value, it is automatically converted to a
# disk-based table. This limitation is for a single table. There can be many
# of them.
tmp_table_size=18M

# How many threads we should keep in a cache for reuse. When a client
# disconnects, the client's threads are put in the cache if there aren't
# more than thread_cache_size threads from before.  This greatly reduces
# the amount of thread creations needed if you have a lot of new
# connections. (Normally this doesn't give a notable performance
# improvement if you have a good thread implementation.)
thread_cache_size=8

#*** MyISAM Specific options

# The maximum size of the temporary file MySQL is allowed to use while
# recreating the index (during REPAIR, ALTER TABLE or LOAD DATA INFILE).
# If the file-size would be bigger than this, the index will be created
# through the key cache (which is slower).
myisam_max_sort_file_size=100G

# If the temporary file used for fast index creation would be bigger
# than using the key cache by the amount specified here, then prefer the
# key cache method.  This is mainly used to force long character keys in
# large tables to use the slower key cache method to create the index.
myisam_sort_buffer_size=35M

# Size of the Key Buffer, used to cache index blocks for MyISAM tables.
# Do not set it larger than 30% of your available memory, as some memory
# is also required by the OS to cache rows. Even if you're not using
# MyISAM tables, you should still set it to 8-64M as it will also be
# used for internal temporary disk tables.
key_buffer_size=25M

# Size of the buffer used for doing full table scans of MyISAM tables.
# Allocated per thread, if a full scan is needed.
read_buffer_size=64M
read_rnd_buffer_size=256M

# This buffer is allocated when MySQL needs to rebuild the index in
# REPAIR, OPTIMIZE, ALTER TABLE statements as well as in LOAD DATA INFILE
# into an empty table. It is allocated per thread so be careful with
# large settings.
sort_buffer_size=256K

#*** INNODB Specific options ***

# Use this option if you have a MySQL server with InnoDB support enabled
# but you do not plan to use it. This will save memory and disk space
# and speed up some things.
#skip-innodb

# Additional memory pool that is used by InnoDB to store metadata
# information.  If InnoDB requires more memory for this purpose it will
# start to allocate it from the OS.  As this is fast enough on most
# recent operating systems, you normally do not need to change this
# value. SHOW INNODB STATUS will display the current amount used.
innodb_additional_mem_pool_size=1G

# If set to 1, InnoDB will flush (fsync) the transaction logs to the
# disk at each commit, which offers full ACID behavior. If you are
# willing to compromise this safety, and you are running small
# transactions, you may set this to 0 or 2 to reduce disk I/O to the
# logs. Value 0 means that the log is only written to the log file and
# the log file flushed to disk approximately once per second. Value 2
# means the log is written to the log file at each commit, but the log
# file is only flushed to disk approximately once per second.
innodb_flush_log_at_trx_commit=1

# The size of the buffer InnoDB uses for buffering log data. As soon as
# it is full, InnoDB will have to flush it to disk. As it is flushed
# once per second anyway, it does not make sense to have it very large
# (even with long transactions).
innodb_log_buffer_size=1M

# InnoDB, unlike MyISAM, uses a buffer pool to cache both indexes and
# row data. The bigger you set this the less disk I/O is needed to
# access data in tables. On a dedicated database server you may set this
# parameter up to 80% of the machine physical memory size. Do not set it
# too large, though, because competition for the physical memory may
# cause paging in the operating system.  Note that on 32bit systems you
# might be limited to 2-3.5G of user level memory per process, so do not
# set it too high.
innodb_buffer_pool_size=1600M

# Size of each log file in a log group. You should set the combined size
# of log files to about 25%-100% of your buffer pool size to avoid
# unneeded buffer pool flush activity on log file overwrite. However,
# note that a larger logfile size will increase the time needed for the
# recovery process.
innodb_log_file_size=24M

# Number of threads allowed inside the InnoDB kernel. The optimal value
# depends highly on the application, hardware as well as the OS
# scheduler properties. A too high value may lead to thread thrashing.
innodb_thread_concurrency=10
wait_timeout = 22800
max_allowed_packet = 1G
tmpdir = c:\mysqltemp

This is really important because I have to back up our clients' data every night. Any tip or suggestion to fix this would be great. I am doing cold backups for now, but I really need mysqldump to generate SQL files properly.
Thanks!

How to repeat:
delimiter $$

CREATE TABLE `doc` (
  `doc_id` int(10) unsigned NOT NULL DEFAULT '0',
  `content` longblob NOT NULL,
  `content_type` varchar(255) NOT NULL DEFAULT '',
  `file_name` varchar(255) NOT NULL DEFAULT '',
  `client_path` varchar(255) NOT NULL DEFAULT '',
  `size` decimal(19,0) NOT NULL DEFAULT '0',
  `session_id` varchar(64) DEFAULT NULL,
  `record_id` varchar(255) NOT NULL DEFAULT '',
  `record_dtime` datetime NOT NULL DEFAULT '0000-00-00 00:00:00',
  PRIMARY KEY (`doc_id`)
) ENGINE=MyISAM DEFAULT CHARSET=latin1 MAX_ROWS=2000000 AVG_ROW_LENGTH=700000$$

I can't give you any data, because it is confidential, but try inserting some dummy Word docs into it.
I have around 5000 records in this table.
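For example, dummy rows of roughly the reported average row length can be generated with something like this (the values are invented purely for illustration; repeat with increasing doc_id, e.g. from a small script, until a few thousand rows exist):

INSERT INTO doc (doc_id, content, content_type, file_name, client_path, size, record_id, record_dtime)
VALUES (1, REPEAT('x', 480000), 'application/msword', 'dummy1.doc', 'c:\docs\dummy1.doc', 480000, 'dummy-1', NOW());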
Just run the usual mysqldump command:
mysqldump production_ram > c:\backups\doc.sql

The output I get is:
mysqldump: Error 2013: Lost connection to MySQL server during query when dumping table `doc` at row: 345
[9 Oct 2011 7:51] Valeriy Kravchuk
Please send the output of SHOW TABLE STATUS for this table.

Do you have anything unusual in the server's error log from the moments when you tried to select all data from this table? Have you tried to execute

check table doc extended;

on this table?
[11 Oct 2011 12:48] Imran Ahmed
SHOW TABLE STATUS where name="doc";

Name:            doc
Engine:          MyISAM
Version:         10
Row_format:      Dynamic
Rows:            4999
Avg_row_length:  478148
Data_length:     2390265036
Max_data_length: 281474976710655
Index_length:    53248
Data_free:       0
Auto_increment:  NULL
Create_time:     2011-10-07 12:03:10
Update_time:     2011-10-07 12:06:49
Check_time:      NULL
Collation:       latin1_swedish_ci
Checksum:        NULL
Create_options:  max_rows=2000000 avg_row_length=700000
Comment:         (empty)

-------------------------------------

check table doc extended;

'production_ram.doc', 'check', 'status', 'OK'

No, nothing unusual. I really think something is missing in the config, because I copied the data files over to my local server and was able to do a full mysqldump there.
Or it could be a 32-bit vs 64-bit issue. The server is 32-bit and my local computer is 64-bit.
Any suggestion would be appreciated.
Thanks.
[26 Dec 2011 18:48] Sveta Smirnova
Thank you for the feedback.

Please try with option --extended-insert=0 and inform us if this helps.
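For example (a sketch reusing the schema and output path from the original report; --skip-extended-insert is the equivalent spelling):

mysqldump --extended-insert=0 production_ram > c:\backups\doc.sql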
[27 Jan 2012 1:00] Bugs System
No feedback was provided for this bug for over a month, so it is
being suspended automatically. If you are able to provide the
information that was originally requested, please do so and change
the status of the bug back to "Open".