Bug #31523 Lost connection to MySQL server during query, CR_SERVER_LOST
Submitted: 11 Oct 2007 9:17 Modified: 25 Dec 2007 16:54
Reporter: Pirmin Braun
Status: No Feedback
Impact on me: None
Category: MySQL Server: C API (client library)
Severity: S2 (Serious)
Version: 5.0.37
OS: Windows (5.0.37-community-nt on Windows XP)
Assigned to:
CPU Architecture: Any
Tags: CR_SERVER_LOST, mysql_fetch_row, mysql_store_result, mysql_use_result

[11 Oct 2007 9:17] Pirmin Braun
Description:
"Lost connection to MySQL server during query"

Server version          5.0.37-community-nt on Windows XP
Protocol version        10
Connection              localhost via TCP/IP
TCP port                3306

Client 4.x

Settings in my.ini:

[mysql]
default-character-set=utf8
[mysqld]
port=3306
log_warnings=2
basedir="C:/Programme/MySQL5_0/"
datadir="C:/Programme/MySQL5_0/Data/"
default-character-set=utf8
default-storage-engine=MYISAM
sql-mode="MYSQL40"
max_allowed_packet=2MB
max_connections=20
query_cache_size=48M
table_cache=256
tmp_table_size=48M
thread_cache_size=8
myisam_max_sort_file_size=100G
myisam_max_extra_sort_file_size=100G
myisam_sort_buffer_size=128M
key_buffer_size=128M
read_buffer_size=2M
read_rnd_buffer_size=4M
sort_buffer_size=4M

Application:
using the C-API

a single process, single thread, 2 connections, same computer;
connection 1: repeated mysql_fetch_row() after mysql_use_result() on a temporary table with about 3000 records; using a normal table instead of the temporary table makes no difference;

connection 2: for each row from connection 1, many short read/update/insert operations with mysql_store_result(), but without touching the table used by connection 1;
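The two-connection pattern above can be sketched roughly as follows. This is a minimal reconstruction, not the reporter's actual code (which lives in the attached PBMySQLChannel.m); the connection parameters and table/column names (work_table, other_table, cnt) are placeholders, and it requires libmysqlclient plus a running server.

```c
/* Sketch of the reported access pattern: connection 1 streams rows
 * unbuffered via mysql_use_result(), connection 2 does short buffered
 * statements per row. All names/credentials here are hypothetical. */
#include <mysql.h>
#include <stdio.h>
#include <stdlib.h>

int main(void)
{
    MYSQL *c1 = mysql_init(NULL);   /* connection 1: unbuffered row stream */
    MYSQL *c2 = mysql_init(NULL);   /* connection 2: per-row work */

    if (!mysql_real_connect(c1, "localhost", "user", "pw", "db", 3306, NULL, 0) ||
        !mysql_real_connect(c2, "localhost", "user", "pw", "db", 3306, NULL, 0)) {
        fprintf(stderr, "connect failed\n");
        return EXIT_FAILURE;
    }

    /* Connection 1: ~3000 rows stay on the server, fetched one at a time. */
    mysql_query(c1, "SELECT id, data FROM work_table");
    MYSQL_RES *res = mysql_use_result(c1);

    MYSQL_ROW row;
    while ((row = mysql_fetch_row(res)) != NULL) {
        /* Connection 2: many short statements per row, each fully
         * buffered client-side with mysql_store_result(). */
        char query[256];
        snprintf(query, sizeof query,
                 "UPDATE other_table SET cnt = cnt + 1 WHERE id = %s", row[0]);
        mysql_query(c2, query);
        MYSQL_RES *r2 = mysql_store_result(c2); /* NULL for UPDATE; expected */
        if (r2) mysql_free_result(r2);
    }

    mysql_free_result(res);
    mysql_close(c1);
    mysql_close(c2);
    return EXIT_SUCCESS;
}
```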

after a reproducible, exact number (2113) of mysql_fetch_row() calls on connection 1, no more rows are fetched and the error "Lost connection to MySQL server during query" occurs; the server doesn't notice this error: despite log_warnings=2 there is no entry in the error log;

it's not a timeout, since the time until the error occurs varies between 3 and 5 minutes depending on other machine load;

the error always occurs after exactly the same number of fetched records;
it's not a data problem, since shifting the start of processing (by modifying the WHERE clause) shifts the abort point by exactly the same amount;

when doing fewer operations on connection 2, the error doesn't occur;

when doing the fetch on connection 1 with mysql_store_result(), the error doesn't occur; but that's not a solution, since the data should stay on the server and be fetched row by row by the client;
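One detail worth checking when mysql_use_result() is involved: a lost connection surfaces at fetch time, so mysql_fetch_row() returning NULL can mean either normal end-of-data or an aborted stream, and only mysql_errno() on the statement's connection tells them apart. A small sketch of that check (drain_result is a hypothetical helper name; conn/res stand for connection 1's handle and its mysql_use_result() result set):

```c
#include <mysql.h>
#include <stdio.h>

/* Drain an unbuffered result set and report whether the stream ended
 * normally or was cut off. With mysql_use_result(), errors such as
 * CR_SERVER_LOST (2013) appear here, after a NULL row, not as a failed
 * mysql_query(). */
static int drain_result(MYSQL *conn, MYSQL_RES *res)
{
    MYSQL_ROW row;
    unsigned long n = 0;

    while ((row = mysql_fetch_row(res)) != NULL)
        n++;                          /* ... process the row here ... */

    if (mysql_errno(conn) != 0) {     /* NULL row was an error, not EOF */
        fprintf(stderr, "fetch aborted after %lu rows: (%u) %s\n",
                n, mysql_errno(conn), mysql_error(conn));
        return -1;
    }
    return 0;                         /* clean end of result set */
}
```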

I do not close the connection myself;
the database was not migrated to 5.0 but restored from a dump produced with mysqldump;
none of the other reasons from http://dev.mysql.com/doc/refman/5.0/en/gone-away.html applies;
extended status afterwards looks like this (zero-value lines omitted):
+-----------------------------------+-----------+
| Variable_name                     | Value     |
+-----------------------------------+-----------+
| Bytes_received                    | 51391615  |
| Bytes_sent                        | 262284992 |
| Com_change_db                     | 3         |
| Com_create_table                  | 1         |
| Com_delete                        | 6         |
| Com_drop_table                    | 2         |
| Com_insert                        | 5998      |
| Com_select                        | 40549     |
| Com_set_option                    | 4         |
| Com_show_fields                   | 100       |
| Com_show_keys                     | 109       |
| Com_update                        | 13680     |
| Compression                       | OFF       |
| Connections                       | 5         |
| Created_tmp_disk_tables           | 100       |
| Created_tmp_tables                | 3858      |
| Flush_commands                    | 1         |
| Handler_delete                    | 9708      |
| Handler_read_first                | 77        |
| Handler_read_key                  | 80418     |
| Handler_read_next                 | 184335    |
| Handler_read_rnd                  | 3699      |
| Handler_read_rnd_next             | 805704    |
| Handler_update                    | 31133     |
| Handler_write                     | 16658     |
| Key_blocks_unused                 | 109105    |
| Key_blocks_used                   | 5672      |
| Key_read_requests                 | 691031    |
| Key_reads                         | 5672      |
| Key_write_requests                | 128513    |
| Key_writes                        | 48575     |
| Last_query_cost                   | 0.000000  |
| Max_used_connections              | 3         |
| Open_files                        | 230       |
| Open_tables                       | 115       |
| Opened_tables                     | 123       |
| Qcache_free_blocks                | 131       |
| Qcache_free_memory                | 30235848  |
| Qcache_hits                       | 22139     |
| Qcache_inserts                    | 40192     |
| Qcache_lowmem_prunes              | 0         |
| Qcache_not_cached                 | 573       |
| Qcache_queries_in_cache           | 5146      |
| Qcache_total_blocks               | 10442     |
| Questions                         | 82598     |
| Rpl_status                        | NULL      |
| Select_scan                       | 232       |
| Slave_running                     | OFF       |
| Slow_queries                      | 1         |
| Sort_range                        | 1478      |
| Sort_rows                         | 6903      |
| Sort_scan                         | 3646      |
| Ssl_session_cache_mode            | NONE      |
| Table_locks_immediate             | 59890     |
| Threads_connected                 | 3         |
| Threads_created                   | 3         |
| Threads_running                   | 1         |
| Uptime                            | 436       |
| Uptime_since_flush_status         | 436       |
+-----------------------------------+-----------+

How to repeat:
by repeating the same processing

Suggested fix:
Is some resource getting exhausted? Are pending/slow inserts/updates piling up? Please tell me which variable to tweak.
[11 Oct 2007 9:32] Valeriy Kravchuk
Thank you for the problem report. Can you send the smallest (or real) C code that demonstrates the behaviour described?
[11 Oct 2007 13:14] Pirmin Braun
wrapper to avoid name clashes with other libraries

Attachment: MySQLWrapper.c (application/octet-stream, text), 2.55 KiB.

[11 Oct 2007 13:15] Pirmin Braun
DB Abstraction Layer

Attachment: PBMySQLChannel.m (application/octet-stream, text), 15.43 KiB.

[11 Oct 2007 13:16] Pirmin Braun
.h for DB Abstraction Layer

Attachment: PBMySQLChannel.h (application/octet-stream, text), 2.04 KiB.

[11 Oct 2007 13:17] Pirmin Braun
ObjC Category of WOApplication dedicated to DB handling

Attachment: Application_1.m (application/octet-stream, text), 75.93 KiB.

[11 Oct 2007 13:17] Pirmin Braun
.h for ObjC Category of WOApplication dedicated to DB handling

Attachment: Application_1.h (application/octet-stream, text), 4.53 KiB.

[11 Oct 2007 13:21] Pirmin Braun
IntarScript with business logic that performs the processing causing the error

Attachment: bu_migration_auftraege.script (application/octet-stream, text), 5.60 KiB.

[11 Oct 2007 14:20] Pirmin Braun
added the files involved
[25 Nov 2007 16:54] Valeriy Kravchuk
Thank you for the detailed bug report. Please try to repeat with a newer version, 5.0.45, and inform us about the results.
[26 Dec 2007 0:00] Bugs System
No feedback was provided for this bug for over a month, so it is
being suspended automatically. If you are able to provide the
information that was originally requested, please do so and change
the status of the bug back to "Open".