Bug #74933 Lost connection to MySQL server
Submitted: 20 Nov 2014 5:39    Modified: 30 Mar 2015 15:34
Reporter: Guillaume Poulin     Email Updates:
Status: Closed                 Impact on me: None
Category: Connector / Python   Severity: S2 (Serious)
Version: 2.0.2                 OS: Linux
Assigned to: Geert Vanderkelen CPU Architecture: Any

[20 Nov 2014 5:39] Guillaume Poulin
Description:
When doing a medium-sized query (853 rows, 5 columns), I'm getting the following exception within 1 second.

```
InterfaceError: (InterfaceError) 2013: Lost connection to MySQL server during query ...
```

I reproduced the bug on both Gentoo and Ubuntu 12.04.

Downgrading to 1.2.2 solves the issue.

How to repeat:
Make a large query:

```
cur.execute('SELECT * FROM table')
```
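
A minimal end-to-end sketch of the reproduction (the connection parameters and the table name `mytable` are placeholders):

```
import mysql.connector

# Placeholder credentials; any query returning a few hundred rows of
# moderate width reproduces the error.
cnx = mysql.connector.connect(user='user', password='secret',
                              host='127.0.0.1', database='test')
cur = cnx.cursor()
cur.execute('SELECT * FROM mytable')  # ~853 rows, 5 columns in my case
rows = cur.fetchall()                 # InterfaceError 2013 is raised here
cnx.close()
```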
[6 Dec 2014 0:00] Ed Dawley
Python 2.6.6
Connector/Python 2.0.2
Percona Server 5.5
max_allowed_packet = 32MB

I've been running into this issue as well and unfortunately am unable to reproduce it consistently.  It happens seemingly at random for queries that have large payloads (in bytes, not rows).  Here's the relevant stack trace from when I see it:

```
    self.rows = cursor.fetchall()
  File "/usr/local/ed/pql/lib/python2.6/site-packages/mysql/connector/cursor.py", line 823, in fetchall
    (rows, eof) = self._connection.get_rows()
  File "/usr/local/ed/pql/lib/python2.6/site-packages/mysql/connector/connection.py", line 671, in get_rows
    rows = self._protocol.read_text_result(self._socket, count)
  File "/usr/local/ed/pql/lib/python2.6/site-packages/mysql/connector/protocol.py", line 309, in read_text_result
    packet = sock.recv()
  File "/usr/local/ed/pql/lib/python2.6/site-packages/mysql/connector/network.py", line 281, in recv_py26_plain
    errno=2055, values=(self.get_address(), _strioerror(err)))
mysql.connector.errors.OperationalError: 2055: Lost connection to MySQL server at 'REPLACED', system error: timed out
```

It's important to note that this error can happen legitimately if your connect_timeout is set too low.  Apparently that config value is used for *all* socket communications, not just when opening the socket, so any time your MySQL instance is slow to send back data, you can get the above error.  In this case, though, the exception was raised well before the timeout had elapsed.
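
For reference, a sketch of where that value gets set; `connection_timeout` is the connect argument Connector/Python accepts, and per the behavior described above it appears to govern every socket read, not just the initial connect:

```
import mysql.connector

# Per the behavior described above, this timeout appears to apply to every
# socket recv(), not just to opening the connection, so a server that is
# slow to stream a large result can trip error 2055/2013 mid-query.
cnx = mysql.connector.connect(user='user', password='secret',
                              host='127.0.0.1', database='test',
                              connection_timeout=10)
```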

As I was debugging this issue, I noticed a code branch in protocol.py::read_text_result() that might lead to this behavior.  If you get into the branch for large payloads (i.e. http://dev.mysql.com/doc/internals/en/sending-more-than-16mbyte.html) AND there is no EOF packet, it loops back around and calls sock.recv().
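
For context, a hedged sketch of the wire-level rule that branch has to implement (the names are illustrative, not Connector/Python's internals): a payload of 0xffffff bytes or more is split across packets, and the reader must keep concatenating until it sees a packet whose length is less than 0xffffff.

```
import struct

MAX_PACKET = 0xffffff  # maximum payload of a single wire packet

def read_payload(recv_exact):
    """Reassemble one logical payload that may span several wire packets.

    `recv_exact(n)` is assumed to return exactly n bytes; this is an
    illustrative sketch, not Connector/Python's actual code.
    """
    payload = b''
    while True:
        header = recv_exact(4)  # 3-byte little-endian length + sequence id
        length = struct.unpack('<I', header[:3] + b'\x00')[0]
        payload += recv_exact(length)
        if length < MAX_PACKET:  # a short packet terminates the payload
            return payload
```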

As I tried to test that theory, I ran into legitimate bugs in read_text_result()'s handling of large payloads.  I have attached a script that reproduces them; its output is below.

```
(pql)ed@dawley ~$ python big.py
16777210 bytes
16777211 bytes
Traceback (most recent call last):
  File "big.py", line 43, in <module>
    results = cursor.fetchall()
  File "/usr/local/ed/pql/lib/python2.6/site-packages/mysql/connector/cursor.py", line 823, in fetchall
    (rows, eof) = self._connection.get_rows()
  File "/usr/local/ed/pql/lib/python2.6/site-packages/mysql/connector/connection.py", line 671, in get_rows
    rows = self._protocol.read_text_result(self._socket, count)
  File "/usr/local/ed/pql/lib/python2.6/site-packages/mysql/connector/protocol.py", line 316, in read_text_result
    if packet[4] == 254:
IndexError: bytearray index out of range

16777212 bytes
Traceback (most recent call last):
  File "big.py", line 53, in <module>
    results = cursor.fetchall()
  File "/usr/local/ed/pql/lib/python2.6/site-packages/mysql/connector/cursor.py", line 823, in fetchall
    (rows, eof) = self._connection.get_rows()
  File "/usr/local/ed/pql/lib/python2.6/site-packages/mysql/connector/connection.py", line 671, in get_rows
    rows = self._protocol.read_text_result(self._socket, count)
  File "/usr/local/ed/pql/lib/python2.6/site-packages/mysql/connector/protocol.py", line 320, in read_text_result
    rowdata = utils.read_lc_string_list(b''.join(datas))
TypeError: sequence item 0: expected string, bytearray found

Done
```
[6 Dec 2014 0:01] Ed Dawley
Demonstrates incorrect handling of large payloads

Attachment: big.py (text/x-python-script), 1.45 KiB.
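
For readers who cannot fetch the attachment, a hedged reconstruction of what big.py appears to do, judging from the output above (credentials, table and column names are placeholders; the real script may differ).  Note that with the length-coded-string prefix, a 16777211-byte value lands the row payload right at the 0xffffff wire-packet boundary:

```
import traceback
import mysql.connector

# Placeholder connection parameters; the server's max_allowed_packet must
# allow ~16 MB rows (32M in the setup above).
PARAMS = dict(user='user', password='secret', host='127.0.0.1',
              database='test')

# Probe payload sizes straddling the 0xffffff wire-packet boundary.
for size in (16777210, 16777211, 16777212):
    print('%d bytes' % size)
    # Fresh connection per probe: a failed read leaves the session unusable.
    cnx = mysql.connector.connect(**PARAMS)
    cur = cnx.cursor()
    cur.execute('DROP TABLE IF EXISTS big_payload')
    cur.execute('CREATE TABLE big_payload (data LONGTEXT)')
    # REPEAT() builds the value server-side, keeping the INSERT packet small.
    cur.execute('INSERT INTO big_payload VALUES (REPEAT(%s, %s))', ('a', size))
    try:
        cur.execute('SELECT data FROM big_payload')
        results = cur.fetchall()
    except Exception:
        traceback.print_exc()
    try:
        cnx.close()
    except Exception:
        pass  # the connection may already be broken after a failed read

print('Done')
```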

[6 Dec 2014 0:02] Ed Dawley
I also think this bug should be raised to an S2.
[9 Dec 2014 13:20] Bart Lengkeek
The cause is the way the protocol header is read (mysql/connector/network.py, line 223). If the 4-byte header is spread over more than one network packet, the code reports the error instead of continuing to read from the next packet.

I'll provide a patch that fixes it later (today I hope).
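
In the meantime, a minimal sketch of the fix described above, assuming a blocking `socket` object (this is not the actual patch):

```
def recv_exact(sock, size):
    """Read exactly `size` bytes, looping until they have all arrived.

    socket.recv(n) may legally return fewer than n bytes, so a single call
    can yield only part of the 4-byte packet header; the current code
    treats that partial read as a lost connection instead of continuing.
    """
    buf = bytearray()
    while len(buf) < size:
        chunk = sock.recv(size - len(buf))
        if not chunk:  # an empty read means the peer really closed
            raise IOError('connection closed while reading packet header')
        buf.extend(chunk)
    return bytes(buf)

# At the top of the packet reader:
# header = recv_exact(sock, 4)  # 3-byte length + 1-byte sequence id
```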
[16 Jan 2015 5:51] Patrick Dobbs
This bug is occurring reliably when a longblob column contains more than about 12-15 MB. 

Any sign of the patch that Bart mentioned? Where is the source code repo? Both GitHub and Launchpad look stale.

Thanks
[2 Feb 2015 16:10] No Thankyou
Ran into this bug on 2.0.2 running on OS X. The table is long, but has no big columns. Tried 2.0.3 and the bug persists, only now the query hangs instead of throwing a quick exception.
[3 Feb 2015 13:09] Andrii Nikitin
Verified as described
[17 Mar 2015 16:02] Noah Ludington
I'm also getting this issue in 2.0.2 and 2.0.3. As previously described, 2.0.2 throws an exception and 2.0.3 seems to hang indefinitely.

The payload threshold for triggering this error seems to be quite low, which makes the connector very restrictive.
[30 Mar 2015 15:34] Paul DuBois
Noted in 2.0.4, 2.1.2 changelogs.

Queries that produced a large result could result in an "IndexError: bytearray index out of range" exception.