Bug #48346 Communications link failure when reading compressed data with compressed=true
Submitted: 27 Oct 2009 9:46 Modified: 15 Sep 2016 21:44
Reporter: Vassilis Virvilis
Status: Closed Impact on me: None
Category: Connector/J Severity: S3 (Non-critical)
Version: 5.1.10 (also older 5.0.x) OS: Any
Assigned to: Filipe Silva CPU Architecture: Any
Tags: Communications link failure when reading compressed data

[27 Oct 2009 9:46] Vassilis Virvilis
Description:
Hi

When I try to run multiple SQL queries against a table of the form
        ID | zipdata

I eventually get an exception like the one below after five or six iterations.

setup:
---------
server OS: Debian Lenny
MySQL server: 5.0.51
client: MySQL Connector/J 5.1.10 (on both Debian unstable and Windows)

We have seen this error across many versions (both server and client), but we are only now reporting it.

The obvious workaround is to specify useCompression=false (the default) in the MySQL connection string instead of useCompression=true, which is what we are doing right now.
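For example, the JDBC URL then looks like this (the host and database names here are just placeholders):

    jdbc:mysql://dbhost:3306/mydb?useCompression=false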

In my opinion the bug resides entirely in Connector/J, since the server logs indicate that all queries execute normally.

The last packet successfully received from the server was 23 milliseconds ago.  The last packet sent successfully to the server was 173 milliseconds ago.
com.mysql.jdbc.CommunicationsException: Communications link failure

The last packet successfully received from the server was 23 milliseconds ago.  The last packet sent successfully to the server was 173 milliseconds ago.
    at com.mysql.jdbc.SQLError.createCommunicationsException(SQLError.java:1070)
    at com.mysql.jdbc.MysqlIO.checkErrorPacket(MysqlIO.java:3486)
    at com.mysql.jdbc.MysqlIO.checkErrorPacket(MysqlIO.java:919)
    at com.mysql.jdbc.MysqlIO.nextRow(MysqlIO.java:1416)
    at com.mysql.jdbc.MysqlIO.readSingleRowSet(MysqlIO.java:2872)
    at com.mysql.jdbc.MysqlIO.getResultSet(MysqlIO.java:476)
    at com.mysql.jdbc.MysqlIO.readResultsForQueryOrUpdate(MysqlIO.java:2576)
    at com.mysql.jdbc.MysqlIO.readAllResults(MysqlIO.java:1757)
    at com.mysql.jdbc.MysqlIO.sqlQueryDirect(MysqlIO.java:2167)
    at com.mysql.jdbc.ConnectionImpl.execSQL(ConnectionImpl.java:2642)
    at com.mysql.jdbc.ConnectionImpl.execSQL(ConnectionImpl.java:2571)
    at com.mysql.jdbc.StatementImpl.executeQuery(StatementImpl.java:1464)
    ... ... ...
Caused by: java.lang.ArrayIndexOutOfBoundsException
       at java.lang.System.arraycopy(Native Method)
       at com.mysql.jdbc.CompressedInputStream.read(CompressedInputStream.java:279)
       at com.mysql.jdbc.MysqlIO.readFully(MysqlIO.java:2494)
       at com.mysql.jdbc.MysqlIO.reuseAndReadPacket(MysqlIO.java:3005)
       at com.mysql.jdbc.MysqlIO.reuseAndReadPacket(MysqlIO.java:2938)
       at com.mysql.jdbc.MysqlIO.checkErrorPacket(MysqlIO.java:3481)
       ... 17 more

    bill

How to repeat:
Create a big table of 5000 rows of the form

        ID | zipdata

do the following

    for (id : id_array) {
         #read zipdata from table rows with id
    }

For an id_array of length 5 to 10 it works without problems. If the id_array length is 100, you will see the exception I already posted.
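For illustration, a minimal JDBC sketch of the loop above (the table/column names, host, and credentials are assumptions taken from this report, not a verified test case):

    import java.sql.Connection;
    import java.sql.DriverManager;
    import java.sql.PreparedStatement;
    import java.sql.ResultSet;

    public class Bug48346Repro {
        public static void main(String[] args) throws Exception {
            // Client-side compression enabled is what triggers the failure
            String url = "jdbc:mysql://localhost:3306/test?useCompression=true";
            try (Connection conn = DriverManager.getConnection(url, "user", "password");
                 PreparedStatement ps = conn.prepareStatement(
                         "SELECT zipdata FROM mytable WHERE ID = ?")) {
                // With ~5-10 ids the loop succeeds; with ~100 ids the exception shows up
                for (int id = 1; id <= 100; id++) {
                    ps.setInt(1, id);
                    try (ResultSet rs = ps.executeQuery()) {
                        while (rs.next()) {
                            byte[] zipdata = rs.getBytes(1); // read the compressed blob
                        }
                    }
                }
            }
        }
    }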
[6 Nov 2009 4:19] Yan Tang
We experienced a similar problem: while loading larger content (about 3 KB) from a TEXT column, "com.mysql.jdbc.exceptions.jdbc4.CommunicationsException: Communications link failure" is reported every time, and when we remove the connection property useCompression=true, the exception disappears.
We are using MySQL Connector/J 5.1.10 and MySQL Server 5.1.31-1ubuntu2-log.
[6 Nov 2009 8:47] Vassilis Virvilis
Hi,

I just wanted to add that our compressed BLOB is likely more than 3 KB on most occasions, so it looks like the determining factor is the size of the data and not the fact that the data were stored compressed, as I originally thought.

 Thanks for the insight Yan
[6 Nov 2009 9:50] Tonci Grgin
Vassilis, yes, your last comment is true. The buffers should be able to keep up with the data in its uncompressed state. Should I check this, or have you already increased max_allowed_packet (and possibly other buffers) and the error is gone?
[9 Nov 2009 14:37] Vassilis Virvilis
Hi Tonci,

I will be able to do some tests on Thursday. I am having some problems with disk crashes, so it is not easy to do it earlier. Sorry about that.

Do you want me to try anything else except max_allowed_packet?

      Vassilis
[12 Nov 2009 8:02] Tonci Grgin
Vassilis, start with MAX_ALLOWED_PACKET. Waiting on your results.
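For reference, the current value can be checked and raised with standard MySQL statements (sessions opened after the change pick up the new GLOBAL value):

    mysql> SHOW VARIABLES LIKE 'max_allowed_packet';
    mysql> SET GLOBAL max_allowed_packet = 1073741824; -- 1 GB, the protocol maximum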
[12 Nov 2009 11:50] Vassilis Virvilis
Sorry about that, but I can't reproduce the bug, so there is no point in trying different MAX_ALLOWED_PACKET values.

What's changed:

Due to a disk crash I had to rebuild our MySQL database (in one case) and restore from backup (in another case). Both MySQL servers were recently restarted.

max_allowed_packet was set to 1073740800 on both servers.

We are turning compression back on, in order to wait for the exception and report back here.

Thanks
[12 Nov 2009 12:14] Tonci Grgin
Vassilis, as I understand it, I should wait for your new test results before closing, right?
[13 Nov 2009 7:39] Vassilis Virvilis
Hi Tonci

Ah, what can I say... I suppose you should close it for now. I will reopen it when I have a test case.

One more setup detail that may help future generations: the maximum length of the compressed BLOB is 2007616 (~2 MB), while max_allowed_packet is 1073740800 (~1 GB). I am certain the bug will resurface, because we had to introduce a company policy of turning compression off when dealing with compressed BLOBs.

   Vassilis
[13 Nov 2009 8:28] Tonci Grgin
Vassilis, I will not close it but rather leave it in NeedFeedback until you attach a test case. Mind you, your problem is very data-related, so be sure to include the necessary DDL/DML statements in your test.

> One more setup detail that may help future generations is that the max length of the compressed blob is 2007616 (~2MB) when the max_allowed_packet is 1073740800 (~1GB).

As I said, the size of the *compressed* data is irrelevant; what matters is the size of the *uncompressed* row, which should be visibly smaller than max_allowed_packet.
[23 Nov 2009 15:40] Vassilis Virvilis
Ok

I think I may have a reproducible case now. What do you want me to check? The size of the uncompressed BLOB?

The last packet successfully received from the server was 1,148 milliseconds ago.  The last packet sent successfully to the server was 2,660 milliseconds ago.
com.mysql.jdbc.CommunicationsException: Communications link failure

The last packet successfully received from the server was 1,148 milliseconds ago.  The last packet sent successfully to the server was 2,660 milliseconds ago.
	at com.mysql.jdbc.SQLError.createCommunicationsException(SQLError.java:1070)
	at com.mysql.jdbc.MysqlIO.checkErrorPacket(MysqlIO.java:3486)
	at com.mysql.jdbc.MysqlIO.checkErrorPacket(MysqlIO.java:919)
	at com.mysql.jdbc.MysqlIO.nextRow(MysqlIO.java:1416)
	at com.mysql.jdbc.MysqlIO.readSingleRowSet(MysqlIO.java:2872)
	at com.mysql.jdbc.MysqlIO.getResultSet(MysqlIO.java:476)
	at com.mysql.jdbc.MysqlIO.readResultsForQueryOrUpdate(MysqlIO.java:2576)
	at com.mysql.jdbc.MysqlIO.readAllResults(MysqlIO.java:1757)
	at com.mysql.jdbc.MysqlIO.sqlQueryDirect(MysqlIO.java:2167)
	at com.mysql.jdbc.ConnectionImpl.execSQL(ConnectionImpl.java:2642)
	at com.mysql.jdbc.ConnectionImpl.execSQL(ConnectionImpl.java:2571)
	at com.mysql.jdbc.StatementImpl.executeQuery(StatementImpl.java:1464)
	...
        ...
Caused by: java.lang.ArrayIndexOutOfBoundsException
	at java.lang.System.arraycopy(Native Method)
	at com.mysql.jdbc.CompressedInputStream.read(CompressedInputStream.java:279)
	at com.mysql.jdbc.MysqlIO.readFully(MysqlIO.java:2494)
	at com.mysql.jdbc.MysqlIO.reuseAndReadPacket(MysqlIO.java:3005)
	at com.mysql.jdbc.MysqlIO.reuseAndReadPacket(MysqlIO.java:2938)
	at com.mysql.jdbc.MysqlIO.checkErrorPacket(MysqlIO.java:3481)
	... 17 more
[25 Nov 2009 8:29] Tonci Grgin
Vassilis:
 > I think I may have a reproducible case now. What do you want me to check? The size of the uncompressed BLOB?

Yes, and I'd like the entire row with uncompressed data to fit in max_allowed_packet.

Waiting on test case.

Side note: a bare stack trace is almost useless... I'd have to be a miracle worker to guess what's happening just from that.
[26 Nov 2009 11:56] Vassilis Virvilis
Hi,

It looks like the error is not about compressed data. It occurs with plain binary BLOB data, which is even worse. Sorry for the confusion.

The problematic SQL was
mysql> SELECT id, myblob FROM Compact WHERE id IN (124);

where myblob is a binary blob
mysql> SELECT id, length(myblob) FROM Compact WHERE id IN (124);
+------+----------------+
| id   | length(myblob) |
+------+----------------+
|  124 |        4790772 |
+------+----------------+

mysql> show variables like '%packet%';
+--------------------+------------+
| Variable_name      | Value      |
+--------------------+------------+
| max_allowed_packet | 1073740800 |
+--------------------+------------+

So it looks like max_allowed_packet is big enough, unless it overflows and wraps around on 64-bit systems.

If you want I can send you the BLOB, but it is a 4.6 MB file.

The only thing I am not sure of yet is whether a fresh mysqld restart makes the problem go away, but I cannot restart the server right now since it is in production.

        Vassilis
[26 Nov 2009 12:00] Tonci Grgin
Vassilis, any file should do, so I don't need yours, at least not for now.

Can you now please put all this together into one test case and attach it so I can try it? Also, please note the JRE used and the full connect string.
[11 Dec 2009 14:40] Tonci Grgin
No test case still...
[21 Dec 2009 9:50] Tonci Grgin
Since nobody provided me with a proper test case, I wrote one for our test framework:

File testBlobFile = null;
int requiredSize = 0;

Properties props = new Properties();
props.put("useCompression", "true");
props.put("maxAllowedPacket", "104857600");
Connection conn1 = getConnectionWithProps(props);

Statement stmt1 = conn1.createStatement();
// Get the real value from the server
this.rs = stmt1.executeQuery("SHOW VARIABLES LIKE 'max_allowed_packet'");
this.rs.next();
// Create a test BLOB 2 KB smaller than the maximum allowed
requiredSize = this.rs.getInt(2) - 2048;
System.out.println("Required size: " + requiredSize);
this.rs.close();

// http://dev.mysql.com/doc/refman/5.1/en/server-system-variables.html#sysvar_max_allowed_pac...
// Setting the GLOBAL variable during the test is not OK.
// The protocol limit for max_allowed_packet is 1GB.
if (testBlobFile == null || testBlobFile.length() != requiredSize) {
    if (testBlobFile != null && testBlobFile.length() != requiredSize) {
        testBlobFile.delete();
    }

    testBlobFile = File.createTempFile("cmj-testblob", ".dat");
    testBlobFile.deleteOnExit();

    cleanupTempFiles(testBlobFile, "cmj-testblob");

    BufferedOutputStream bOut = new BufferedOutputStream(
            new FileOutputStream(testBlobFile));

    int dataRange = Byte.MAX_VALUE - Byte.MIN_VALUE;

    for (int i = 0; i < requiredSize; i++) {
        bOut.write((byte) ((Math.random() * dataRange) + Byte.MIN_VALUE));
    }

    bOut.flush();
    bOut.close();
}

createTable("BLOBTEST", "(pos int PRIMARY KEY auto_increment, blobdata LONGBLOB)");
BufferedInputStream bIn = new BufferedInputStream(new FileInputStream(testBlobFile));

this.pstmt = conn1.prepareStatement("INSERT INTO BLOBTEST(blobdata) VALUES (?)");
this.pstmt.setBinaryStream(1, bIn, (int) testBlobFile.length());
this.pstmt.execute();
this.pstmt.clearParameters();

this.rs = stmt1.executeQuery("SELECT blobdata from BLOBTEST LIMIT 1");
this.rs.next();

if (bIn != null) {
    bIn.close();
}

I've noticed no problems running my test case against latest c/J sources.
[21 Dec 2009 12:38] Vassilis Virvilis
Tonci,

I am terribly sorry for the delay, but I am under terrible pressure right now. We have a reproducible case. I will try to clean it up and submit it ASAP, but that probably means next year.

      Vassilis
[21 Dec 2009 14:38] Tonci Grgin
Vassilis, take your time. Regardless of status, I am monitoring the report. And, happy new year!
[7 Feb 2013 8:27] Alexander Soklakov
Hi,

Note that C/J 5.1.23 contains fixes for various compression problems. If this exact problem still exists, please try that newer connector version.
[16 Jun 2013 17:26] NOT_FOUND NOT_FOUND
We are running 5.1.25 and are still seeing this exception:

Caused by: com.mysql.jdbc.exceptions.jdbc4.CommunicationsException: Communications link failure

The last packet successfully received from the server was 25 milliseconds ago.  The last packet sent successfully to the server was 28 milliseconds ago.
	at sun.reflect.GeneratedConstructorAccessor95.newInstance(Unknown Source)
	at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:27)
	at java.lang.reflect.Constructor.newInstance(Constructor.java:513)
	at com.mysql.jdbc.Util.handleNewInstance(Util.java:411)
	at com.mysql.jdbc.SQLError.createCommunicationsException(SQLError.java:1121)
	at com.mysql.jdbc.MysqlIO.checkErrorPacket(MysqlIO.java:4115)
	at com.mysql.jdbc.MysqlIO.checkErrorPacket(MysqlIO.java:927)
	at com.mysql.jdbc.MysqlIO.nextRow(MysqlIO.java:2006)
	at com.mysql.jdbc.MysqlIO.readSingleRowSet(MysqlIO.java:3493)
	at com.mysql.jdbc.MysqlIO.getResultSet(MysqlIO.java:490)
	at com.mysql.jdbc.MysqlIO.readResultsForQueryOrUpdate(MysqlIO.java:3198)
	at com.mysql.jdbc.MysqlIO.readAllResults(MysqlIO.java:2366)
	at com.mysql.jdbc.MysqlIO.sqlQueryDirect(MysqlIO.java:2789)
	at com.mysql.jdbc.ConnectionImpl.execSQL(ConnectionImpl.java:2809)
	at com.mysql.jdbc.ConnectionImpl.execSQL(ConnectionImpl.java:2758)
	at com.mysql.jdbc.StatementImpl.executeQuery(StatementImpl.java:1612)
	at org.apache.commons.dbcp.DelegatingStatement.executeQuery(DelegatingStatement.java:208)
	at org.apache.commons.dbcp.DelegatingStatement.executeQuery(DelegatingStatement.java:208)
	at com.usamp.manager.manager.ProjectManager.initializeDashboardProject(ProjectManager.java:532)
	... 41 more
Caused by: java.lang.ArrayIndexOutOfBoundsException
	at com.mysql.jdbc.CompressedInputStream.read(CompressedInputStream.java:280)
	at com.mysql.jdbc.MysqlIO.readFully(MysqlIO.java:3116)
	at com.mysql.jdbc.MysqlIO.reuseAndReadPacket(MysqlIO.java:3626)
	at com.mysql.jdbc.MysqlIO.reuseAndReadPacket(MysqlIO.java:3559)
	at com.mysql.jdbc.MysqlIO.checkErrorPacket(MysqlIO.java:4110)
	... 54 more

This problem is difficult to re-create; however, it seems to be related to result sets that contain TEXT fields which are quite long, 200+ KB in size.
[25 Jun 2013 12:08] Hannes Innerhofer
I do have a similar problem.
The only difference is that I get the exception when reading a BLOB field. As far as I can see, the size of the BLOB does not really matter (even small BLOBs of a few KB cause the problem).

I am using MySQL 5.6 together with Connector 5.1.25.
[13 Oct 2015 6:13] Ryosuke Yamazaki
In my environment, this bug can be reproduced easily.

server OS: Oracle Linux 6 with Unbreakable Linux Kernel Release 3 (3.8.13-68.2.2.el6uek.x86_64)
JRE: Oracle JRE 8u60
MySQL Server: 5.5.40 + Amazon RDS
MySQL Connector/J: 5.1.36

I observed what happens when the ArrayIndexOutOfBoundsException occurs using Linux's "strace" tool. I noticed that the read syscall finished with fewer bytes than were specified in its argument.

As you know, the read syscall does not guarantee that it reads the number of bytes specified in its third argument (the byte count to read).

It sometimes finishes with a smaller size than the parameter specifies. We can see that behaviour with a heavily loaded server, very large payloads in a single send() syscall, a long-latency network, a congested network, or a high-loss network.

That is the nature of TCP sockets and is the cause of this bug.

Note: 
* I think it's almost impossible to reproduce this bug with 1 server + 1 client, small dataset, low-latency network (loopback or local network).
* Tuning max_allowed_packet does not help.
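For context, a short read is normally handled by looping until the requested number of bytes has arrived. The sketch below illustrates that general technique only; it is not Connector/J's actual implementation:

    import java.io.EOFException;
    import java.io.IOException;
    import java.io.InputStream;

    // Generic helper: keep reading until 'len' bytes have arrived or the stream
    // ends, because a single read() may legally return fewer bytes than requested.
    static int readFully(InputStream in, byte[] buf, int off, int len) throws IOException {
        int total = 0;
        while (total < len) {
            int n = in.read(buf, off + total, len - total);
            if (n < 0) {
                throw new EOFException("stream ended after " + total + " of " + len + " bytes");
            }
            total += n;
        }
        return total;
    }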
[13 Oct 2015 6:17] Ryosuke Yamazaki
Here's a very simple patch, but it is effective:
  https://github.com/mysql/mysql-connector-j/pull/6
I hope it helps.
[15 Oct 2015 14:01] Omer Barnir
Thank you for the contribution. 
It is tracked in bug#78845
[16 May 2016 20:59] Filipe Silva
Although we can't reproduce it, we are considering the contribution and analyzing it.
[18 May 2016 11:11] Filipe Silva
Hi Ryosuke,

I think there must be more to it than what you observed. You see, the connector already guarantees that incomplete socket reads are re-read in order to fulfill the expected length of bytes, so the method getNextPacketFromServer() should end up with a full packet available. If it fails somehow, it should throw another kind of exception.

I would really like to understand the cause of this issue. Can you provide some more details about your environment? Which connection properties are you using? Any special server settings? Do you have concurrent DB reads in your application?

Is there any chance you could narrow this down to a reproducible use case?

Thanks,
[20 May 2016 10:36] Ryosuke Yamazaki
I wrote "That is the nature of TCP socket and is the cause of this bug." but it was wrong. I'm very sorry.

So I'd like to describe the problem and how the patch works.

The problem occurs when an uncompressed packet gets split and compressed into multiple packets.
So let's look into the code and see how it works in this situation.

---------------------------------------------------------------------------------------------------

In src/com/mysql/jdbc/MysqlIO.java:
   483      protected final Buffer readPacket() throws SQLException {
   484          try {
   485              int lengthRead = readFully(this.mysqlInput, this.packetHeaderBuf, 0, 4);
                                
   490              int packetLength = (this.packetHeaderBuf[0] & 0xff) + ((this.packetHeaderBuf[1] & 0xff) << 8) + ((this.packetHeaderBuf[2] & 0xff) << 16);
                                
   511              // Read data
   512              byte[] buffer = new byte[packetLength];
   513              int numBytesRead = readFully(this.mysqlInput, buffer, 0, packetLength);

(lines 485-490: retrieve the length of the original uncompressed packet)
(line 513: try to read all the bytes of the original uncompressed packet; this fails, as described below.)

---------------------------------------------------------------------------------------------------

In src/com/mysql/jdbc/CompressedInputStream.java:
   186      public int read(byte[] b, int off, int len) throws IOException {
   	
   195          try {
   196              getNextPacketIfRequired(len);
   197          } catch (IOException ioEx) {
   198              return -1;
   199          }
   200          System.arraycopy(this.buffer, this.pos, b, off, len);
   201          this.pos += len;
   202          return len;
   203      }
(line 200: copy an uncompressed packet into the buffer, but this will fail in this situation because... (see next))

   158      private void getNextPacketIfRequired(int numBytes) throws IOException {
   159          if ((this.buffer == null) || ((this.pos + numBytes) > this.buffer.length)) {
   160              getNextPacketFromServer();
   161          }
   162      }
(lines 158-162: this method calls getNextPacketFromServer() only once per compressed packet.
 My patch makes this code read multiple compressed packets correctly.)
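A sketch of that idea follows (this is an illustration, not the actual pull request, and it assumes getNextPacketFromServer() appends newly uncompressed bytes to the remaining buffered data instead of replacing them):

    private void getNextPacketIfRequired(int numBytes) throws IOException {
        // Loop instead of calling only once: keep pulling compressed packets
        // until the uncompressed buffer can satisfy the caller's request.
        while ((this.buffer == null) || ((this.pos + numBytes) > this.buffer.length)) {
            getNextPacketFromServer();
        }
    }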

---------------------------------------------------------------------------------------------------

Multiple compressed packets are generated when the query cache sends a very large result set.

---------------------------------------------------------------------------------------------------
In mysql-5.6.26/sql/sql_cache.cc:
  1212  static bool
  1213  send_data_in_chunks(NET *net, const uchar *packet, ulong len)
  1214  {
  1215    /*
  1216      On the client we may require more memory than max_allowed_packet
  1217      to keep, both, the truncated last logical packet, and the
  1218      compressed next packet.  This never (or in practice never)
  1219      happens without compression, since without compression it's very
  1220      unlikely that a) a truncated logical packet would remain on the
  1221      client when it's time to read the next packet b) a subsequent
  1222      logical packet that is being read would be so large that
  1223      size-of-new-packet + size-of-old-packet-tail >
  1224      max_allowed_packet.  To remedy this issue, we send data in 1MB
  1225      sized packets, that's below the current client default of 16MB
  1226      for max_allowed_packet, but large enough to ensure there is no
  1227      unnecessary overhead from too many syscalls per result set.
  1228    */
  1229    static const ulong MAX_CHUNK_LENGTH= 1024*1024;

  1230    while (len > MAX_CHUNK_LENGTH)
  1231    {
  1232      if (net_write_packet(net, packet, MAX_CHUNK_LENGTH))
  1233        return TRUE;
  1234      packet+= MAX_CHUNK_LENGTH;
  1235      len-= MAX_CHUNK_LENGTH;
  1236    }
  1237    if (len && net_write_packet(net, packet, len))
  1238      return TRUE;

  1239    return FALSE;
  1240  }	

(line 1232: SPLIT AND COMPRESS!!)
---------------------------------------------------------------------------------------------------

It was very easy to reproduce in my environment last year, but it's very hard today. I tried 10000 times but to no avail.

I hope it'll be fixed soon. Please feel free to contact me if you have further questions.

Thanks,
[25 May 2016 19:29] Filipe Silva
Hi Ryosuke,

Actually large results, which end up split by multiple packets, don't cause this error. But there is one particular situation that can actually cause it. It is documented in a note in the page http://dev.mysql.com/doc/internals/en/example-several-mysql-packets.html.

In this particular case we may end up reading a compressed packet that doesn't fulfill all the requested uncompressed bytes, thus causing the reported stack trace. This is very, very hard to reproduce, if not impossible, though. The way the latest servers split compressed packets may actually be preventing this from happening anymore.

Anyway, I think the fix you proposed is fit for this situation.

If in the meantime you come up with a reproducible test case, please share it with us.

Thank you,
[12 Jul 2016 11:46] Filipe Silva
Hi Ryosuke,

Have you been observing this issue recently or can you still reproduce it?

We are working on a patch different from yours, and it would be decisive to have a working test case that could be used to verify it. So far we have not been able to verify the issue other than by reading the source code, which is very thin evidence in this particular case.

Thank you,
[22 Jul 2016 5:06] Ryosuke Yamazaki
Hi Filipe,

I tried it again and again but the bug was not reproduced.

I am happy to hear that you are working on another patch. I am willing to apply it to our production environment for testing when it gets released.

Regards,
[29 Aug 2016 16:46] Filipe Silva
Hi Ryosuke,

Any chance you could test this patch in your environment?

Just replace the file I attached previously (here's its md5 signature if you want to check: df604e4dfa74650c672200cc4e8ac16b  CompressedInputStream.class) in your connector-java-5.1 jar.

Many thanks,
[29 Aug 2016 16:49] Filipe Silva
Testing patch

Attachment: CompressedInputStream.class (application/java-vm, text), 5.04 KiB.

[29 Aug 2016 16:52] Filipe Silva
My apologies, I sent a wrong version. The latter is the right one.
MD5: e2fd8f6e3f7a3e85e9b28c919f552cdb  CompressedInputStream.class
[15 Sep 2016 21:44] Daniel So
Posted by developer:
 
Added the following entry to the Connector/J 5.1.40 changelog: 

"When a very large amount of compressed data is transmitted from the server to the client and under very special circumstances, a CommunicationsException might occur. It happened when a single compressed packet from the server was not big enough to provide an entire uncompressed packet to the client side. With this fix, Connector/J reads as many compressed packets as necessary to fill the size of the uncompressed packet that was being requested."
[3 Oct 2016 15:05] Daniel So
Posted by developer:
 
Updated the changelog entry to the following: 

"When a very large amount of compressed data is transmitted from the server to the client and under very special circumstances, a CommunicationsException might occur. It happened when a single compressed packet from the server was not big enough to provide an entire uncompressed packet to the client side. With this fix, Connector/J reads as many compressed packets as necessary to fill the size of the uncompressed packet that was being requested. Thanks to Ryosuke Yamazaki for contributing to the fix."
[14 Oct 2016 16:05] Daniel So
The fix has been included in Connector/J 6.0.5. The entry for the 5.1.40 changelog has also been included in the 6.0.5 changelog.