Bug #68804 Very Slow Network Performance
Submitted: 28 Mar 2013 16:01 Modified: 3 Apr 2013 10:28
Reporter: Moises Soto Email Updates:
Status: Won't fix Impact on me:
None 
Category: Connector / ODBC Severity: S5 (Performance)
Version: 5.2.4w OS: Windows (Windows 7 Enterprise)
Assigned to: Bogdan Degtyariov CPU Architecture: Any

[28 Mar 2013 16:01] Moises Soto
Description:
I was updating some code to export a large set of data to a remote server.  We decided to use array-binding in order to decrease network time delays and reduce the number of ODBC calls.  After finishing the code we tested it against both MSSQL and MySQL.  On MSSQL, the code ran 8 times faster than the original.  On MySQL, the code ran only about 25% faster than the original.  We were very puzzled by the performance of the ODBC driver.  Within our local network the export was really fast, but exporting to the remote server was ridiculously slow.

I opened MySQL Workbench on the remote server and noticed the traffic while exporting the data was about 10KB/s.  I opened up WireShark to capture the network traffic while exporting the data set.  I noticed two big differences between the MSSQL and MySQL ODBC drivers.  MSSQL would send larger packets, about ten times larger than MySQL's.  The other difference is that MySQL seems to wait for acknowledgment after it sends a packet while MSSQL does not.  These two differences appear to be the main source of the slow network performance of the ODBC driver.

I am sure others must have noticed the slow network performance while sending large amounts of data, so if I have misconfigured something please let me know.

I even used a straight IP address in the ODBC settings to rule out DNS issues.  I know the network is fine since exporting to the same machine using MSSQL has no issues.

Any suggestions would be greatly appreciated.

How to repeat:
1. Create a database with a simple table on a remote server.
2. Prepare your handle and set the required attributes for array-binding. (We used row-based array binding in our code)
3. Prepare the SQL statement.
4. Bind all your parameters to your buffer.
5. Fill the array.
6. Execute.
7. While executing have a network capture tool capture the packets sent.  You can use WireShark to view the packets.
8. You will notice the small packets and acknowledgment packets transferred back and forth.  Most of the time consumed by the export is spent waiting to send and receive packets.
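The steps above can be sketched in C with ODBC row-wise array binding.  This is only an illustration of the reproduction recipe, not the reporter's actual code: the DSN name "remote", the credentials, and the table layout tab1 (id INT, name VARCHAR(32)) are assumptions, and error checking is omitted for brevity.

```c
#include <stdio.h>
#include <sql.h>
#include <sqlext.h>

#define ROWS 1000

typedef struct {            /* one element per parameter-set row (step 2) */
    SQLINTEGER id;
    SQLLEN     id_ind;
    SQLCHAR    name[32];
    SQLLEN     name_ind;
} Row;

int main(void)
{
    SQLHENV henv; SQLHDBC hdbc; SQLHSTMT hstmt;
    static Row rows[ROWS];
    SQLULEN done = 0;

    SQLAllocHandle(SQL_HANDLE_ENV, SQL_NULL_HANDLE, &henv);
    SQLSetEnvAttr(henv, SQL_ATTR_ODBC_VERSION, (SQLPOINTER)SQL_OV_ODBC3, 0);
    SQLAllocHandle(SQL_HANDLE_DBC, henv, &hdbc);
    SQLConnect(hdbc, (SQLCHAR *)"remote", SQL_NTS,
               (SQLCHAR *)"user", SQL_NTS, (SQLCHAR *)"pass", SQL_NTS);
    SQLAllocHandle(SQL_HANDLE_STMT, hdbc, &hstmt);

    /* Step 2: row-wise binding -- stride is sizeof(Row), ROWS sets. */
    SQLSetStmtAttr(hstmt, SQL_ATTR_PARAM_BIND_TYPE,
                   (SQLPOINTER)sizeof(Row), 0);
    SQLSetStmtAttr(hstmt, SQL_ATTR_PARAMSET_SIZE,
                   (SQLPOINTER)(SQLULEN)ROWS, 0);
    SQLSetStmtAttr(hstmt, SQL_ATTR_PARAMS_PROCESSED_PTR, &done, 0);

    /* Step 3: prepare the statement. */
    SQLPrepare(hstmt,
               (SQLCHAR *)"INSERT INTO tab1 (id, name) VALUES (?, ?)",
               SQL_NTS);

    /* Step 4: bind parameters to the first element of the buffer. */
    SQLBindParameter(hstmt, 1, SQL_PARAM_INPUT, SQL_C_SLONG, SQL_INTEGER,
                     0, 0, &rows[0].id, 0, &rows[0].id_ind);
    SQLBindParameter(hstmt, 2, SQL_PARAM_INPUT, SQL_C_CHAR, SQL_VARCHAR,
                     32, 0, rows[0].name, sizeof(rows[0].name),
                     &rows[0].name_ind);

    /* Step 5: fill the array. */
    for (int i = 0; i < ROWS; i++) {
        rows[i].id = i;
        rows[i].id_ind = 0;
        snprintf((char *)rows[i].name, sizeof(rows[i].name), "row-%d", i);
        rows[i].name_ind = SQL_NTS;
    }

    /* Step 6: a single execute sends all ROWS parameter sets. */
    SQLExecute(hstmt);
    printf("%lu parameter sets processed\n", (unsigned long)done);

    SQLDisconnect(hdbc);
    return 0;
}
```

Running this while a capture tool is active (step 7) should show the packet pattern described in step 8.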

Suggested fix:
I am not an expert on the ODBC driver, but I would suggest, if it were possible, increasing the size of the packets; that would at least improve the bandwidth.  As for waiting to hear back from the server every time a packet is sent, I am not sure if you can change the behavior so that it only waits for an acknowledgment after the whole buffer is sent.

Thanks in advance, we greatly appreciate all the work you do with the ODBC driver.
[3 Apr 2013 10:28] Bogdan Degtyariov
Hi Moises,

Thanks for your interest in MySQL software.

Unfortunately, your suggestion to increase the size of the network packets cannot be implemented within the current architecture of the MySQL network protocol, which is a one-way street: sending a new command to the server without reading the server's response results in an error (Commands out of sync).

You can increase the network efficiency by using batched queries (sending several SQL statements at once). For instance, the following call sends two SELECTs at once and the server returns two result sets at once:

SQLExecDirect(hstmt, "SELECT * FROM tab1; SELECT * FROM tab2", SQL_NTS);
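After a batched call like the one above, the client must drain every result set before issuing the next command, or it will hit the same "Commands out of sync" error.  A minimal sketch of that loop, assuming hstmt has already executed the batched statement and that multi-statement support is enabled in the DSN/connection options (a Connector/ODBC configuration assumption):

```c
#include <sql.h>
#include <sqlext.h>

/* Drain every result set produced by a batched statement.  SQLMoreResults
   advances to the next result set; SQL_NO_DATA means the batch is done. */
static void drain_all_results(SQLHSTMT hstmt)
{
    do {
        while (SQL_SUCCEEDED(SQLFetch(hstmt)))
            ;                        /* read (or process) every row */
    } while (SQLMoreResults(hstmt) == SQL_SUCCESS);
}
```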

Also, for INSERT statements you can do several records at a time as:

INSERT INTO tab1 (col1, col2) VALUES (1, 1), (2, 2), (3, 3), (4, 4);

There are no plans to change the behavior you observe.  So, with all regret, I must set the status of this bug report to "Won't fix".