Bug #62257 Application hangs while sending 1GB data using mysql_stmt_send_long_data()
Submitted: 25 Aug 2011 5:27    Modified: 4 Jul 2012 19:28
Reporter: Avinash Dubey        Email Updates:
Status: Verified               Impact on me: None
Category: MySQL Server: Prepared statements    Severity: S2 (Serious)
Version: 5.5.8-enterprise-commercial, 5.1.64   OS: Any
Assigned to:                   CPU Architecture: Any
Tags: mysql_stmt_send_long_data

[25 Aug 2011 5:27] Avinash Dubey
Description:
My application inserts BLOB data into the table using the mysql_stmt_send_long_data()
function. The data size is 1 GB. The application is in C using MySQL APIs.
For smaller BLOB data of 50 MB, 100 MB, or 200 MB it works, but as the data size increases, the operation takes far longer than expected to complete.

How to repeat:
Try to insert 1 GB of BLOB data in chunks using the mysql_stmt_send_long_data() function.
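
For reference, the chunked-send pattern in question looks roughly like this (a minimal sketch, not the reporter's actual code; connection and prepare setup are omitted, and stmt is assumed to be an already-prepared INSERT whose single placeholder is bound as a BLOB parameter):

/* Stream ~1 GB to parameter 0 in fixed-size chunks. */
enum { CHUNK_SIZE = 4096 };
const unsigned long long TOTAL_SIZE = 1ULL << 30;     /* ~1 GB */
char chunk[CHUNK_SIZE];
memset(chunk, 'x', sizeof(chunk));                    /* dummy payload */

unsigned long long sent = 0;
while (sent < TOTAL_SIZE) {
    unsigned long len = CHUNK_SIZE;
    if (TOTAL_SIZE - sent < len)
        len = (unsigned long)(TOTAL_SIZE - sent);
    if (mysql_stmt_send_long_data(stmt, 0, chunk, len)) {
        fprintf(stderr, "send_long_data failed: %s\n", mysql_stmt_error(stmt));
        break;
    }
    sent += len;
}
/* The accumulated value is only written when mysql_stmt_execute() runs. */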

Suggested fix:
Although we do not know how the server handles the data internally, there most probably are two issues:

1. It looks to me as if the MySQL server is storing the entire data in memory. This is not correct: if the LOB data is huge, you cannot keep all of it in memory until the execute is fired.

2. When we send a chunk to the server, it presumably makes a small initial memory allocation; as chunks keep coming and the server-side buffer fills up, it must be allocating a bigger buffer, copying the data into it, and de-allocating the previous buffer. This repeated allocation, de-allocation, and copying must be what consumes the time, so reserving a big buffer up front should help here.
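
To illustrate the suspected cost: if the server grew its receive buffer by exactly one chunk per call, every chunk would trigger an allocate-copy-free cycle over all bytes received so far, which is O(n^2) bytes moved in total, whereas a pre-reserved (or geometrically grown) buffer keeps the total copying O(n). A hypothetical model of the two strategies (NOT actual server code, just an illustration of the hypothesis; error checks omitted for brevity):

/* Strategy 1: grow by exactly one chunk each time ->
   the whole buffer is copied on every call. */
char *grow_exact(char *buf, size_t used, const char *chunk, size_t len) {
    char *bigger = malloc(used + len);   /* new, slightly larger buffer */
    memcpy(bigger, buf, used);           /* copy everything received so far */
    memcpy(bigger + used, chunk, len);
    free(buf);                           /* de-allocate the previous buffer */
    return bigger;
}

/* Strategy 2 (the suggested fix): reserve big, or at least double,
   so copies are amortized away. */
char *grow_reserved(char *buf, size_t *cap, size_t used,
                    const char *chunk, size_t len) {
    if (used + len > *cap) {
        while (used + len > *cap)
            *cap = *cap ? *cap * 2 : (size_t)1 << 20;  /* start at 1 MB */
        buf = realloc(buf, *cap);        /* one amortized copy */
    }
    memcpy(buf + used, chunk, len);
    return buf;
}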
[17 Jan 2012 16:48] Sveta Smirnova
Thank you for the report.

Please provide an example of the code you use with the mysql_stmt_send_long_data function.
[18 Jan 2012 9:36] Avinash Dubey
Call this API by supplying the buffer in chunks (of, say, 4096 bytes) in a loop. The total data size needs to be approximately 1 GB. It will take more than 30 minutes.

We are calling it something like this in a loop:

//Supply data in chunks to server.
result = mysql_stmt_send_long_data(stmt->stmtHandle, parameterIndex, buffer, dataSize);
[18 Jan 2012 19:23] Sveta Smirnova
Thank you for the feedback.

But I need not a single call but code demonstrating how you send the chunks. I want to be sure it is not your code that uses all the memory.
[17 Feb 2012 6:47] Avinash Dubey
I certainly cannot share our product code. You can create an array of 2000 bytes and keep sending it in a loop using the send_long_data API until the total size is approximately 1 GB. It is very easy to reproduce.
[17 Feb 2012 12:07] Sveta Smirnova
Not enough information was provided for us to be able to handle this bug. Please re-read the instructions at http://bugs.mysql.com/how-to-report.php

If you can provide more information, feel free to add it to this bug and change the status back to 'Open'.

Thank you for your interest in MySQL.

Thank you for the feedback.

I cannot repeat it with my test code.

I also don't need you to share your application, but I do need a small test case demonstrating the problem.
[3 Jul 2012 6:00] Avinash Dubey
I am attaching a sample application which you can use; the steps are as follows:

1. Create a database ggvam:
mysql> create database ggvam;
mysql> use ggvam;
2. Create a table inside the database:
mysql> create table emp(longcol longtext);
3. Compile and run the application attached which inserts the long data into the table.
[3 Jul 2012 6:23] Avinash Dubey
This is the sample code, which inserts text data into the emp table.

Attachment: Sample.cpp (text/plain), 3.36 KiB.
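
Since the attachment body is not reproduced inline, a self-contained approximation of such a test program is sketched below (the connection parameters are placeholders and the real Sample.cpp may differ; noOfRepetitions controls how many 2000-byte chunks are sent):

/* Approximate reconstruction of the test case, NOT the attached file itself. */
#include <stdio.h>
#include <stdlib.h>
#include <string.h>
#include <mysql.h>

int main(void) {
    enum { CHUNK = 2000 };
    const long noOfRepetitions = 100000;      /* vary: 100 ... 1000000 */

    MYSQL *conn = mysql_init(NULL);
    if (!mysql_real_connect(conn, "localhost", "root", "", "ggvam", 0, NULL, 0)) {
        fprintf(stderr, "connect failed: %s\n", mysql_error(conn));
        return EXIT_FAILURE;
    }

    const char *sql = "INSERT INTO emp(longcol) VALUES(?)";
    MYSQL_STMT *stmt = mysql_stmt_init(conn);
    if (mysql_stmt_prepare(stmt, sql, (unsigned long)strlen(sql))) {
        fprintf(stderr, "prepare failed: %s\n", mysql_stmt_error(stmt));
        return EXIT_FAILURE;
    }

    MYSQL_BIND bind[1];
    memset(bind, 0, sizeof(bind));
    bind[0].buffer_type = MYSQL_TYPE_STRING;  /* data arrives via send_long_data */
    if (mysql_stmt_bind_param(stmt, bind)) {
        fprintf(stderr, "bind failed: %s\n", mysql_stmt_error(stmt));
        return EXIT_FAILURE;
    }

    char chunk[CHUNK];
    memset(chunk, 'a', sizeof(chunk));        /* dummy text payload */
    for (long i = 0; i < noOfRepetitions; i++) {
        if (mysql_stmt_send_long_data(stmt, 0, chunk, CHUNK)) {
            fprintf(stderr, "send_long_data failed: %s\n", mysql_stmt_error(stmt));
            return EXIT_FAILURE;
        }
    }

    /* max_allowed_packet must be large enough for the accumulated value. */
    if (mysql_stmt_execute(stmt)) {
        fprintf(stderr, "execute failed: %s\n", mysql_stmt_error(stmt));
        return EXIT_FAILURE;
    }

    mysql_stmt_close(stmt);
    mysql_close(conn);
    return EXIT_SUCCESS;
}
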

[4 Jul 2012 19:28] Sveta Smirnova
Thank you for the test case.

Verified as described.

I created files with noOfRepetitions = 100, 1000, 10000, 100000, and 1000000, and the slowdown started at 100,000:

[sveta@delly bug62257]$ time ./Sample100

real	0m0.463s
user	0m0.016s
sys	0m0.010s
[sveta@delly bug62257]$ time ./Sample1000

real	0m0.200s
user	0m0.018s
sys	0m0.019s
[sveta@delly bug62257]$ time ./Sample10000

real	0m2.000s
user	0m0.139s
sys	0m0.094s
[sveta@delly bug62257]$ time ./Sample100000

real	1m2.070s
user	0m1.132s
sys	0m0.792s
[sveta@delly bug62257]$ time ./Sample1000000

real	4m6.283s
user	0m12.040s
sys	0m8.700s
[4 Jul 2012 22:35] Sveta Smirnova
Actually, with a 5.5 client (or any other 5.0 - 5.7 client) accessing a 5.5 server no hang occurs, just some slowdown; but if any client connects to a 5.1 server, a hang occurs:

[sveta@delly bug62257]$ time ./Sample100

real	0m0.189s
user	0m0.013s
sys	0m0.012s
[sveta@delly bug62257]$ time ./Sample1000

real	0m1.641s
user	0m0.026s
sys	0m0.017s
[sveta@delly bug62257]$ time ./Sample10000

real	2m35.923s
user	0m0.050s
sys	0m0.067s
[sveta@delly bug62257]$ time ./Sample
^C

real	75m7.504s
user	0m0.248s
sys	0m0.264s

Leaving it as "Verified", because we have no entry in the change log saying this was fixed.