Bug #16273 Reading blobs larger than a certain size fails with memory error
Submitted: 7 Jan 2006 2:46
Modified: 11 Nov 2009 2:36
Reporter: [ name withheld ]
Status: Won't fix
Category: Connector / J
Severity: S2 (Serious)
Version: 3.1.11 and 5.0.0 beta (MySQL 5.0.16)
OS: Linux, Windows
CPU Architecture: Any

[7 Jan 2006 2:46] [ name withheld ]
Description:
Reading blobs larger than a certain size fails with an out-of-memory error (java.lang.OutOfMemoryError).

How to repeat:
Insert a blob of about 10 MB or more and then try to read it back out.
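A minimal sketch of the failure as described (table name, connection URL, and sizes are hypothetical):

    import java.sql.*;

    public class BlobRepro {
        public static void main(String[] args) throws Exception {
            Connection conn = DriverManager.getConnection(
                "jdbc:mysql://localhost/test", "user", "pass");

            // Write a ~10 MB blob (this part succeeds).
            byte[] payload = new byte[10 * 1024 * 1024];
            PreparedStatement ins = conn.prepareStatement(
                "INSERT INTO blobtable (data) VALUES (?)");
            ins.setBytes(1, payload);
            ins.executeUpdate();

            // Reading it back materializes the whole blob in the driver,
            // which throws java.lang.OutOfMemoryError for large values.
            Statement st = conn.createStatement();
            ResultSet rs = st.executeQuery("SELECT data FROM blobtable");
            while (rs.next()) {
                byte[] data = rs.getBytes(1); // OOM occurs here
            }
        }
    }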
[9 Jan 2006 12:28] Aleksey Kishkin
What Java options did you use? The default heap size for the Java virtual machine is 64 MB. Did you use the -Xmx option to increase the heap size?
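For reference, the heap ceiling is raised with the -Xmx flag at JVM launch (a hypothetical invocation; the class and jar names are made up):

    java -Xmx256m -cp app.jar:mysql-connector-java-3.1.11-bin.jar com.example.Main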
[9 Jan 2006 13:50] [ name withheld ]
I don't use the -Xmx option and don't want to, as I'd like to be able to handle 1 GB blobs, and I don't have 1 GB of memory to give to the Java application.

The page http://www.mysql.com/products/connector/j/ states:
"The driver now also supports "streaming" result sets, which allows users to retrieve large numbers of rows without using a large memory buffer. With newly added large-packet protocol support, the driver can send rows and BLOBs up to 2 gigabytes in size."

Does that mean I should be able to read big blobs?
I successfully write 1 GB to the MySQL database, but I can't read the blob back in a streaming way. Streaming seems to be implemented only for writing; on reading, the whole blob is loaded into memory.
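For reference, the write path can stream because the driver reads the value from an InputStream and sends it in chunks. A sketch of the kind of insert that succeeds, given an open Connection conn (file and table names hypothetical):

    File f = new File("gigabyte.bin");
    PreparedStatement ps = conn.prepareStatement(
        "INSERT INTO blobtable (data) VALUES (?)");
    // the driver pulls from the stream and sends the blob in packets
    ps.setBinaryStream(1, new FileInputStream(f), (int) f.length());
    ps.executeUpdate();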
[10 Jan 2006 14:08] Mark Matthews
You need to use the "emulateLocators=true" property (see the documentation) to "stream" blobs back to the JDBC driver. At this time neither the server nor the MySQL protocol itself has an efficient way to retrieve large blobs, only to send them (via prepared statements in chunks).

If you let the JDBC driver emulate locators, it does greatly reduce the amount of memory your client-side application uses, at the expense of causing the server to call SUBSTRING() on the blob multiple times.
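A sketch of what the locator-emulation read looks like, given a Connection opened with emulateLocators=true (table and column names hypothetical; the Connector/J documentation lists further requirements for this mode, such as selecting the table's primary key and aliasing the blob column, which this sketch glosses over):

    // connection URL: jdbc:mysql://localhost/test?emulateLocators=true
    Statement st = conn.createStatement();
    ResultSet rs = st.executeQuery("SELECT id, data FROM blobtable WHERE id = 1");
    rs.next();
    InputStream in = rs.getBlob("data").getBinaryStream();
    byte[] buf = new byte[8192];
    for (int n; (n = in.read(buf)) != -1; ) {
        // each read is served by the server via SUBSTRING() on the stored blob,
        // so only a small window of the blob is in client memory at any time
    }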

On the other hand, if you're dealing with 1 GB blobs, there are usually other, sometimes better approaches to handling the data, such as not storing it in the database.