Bug #5677: Connector/J out of memory error setting large BLOB to InputStream (file > 160 MB)
Submitted: 21 Sep 2004 0:14
Modified: 22 Sep 2004 6:31
Reporter: Brian Harvey
Status: Not a Bug
Category: Connector / J
Severity: S2 (Serious)
Version: all versions
OS: Windows (Windows XP Pro)
Assigned to: Mark Matthews
CPU Architecture: Any

[21 Sep 2004 0:14] Brian Harvey
Description:
Hope someone can help; this is my first post.

I'm getting an out-of-memory error trying to store large BLOBs in a MySQL database. I think I have tracked the issue down to Connector/J.

I have code like this to set the BLOB field from a java.io.InputStream:
InputStream fstream = new FileInputStream("C:\\c.zip"); 
int a = fstream.available(); 
m_PreStmt.setBinaryStream(5,fstream,a); 

When the file size gets over 100 or 200 MB, I get an out-of-memory error.
My system has over a gigabyte of memory, and I have the JVM heap set to 500 MB.

My understanding of setting a BLOB from an InputStream is that this should avoid
loading the whole chunk of data into memory at once... that the stream would
be read a little at a time.
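
As an aside (just a guess on my part, not related to the out-of-memory error itself): available() is only an estimate of what can be read without blocking, so something like java.io.File.length() is probably a safer way to get the length passed to setBinaryStream. A minimal sketch, assuming the same prepared statement and parameter index as above:

import java.io.File;
import java.io.FileInputStream;
import java.io.InputStream;

// Pass the actual file length to setBinaryStream instead of relying on
// InputStream.available(), which is only an estimate.
File blobFile = new File("C:\\c.zip");
InputStream fstream = new FileInputStream(blobFile);
m_PreStmt.setBinaryStream(5, fstream, (int) blobFile.length());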

I have tried several versions of Connector/J (3.0.11, 3.0.15-ga, and 3.1.4-beta); all give the same out-of-memory error.

In the source for Connector/J 3.1.4-beta, the error occurs in the method ensureCapacity(int additionalData).

The line of code is: byte[] newBytes = new byte[newLength]; 

It seems that the entire chunk of memory is being allocated? 

Can anyone offer any help? 

This also seems to be the same type of problem reported in bug #1605:
http://bugs.mysql.com/bug.php?id=1605

Thank You! 

final void ensureCapacity(int additionalData) throws SQLException {
    if ((this.position + additionalData) > getBufLength()) {
        if ((this.position + additionalData) < this.byteBuffer.length) {
            // byteBuffer.length is != getBufLength() all of the time
            // due to re-using of packets (we don't shrink them)
            //
            // If we can, don't re-alloc, just set buffer length
            // to size of current buffer
            setBufLength(this.byteBuffer.length);
        } else {
            //
            // Otherwise, re-size, and pad so we can avoid
            // allocing again in the near future
            //
            int newLength = (int) (this.byteBuffer.length * 1.25);

            if (newLength < (this.byteBuffer.length + additionalData)) {
                newLength = this.byteBuffer.length
                        + (int) (additionalData * 1.25);
            }

            if (newLength < this.byteBuffer.length) {
                newLength = this.byteBuffer.length + additionalData;
            }

--->        byte[] newBytes = new byte[newLength];

            System.arraycopy(this.byteBuffer, 0, newBytes, 0,
                    this.byteBuffer.length);
            this.byteBuffer = newBytes;
            setBufLength(this.byteBuffer.length);
        }
    }
}
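
Rough arithmetic on my part (assuming the whole stream ends up buffered in a single packet): for a ~200 MB file, newLength comes out to roughly 200 MB * 1.25 = 250 MB, and while the System.arraycopy() runs, both the old ~200 MB buffer and the new ~250 MB buffer have to be live at the same time, so peak usage approaches 450 MB. That alone would nearly exhaust a 500 MB heap before anything else in the row is accounted for.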

How to repeat:
Set a BLOB field to a large file (200 MB or more), like this:

InputStream fstream = new FileInputStream("C:\\c.zip"); 
int a = fstream.available(); 
m_PreStmt.setBinaryStream(5,fstream,a); 

Try inserting the row into a table.
You will get an out-of-memory error (I have my JVM heap set to 500 MB).
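
A more self-contained sketch of the repro (the JDBC URL, credentials, and the blob_test table are hypothetical; it assumes a table like CREATE TABLE blob_test (id INT, data LONGBLOB)):

import java.io.File;
import java.io.FileInputStream;
import java.io.InputStream;
import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.PreparedStatement;

public class BlobOomRepro {
    public static void main(String[] args) throws Exception {
        Class.forName("com.mysql.jdbc.Driver");
        // Hypothetical connection settings; adjust host, database, user, password.
        Connection conn = DriverManager.getConnection(
                "jdbc:mysql://localhost/test", "user", "password");

        File blobFile = new File("C:\\c.zip"); // any file larger than ~160 MB
        InputStream fstream = new FileInputStream(blobFile);

        PreparedStatement pstmt = conn.prepareStatement(
                "INSERT INTO blob_test (id, data) VALUES (?, ?)");
        pstmt.setInt(1, 1);
        pstmt.setBinaryStream(2, fstream, (int) blobFile.length());
        pstmt.executeUpdate(); // fails with an out-of-memory error on a 500 MB heap

        pstmt.close();
        fstream.close();
        conn.close();
    }
}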

Suggested fix:
This is the method causing the out-of-memory error in Connector/J.
I copied this from the 3.1.4-beta version, but the same problem exists in 3.0.11 and 3.0.15-ga.

The specific line of code is: byte[] newBytes = new byte[newLength]; 

(See the ensureCapacity() listing in the description above.)
[22 Sep 2004 6:31] Mark Matthews
This is a shortcoming of the MySQL protocol. Blobs must fit entirely into memory on the client when sending or retrieving. If you use server prepared statements and Connector/J 3.1.x, the driver can chunk blobs when sending, but not when retrieving.

If you're working with multi-hundred megabyte BLOBs, you might want to consider the architecture of your application, and store them as files instead.
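
In other words, on the sending side the chunking applies when you run the 3.1.x driver against a 4.1 server with server-side prepared statements enabled. A rough sketch (connection details are placeholders; useServerPrepStmts is assumed here to be the relevant Connector/J connection property, and the server's max_allowed_packet still has to allow data this large):

// Sketch only: enable server-side prepared statements on the 3.1.x driver
// so BLOB parameters can be sent to the server in chunks.
String url = "jdbc:mysql://localhost/test?useServerPrepStmts=true";
Connection conn = DriverManager.getConnection(url, "user", "password");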
[24 Sep 2004 23:22] Brian Harvey
Thanks for answering, Mark. I understand what you are saying about storing blobs that large. I can work around this by storing on the file system with a pointer in the database... which is most likely what I should be doing. It would be so nice, though, to store everything in the database and not have to worry about what system/box the actual file is on... especially for distributed applications.
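
The workaround I have in mind looks roughly like this (the documents table, its columns, and the UNC path are just placeholders):

// Hypothetical sketch: store the file on disk (or a shared file server)
// and keep only a pointer to it in the database.
File stored = new File("\\\\fileserver\\blobs\\c.zip"); // hypothetical share
PreparedStatement pstmt = conn.prepareStatement(
        "INSERT INTO documents (id, file_path) VALUES (?, ?)");
pstmt.setInt(1, 1);
pstmt.setString(2, stored.getAbsolutePath());
pstmt.executeUpdate();
pstmt.close();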
[28 Oct 2004 14:48] Mat Hofschen
May I ask for further clarification on this issue? Mark, you say that sending large data to the server is supported with Connector/J 3.1 and Server 4.1, but not retrieving large data from the server.

I cannot get the storing of large files (> 100 MB) to work. If I enable useNewIO, I get an OutOfMemoryError from the above-mentioned ensureCapacity() method.