Bug #31090: Memory consumption issue
Submitted: 19 Sep 2007 1:53
Modified: 13 Nov 2007 11:37
Reporter: Gary Pendergast
Status: Closed
Impact on me: None
Category: Connector / NET
Severity: S3 (Non-critical)
Version: 5.1.2
OS: Windows
Assigned to:
CPU Architecture: Any
Tags: Contribution

[19 Sep 2007 1:53] Gary Pendergast
Description:
Memory consumption issue when logging real-time data for extended periods using the C# .NET connector.

How to repeat:
Repeat instructions, as reported:

"While running our new in-house development software (a C# application) for 24 hours or more, we were seeing a memory issue build up. The development system is logging 40 data streams to 40 separate tables at 100 Hz per table (~8,640,000 records per table per day); over time the memory footprint would start to fluctuate suddenly, sometimes by 200 MB or more. All the logging is multithreaded, so each data stream is logged within a different thread by making use of thread pooling.

I have spent a whole week tracking this issue down, and eventually found a fix in the C# connector code (version 5.1.2). All the tables contain a long blob column, which means that the MySqlDataReader.GetBytes method is extensively utilised. The binary buffer contained within the data reader is transferred into the supplied byte[] buffer by use of Array.Copy. I was surprised to see this, as I had expected Buffer.BlockCopy: Array.Copy is functionally similar to memmove, while Buffer.BlockCopy is functionally similar to memcpy."

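For reference, here is a minimal sketch of the kind of chunked blob-read loop the report describes. It is an illustration only: the connection handling, the table name stream_data and the column name payload are placeholders, not details taken from the report.

using System;
using MySql.Data.MySqlClient;

class BlobReadSketch
{
    const int ChunkSize = 8192;

    static void ReadPayloads(string connectionString)
    {
        using (MySqlConnection conn = new MySqlConnection(connectionString))
        {
            conn.Open();
            MySqlCommand cmd = new MySqlCommand("SELECT payload FROM stream_data", conn);
            using (MySqlDataReader reader = cmd.ExecuteReader())
            {
                byte[] chunk = new byte[ChunkSize];
                while (reader.Read())
                {
                    // Passing a null buffer makes GetBytes return the total field length.
                    long total = reader.GetBytes(0, 0, null, 0, 0);
                    long offset = 0;
                    while (offset < total)
                    {
                        int toRead = (int)Math.Min(ChunkSize, total - offset);
                        // Each call copies from the reader's internal buffer into
                        // 'chunk'; this is the code path changed by the suggested fix.
                        long read = reader.GetBytes(0, offset, chunk, 0, toRead);
                        offset += read;
                        // process chunk[0..read) here
                    }
                }
            }
        }
    }
}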
Suggested fix:
A new version of GetBytes():

public override long GetBytes(int i, long dataIndex, byte[] buffer, int bufferIndex, int length)
{
    if (i >= fields.Length)
        throw new IndexOutOfRangeException();

    IMySqlValue val = GetFieldValue(i, false);

    if (!(val is MySqlBinary))
        throw new MySqlException("GetBytes can only be called on binary columns");

    MySqlBinary binary = (MySqlBinary)val;
    if (buffer == null)
        return (long)binary.Value.Length;

    if (bufferIndex >= buffer.Length || bufferIndex < 0)
        throw new IndexOutOfRangeException("Buffer index must be a valid index in buffer");
    if (buffer.Length < (bufferIndex + length))
        throw new ArgumentException("Buffer is not large enough to hold the requested data");
    if (dataIndex < 0 ||
        ((ulong)dataIndex >= (ulong)binary.Value.Length && (ulong)binary.Value.Length > 0))
        throw new IndexOutOfRangeException("Data index must be a valid index in the field");

    byte[] bytes = (byte[])binary.Value;

    // adjust the length so we don't run off the end
    if ((ulong)binary.Value.Length < (ulong)(dataIndex + length))
    {
        length = (int)((ulong)binary.Value.Length - (ulong)dataIndex);
    }

    // replace Array.Copy (memmove-like) with Buffer.BlockCopy (memcpy-like)
    //Array.Copy(bytes, (int)dataIndex, buffer, (int)bufferIndex, (int)length);
    Buffer.BlockCopy(bytes, (int)dataIndex, buffer, (int)bufferIndex, (int)length);

    return length;
}
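As an aside on the distinction the reporter draws, the following standalone snippet (not part of the contributed patch) contrasts the two copy calls: Array.Copy is the general-purpose routine that also handles overlapping ranges and element-type conversions (memmove-like), whereas Buffer.BlockCopy performs a raw byte-level copy between primitive-type arrays, with offsets and count measured in bytes (memcpy-like).

using System;

class CopySketch
{
    static void Main()
    {
        byte[] src = new byte[1024];
        byte[] dst = new byte[1024];
        new Random(42).NextBytes(src);

        // Array.Copy: general-purpose, tolerates overlapping source/destination
        // regions and converts element types when needed (memmove-like).
        Array.Copy(src, 0, dst, 0, src.Length);

        // Buffer.BlockCopy: raw byte copy between primitive-type arrays;
        // srcOffset, dstOffset and count are measured in bytes (memcpy-like).
        Buffer.BlockCopy(src, 0, dst, 0, src.Length);

        Console.WriteLine(dst[512] == src[512]);
    }
}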
[19 Sep 2007 20:59] MySQL Verification Team
Thank you for the bug report and contribution. Our Connector/Net team will
analyze the suggested code.
[21 Sep 2007 20:59] Bugs System
A patch for this bug has been committed. After review, it may
be pushed to the relevant source trees for release in the next
version. You can access the patch from:

  http://lists.mysql.com/commits/34465
[21 Sep 2007 21:01] Bugs System
A patch for this bug has been committed. After review, it may
be pushed to the relevant source trees for release in the next
version. You can access the patch from:

  http://lists.mysql.com/commits/34466
[21 Sep 2007 21:03] Reggie Burnett
Fixed in 1.0.11, 5.0.9, 5.1.4, & 5.2+
[21 Sep 2007 21:09] Bugs System
A patch for this bug has been committed. After review, it may
be pushed to the relevant source trees for release in the next
version. You can access the patch from:

  http://lists.mysql.com/commits/34467
[13 Nov 2007 11:37] MC Brown
A note has been added to the 1.0.11, 5.0.9, 5.1.4 and 5.2.0 changelogs. 

Memory usage could increase and decrease significantly when updating or inserting a large number of rows.