Bug #22853 max_allowed_packet misnomer - CONCAT() in SQL command allows for loss of data
Submitted: 30 Sep 2006 1:59
Modified: 30 Sep 2006 12:22
Reporter: Joseph Drago
Status: Verified
Category: MySQL Server: General
Severity: S4 (Feature request)
Version: 4.1.14-nt
OS: Windows (Windows XP)
Assigned to:
CPU Architecture: Any
Tags: concat, max_allowed_packet

[30 Sep 2006 1:59] Joseph Drago
Description:
[Marked as S4 under the assumption that this matches MySQL's stance on the issue.]

This is a revisit of two prior bugs to some degree: 2691, and 20458.

I ran into an issue today when attempting to CONCAT() a small string onto a LONGTEXT field that was right at the boundary of max_allowed_packet in size (just under 1MB). 

I searched the bug lists and found the two prior bugs, hoping to find an appropriate workaround. Instead, I found both of them marked "not a bug", on the grounds that since you're exceeding max_allowed_packet, you are "doing the wrong thing".

I consider this a bug. First off, the name max_allowed_packet suggests the size of the transmitted SQL statement(s) as an atomic piece of data. It appears that your implementation of CONCAT() (and perhaps other SQL functions) conflates the notion of max_allowed_packet with the size of your final output buffer for the post-processed SQL command.

If I plan to update/append to a LONGTEXT field incrementally, the most obvious and effective means would be to use CONCAT(), as it only transmits the data to append across the network/pipe boundary and leaves it up to the server's implementation to make that happen. The fact that max_allowed_packet governs the final buffer size on the server makes the name a misnomer at best, and more likely a peek into a buggy implementation. Setting our maximum packet size by estimating an upper bound on a field (whose documented maximum size is 4GB) is a little silly.
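To illustrate, the incremental-append pattern I have in mind is roughly the following (table and column names are purely illustrative):

-- A row whose LONGTEXT column grows over time.
CREATE TABLE notes (id INT PRIMARY KEY, body LONGTEXT);
INSERT INTO notes VALUES (1, '');

-- Only the small appended chunk crosses the wire;
-- the server performs the actual append.
UPDATE notes SET body = CONCAT(body, '...next chunk of text...') WHERE id = 1;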

The frustrated tone in this bug description stems from the apparent dismissal of the two prior bugs without even offering a follow-up location to continue the discussion. I would like to get this resolved, as I consider cranking up max_allowed_packet to 1GB (or whatever ceiling I find appropriate) to be a poor workaround.

How to repeat:
Please see either of the previous bugs. 

All that is required is to CONCAT() an arbitrary amount of text into a LONGTEXT field until the total size of that field exceeds max_allowed_packet. The field will be NULL'd out.
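Concretely, something like the following reproduces it (a sketch; the table name is arbitrary and the threshold depends on the configured max_allowed_packet, assumed here to be the old 1MB default):

CREATE TABLE t (id INT PRIMARY KEY, body LONGTEXT);
INSERT INTO t VALUES (1, REPEAT('x', 1000000));   -- just under 1MB

-- The CONCAT() result would exceed max_allowed_packet, so the function
-- returns NULL (with a warning about the oversized result) and the
-- UPDATE stores NULL in body.
UPDATE t SET body = CONCAT(body, REPEAT('y', 100000)) WHERE id = 1;

SELECT body IS NULL FROM t WHERE id = 1;   -- returns 1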

Suggested fix:
Any of three options:

1. Disconnect the notion of max_allowed_packet from the size of the post-processed buffer, and perhaps grow the output buffer dynamically to accommodate the appropriate size, as it has no effect on actual packet size.

2. Change how CONCAT() works to perform the append in place, avoiding the need to populate your output buffer with the current contents of the field. This might be tough if the SQL syntax is compiled on the fly, but it would make for a considerably faster implementation (assuming the LONGTEXT is being copied into a buffer on every update).

3. Update the documentation of CONCAT() and other functions that are subject to this kind of failure with a warning that max_allowed_packet means something more than just packet size.
[30 Sep 2006 12:22] Valeriy Kravchuk
Thank you for a reasonable feature request. Personally I think your option 1 is the best solution:

"1. Disconnect the notion of max_allowed_packet from the size of the
postprocessed buffer size, and perhaps grow the output buffer dynamically to
accomodate the appropriate size, as it has no effect on actual packet size."

But let the developers decide.
[30 Sep 2006 12:58] Martin Friebe
Reading this, and according to my understanding, there is a contradiction in the documentation.
From http://dev.mysql.com/doc/refman/4.1/en/server-system-variables.html
> max_allowed_packet
> The maximum size of one packet or any generated/intermediate string.
> You must increase this value if you are using large BLOB columns or long
> strings. It should be as big as the largest BLOB you want to use. The protocol
> limit for max_allowed_packet is 16MB before MySQL 4.0 and 1GB thereafter.
From http://dev.mysql.com/doc/refman/4.1/en/string-type-overview.html
LONGBLOB has a maximum size of 4GB.

If max_allowed_packet cannot be set to more than 1GB, and it declares the maximum size of ANY string in MySQL, how can anyone ever get 4GB into a BLOB?

Or, if you can get 4GB in, how does that comply with this documentation?

Also adding to this feature request (point 1 from the suggested-fix section):
max_allowed_packet indeed applies to too many things at the same time. Increasing it in order to allow larger (calculated) BLOBs also increases the maximum size the network buffer (initially net_buffer_length) can grow to, potentially leaving the server open to DoS attacks through large SQL statements sent over the network.
So point 1 of the original request should at least split the variable into:
- max_allowed_packet_for_processing_item (or some appropriate name)
- net_max_allowed_packet
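For comparison, everything currently hangs off the single existing variable (the names suggested above do not exist today); inspecting and raising it looks roughly like this:

-- Existing server variables only.
SHOW VARIABLES LIKE 'max_allowed_packet';
SHOW VARIABLES LIKE 'net_buffer_length';

-- Raising the global value affects connections opened afterwards.
SET GLOBAL max_allowed_packet = 1073741824;   -- 1GB, the protocol ceiling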
[22 May 2009 15:56] Antonio Garcia
I know this is not a "FIX", but I found what seems to be a fairly simple workaround for this issue.  Basically, replace CONCAT with INSERT and keep track of your position.  I have been using this with good success for a few months now, thought I would share and get feedback.  Below is the PHP code for inserting the BLOB:

// CHUNK_SIZE must stay safely below the server's max_allowed_packet,
// leaving room for the rest of the statement and escaping overhead.
define('CHUNK_SIZE', 262144);   // 256KB

$size   = filesize($filename);
$fp     = fopen($filename, 'rb');
$offset = 0;

while ($size > 0) {
    $bytes = CHUNK_SIZE;
    if ($bytes > $size) {
        $bytes = $size;
        $size  = 0;
    } else {
        $size -= $bytes;
    }
    $data = fread($fp, $bytes);
    // INSERT(str, pos, len, newstr) overwrites $bytes characters of `data`
    // starting at position $offset + 1, so only this chunk crosses the wire.
    $sql = "UPDATE `files` SET `data` = INSERT(`data`, " . ($offset + 1) . ", "
         . $bytes . ", '" . mysql_real_escape_string($data) . "') "
         . "WHERE `id` = '" . $fileid . "'";
    mysql_query($sql);
    $offset += $bytes;
}
fclose($fp);
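For the read side, the same chunking idea works with SUBSTRING(), so neither direction needs a huge max_allowed_packet. A sketch, assuming the same hypothetical `files` table and 256KB chunks:

-- Fetch the stored data back one chunk at a time.
SELECT SUBSTRING(`data`, 1, 262144)      FROM `files` WHERE `id` = 1;
SELECT SUBSTRING(`data`, 262145, 262144) FROM `files` WHERE `id` = 1;
-- Keep advancing the offset until a chunk comes back shorter than 262144 bytes.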