Bug #83958 BLOB size should not be restricted by the server-side max_allowed_packet param.
Submitted: 24 Nov 2016 21:34 Modified: 25 Nov 2016 14:40
Reporter: Björn Voigt (OCA) Email Updates:
Status: Not a Bug Impact on me: None
Category: MySQL Server: Connection Handling Severity: S3 (Non-critical)
Version: 5.7.16 OS: Any
Assigned to: CPU Architecture: Any
Tags: BLOB, max_allowed_packet

[24 Nov 2016 21:34] Björn Voigt
Description:
MySQL clients can, in principle, write (INSERT) BLOBs of any size (up to the limit of the BLOB column type) using server-side prepared statements together with "BinaryStream"/mysql_send_long_data, which transmit the data in chunks.

Unfortunately, the MySQL server rejects the already-transmitted data in sql/item.cc with the error "Parameter of prepared statement which is set through mysql_send_long_data() is longer than 'max_allowed_packet' bytes":
[...]
bool Item_param::set_longdata(const char *str, ulong length)
{
[...]
  if (str_value.length() + length > current_thd->variables.max_allowed_packet)
  {
    my_message(ER_UNKNOWN_ERROR,
               "Parameter of prepared statement which is set through "
               "mysql_send_long_data() is longer than "
               "'max_allowed_packet' bytes",
               MYF(0));
    DBUG_RETURN(true);
  }
[...]

From a practical point of view, users are often not allowed to change the MySQL server configuration file and set "max_allowed_packet" large enough. So they have no way to use BLOBs bigger than the max_allowed_packet default (4 MB).

How to repeat:
1. Create a table with a medium BLOB

CREATE TABLE TEST (ablob MEDIUMBLOB) ENGINE = InnoDB;

2. Find a file bigger than the max_allowed_packet size (> 4 MB)

3. Upload the file (> 4 MB) using MySQL Workbench or any other client
(personally I tested with MySQL Connector/J 5.1.40)

Suggested fix:
Remove the server-side restriction.
[25 Nov 2016 14:08] MySQL Verification Team
Hi,

max_allowed_packet was introduced back in 1999 because large packets can eat up the server's memory.

Imagine 4,000 connections, which is not so rare, and that on half of them a 20 MB BLOB is sent from the client to the server. That alone would eat 40 GB of RAM. That is why this parameter was invented and why it exists. Last but not least, a DBA can change the parameter globally. Not only that, the application can set the variable at session level, send the data, and then reset it.

That is how this parameter has been used for the last 17 years, and no changes are necessary.
[25 Nov 2016 14:40] Björn Voigt
My preliminary debugging shows that the MySQL server throws the error only after it has already received the whole BLOB data, split into smaller chunks. So the memory may already be reserved, and the variable max_allowed_packet only partially restricts the amount of memory used.

I will soon test whether it is possible to stream a really large amount of data in chunks and crash the server before it can throw the error.

Also, the name and the documentation lead users to assume that the variable max_allowed_packet mainly restricts the network packet size. Thanks for the explanation; it is now clear that the variable exists to protect the server.

> That is how this parameter is used for the last 17 years and no changes
> are necessary.

The fact that the default value was increased from 1 MB to 4 MB in MySQL 5.6.6 shows that conditions have changed over those 17 years. These days many people want to import data such as high-resolution photos into their MySQL databases, and they often use hosted MySQL databases without the access rights needed to increase the variable.
[25 Nov 2016 15:08] MySQL Verification Team
Hi,

The defaults changed because of the lower price of RAM. Also, you can set your own value for that variable; the default is, hence, irrelevant.

Sending chunks is a smart way of avoiding large packet buffers, but do note that this is available only if you use our prepared statements.
[27 May 14:07] Rafal Somla
I think there are situations where users need or want to insert huge blobs of data into tables. One can understand that we want to limit the size of individual packets sent to the server. But it is hard to understand why the limit is applied to the total length of blobs that are sent in multiple chunks.

Currently, the only solution we provide for inserting huge blobs into MySQL tables is "increase the server's max_allowed_packet size". This solution is unsatisfactory for many reasons, and I think many users have bumped into the limitation. Maybe it is time to give our users a better story here and make the `mysql_stmt_send_long_data()` function work as its name suggests and as anyone would expect, i.e., insert long data in a streaming fashion, limited not by the network packet size but only by the constraints of the database.
[28 May 10:10] MySQL Verification Team
Thank you, Rafal.