| Bug #83958 | BLOB size should not be restricted by the server-side max_allowed_packet param. | | |
|---|---|---|---|
| Submitted: | 24 Nov 2016 21:34 | Modified: | 25 Nov 2016 14:40 |
| Reporter: | Björn Voigt (OCA) | Email Updates: | |
| Status: | Not a Bug | Impact on me: | |
| Category: | MySQL Server: Connection Handling | Severity: | S3 (Non-critical) |
| Version: | 5.7.16 | OS: | Any |
| Assigned to: | | CPU Architecture: | Any |
| Tags: | BLOB, max_allowed_packet | | |
[24 Nov 2016 21:34]
Björn Voigt
[25 Nov 2016 14:08]
MySQL Verification Team
Hi, max_allowed_packet was introduced back in 1999 because large packets can exhaust the server's memory. Imagine 4,000 connections, which is not so rare, and on half of them a 20 MB BLOB is sent from the client to the server. That alone would consume 40 GB of RAM. That is why this parameter was invented and why it still exists. Last but not least, a DBA can change the parameter globally, and the application can set the variable at session level, send the data, and then reset it. That is how this parameter has been used for the last 17 years, and no changes are necessary.
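For illustration, a minimal MySQL C API sketch of the "raise the limit globally, then send" approach described above. The 64 MB value is a placeholder, the function assumes a connected handle with privileges to set global variables, and note that a GLOBAL change only takes effect for connections opened afterwards:

```c
/* Sketch only: raising the server-side limit before sending a large BLOB.
   Assumes a connected MYSQL handle with privileges to set global variables;
   the 64 MB value is a placeholder. */
#include <mysql.h>
#include <stdio.h>

int raise_packet_limit(MYSQL *conn)
{
    /* Affects connections opened after this point; the current session
       keeps the max_allowed_packet it received at connect time. */
    if (mysql_query(conn, "SET GLOBAL max_allowed_packet = 64 * 1024 * 1024")) {
        fprintf(stderr, "SET GLOBAL failed: %s\n", mysql_error(conn));
        return 1;
    }
    return 0;
}
```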
[25 Nov 2016 14:40]
Björn Voigt
My rough debugging shows that the MySQL server throws the error after it has already received the whole BLOB, delivered in smaller data chunks. So the memory may already be reserved, and the variable max_allowed_packet only partially restricts the amount of memory used. I will test soon whether it is possible to stream a really large amount of data in chunks and crash the server before it can throw the error. Also, the name and the documentation let users assume that max_allowed_packet mainly restricts the network packet size. Thanks for the explanation; it is now clear that the variable is meant to protect the server.

> That is how this parameter has been used for the last 17 years, and no changes are necessary.

The fact that the default value was increased from 1 MB to 4 MB in MySQL 5.6.6 shows that the conditions of 17 years ago have changed. These days many people want to import data such as high-resolution photos into their MySQL databases, and they use hosted MySQL databases without the access rights needed to increase the variable.
[25 Nov 2016 15:08]
MySQL Verification Team
Hi, the default has changed because of the lower price of RAM. Also, you can set your own value for that variable; the default is, hence, irrelevant. Sending chunks is a smart way of avoiding large packet buffers, but do note that this is available only if you use our prepared statements.
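For readers unfamiliar with the chunked path mentioned here, a minimal sketch using the MySQL C API's prepared statements and mysql_stmt_send_long_data(). The table `t(data LONGBLOB)` and the 64 KB chunk size are assumptions for illustration, error cleanup is abbreviated, and as discussed above the server still checks the accumulated parameter length against max_allowed_packet:

```c
/* Sketch: stream a BLOB to the server in chunks via a prepared statement.
   Table t(data LONGBLOB) and the 64 KB chunk size are assumptions. */
#include <mysql.h>
#include <stdio.h>
#include <string.h>

int insert_blob_chunked(MYSQL *conn, FILE *src)
{
    MYSQL_STMT *stmt = mysql_stmt_init(conn);
    const char *sql = "INSERT INTO t (data) VALUES (?)";
    MYSQL_BIND bind;
    char chunk[64 * 1024];
    size_t n;

    if (!stmt || mysql_stmt_prepare(stmt, sql, strlen(sql)))
        return 1;

    /* Bind a single BLOB parameter; its data is supplied in chunks below. */
    memset(&bind, 0, sizeof(bind));
    bind.buffer_type = MYSQL_TYPE_LONG_BLOB;
    if (mysql_stmt_bind_param(stmt, &bind))
        return 1;

    /* Each call appends one chunk to parameter 0 on the server side;
       the server checks the accumulated size against max_allowed_packet. */
    while ((n = fread(chunk, 1, sizeof(chunk), src)) > 0)
        if (mysql_stmt_send_long_data(stmt, 0, chunk, (unsigned long) n))
            return 1;

    if (mysql_stmt_execute(stmt))
        return 1;

    mysql_stmt_close(stmt);
    return 0;
}
```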
[27 May 14:07]
Rafal Somla
I think there are situations where users need or want to insert huge blobs of data into tables. One can understand that we want to limit the size of packets sent to the server. But why the limit is applied to the total length of blobs sent in multiple chunks is hard to understand. Currently the only solution we provide for inserting huge blobs into MySQL tables is "increase the server's max_allowed_packet size". This solution is unsatisfactory for many reasons, and I think many users have bumped into the limitation. Maybe it is time to give our users a better story here and make the `mysql_stmt_send_long_data()` function work as its name suggests and as anyone would expect, i.e., insert long data in a streaming fashion, limited not by the network packet size but only by the constraints of the database.
[28 May 10:10]
MySQL Verification Team
Thank you, Rafal.