Bug #103387 max_allowed_packet prevents use of LONGBLOB and large JSON
Submitted: 20 Apr 2021 17:27 Modified: 26 Apr 2021 14:33
Reporter: Matthew Boehm
Status: Duplicate
Category: MySQL Server Severity: S3 (Non-critical)
Version: 8.0.19 OS: Any
Assigned to: CPU Architecture: Any
Tags: BLOB, json

[20 Apr 2021 17:27] Matthew Boehm
Description:
MySQL supports several large blob-type columns: LONGTEXT, LONGBLOB, and JSON. It is not possible to INSERT > 1GB of data into a row using these column types due to the maximum allowed packet size of 1GB.

Even when the data manipulation happens solely on the server, max_allowed_packet still prevents large data from being updated/inserted.

While this example focuses on JSON, the same limitation applies to LONGBLOB when storing generic binary data.

So the question/bug is, why does LONGBLOB even exist if you can't actually store/insert more than 1GB of data?

How to repeat:
CREATE TABLE `bigJson` (
  `id` int unsigned NOT NULL AUTO_INCREMENT,
  `jData` json DEFAULT NULL,
  PRIMARY KEY (`id`)
) ENGINE=InnoDB;

INSERT INTO bigJson VALUES (1, '[{"id": 1, "url": "https://via.placeholder.com/600/92c952", "title": "accusamus beatae ad facilis cum similique qui sunt", "albumId": 1, "thumbnailUrl": "https://via.placeholder.com/150/92c952"}]');

Repeat until error:
 UPDATE bigJson SET jData = JSON_ARRAY_APPEND(jData, '$', jData) WHERE id = 1;

Eventually you will get this result:
mysql> UPDATE bigJson SET jData = JSON_ARRAY_APPEND(jData, '$', jData) WHERE id = 1;
ERROR 1301 (HY000): Result of json_binary::serialize() was larger than max_allowed_packet (1073741824) - truncated
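
Each JSON_ARRAY_APPEND(jData, '$', jData) roughly doubles the serialized document, so the 1 GB ceiling arrives after a couple dozen repetitions. A back-of-envelope sketch (the ~200-byte starting size is an assumption based on the INSERT above):

```python
MAX_ALLOWED_PACKET = 1 << 30  # 1 GiB, the server's hard ceiling

size = 200       # assumed serialized size of the initial one-element array
iterations = 0
while size <= MAX_ALLOWED_PACKET:
    size *= 2    # each self-append roughly doubles the document
    iterations += 1

print(iterations)  # → 23: the UPDATE fails after roughly two dozen repetitions
```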

Suggested fix:
Allow max_allowed_packet to be > 1GB
[21 Apr 2021 14:10] MySQL Verification Team
Hi Mr. Boehm,

Thank you for your bug report.

However, there are already many feature requests asking to raise this variable's limit; the earliest one that we found is this one:

https://bugs.mysql.com/bug.php?id=1459

Besides the feature requests, this problem is easily solved by sending the data in chunks with prepared statements. Some of our client programs do support this feature.
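
For context, the chunked approach alluded to here (mysql_stmt_send_long_data in the C API) streams one parameter in pieces. A server-free sketch of just the splitting step, with a hypothetical helper name and an arbitrary overhead margin:

```python
def split_into_chunks(payload: bytes, max_allowed_packet: int):
    """Yield pieces of payload that each fit in one client-server packet."""
    chunk_size = max_allowed_packet - 1024  # hypothetical protocol headroom
    for offset in range(0, len(payload), chunk_size):
        yield payload[offset:offset + chunk_size]

# each piece would be handed to mysql_stmt_send_long_data() in turn;
# the server still re-assembles them into one value before executing
```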

Duplicate.
[21 Apr 2021 14:35] Matthew Boehm
Hello Verification Team,
The problem is not solved by using chunked prepared statements. You are still limited by max_allowed_packet when completing the prepared statement. Thus, you still cannot take 2GB of data, chunk it, and insert it into a single row-column. You get the same error.
Please be more specific rather than saying "Some of our client programs do support this feature" and give specific names, so that when people come here to read the bug they get an actual, potentially useful answer.
[22 Apr 2021 13:07] MySQL Verification Team
Hi,

You can always try compression, starting with compression in the client-server protocol. It is a good workaround for JSON.

All your questions are answered in our Reference Manuals.
[22 Apr 2021 16:42] Matthew Boehm
Even with compression, you are still restricted to 1GB column length. If I compress 2GB to 1.4GB, I cannot insert 1.4GB into a column that is supposed to support up to 4GB.

Using mysql_stmt_send_long_data (which is what I think you are referring to when you ask me to search the docs) is still restricted to max_allowed_packet on the final chunk/commit. If I send 5x300MB chunks to MySQL as part of a single insert, the query fails and I cannot insert the total 1.5GB into the column.
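
The failure described here comes down to a size check on the re-assembled value, not on the individual sends. A server-free sketch using the numbers from this comment:

```python
MAX_ALLOWED_PACKET = 1 << 30             # 1 GiB

chunk = 300 * 1024 * 1024                # five 300 MB chunks
chunks = [chunk] * 5

print(all(c <= MAX_ALLOWED_PACKET for c in chunks))  # → True: each send fits
print(sum(chunks) > MAX_ALLOWED_PACKET)              # → True: the 1.5 GB total is rejected
```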
[23 Apr 2021 12:16] MySQL Verification Team
Hi Mr. Boehm,

What you wrote is quite true, which is why this is a duplicate bug: there are several feature requests that were reported much earlier than this one.

Also, this is dangerous territory. Many, many installations simply set the global variable to a peak level that is needed only sporadically. That means that with a 1 GB max_allowed_packet and 1000 connections, you could spend 1 TB of RAM on connection packet buffers alone.
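
The 1 TB figure is the worst-case product (it assumes every connection's packet buffer actually grows to the cap, which is not guaranteed in practice):

```python
max_allowed_packet = 1 << 30   # 1 GiB
connections = 1000

# worst case: every connection's packet buffer grows to the maximum
worst_case_bytes = max_allowed_packet * connections
print(worst_case_bytes / 2**40)   # ≈ 0.98 TiB, i.e. roughly 1 TB of RAM
```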
[23 Apr 2021 14:30] Matthew Boehm
Hello V-Team,
Thank you for that confirmation. To re-ask, why does MySQL even have LONGBLOB column type if you can't actually write that much data?
[26 Apr 2021 12:20] MySQL Verification Team
Hi,

The MySQL database is built to comply with the latest SQL standards, which actually answers your question.
[26 Apr 2021 14:33] Matthew Boehm
The SQL standard assumes you can actually use it to its fullest capacity. That is not the case with MySQL. I can create a LONGBLOB column but I cannot insert more than 1GB.

This is a blatant misrepresentation of MySQL's capabilities. Imagine going to a coffee shop with small, medium, and large cups. You can fill the small and medium up all the way, but you can't fill the large cup more than 25%. It's a complete falsehood.

It should be well documented that, due to MySQL's internal limitations, you can't actually use LONGBLOB to its fullest extent.