Bug #114511: Turning on binlog compression and reading a binlog generated by a big transaction causes OOM
Submitted: 29 Mar 2024 7:05    Modified: 2 Apr 2024 9:58
Reporter: Yawei Sun
Status: Not a Bug
Category: MySQL Server: Replication    Severity: S3 (Non-critical)
Version: MySQL 8.0.22    OS: Any
Assigned to: MySQL Verification Team    CPU Architecture: Any

[29 Mar 2024 7:05] Yawei Sun
Description:
When binlog compression is turned on (set binlog_transaction_compression=ON) and a big transaction is executed, the server generates a binlog file such as 'binlog.000001'.
Executing "show binlog events in 'binlog.000001'" afterwards to read the binlog events causes an OOM.

How to repeat:
set @@global.binlog_transaction_compression = ON;
set @@global.max_allowed_packet = 1073741824; -- to ensure the INSERT statements can work normally
-- server options:
binlog_format = ROW
binlog_row_image = FULL
gtid_mode = ON

create table tb1(id int primary key auto_increment,b1 longblob);

begin;
insert into tb1(b1) values(repeat('a',1073741000)); -- repeat this insert 100 times
...

commit;

show binlog events in 'binlog.xxx';
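
One way to find the actual file name for the 'binlog.xxx' placeholder above is to list the binary log files first; a minimal sketch:

show binary logs;  -- lists binary log file names and sizes; the big transaction
                   -- normally ends up in the last (largest) file listed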

Suggested fix:
When reading compressed binlog events, control the memory usage if possible.
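
As a possible workaround (not a fix for the server-side memory use), SHOW BINLOG EVENTS also accepts FROM and LIMIT clauses, so the events can be read in pages instead of materializing the whole file in one result set; whether this avoids fully decompressing a single large Transaction_payload_event is not verified here. A minimal sketch:

show binlog events in 'binlog.000001' from 4 limit 0, 100;    -- first 100 events, starting after the 4-byte magic header
show binlog events in 'binlog.000001' from 4 limit 100, 100;  -- next 100 events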
[29 Mar 2024 11:35] MySQL Verification Team
Hi Mr. Sun,

Thank you for your bug report.

However, you are using a VERY old release of 8.0.

Please try 8.0.36 and let us know here whether you experience the same problem.

Also, try setting the binary log size limit to a higher value.
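
For reference, a minimal sketch of what that could look like, assuming the size limit meant here is max_binlog_size (whose maximum permitted value, 1073741824 bytes = 1 GB, is also its default):

set @@global.max_binlog_size = 1073741824;  -- assumption: raise the per-file binary log size limit to its 1 GB maximum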

However, it should not cause OOM, unless you have very little RAM or an error in the configuration of your container.

Unsupported.
[29 Mar 2024 13:32] MySQL Verification Team
Hi,

We tested with both 8.0.36 and 8.3.0 and it works fine.

Do note, however, that if you insert a BLOB or TEXT value of 1073741000 bytes, MySQL will take about 5 times that much memory in order to store it into InnoDB and to write it to the binlog.

That means about 5 GB, just for that one operation!
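
For anyone who wants to see how much data such a compressed transaction actually carries, the Performance Schema exposes binary log compression statistics when binlog_transaction_compression is enabled; a minimal sketch (the uncompressed byte counter gives an idea of how large the payload becomes when it is read back):

select log_type,
       compression_type,
       transaction_counter,
       compressed_bytes_counter,
       uncompressed_bytes_counter,
       compression_percentage
from performance_schema.binary_log_transaction_compression_stats;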