| Bug #111540 | binlog compress optimize | | |
|---|---|---|---|
| Submitted: | 23 Jun 2023 3:41 | Modified: | 23 Jun 2023 5:14 |
| Reporter: | alex xing (OCA) | Email Updates: | |
| Status: | Verified | Impact on me: | |
| Category: | MySQL Server: Replication | Severity: | S5 (Performance) |
| Version: | 8.0.32 | OS: | Any |
| Assigned to: | | CPU Architecture: | Any |
| Tags: | Contribution | | |
[23 Jun 2023 3:42]
alex xing
A simple patch that implements the optimization.

(*) I confirm the code being submitted is offered under the terms of the OCA, and that I am authorized to contribute it.
Contribution: binlog_compress_optimize.patch (text/plain), 1.46 KiB.
[23 Jun 2023 5:14]
MySQL Verification Team
Hello alex xing,

Thank you very much for your patch contribution, we appreciate it!

regards,
Umesh

Description:
When the compressed size of the ROW event being processed is higher than its uncompressed counterpart, it is better to abandon the compression and store the event uncompressed. This has the following benefits:

1. It saves storage space.
2. The slave node avoids unnecessary decompression costs.

How to repeat:
Refer to the repetition scheme in binlog_compression_size_higher.test:

```sql
# 2. Create a table with a blob column.
CREATE TABLE t1 (id bigint NOT NULL AUTO_INCREMENT PRIMARY KEY,
                 x mediumblob NOT NULL) ENGINE=InnoDB;

# 3. Prepare an insert statement and populate the blob column with
#    incompressible data.
PREPARE s FROM "INSERT INTO t1 (x) VALUES (?)";
SET @a = LOAD_FILE('../../std_data/binlog_compression.gz');
EXECUTE s USING @a;
```

Suggested fix:
Just as in the attached patch (binlog_compress_optimize.patch).
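The idea behind the suggested fix can be sketched as a simple compress-or-fallback check: compress the payload, compare sizes, and keep the raw bytes when compression would grow the event. This is a minimal illustration in Python using zlib, not the actual server patch (MySQL's binlog transaction compression uses ZSTD, and the function and return shape here are hypothetical):

```python
import os
import zlib

def maybe_compress(payload: bytes) -> tuple[bytes, bool]:
    """Compress the payload, but fall back to the raw bytes when the
    compressed form is not smaller (e.g. for already-compressed data
    such as a .gz blob). Returns (stored_bytes, was_compressed)."""
    compressed = zlib.compress(payload)
    if len(compressed) >= len(payload):
        # Abandon compression: storing the original bytes saves space
        # and spares the replica a pointless decompression step.
        return payload, False
    return compressed, True

# Incompressible input: a payload that is itself compressed data,
# mirroring the LOAD_FILE('...binlog_compression.gz') scenario.
blob = zlib.compress(os.urandom(10_000))
stored, was_compressed = maybe_compress(blob)
assert stored == blob and not was_compressed

# Compressible input: repetitive data shrinks, so compression is kept.
text = b"abc" * 10_000
stored2, was_compressed2 = maybe_compress(text)
assert was_compressed2 and len(stored2) < len(text)
```

The replica (or a later reader) would need one flag per event to know whether to decompress, which is why the sketch returns `was_compressed` alongside the stored bytes.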