| Bug #78827 | Speedup replication of compressed tables | | |
|---|---|---|---|
| Submitted: | 14 Oct 2015 6:32 | Modified: | 18 Aug 2020 8:16 |
| Reporter: | Daniël van Eeden (OCA) | Email Updates: | |
| Status: | Closed | Impact on me: | |
| Category: | MySQL Server: InnoDB storage engine | Severity: | S4 (Feature request) |
| Version: | 5.7, 8.0 | OS: | Any |
| Assigned to: | | CPU Architecture: | Any |
| Tags: | compression, innodb, replication | | |
[14 Oct 2015 6:32]
Daniël van Eeden
[14 Oct 2015 6:43]
Daniël van Eeden
Another idea: work with the connectors to make it possible to do the compression/decompression there (optionally!). Currently we have the compressed client-server protocol and compressed InnoDB tables, with a lot of compression and decompression going on (especially with replication). It would be nice if it could work like this:

1. The connector compresses the data and sends it to the server.
2. The server writes the compressed data to the table (iblogs, etc.) and the binlogs.
3. The slave also writes the compressed data to its table.
4. A client requests the data from the slave and receives it still compressed.
5. The connector decompresses the data.

This is already possible with a BLOB field, but that's not transparent to the client. It also makes validation of the data hard, although if the compression is done per field for BLOB fields it should work. For TEXT fields it will be hard to filter out invalid UTF-8 sequences, or bytes > 127 for ASCII.
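A minimal sketch of the connector-side idea for step 1 and step 5, assuming the application (or a hypothetical connector hook) handles the codec itself. The helper names `compress_field`/`decompress_field` are illustrative, not part of any connector API; Python's `zlib` stands in for whatever codec the connector would negotiate:

```python
import zlib

def compress_field(value: str) -> bytes:
    # Step 1: compress on the client side before sending to the server.
    # The server would store, binlog, and replicate these bytes as an
    # opaque BLOB, never decompressing them.
    return zlib.compress(value.encode("utf-8"))

def decompress_field(blob: bytes) -> str:
    # Step 5: decompress on the client side after reading from a slave.
    return zlib.decompress(blob).decode("utf-8")

# Round trip: the server and replication chain only ever see `blob`.
payload = "some large, repetitive text value " * 100
blob = compress_field(payload)
assert decompress_field(blob) == payload
print(f"{len(payload.encode('utf-8'))} bytes raw, {len(blob)} bytes compressed")
```

The trade-off noted above applies: because the server only sees opaque bytes, it cannot validate the content (e.g. reject invalid UTF-8 in what is logically a TEXT column), which is why this scheme is only really workable for BLOB fields.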
[23 Nov 2018 14:30]
Daniël van Eeden
.
[18 Aug 2020 8:16]
Daniël van Eeden
I consider this mostly fixed by:

* Compressed binary logs
* Page compression (CREATE TABLE ... COMPRESSION="zlib")
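For reference, a rough sketch of how the two features above are enabled in MySQL 8.0 (binary log transaction compression was added in 8.0.20; the table name and columns are illustrative):

```sql
-- Compressed binary logs: transaction payloads are compressed with zstd
-- before being written to the binlog and sent to replicas (8.0.20+).
SET GLOBAL binlog_transaction_compression = ON;

-- InnoDB page compression for an individual table. Note this relies on
-- innodb_file_per_table and file-system hole punching support.
CREATE TABLE t1 (
  id BIGINT PRIMARY KEY,
  payload BLOB
) COMPRESSION="zlib";
```

With binlog transaction compression, the compressed payload stays compressed in transit and in the replica's relay log, which addresses much of the redundant compress/decompress work this report was about.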