Bug #53361 Binlog entry too big to be read by the mysql client
Submitted: 3 May 2010 8:01
Modified: 6 May 2010 7:55
Reporter: Kim Carlsen
Status: Not a Bug
Category: MySQL Server: Backup
Severity: S1 (Critical)
Version: 5.1.42
OS: Any
CPU Architecture: Any
Tags: binlog, row, backup, "packet too big"

[3 May 2010 8:01] Kim Carlsen
Description:
Hi

It seems it is possible for MySQL to write an entry to the binary log that is so large that it cannot be read back again.

When you try to apply the binary log to the database, you get: 'ERROR 1153 (08S01) at line XYZ: Got a packet bigger than 'max_allowed_packet' bytes'. I have now tried increasing max_allowed_packet to 1GB, which is the maximum, and it is still not enough to process all binlog entries.

I have a slave that has no problem streaming the binary logs, so it seems odd to me that you would need so much memory to restore from a backup, and that you still risk hitting entries that are too big.

A simple way to generate such an entry is to delete every row from a big table. In the binary log, this is translated into one delete per row, each with a WHERE clause describing every column in the table, and all of these single-row deletes need to fit in one packet (as I see it).

I have marked this bug as critical, as it renders backups based on the binary log invalid.
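
The resulting events can be inspected with mysqlbinlog's verbose mode (the log file name here is just an example), which prints each row event as commented pseudo-SQL, one DELETE per row:

mysqlbinlog -v mysql-bin.000001 | less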

How to repeat:
1) CREATE TABLE test (
  A int(11) primary key auto_increment,
  B varchar(100),
  C varchar(100),
  D varchar(100),
  E varchar(100),
  F varchar(100),
  G varchar(100),
  H varchar(100),
  I varchar(100),
  J varchar(100),
  K varchar(100)
);

2) INSERT INTO test VALUES (
  null,
  REPEAT('A',100), REPEAT('A',100), REPEAT('A',100), REPEAT('A',100), REPEAT('A',100),
  REPEAT('A',100), REPEAT('A',100), REPEAT('A',100), REPEAT('A',100), REPEAT('A',100));

3) INSERT INTO test SELECT null,B,C,D,E,F,G,H,I,J,K FROM test;

4) Repeat step 3 until you have a lot of rows in test; each run doubles the row count (see the loop sketched after these steps).

5) Enable ROW-format binary logging (see the config sketch after these steps).

6) DELETE FROM test;

7) mysqlbinlog mysql-binlog.0000001 | mysql -uroot 
ERROR 1153 (08S01) at line XYZ: Got a packet bigger than 'max_allowed_packet' bytes
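
For step 4, a shell loop along these lines doubles the table repeatedly (a sketch; it assumes the table lives in a database named test):

for i in $(seq 1 15); do
  mysql -uroot -e "INSERT INTO test SELECT null,B,C,D,E,F,G,H,I,J,K FROM test;" test
done

And for step 5, the my.cnf lines would be roughly the following (the log base name is an example; enabling log-bin requires a server restart):

[mysqld]
log-bin       = mysql-bin
binlog_format = ROW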

It is easier to reproduce this bug if you change max_allowed_packet to a smaller value, but it is trivial to see that the problem also exists at the largest possible value of max_allowed_packet.
[4 May 2010 8:29] Sveta Smirnova
Thank you for the report.

I cannot repeat the described behavior.

How do you set max_allowed_packet?
[4 May 2010 8:49] Kim Carlsen
You can set max_allowed_packet in my.cnf by adding the following to the [mysqld] section
max_allowed_packet=1K

You will need to add enough rows to the table test so that, when you delete all the rows, the resulting entry in the binary log is larger than max_allowed_packet.
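
For a quick test it can also be set at runtime (this needs the SUPER privilege, and only new connections pick up the value; 1024 bytes is the documented minimum):

SET GLOBAL max_allowed_packet = 1024;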
[5 May 2010 17:54] Sveta Smirnova
Thank you for the feedback.

But I cannot repeat the described behavior if max_allowed_packet is large enough. Also, max_allowed_packet=1K looks too small.

> You will need to add enough rows to the table test so that, when you delete all the rows,
> the resulting entry in the binary log is larger than max_allowed_packet.

In this case it is expected that the import fails. Where is the bug here? And how is it related to "I have now tried increasing max_allowed_packet to 1GB, which is the maximum, and it is still not enough to process all binlog entries"?
[5 May 2010 20:13] Kim Carlsen
The 1K limit is just to illustrate the problem. How many rows did you add to the table?

Once you can reproduce it with a 1K packet size, you can increase the packet size and the number of rows you delete, and after a certain number of rows you will get the error message again. Once you reach a packet size of 1GB, MySQL won't allow you to increase it any further, but it is still possible to generate larger entries in the binary log.
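
As a rough estimate (my numbers, not measured): each row in the test table above carries 10 x 100 = 1,000 bytes of column data, so its image in a ROW-format delete event is on the order of 1KB, and a single DELETE touching about a million such rows produces an event group of roughly 1GB, beyond any permissible max_allowed_packet.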
[6 May 2010 5:50] Sveta Smirnova
Thank you for the feedback.

This is not a bug in this case: max_allowed_packet is simply too small for the mysql command-line client.
[6 May 2010 6:30] Kim Carlsen
I'm not sure why you think it's 'not a bug'.

I have a database in a production environment where we use mysqldump to make a daily backup and the binary log to do point-in-time recovery.

I have now tried to restore the full dump and apply the binary log. What I discovered was that I was not able to use the binary log. I have increased max_allowed_packet to 1G and it still reports
"ERROR 1153 (08S01) at line XYZ: Got a packet bigger than 'max_allowed_packet' bytes"

This breaks point-in-time recovery, and you cannot increase max_allowed_packet to more than 1G.

I use the following commands for the full restore and the point-in-time restore:
> cat dumpfile | mysql -uroot
> mysqlbinlog binarylog.00000001 | mysql -uroot
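
Note that the mysql client enforces its own copy of this limit, so it has to be raised on both sides; something like the following raises the client side (a sketch, and as noted above it still caps out at 1G):
> mysqlbinlog binarylog.00000001 | mysql -uroot --max_allowed_packet=1G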
[6 May 2010 6:46] Sveta Smirnova
If the packet is more than 1G you will get this error, and this is expected.

I could not reproduce the problem with packets of less than 1G and a proper max_allowed_packet.

mysqlbinlog has a hard-coded 1G limit; there is bug #49932 about that.
[6 May 2010 7:55] Kim Carlsen
So what you are saying is: if you have a large enough dataset, then point-in-time recovery is broken by design, and you want it to stay like that?

I don't understand that, since it is no problem for a slave to read the binary log. It is a little cumbersome to set up a master/slave just to process the binary log (roughly the steps sketched at the end of this comment).

I would expect one of the following:
a) mysql writes smaller chunks to the binary log
b) mysql is able to 'stream' the binary log, the way a replicating slave does
c) mysqlbinlog divides the decoded binary log into smaller chunks so that it can be processed by mysql. When using -v with mysqlbinlog, you can see that it already decodes the base64-encoded data in smaller chunks, but it sends it all in one big 'packet'.
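
For the record, the master/slave route from (b) looks roughly like this on a throwaway slave (host, credentials, and log file/position are placeholders):

CHANGE MASTER TO
  MASTER_HOST='master.example.com',
  MASTER_USER='repl',
  MASTER_PASSWORD='secret',
  MASTER_LOG_FILE='mysql-bin.000001',
  MASTER_LOG_POS=4;
START SLAVE;

The slave's SQL thread then streams and applies the same row events, which matches the observation above that a slave copes with logs that mysqlbinlog | mysql cannot replay.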