Bug #53361 | Binlog entry too big to be read by the mysql client | |
---|---|---|---
Submitted: | 3 May 2010 8:01 | Modified: | 6 May 2010 7:55
Reporter: | Kim Carlsen | Email Updates: |
Status: | Not a Bug | Impact on me: |
Category: | MySQL Server: Backup | Severity: | S1 (Critical)
Version: | 5.1.42 | OS: | Any
Assigned to: | | CPU Architecture: | Any
Tags: | binlog row backup "packet too big" | |
[3 May 2010 8:01]
Kim Carlsen
[4 May 2010 8:29]
Sveta Smirnova
Thank you for the report. I cannot repeat the described behavior. How do you set max_allowed_packet?
[4 May 2010 8:49]
Kim Carlsen
You can set max_allowed_packet in my.cnf by adding the following to the [mysqld] section:

max_allowed_packet=1K

You will need to add enough rows to the table test so that, when you delete all the rows, the resulting entry in the binary log is larger than max_allowed_packet.
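A minimal repro sketch along these lines (illustrative only, not from the original report: the table name, row count, and binlog path are made up, and it assumes row-based binary logging on a scratch server):

mysql -uroot -e "SET GLOBAL max_allowed_packet=1024;"   # 1K; applies to new sessions
mysql -uroot test <<'SQL'
SET SESSION binlog_format='ROW';
CREATE TABLE t (pad CHAR(255)) ENGINE=InnoDB;
INSERT INTO t VALUES (REPEAT('x', 255));
-- double the table a few times so the deleted rows' before-images
-- add up to well over 1K in the binary log
INSERT INTO t SELECT * FROM t;
INSERT INTO t SELECT * FROM t;
INSERT INTO t SELECT * FROM t;
INSERT INTO t SELECT * FROM t;
DELETE FROM t;
SQL
mysqlbinlog /var/lib/mysql/binlog.000001 | mysql -uroot   # expect ERROR 1153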
[5 May 2010 17:54]
Sveta Smirnova
Thank you for the feedback. But I cannot repeat the described behavior if max_allowed_packet is large enough. Also, max_allowed_packet=1K looks too small.

> You will need to add enough rows to the table test, so that when you delete all the rows the binary entry in the binlog will be larger than max_allowed_packet.

In this case it is expected that the export fails. Where is the bug here? And how is it related to "I have now tried to increase the max allowed packet to 1GB which is maximum, and its still not enough to process all binlog entries."?
[5 May 2010 20:13]
Kim Carlsen
The 1K limit is just to illustrate the problem. How many rows did you add to the table? Once you can reproduce with a 1K packet size, you can increase the packet size and the number of rows you delete, and after a certain number of rows you will get the error message again. Once you reach a packet size of 1GB, mysql won't allow you to increase it further, but it is still possible to generate larger entries in the binary log.
[6 May 2010 5:50]
Sveta Smirnova
Thank you for the feedback. This is not a bug in this case: max_allowed_packet is simply too small for the mysql command-line client.
[6 May 2010 6:30]
Kim Carlsen
I'm not sure why you think it's 'not a bug'. I have a database in a production environment where we use mysqldump to make a daily backup and the binary log to do point-in-time recovery. I have now tried to restore the full dump and apply the binary log. What I discovered was that I wasn't able to use the binary log. I have increased max_allowed_packet to 1G and it still reports "ERROR 1153 (08S01) at line XYZ: Got a packet bigger than 'max_allowed_packet' bytes". This breaks point-in-time recovery, and you can't increase 'max_allowed_packet' to more than 1G. I use the following commands for full restore and point-in-time restore:

cat dumpfile | mysql -uroot
mysqlbinlog binarylog.00000001 | mysql -uroot
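For context, the point-in-time step often narrows the replay window with mysqlbinlog's standard --start-datetime/--stop-datetime options (the timestamp below is made up); the decoded stream is still piped through the client, so the same packet limit applies:

mysqlbinlog --stop-datetime="2010-05-01 12:00:00" binarylog.00000001 | mysql -uroot
# any single decoded BINLOG statement larger than max_allowed_packet
# (capped at 1G) still fails with ERROR 1153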
[6 May 2010 6:46]
Sveta Smirnova
If a packet is more than 1G you will get this error, and this is expected. I could not reproduce the problem with packets smaller than 1G and a proper max_allowed_packet. mysqlbinlog has a hard-coded 1G limit; there is bug #49932 about it.
[6 May 2010 7:55]
Kim Carlsen
So what you are saying is: if you have a large enough dataset, then point-in-time recovery is broken by design, and you want it to stay like that? I don't understand that, since it's no problem for a slave to read the binary log. It's a little cumbersome to set up a master/slave just to process the binary log. I would expect either:

a) mysql to write smaller chunks to the binary log,
b) mysql to be able to 'stream' the binary log, like a replicating slave does, or
c) mysqlbinlog to divide the decoded binary log into smaller chunks so it would be processable by mysql.

When using -v with mysqlbinlog, you can see that it already decodes the base64-encoded data in smaller chunks, but it sends it all in one big 'packet'.
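For reference, the master/slave workaround mentioned above looks roughly like this (a sketch under assumptions: the host, account, and credentials are placeholders, and the binlog files are assumed to be served by a scratch master). A slave applies row events over the replication protocol rather than as decoded BINLOG statements, so the client packet limit is not hit:

mysql -uroot <<'SQL'
CHANGE MASTER TO
  MASTER_HOST='scratch-master.example.com',
  MASTER_USER='repl',
  MASTER_PASSWORD='secret',
  MASTER_LOG_FILE='binarylog.00000001',
  MASTER_LOG_POS=4;
START SLAVE;
SQL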