Bug #15937 LOAD DATA binlog events are too big if read_buffer_size >= max_allowed_packet
Submitted: 22 Dec 2005 12:17 Modified: 21 Nov 2006 14:37
Reporter: martin koegler
Status: Duplicate
Category: MySQL Server: Replication
Severity: S3 (Non-critical)
Version: 4.0.27-BK, 5.0.19-BK, 4.1.14, 4.1.15
OS: Linux
Assigned to: Assigned Account
CPU Architecture: Any

[22 Dec 2005 12:17] martin koegler
Description:
The binary logging of LOAD DATA statements copies the file content to the binary log. The data is read using an io_cache, whose size is read_buffer_size. 

The data itself is copied by log_loaded_block, which takes the whole unprocessed cache content and turns it into a single binary log event.

So if the cache size is bigger than max_allowed_packet, mysqld can create events that cannot be read by mysqlbinlog or a replication slave.
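The relationship can be modeled in a few lines. This is an illustrative sketch, not the actual server code (block_event_len and event_readable are made-up names): the event length tracks how much was buffered in the cache, while the reader enforces its own max_allowed_packet.

```cpp
#include <cassert>
#include <cstddef>

// Hypothetical model of the reported behavior: log_loaded_block turns
// the whole unflushed cache content into one event, so the event
// length is bounded by read_buffer_size, not by max_allowed_packet.
std::size_t block_event_len(std::size_t cached_bytes) {
    // One event per cache flush: event length == bytes buffered.
    return cached_bytes;
}

// A reader (mysqlbinlog, the slave I/O thread) rejects any event
// larger than its own max_allowed_packet.
bool event_readable(std::size_t event_len, std::size_t max_allowed_packet) {
    return event_len <= max_allowed_packet;
}
```

Under this model, a cache flush of read_buffer_size bytes produces an event a slave with a smaller max_allowed_packet must reject, which is exactly the "event too big" symptom described above.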

How to repeat:
Set max_allowed_packet to its maximum (1GB) and read_buffer_size to 1500M, and enable binary logging.
Then create a load file /tmp/xx larger than 1.5GB, in my case consisting only of the line
"1234567";"223";"cxycxcxxxc"
repeated until the file is big enough.

CREATE TABLE `t` (
  `xx` decimal(12,0) default NULL,
  `yy` smallint(6) default NULL,
  `zz` char(60) )

load data infile '/tmp/xx' into table t;

Then examine the binary log file, which will show an "event too big" error message.

Suggested fix:
Verify that read_buffer_size is smaller than max_allowed_packet, or break up the data into smaller events in log_loaded_block.
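The second option can be sketched as follows. This is illustrative only (split_into_blocks is a made-up name, not the real log_loaded_block): cap each emitted block at max_allowed_packet and loop until the cached data is drained, instead of writing one event per cache flush.

```cpp
#include <algorithm>
#include <cassert>
#include <cstddef>
#include <vector>

// Sketch of the suggested fix: split the cached file content into
// events no larger than max_allowed_packet. Returns the sizes of the
// events that would be written to the binary log.
std::vector<std::size_t> split_into_blocks(std::size_t cached_bytes,
                                           std::size_t max_allowed_packet) {
    std::vector<std::size_t> event_sizes;
    std::size_t pos = 0;
    while (pos < cached_bytes) {
        // Emit at most max_allowed_packet bytes per event.
        std::size_t len = std::min(cached_bytes - pos, max_allowed_packet);
        event_sizes.push_back(len);  // stand-in for writing one block event
        pos += len;
    }
    return event_sizes;
}
```

With the small test settings used later in this report (read_buffer_size = 65736, max_allowed_packet = 10000), a full cache flush would become seven events, each small enough for a slave with the same limit to read.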
[22 Dec 2005 16:12] Aleksey Kishkin
Looks like a duplicate of http://bugs.mysql.com/bug.php?id=8215?
[22 Dec 2005 19:47] Martin Kögler
If #8215 is really a duplicate of this bug, then after applying the following patch, mysqlbinlog can read such a file:
--- mysqlbinlog.cc.orig 2005-12-22 20:27:46.459554624 +0100
+++ mysqlbinlog.cc      2005-12-22 20:28:34.231292216 +0100
@@ -964,6 +964,7 @@ static int dump_local_log_entries(const
   byte tmp_buff[BIN_LOG_HEADER_SIZE];
   bool old_format = 0;
   int error= 0;
+  max_allowed_packet=0x7FFFFFFF;

   last_db[0]= 0;

For my bug, the "corrupt" binlog files must be bigger than max_allowed_packet. For a 33 MB file, this seems quite unlikely to me (I have been using max_allowed_packet=1G for ages, so it may just seem small to me).
[27 Dec 2005 14:44] Valeriy Kravchuk
I tried to repeat the problem you described with artificially small values, on 4.1.17-BK on (32 bit) Linux:

[openxs@Fedora 4.1]$ bin/mysqld_safe --defaults-file=/home/openxs/dbs/4.1/my.cnf --log-bin &
[1] 2667
[openxs@Fedora 4.1]$ Starting mysqld daemon with databases from /home/openxs/dbs/4.1/var

[openxs@Fedora 4.1]$ bin/mysql -uroot test
Reading table information for completion of table and column names
You can turn off this feature to get a quicker startup with -A

Welcome to the MySQL monitor.  Commands end with ; or \g.
Your MySQL connection id is 1 to server version: 4.1.17-log

Type 'help;' or '\h' for help. Type '\c' to clear the buffer.

mysql> drop table t;
Query OK, 0 rows affected (0.02 sec)

mysql> CREATE TABLE `t` (
    ->   `xx` decimal(12,0) default NULL,
    ->   `yy` smallint(6) default NULL,
    ->   `zz` char(60) );
Query OK, 0 rows affected (0.01 sec)

mysql> exit
Bye
[openxs@Fedora 4.1]$ cd /tmp
[openxs@Fedora tmp]$ echo '"1234567","223","cxycxcxxxc"' > xx
[openxs@Fedora tmp]$ cat xx xx > yy; mv yy xx
[openxs@Fedora tmp]$ cat xx xx > yy; mv yy xx
[openxs@Fedora tmp]$ cat xx xx > yy; mv yy xx
[openxs@Fedora tmp]$ cat xx xx > yy; mv yy xx
[openxs@Fedora tmp]$ cat xx xx > yy; mv yy xx
[openxs@Fedora tmp]$ cat xx xx > yy; mv yy xx
[openxs@Fedora tmp]$ cat xx xx > yy; mv yy xx
[openxs@Fedora tmp]$ cat xx xx > yy; mv yy xx
[openxs@Fedora tmp]$ cat xx xx > yy; mv yy xx
[openxs@Fedora tmp]$ ls -l xx
-rw-rw-r--    1 openxs   openxs      14848 Dec 27 17:24 xx
[openxs@Fedora openxs]$ cd ~/dbs/4.1/
[openxs@Fedora 4.1]$ bin/mysql -uroot test
Reading table information for completion of table and column names
You can turn off this feature to get a quicker startup with -A

Welcome to the MySQL monitor.  Commands end with ; or \g.
Your MySQL connection id is 2 to server version: 4.1.17-log

Type 'help;' or '\h' for help. Type '\c' to clear the buffer.

mysql> load data infile '/tmp/xx' into table t;
Query OK, 512 rows affected, 1536 warnings (0.01 sec)
Records: 512  Deleted: 0  Skipped: 0  Warnings: 1536

mysql> load data infile '/tmp/xx' into table t fields terminated by ',';
Query OK, 512 rows affected, 1024 warnings (0.01 sec)
Records: 512  Deleted: 0  Skipped: 0  Warnings: 1024

mysql> exit
Bye
[openxs@Fedora 4.1]$ bin/mysqladmin -uroot shutdown
STOPPING server from pid file /home/openxs/dbs/4.1/var/Fedora.pid
051227 17:26:33  mysqld ended

[1]+  Done                    bin/mysqld_safe --defaults-file=/home/openxs/dbs/4.1/my.cnf --log-bin
[openxs@Fedora 4.1]$ bin/mysqlbinlog var/Fedora-bin.000001
/*!40019 SET @@session.max_insert_delayed_threads=0*/;
# at 4
#051227 17:23:00 server id 1  log_pos 4         Start: binlog v 3, server v 4.1.17-log created 051227 17:23:00 at startup
# at 79
#051227 17:23:13 server id 1  log_pos 79        Query   thread_id=1     exec_time=0     error_code=0
use test;
SET TIMESTAMP=1135696993;
drop table t;
# at 126
#051227 17:23:20 server id 1  log_pos 126       Query   thread_id=1     exec_time=0     error_code=0
SET TIMESTAMP=1135697000;
CREATE TABLE `t` (
  `xx` decimal(12,0) default NULL,
  `yy` smallint(6) default NULL,
  `zz` char(60) );
# at 265
#051227 17:24:50 server id 1  log_pos 265       Query   thread_id=2     exec_time=0
# LOAD DATA INFILE '/tmp/xx' INTO TABLE `t` FIELDS TERMINATED BY '\t' ENCLOSED BY '' ESCAPED BY '\\' LINES TERMINATED BY '\n' STARTING BY '' (xx,yy,zz);
# file_id: 1  block_len: 14848
# at 15190
#051227 17:24:50 server id 1  log_pos 15190
#Exec_load: file_id=1
#051227 17:24:50 server id 1  log_pos 265       Query   thread_id=2     exec_time=0
LOAD DATA LOCAL INFILE '/tmp/xx-1-0' INTO TABLE `t` FIELDS TERMINATED BY '\t' ENCLOSED BY '' ESCAPED BY '\\' LINES TERMINATED BY '\n' STARTING BY '' (xx,yy,zz);

# file_id: 1  block_len: 14848
# at 15213
#051227 17:26:13 server id 1  log_pos 15213     Query   thread_id=2     exec_time=0
# LOAD DATA INFILE '/tmp/xx' INTO TABLE `t` FIELDS TERMINATED BY ',' ENCLOSED BY '' ESCAPED BY '\\' LINES TERMINATED BY '\n' STARTING BY '' (xx,yy,zz);
# file_id: 2  block_len: 14848
# at 30138
#051227 17:26:13 server id 1  log_pos 30138
#Exec_load: file_id=2
#051227 17:26:13 server id 1  log_pos 15213     Query   thread_id=2     exec_time=0
LOAD DATA LOCAL INFILE '/tmp/xx-2-0' INTO TABLE `t` FIELDS TERMINATED BY ',' ENCLOSED BY '' ESCAPED BY '\\' LINES TERMINATED BY '\n' STARTING BY '' (xx,yy,zz);
# file_id: 2  block_len: 14848
[openxs@Fedora 4.1]$ cat my.cnf
[mysqld]
max-allowed-packet = 10000
read-buffer-size = 65736

So, I loaded a file that was larger than max_allowed_packet (block_len: 14848), although smaller than read_buffer_size, and I see no problems. Can you please send the relevant fragment of the mysqlbinlog output for the problem you described? Can you repeat it with smaller file sizes and parameter values?
[27 Dec 2005 15:43] martin koegler
mysqlbinlog can read it, because max_allowed_packet defaults to 1GB for this program. (I'm not sure whether it is possible to change it for mysqlbinlog without modifying the source.)

In mysqld, this limit is changeable. So if you send this log to a replication slave (with the same settings), you should get an error telling you to raise max_allowed_packet.

As long as the event is smaller than 1GB, this is correctable by following the instructions in the error log (raise max_allowed_packet).
Bigger events (>1GB) cannot be read at all (without changing MySQL), as the upper limit for max_allowed_packet is 1GB.

So only for read_buffer_size > max_allowed_packet >= 1GB is the LOAD DATA event (for a file bigger than 1GB) unreadable in every case.

The problem that a replication slave with the same configuration fails with a "packet too big" error in such a situation will affect other people. The "mysqlbinlog cannot read the log" case only occurs for such large buffers.
[3 Feb 2006 16:19] Valeriy Kravchuk
Yes, when I used the same parameters and test case with real replication, I got the following messages on the slave:

060131  9:35:59 [ERROR] Error reading packet from server: log event entry exceeded max_allowed_packet; Increase max_allowed_packet on master ( server_errno=1236)
060131  9:35:59 [ERROR] Got fatal error 1236: 'log event entry exceeded max_allowed_packet; Increase max_allowed_packet on master' from master when reading data from binary log
060131  9:35:59 [Note] Slave I/O thread exiting, read up to log 'mysql-bin.003', position 265

So, it is a problem that should be solved. Thank you for your patience.
[17 May 2006 23:56] Lars Thalmann
See also BUG#19402
[30 Aug 2006 18:34] Trudy Pelzer
Change from verified to won't fix was an error;
setting back to verified. Apologies for the confusion.
[9 Nov 2006 16:26] Andrei Elkin
The bug does not happen on 5.0 after applying the bug#19402 fix.
However, on 4.1 it remains, because the binlog contains an event of create_file type whose size is bounded below by the size of the file. max_allowed_packet is ignored at the time the event is written into the binlog (which might be a 4.1-specific bug in itself).

To fix this bug on 4.1, it would be possible to use the one_shot feature to set @@max_allowed_packet = `size of the create_file event` on the slave, valid only during execution of the create_file event.
[13 Nov 2006 18:06] Andrei Elkin
The bug#19402 fix should mend version 5.0. My tests of replicating
  LOAD DATA infile file_sizeof_gt_1GB
passed ok.

Fixed only in 5.0.
[20 Nov 2006 15:43] Lars Thalmann
This is fixed with the patch for BUG#19402.
There is no other patch for this particular bug.
[16 Aug 2007 19:32] Morgan Tocker
Bug #30435 seems related to this.