Bug #1100 LOAD DATA INFILE is badly filtered by binlog-*-db rules
Submitted: 19 Aug 2003 14:06    Modified: 20 Aug 2003 15:27
Reporter: Guilhem Bichot
Status: Closed
Category: MySQL Server: Replication    Severity: S3 (Non-critical)
Version: 4.0    OS: Any (all)
Assigned to: Guilhem Bichot

[19 Aug 2003 14:06] Guilhem Bichot
Description:
Here is an example involving only a master, no slave.
Start the master with binlog-do-db=test2 (which means updates in all other databases are ignored).
Then do
USE test2;
load data infile 'a_file' into table test.t;
This will be only HALF-logged (the manual says it should be 100% logged; that is true for queries, but not for LOAD DATA INFILE, which is the bug):
# at 4
#030819 22:00:34 server id 1  log_pos 4         Start: binlog v 3, server v 4.0.15-debug-log created 030819 22:00:34 at startup
# at 79
#030819 22:14:52 server id 1  log_pos 79
#Append_block: file_id: 2  block_len: 131072
<cut>
# at 1146106
#030819 22:14:52 server id 1  log_pos 1146106
#Exec_load: file_id=2

Append_block and Exec_load are logged, while Create_file is not!
Such problems may also appear on a slave started with binlog-*-db rules and log-slave-updates, even if on the master the LOAD DATA INFILE was run in the default database. We have to check this.
We also have to check whether replicate-*-db rules suffer from the same problem as binlog-*-db rules.

How to repeat:
See the description.

Suggested fix:
The reason in the code is that only Create_file has a 'db' member; Append_block and Exec_load do not, and instead pretend their db is the thread's db.
So Create_file is filtered out, but the other events are not.
All three events should of course use the same db as far as the binlog-*-db test is concerned. For consistency it should be the thread's db, because that is what Query events use. Picking the table's db would make LOAD DATA INFILE an exception to the rule "binlog-*-db rules rely on the current database (the one selected by USE) only".
But this would work only for a normal thread, not for the slave thread.
That is, if on the master you had USE test2; LOAD ... INTO TABLE test.t;
then in the master's binlog the "test2" information is LOST. Only the "test" information remains (that is what is stored in the 'db' member of Create_file). So the slave will think the current database was "test", leading to different filtering on the master and on the slave, even if both servers have similar rules: binlog-do-db=test2 on the master, replicate-do-db=test2 on the slave. The LOAD would be logged on the master and skipped on the slave... not consistent either.
So the best way is probably to make LOAD DATA an exception to "binlog-*-db rules rely on the current database (the one selected by USE) only" and to document it.
In other words, use the table's db as the db of Append_block, Exec_load and Delete_file. This can be achieved by testing the rules once in mysql_load(), not in every event's MYSQL_LOG::write() (in the latter we do not know what to do, because for an Append_block event, for example, the db is unknown).
A decision will be taken after we verify the other issues (on the slave).
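The master-side inconsistency and the suggested fix can be illustrated with a minimal Python simulation. All names here are hypothetical (the real logic is C++ inside mysql_load() and MYSQL_LOG::write()); the sketch only models which db each event presents to the binlog-do-db test.

```python
# Hypothetical simulation of binlog-do-db filtering for LOAD DATA INFILE.
# Buggy behavior: only Create_file carries the table's db; Append_block and
# Exec_load fall back to the thread's db, so the three events are filtered
# inconsistently.
def binlog_do_db_allows(db, do_db):
    return db == do_db

def events_logged_buggy(thread_db, table_db, do_db):
    """Per-event test (as in MYSQL_LOG::write()): dbs disagree across events."""
    logged = []
    for name, db in [("Create_file", table_db),    # only this event stores a db
                     ("Append_block", thread_db),  # pretends db is thread's db
                     ("Exec_load", thread_db)]:    # same
        if binlog_do_db_allows(db, do_db):
            logged.append(name)
    return logged

def events_logged_fixed(thread_db, table_db, do_db):
    """Suggested fix: test the rule once (in mysql_load()) using the table's
    db, and apply the one decision to all three events."""
    if binlog_do_db_allows(table_db, do_db):
        return ["Create_file", "Append_block", "Exec_load"]
    return []

# USE test2; LOAD DATA INFILE ... INTO TABLE test.t;  with binlog-do-db=test2
print(events_logged_buggy("test2", "test", "test2"))  # ['Append_block', 'Exec_load']
print(events_logged_fixed("test2", "test", "test2"))  # []
```

With the fix, the LOAD is either logged whole or skipped whole; the half-logged state from the description (orphaned Append_block/Exec_load with no Create_file) can no longer occur.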
[20 Aug 2003 7:22] Guilhem Bichot
1) No problem with replicate-*-db rules: they are tested in Load_log_event::exec_event(), that is, only once (not once for Create_file, once for Append_block, etc.). The problems we are discussing here arise from the fact that several related events are not filtered the same way.

2) Yes, problems (as reported by the customer) can appear on a slave running with --binlog-do-db and --log-slave-updates. Example: my slave has binlog-do-db=test2; on the master I do
use test;
load data infile ... into table u;

then:

# before LOAD is done on the master

SLAVE> select count(*) from test.u;
+----------+
| count(*) |
+----------+
|   200000 |
+----------+
1 row in set (0.01 sec)

# after LOAD is done on the master

SLAVE> select count(*) from test.u;
+----------+
| count(*) |
+----------+
|   300000 |
+----------+
1 row in set (0.00 sec)

# so it has been replicated, ok

SLAVE> show binlog events in 'gbichot2-bin.002'; # the first binlog
+------------------+---------+--------------+-----------+--------------+------------------------------+
| Log_name         | Pos     | Event_type   | Server_id | Orig_log_pos | Info                         |
+------------------+---------+--------------+-----------+--------------+------------------------------+
| gbichot2-bin.002 | 4       | Append_block | 1         | 7794487      | ;file_id=12;block_len=131072 |
<cut>
| gbichot2-bin.002 | 1146031 | Exec_load    | 1         | 8940514      | ;file_id=12                  |
+------------------+---------+--------------+-----------+--------------+------------------------------+

# The bug: there is no Create_file event. This is wrong, and it will cause any slave of this slave to fail when it attempts to execute the first Append_block. The problem is that the Create_file event has db=test, so it is not logged by the slave; but Append_block and Exec_load do not carry their own db, so no db is available for the rule test (the 'thd' member of the event is null, as usual in replication, so there is no 'thd->db' available). When there is no db, the rules are not tested at all. That is why Append_block and Exec_load are logged.
Example of confusion of a slave of this slave:
030820 16:09:02  Slave: Error in Append_block event: could not open file '/tmp/SQL_LOAD-1-48-12.data', Error_code: 2
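The slave-side behavior above can be sketched the same way (Python, hypothetical names): events that carry no db bypass the --binlog-do-db test entirely, so the slave's binlog ends up with orphaned Append_block/Exec_load events and no Create_file.

```python
# Hypothetical simulation of a slave running --binlog-do-db=test2 with
# --log-slave-updates: when an event carries no db (thd is NULL in the
# slave thread), the rules are not tested and the event is logged.
def slave_logs_event(event_db, do_db):
    if event_db is None:        # no db known: rules skipped
        return True             # -> event written to the slave's binlog
    return event_db == do_db    # only Create_file actually gets filtered

events = [("Create_file", "test"),   # only this event stores a db
          ("Append_block", None),    # no 'db' member, thd->db unavailable
          ("Exec_load", None)]       # same

logged = [name for name, db in events if slave_logs_event(db, "test2")]
print(logged)  # ['Append_block', 'Exec_load'] -- orphaned, no Create_file
```

A downstream slave reading this binlog hits the first Append_block with no file created for file_id=12, producing exactly the "could not open file" error shown above.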
[20 Aug 2003 8:24] Guilhem Bichot
Making myself clearer: what was wrong in the example with a slave running --log-slave-updates is not that Create_file was not logged; it is that Append_block and Exec_load (or Delete_file) WERE logged. THAT is wrong. The fact that Create_file was not logged is good in itself, but its absence causes the error message when Append_block is executed (and that is Append_block's fault: it should not be there).
[20 Aug 2003 9:31] Guilhem Bichot
This is not one bug but two, and they are completely unrelated.
The first bug (on the master) has already been explained: inside mysql_load(), Append_block uses the thread's db, which differs from the table's db, when the rules are tested.
The second bug (on the slave with --log-slave-updates) has nothing to do with the first: in the slave thread, mysql_load() never does any logging; logging is done by Append_block_log_event::exec_event() (that Append_block has been read from the relay log and has thd==0, unlike in mysql_load(), where Append_block's thd is not null).
This is done on purpose (for failsafe replication), but it makes the bug unsolvable as-is, because only Create_file_log_event contains the db; an Append_block, which can appear much later in the relay log with other events interleaved, does not know the db.
So I have to change this: Create_file_log_event::exec_event(), Append_block_log_event::exec_event(), etc. should not write to the binlog (that is already the case for Query_log_event::exec_event(): the write is done in mysql_parse()); the write should be done in mysql_load(), called by Load_log_event::exec_event(), itself called by Execute_load_log_event::exec_event().
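The restructuring proposed above can be sketched as a small Python model (all function names hypothetical; the real code paths are the C++ exec_event() methods and mysql_load()): the per-event handlers only stage file blocks, and the single write-or-skip decision is taken at Exec_load time, when the db from Create_file is known.

```python
# Hypothetical sketch of the proposed fix on the slave: exec_event() handlers
# never write to the binlog themselves; the one filtering decision happens
# when the load is executed (mysql_load(), reached from
# Execute_load_log_event::exec_event()), where the db is finally known.
staged = {}   # file_id -> info accumulated from the relay log
binlog = []   # events the slave writes to its own binlog

def create_file_exec_event(file_id, db, block):
    staged[file_id] = {"db": db, "blocks": [block]}   # stage only, no write

def append_block_exec_event(file_id, block):
    staged[file_id]["blocks"].append(block)           # stage only, no write

def execute_load_exec_event(file_id, do_db):
    info = staged.pop(file_id)
    # single rule test, with the db carried by Create_file
    if info["db"] == do_db:
        binlog.extend(["Create_file", "Append_block", "Exec_load"])

# Relay-log replay of the example: load into db 'test', slave do-db='test2'
create_file_exec_event(12, "test", b"...")
append_block_exec_event(12, b"...")
execute_load_exec_event(12, "test2")
print(binlog)  # [] -- all three events filtered out consistently
```

Either all three events reach the slave's binlog or none do, so a slave of this slave can never see an Append_block without its Create_file.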
[20 Aug 2003 15:27] Guilhem Bichot
Thank you for your bug report. This issue has been committed to our
source repository of that product and will be incorporated into the
next release.

If necessary, you can access the source repository and build the latest
available version, including the bugfix, yourself. More information 
about accessing the source trees is available at
    http://www.mysql.com/doc/en/Installing_source_tree.html

ChangeSet@1.1581, 2003-08-21 00:23:39+02:00, guilhem@mysql.com