Bug #99951  MySQL master slave replication
Submitted: 22 Jun 2020 15:37          Modified: 26 Jun 2020 16:25
Reporter: DBAJJ J                     Status: Not a Bug
Category: MySQL Server                Severity: S2 (Serious)
Version: 5.6                          OS: Any
Assigned to: MySQL Verification Team  CPU Architecture: Any
Tags: MySQL, replication

[22 Jun 2020 15:37] DBAJJ J
Description:
I have set up master-slave replication, with a few tables being replicated from the master to the slave.

Say I have databases db1, db2 and db3 on the master, and only db1 on the slave.

In db1, the tables table1, table2 and table3 are replicated from the master to the slave using the replicate-do-table option on the slave. The slave configuration looks like:

replicate-do-table = db1.table1
replicate-do-table = db1.table2
replicate-do-table = db1.table3

Now I have dropped db3 on the master and got the following error on the slave:

Last_Errno: 1008
Last_Error: Error 'Can't drop database 'db3'; database doesn't exist'

The point is that db3 is neither replicated to nor even created on the slave, yet the slave still reports this error.
 

How to repeat:
Say I have databases db1, db2 and db3 on the master, and only db1 on the slave.

In db1, the tables table1, table2 and table3 are replicated from the master to the slave using the replicate-do-table option on the slave. The slave configuration looks like:

replicate-do-table = db1.table1
replicate-do-table = db1.table2
replicate-do-table = db1.table3

Now drop db3 on the master; the slave stops with the error shown above.
[22 Jun 2020 18:11] MySQL Verification Team
Hi,

MySQL replication filters mostly look at the "default" database, not the "targeted" database, so if you have

USE X;
ALTER TABLE Y.T ...;
DROP DATABASE Y;
...

then X is what the filters look at, not Y.

Please look at the details here:
https://dev.mysql.com/doc/refman/5.7/en/replication-rules.html
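
A minimal sketch of that rule (X, Y and T are only placeholders here; replicate-do-db = X is assumed on the slave, and the binlog format is statement or mixed):

USE X;
UPDATE Y.T SET c = 1;   -- default database is X: the filter matches and the statement is applied (even though it touches Y.T)

USE Y;
UPDATE Y.T SET c = 1;   -- default database is Y: the filter does not match and the statement is skipped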

As you described the bug I am not able to reproduce it. If, after reading the replication-rules document linked above, you still think this is a bug, please create a reproducible test case (namely, provide the my.cnf for master and slave, a script to create the initial data, and a script that exposes the bug).

Thanks
Bogdan
[23 Jun 2020 5:45] DBAJJ J
Master my.cnf

[mysqld]

innodb_buffer_pool_size = 19000M
skip-name-resolve

innodb-flush-method            = O_DIRECT
innodb-log-files-in-group      = 3
innodb-log-file-size           = 1024M
innodb-flush-log-at-trx-commit = 1
innodb-file-per-table          = 1

 
tmp-table-size                  = 64M
max-heap-table-size             = 64M
max-connections                 = 2500
thread-cache-size               = 500
open-files-limit                = 65535
table-definition-cache          = 4096
#table-open-cache        	= 8000
table_open_cache               = 8000
net_buffer_length 		= 64K
read_buffer_size 		= 1M
myisam_sort_buffer_size 	= 8M
query_cache_size 		= 512M
query_cache_limit 		= 512M
query_cache_type		= 1

log_bin				= MASTER-SERVER-bin
sync-binlog     		= 1
expire_logs_days 		= 31

connect_timeout=180
binlog_format=mixed
innodb_log_buffer_size=2048M

port = 3306
server_id = 1

key_buffer_size = 16M
join_buffer_size = 2G
sort_buffer_size = 2M
read_rnd_buffer_size = 2M 

#sql_mode=NO_ENGINE_SUBSTITUTION,STRICT_TRANS_TABLES 

# Timestamp default value
explicit_defaults_for_timestamp = true

# Slow Query Log
slow_query_log = 1
slow_query_log_file = mysql_slow.log

#Slow Query Cutoff time
long_query_time = 1

# Maximum size of packet to allow
max_allowed_packet 	= 64M
max-connect-errors      = 1000000

# Wait time out to automatically close inactive connections
wait_timeout = 180
interactive_timeout = 180

# To specify maximum length while group concatting
group_concat_max_len = 1000000
log_bin_trust_function_creators = 1

*************************************************************************
SLAVE my.cnf

[mysqld]

 innodb_buffer_pool_size = 8000M
 skip-name-resolve

# Flushing data to disk
innodb-flush-method            = O_DIRECT
innodb-log-files-in-group      = 2
innodb-log-file-size           = 128M
innodb-flush-log-at-trx-commit = 1
innodb-file-per-table          = 1

connect_timeout=180
binlog_format=mixed
innodb_log_buffer_size=2048M

# CACHES AND LIMITS #
tmp-table-size                  = 32M
max-heap-table-size             = 32M
max-connections                 = 1200
thread-cache-size               = 500
open-files-limit                = 65535
table-definition-cache          = 4096
table-open-cache                = 8000
net_buffer_length               = 64K
read_buffer_size                = 1M
myisam_sort_buffer_size         = 8M
query_cache_size                = 128M
query_cache_limit               = 128M
query_cache_type                = 1

log_bin                         = SLAVE-SERVER-bin
sync-binlog                     = 1
expire_logs_days                = 31

replicate-do-table              =       db1.table1
replicate-do-table              =       db1.table2
replicate-do-table              =       db1.table3

log-slave-updates               =       ON

port = 3306
server_id = 110

key_buffer_size = 2000M
join_buffer_size = 2000M
sort_buffer_size = 2M
read_rnd_buffer_size = 2M

#sql_mode=NO_ENGINE_SUBSTITUTION,STRICT_TRANS_TABLES

# Timestamp default value
explicit_defaults_for_timestamp = true

# Slow Query Log
slow_query_log = 1
slow_query_log_file = mysql_slow.log

#Slow Query Cutoff time
long_query_time = 0.5

# Maximum size of packet to allow
max_allowed_packet      = 16M
max-connect-errors      = 1000000

# To specify maximum length while group concatting
group_concat_max_len = 1000000
log_bin_trust_function_creators = 1

*****************************************************************

Scripts to reproduce

MASTER

create database db1;
create database db2;
create database db3;

use db1;

create table table1 (
id int auto_increment primary key,
vname varchar(100) not null,
addr varchar(200) not null
);

create table table2 (
id int auto_increment primary key,
vname varchar(100) not null,
addr varchar(200) not null
);
create table table3 (
id int auto_increment primary key,
vname varchar(100) not null,
addr varchar(200) not null
);
create table table4 (
id int auto_increment primary key,
vname varchar(100) not null,
addr varchar(200) not null
);
create table table5 (
id int auto_increment primary key,
vname varchar(100) not null,
addr varchar(200) not null
);

Take a dump of table1, table2 and table3 from the master (see the sketch below).
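
A sketch of this step, assuming mysqldump is used; host, user and the output file name are placeholders:

mysqldump -h master-host -u root -p \
  --single-transaction --master-data=2 \
  db1 table1 table2 table3 > db1_tables.sql

The --master-data=2 option writes the master binlog file and position as a comment into the dump, which is useful when pointing the slave at the master later.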

Slave:

create database db1;

Restore the dump.
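
A sketch, reusing the placeholder dump file name from above:

mysql -u root -p db1 < db1_tables.sql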

Enable master-slave replication and start the slave (a sketch of this step follows).
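
A sketch of this step; host, user, password and the binlog coordinates are placeholders and would normally be taken from the --master-data comment in the dump:

CHANGE MASTER TO
  MASTER_HOST = 'master-host',
  MASTER_USER = 'repl',
  MASTER_PASSWORD = '***',
  MASTER_LOG_FILE = 'MASTER-SERVER-bin.000001',
  MASTER_LOG_POS = 4;
START SLAVE;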

Now all updates on the master are being replicated to the slave.

And then on the MASTER:

Drop database db3;

Resultset: DROP DATABASE `db3` 1 row(s) affected	0.343 sec

Now on the slave (from SHOW SLAVE STATUS\G):

Slave_IO_Running: Yes
Slave_SQL_Running: No
Last_Errno: 1008
Last_Error: Error 'Can't drop database 'db3'; database doesn't exist' on query. Default database
[26 Jun 2020 15:36] MySQL Verification Team
Hi,

This is expected behavior, not a bug.

If you look at
https://dev.mysql.com/doc/refman/5.6/en/replication-options-slave.html#option_mysqld_repli...
or 

https://dev.mysql.com/doc/refman/5.7/en/replication-options-slave.html#option_mysqld_repli...

[quote]
This option affects only statements that apply to tables. It does not affect statements that apply only to other database objects, such as stored routines. To filter statements operating on stored routines, use one or more of the --replicate-*-db options. 
[/quote]

So DROP DATABASE is not affected by the replicate-*-table filters, as described in the manual.

kind regards
Bogdan
[26 Jun 2020 16:25] DBAJJ J
So does that mean that if I enable the replicate-do-table option for a few tables in a database and then drop some other database on the master, replication will break?
[26 Jun 2020 16:28] MySQL Verification Team
Hi,

> So does that mean that if I enable the replicate-do-table option for a few tables in a database and then drop some other database on the master, replication will break?

Yes. 

The replicate-*-table filters affect only those tables; they do not affect database-level statements, so a DROP DATABASE is not filtered out, a DROP/CREATE PROCEDURE is not filtered out, and so on.

You need to use the replicate-*-db filters, but make sure you understand how they work (they filter on the "default" database, not on the "affected" database).
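
For example, a possible slave configuration for this case (a sketch only, keeping your replicate-do-table lines and adding a db filter; note the default-database caveat above):

replicate-do-db                 =       db1
replicate-do-table              =       db1.table1
replicate-do-table              =       db1.table2
replicate-do-table              =       db1.table3

With statement-based or mixed format, a DROP DATABASE db3 issued with db3 (or no database) as the default then no longer matches the db1 rule and is skipped by the slave instead of breaking replication; on the other hand, a statement issued under a different default database (for example USE db2; INSERT INTO db1.table1 ...) would also be skipped, which is exactly why you need to understand the default-database rule first.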

Kind regards
Bogdan