Bug #80180 The slave stops and reports the following error (error_no 1756)
Submitted: 28 Jan 2016 7:01   Modified: 1 Feb 2016 6:41
Reporter: feihong huang
Status: Duplicate
Category: MySQL Server: Replication   Severity: S3 (Non-critical)
Version: 5.7.9, 5.7.10   OS: Red Hat (7.1)
Assigned to:   CPU Architecture: Any
Tags: replication, 1756

[28 Jan 2016 7:01] feihong huang
Description:
The slave stops and reports the following error (error_no 1756). Please help me solve the problem, thank you!

The slave coordinator and worker threads are stopped, possibly leaving data in inconsistent state. A restart should restore consistency automatically, although using non-transactional storage for data or info tables or DDL queries could lead to problems. In such cases you have to examine your data (see documentation for details).
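
One way to see which applier worker actually hit error 1756, and to resume after the restart the message suggests, is sketched below (assumes MySQL 5.7 with the table-based repositories and relay_log_recovery=1 shown in the configuration):

-- Which worker stopped, and with what error (5.7 performance_schema view):
SELECT WORKER_ID, LAST_ERROR_NUMBER, LAST_ERROR_MESSAGE
FROM performance_schema.replication_applier_status_by_worker;

-- Coordinator-level view, including Last_SQL_Errno (1756 here):
SHOW SLAVE STATUS\G

-- After restarting the slave server (relay_log_recovery=1 rebuilds the
-- relay-log position), restart the applier threads:
START SLAVE;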

How to repeat:
master.cnf
#replication
server-id=1
slave-net-timeout=20
plugin_load = "rpl_semi_sync_master=semisync_master.so;rpl_semi_sync_slave=semisync_slave.so"
rpl_semi_sync_master_enabled=1
rpl_semi_sync_master_wait_no_slave=1
rpl_semi_sync_master_timeout=3000
log_slave_updates=1
binlog-ignore-db=performance_schema
binlog-ignore-db=information_schema
binlog-ignore-db=sys
master-info-repository = TABLE
relay-log-info-repository = TABLE
sync_master_info = 1
sync_relay_log = 1
sync_relay_log_info = 1
binlog-row-image = minimal
slave-parallel-type=LOGICAL_CLOCK
slave-parallel-workers=16
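
With this configuration, the semisynchronous plugin and the LOGICAL_CLOCK applier settings can be verified on the master roughly as follows (a sketch; the Rpl_semi_sync_* names are those exposed by the standard rpl_semi_sync_master plugin):

-- Is semisync active on the master, and how many semisync slaves are connected?
SHOW STATUS LIKE 'Rpl_semi_sync_master_status';
SHOW STATUS LIKE 'Rpl_semi_sync_master_clients';

-- Effective semisync and parallel-replication settings:
SHOW VARIABLES LIKE 'rpl_semi_sync_master%';
SHOW VARIABLES LIKE 'slave_parallel%';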

slave.cnf
#replication
server-id=2
slave-net-timeout=20
plugin_load = "rpl_semi_sync_master=semisync_master.so;rpl_semi_sync_slave=semisync_slave.so"
rpl_semi_sync_slave_enabled=1
relay_log_recovery=1
master-info-repository = TABLE
relay-log-info-repository = TABLE
sync_master_info = 1
sync_relay_log = 1
sync_relay_log_info = 1
slave-parallel-type=LOGICAL_CLOCK
slave-parallel-workers=16
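
The corresponding checks on the slave might look like this (a sketch; mysql.slave_relay_log_info is the table used when relay-log-info-repository=TABLE):

-- Is the semisync slave plugin enabled?
SHOW STATUS LIKE 'Rpl_semi_sync_slave_status';
SHOW VARIABLES LIKE 'rpl_semi_sync_slave%';

-- MTS settings actually in effect:
SHOW VARIABLES LIKE 'slave_parallel%';

-- Applied relay-log position as stored in the table repository:
SELECT * FROM mysql.slave_relay_log_info\G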
[30 Jan 2016 9:51] feihong huang
Because the binlog was not written during the prepare phase, from the master's point of view this distributed transaction was ultimately committed, but the transaction's operations were never written to the binlog. It therefore could not be replicated to the slave, which leads to data inconsistency between master and slave.
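
One way to check whether such a transaction ever reached the binary log is sketched below ('mysql-bin.000003' is only an example file name; take the current one from SHOW MASTER STATUS):

-- On the master: any XA transactions still in PREPARED state?
XA RECOVER;

-- Was the transaction written to the binary log at all?
SHOW MASTER STATUS;
SHOW BINLOG EVENTS IN 'mysql-bin.000003' LIMIT 100;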
[1 Feb 2016 6:41] MySQL Verification Team
Hello feihong,

Thank you for the report.
This is most likely a duplicate of Bug #77239; please see Bug #77239.

Thanks,
Umesh