Bug #84796  GR Member status is wrong
Submitted: 2 Feb 2017 14:13   Modified: 4 Mar 2020 10:45
Reporter: Kenny Gryp          Status: Closed
Category: MySQL Server: Group Replication   Severity: S3 (Non-critical)
Version: 8.0.4                OS: Any
Assigned to: Alfranio Tavares Correia Junior   CPU Architecture: Any

[2 Feb 2017 14:13] Kenny Gryp
Description:
When I network-partition a node, it remains ONLINE; nothing indicates there's a problem.

How to repeat:
mysql> select * from replication_group_members;
+---------------------------+--------------------------------------+-------------+-------------+--------------+
| CHANNEL_NAME              | MEMBER_ID                            | MEMBER_HOST | MEMBER_PORT | MEMBER_STATE |
+---------------------------+--------------------------------------+-------------+-------------+--------------+
| group_replication_applier | 72149827-e1cc-11e6-9daf-08002789cd2e | gr-1        |        3306 | ONLINE       |
| group_replication_applier | 74dc6ab2-e1cc-11e6-92aa-08002789cd2e | gr-3        |        3306 | ONLINE       |
+---------------------------+--------------------------------------+-------------+-------------+--------------+
2 rows in set (0.00 sec)

[vagrant@gr-3 ~]$ sudo ifconfig eth1 down

5 minutes later:

mysql> select * from performance_schema.replication_group_members;
+---------------------------+--------------------------------------+-------------+-------------+--------------+
| CHANNEL_NAME              | MEMBER_ID                            | MEMBER_HOST | MEMBER_PORT | MEMBER_STATE |
+---------------------------+--------------------------------------+-------------+-------------+--------------+
| group_replication_applier | 72149827-e1cc-11e6-9daf-08002789cd2e | gr-1        |        3306 | ONLINE       |
| group_replication_applier | 74dc6ab2-e1cc-11e6-92aa-08002789cd2e | gr-3        |        3306 | ONLINE       |
+---------------------------+--------------------------------------+-------------+-------------+--------------+
2 rows in set (0.00 sec)

mysql> select * from performance_schema.replication_group_member_stats\G
*************************** 1. row ***************************
                      CHANNEL_NAME: group_replication_applier
                           VIEW_ID: 14859996142057166:10
                         MEMBER_ID: 74dc6ab2-e1cc-11e6-92aa-08002789cd2e
       COUNT_TRANSACTIONS_IN_QUEUE: 1
        COUNT_TRANSACTIONS_CHECKED: 0
          COUNT_CONFLICTS_DETECTED: 0
COUNT_TRANSACTIONS_ROWS_VALIDATING: 0
TRANSACTIONS_COMMITTED_ALL_MEMBERS: 72149827-e1cc-11e6-9daf-08002789cd2e:1,
740e1fd2-e1cc-11e6-a8ec-08002789cd2e:1-2,
74dc6ab2-e1cc-11e6-92aa-08002789cd2e:1-2,
da7aba5e-dead-da7a-ba55-da7aba5e57ab:1-438:1000041-1000503:2000041-2000648
    LAST_CONFLICT_FREE_TRANSACTION: 
1 row in set (0.00 sec)

mysql> show global status like '%group_rep%';
+----------------------------------+--------------------------------------+
| Variable_name                    | Value                                |
+----------------------------------+--------------------------------------+
| Com_group_replication_start      | 5                                    |
| Com_group_replication_stop       | 4                                    |
| group_replication_primary_member | 72149827-e1cc-11e6-9daf-08002789cd2e |
+----------------------------------+--------------------------------------+
3 rows in set (0.00 sec)

The node still looks OK, but clearly it is not!

(nothing in the error log)

Suggested fix:
The GR member should detect that the node is no longer part of the primary partition, and/or there should be an easy way to see that the node is not in the primary partition!
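
A minimal sketch of the kind of check being asked for, assuming mysql client access to the node; note that it relies on the member view being trustworthy, which is exactly what this bug undermines, so it should be cross-checked from more than one member:

```bash
# Hedged sketch: ask a node whether it still sees a majority of the
# group's members as ONLINE. Hostname and credentials are placeholders.
mysql -h gr-3 -N -e "
  SELECT IF(SUM(MEMBER_STATE = 'ONLINE') > COUNT(*) / 2,
            'HAS_QUORUM', 'POSSIBLY_PARTITIONED')
  FROM performance_schema.replication_group_members;"
```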
[2 Feb 2017 14:23] Kenny Gryp
Reproducing again:

gr-3> select * from replication_group_members;
+---------------------------+--------------------------------------+-------------+-------------+--------------+
| CHANNEL_NAME              | MEMBER_ID                            | MEMBER_HOST | MEMBER_PORT | MEMBER_STATE |
+---------------------------+--------------------------------------+-------------+-------------+--------------+
| group_replication_applier | 72149827-e1cc-11e6-9daf-08002789cd2e | gr-1        |        3306 | ONLINE       |
| group_replication_applier | 74dc6ab2-e1cc-11e6-92aa-08002789cd2e | gr-3        |        3306 | ONLINE       |
+---------------------------+--------------------------------------+-------------+-------------+--------------+
2 rows in set (0.00 sec)

gr-3> select * from replication_group_members_stats\G
ERROR 1146 (42S02): Table 'performance_schema.replication_group_members_stats' doesn't exist
gr-3> select * from replication_group_member_stats\G
*************************** 1. row ***************************
                      CHANNEL_NAME: group_replication_applier
                           VIEW_ID: 14860449946972589:2
                         MEMBER_ID: 74dc6ab2-e1cc-11e6-92aa-08002789cd2e
       COUNT_TRANSACTIONS_IN_QUEUE: 0
        COUNT_TRANSACTIONS_CHECKED: 0
          COUNT_CONFLICTS_DETECTED: 0
COUNT_TRANSACTIONS_ROWS_VALIDATING: 0
TRANSACTIONS_COMMITTED_ALL_MEMBERS: 
    LAST_CONFLICT_FREE_TRANSACTION: 
1 row in set (0.00 sec)

[vagrant@gr-3 ~]$ sudo ifconfig eth1 down

gr-3> select * from replication_group_members;
+---------------------------+--------------------------------------+-------------+-------------+--------------+
| CHANNEL_NAME              | MEMBER_ID                            | MEMBER_HOST | MEMBER_PORT | MEMBER_STATE |
+---------------------------+--------------------------------------+-------------+-------------+--------------+
| group_replication_applier | 72149827-e1cc-11e6-9daf-08002789cd2e | gr-1        |        3306 | UNREACHABLE  |
| group_replication_applier | 74dc6ab2-e1cc-11e6-92aa-08002789cd2e | gr-3        |        3306 | ONLINE       |
+---------------------------+--------------------------------------+-------------+-------------+--------------+
2 rows in set (0.00 sec)

gr-1 is now marked UNREACHABLE.

gr-3> select * from replication_group_member_stats\G
*************************** 1. row ***************************
                      CHANNEL_NAME: group_replication_applier
                           VIEW_ID: 14860449946972589:2
                         MEMBER_ID: 74dc6ab2-e1cc-11e6-92aa-08002789cd2e
       COUNT_TRANSACTIONS_IN_QUEUE: 0
        COUNT_TRANSACTIONS_CHECKED: 0
          COUNT_CONFLICTS_DETECTED: 0
COUNT_TRANSACTIONS_ROWS_VALIDATING: 0
TRANSACTIONS_COMMITTED_ALL_MEMBERS: 72149827-e1cc-11e6-9daf-08002789cd2e:1,
740e1fd2-e1cc-11e6-a8ec-08002789cd2e:1-2,
74dc6ab2-e1cc-11e6-92aa-08002789cd2e:1-2,
da7aba5e-dead-da7a-ba55-da7aba5e57ab:1-440:1000041-1000503:2000041-2000648
    LAST_CONFLICT_FREE_TRANSACTION: 
1 row in set (0.00 sec)

The VIEW_ID remains the same.

[vagrant@gr-3 ~]$ sudo ifconfig eth1 up  

After waiting a while:

gr-3> select * from replication_group_members;
+---------------------------+--------------------------------------+-------------+-------------+--------------+
| CHANNEL_NAME              | MEMBER_ID                            | MEMBER_HOST | MEMBER_PORT | MEMBER_STATE |
+---------------------------+--------------------------------------+-------------+-------------+--------------+
| group_replication_applier | 72149827-e1cc-11e6-9daf-08002789cd2e | gr-1        |        3306 | ONLINE       |
| group_replication_applier | 74dc6ab2-e1cc-11e6-92aa-08002789cd2e | gr-3        |        3306 | ONLINE       |
+---------------------------+--------------------------------------+-------------+-------------+--------------+
2 rows in set (0.00 sec)

gr-3> select * from replication_group_member_stats\G
*************************** 1. row ***************************
                      CHANNEL_NAME: group_replication_applier
                           VIEW_ID: 14860449946972589:2
                         MEMBER_ID: 74dc6ab2-e1cc-11e6-92aa-08002789cd2e
       COUNT_TRANSACTIONS_IN_QUEUE: 0
        COUNT_TRANSACTIONS_CHECKED: 0
          COUNT_CONFLICTS_DETECTED: 0
COUNT_TRANSACTIONS_ROWS_VALIDATING: 0
TRANSACTIONS_COMMITTED_ALL_MEMBERS: 72149827-e1cc-11e6-9daf-08002789cd2e:1,
740e1fd2-e1cc-11e6-a8ec-08002789cd2e:1-2,
74dc6ab2-e1cc-11e6-92aa-08002789cd2e:1-2,
da7aba5e-dead-da7a-ba55-da7aba5e57ab:1-440:1000041-1000503:2000041-2000648
    LAST_CONFLICT_FREE_TRANSACTION: 

Now both nodes are marked ONLINE.

But if we go to gr-1, it still says gr-3 is UNREACHABLE:

mysql> select * from replication_group_members;
+---------------------------+--------------------------------------+-------------+-------------+--------------+
| CHANNEL_NAME              | MEMBER_ID                            | MEMBER_HOST | MEMBER_PORT | MEMBER_STATE |
+---------------------------+--------------------------------------+-------------+-------------+--------------+
| group_replication_applier | 72149827-e1cc-11e6-9daf-08002789cd2e | gr-1        |        3306 | ONLINE       |
| group_replication_applier | 74dc6ab2-e1cc-11e6-92aa-08002789cd2e | gr-3        |        3306 | UNREACHABLE  |
+---------------------------+--------------------------------------+-------------+-------------+--------------+
2 rows in set (0.00 sec)

mysql> select * from replication_group_member_stats\G
*************************** 1. row ***************************
                      CHANNEL_NAME: group_replication_applier
                           VIEW_ID: 14860449946972589:2
                         MEMBER_ID: 72149827-e1cc-11e6-9daf-08002789cd2e
       COUNT_TRANSACTIONS_IN_QUEUE: 0
        COUNT_TRANSACTIONS_CHECKED: 0
          COUNT_CONFLICTS_DETECTED: 0
COUNT_TRANSACTIONS_ROWS_VALIDATING: 0
TRANSACTIONS_COMMITTED_ALL_MEMBERS: 72149827-e1cc-11e6-9daf-08002789cd2e:1,
740e1fd2-e1cc-11e6-a8ec-08002789cd2e:1-2,
74dc6ab2-e1cc-11e6-92aa-08002789cd2e:1-2,
da7aba5e-dead-da7a-ba55-da7aba5e57ab:1-440:1000041-1000503:2000041-2000648
    LAST_CONFLICT_FREE_TRANSACTION: 
1 row in set (0.00 sec)
[9 Feb 2017 10:15] MySQL Verification Team
Hi,

I'm having trouble reproducing this; it all behaves as expected... it takes a few seconds for the statuses to update, but they do update.

A few questions:
 - can you upload your config files from all nodes?
 - do you have any traffic going on while you are reproducing this?

thanks
Bogdan
[17 Feb 2017 14:07] Kenny Gryp
Hi Bogdan,

I tried again and I can still reproduce this:

- set up a two-node Group Replication cluster and make sure both members are ONLINE
- run `ifconfig eth1 down` on one node (or whichever interface carries the GR traffic)
- check the status of Group Replication on both nodes: `select * from performance_schema.replication_group_members`
- one of the nodes will always still have both members marked as `ONLINE` (a scripted version of these steps is sketched below)
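
A scripted version of the repro, as a sketch; the hostnames (gr-1/gr-3), interface name (eth1), and passwordless mysql/ssh access are assumptions based on this environment, and client access to the partitioned node is assumed to go over a different interface:

```bash
#!/bin/bash
STATUS_SQL="SELECT MEMBER_HOST, MEMBER_STATE
            FROM performance_schema.replication_group_members;"

mysql -h gr-1 -e "$STATUS_SQL"      # both members should be ONLINE
mysql -h gr-3 -e "$STATUS_SQL"

ssh gr-3 'sudo ifconfig eth1 down'  # partition gr-3's GR traffic
sleep 60                            # give the failure detector time

mysql -h gr-1 -e "$STATUS_SQL"      # expected: gr-3 UNREACHABLE
mysql -h gr-3 -e "$STATUS_SQL"      # bug: may still show both ONLINE
```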
[17 Feb 2017 14:07] Kenny Gryp
Configuration:

```
[mysqld]
#
# Remove leading # and set to the amount of RAM for the most important data
# cache in MySQL. Start at 70% of total RAM for dedicated server, else 10%.
# innodb_buffer_pool_size = 128M
#
# Remove leading # to turn on a very important data integrity option: logging
# changes to the binary log between backups.
# log_bin
#
# Remove leading # to set options mainly useful for reporting servers.
# The server defaults are faster for transactions and fast SELECTs.
# Adjust sizes as needed, experiment to find the optimal values.
# join_buffer_size = 128M
# sort_buffer_size = 2M
# read_rnd_buffer_size = 2M
datadir=/var/lib/mysql
socket=/var/lib/mysql/mysql.sock

# Disabling symbolic-links is recommended to prevent assorted security risks
symbolic-links=0

log-error=/var/log/mysqld.log
pid-file=/var/run/mysqld/mysqld.pid

server_id=1
gtid_mode=ON
enforce_gtid_consistency=ON
master_info_repository=TABLE
relay_log_info_repository=TABLE
binlog_checksum=NONE
log_slave_updates=ON
log_bin=binlog
binlog_format=ROW

transaction_write_set_extraction=XXHASH64
group_replication_group_name="da7aba5e-dead-da7a-ba55-da7aba5e57ab"
group_replication_start_on_boot=off
group_replication_local_address= "gr-1:24901"
group_replication_group_seeds= "gr-1:24901,gr-2:24901,gr-3:24901"
group_replication_bootstrap_group= off
```
[17 Feb 2017 14:08] Kenny Gryp
node 2:
```
# For advice on how to change settings please see
# http://dev.mysql.com/doc/refman/5.7/en/server-configuration-defaults.html

[mysqld]
#
# Remove leading # and set to the amount of RAM for the most important data
# cache in MySQL. Start at 70% of total RAM for dedicated server, else 10%.
# innodb_buffer_pool_size = 128M
#
# Remove leading # to turn on a very important data integrity option: logging
# changes to the binary log between backups.
# log_bin
#
# Remove leading # to set options mainly useful for reporting servers.
# The server defaults are faster for transactions and fast SELECTs.
# Adjust sizes as needed, experiment to find the optimal values.
# join_buffer_size = 128M
# sort_buffer_size = 2M
# read_rnd_buffer_size = 2M
datadir=/var/lib/mysql
socket=/var/lib/mysql/mysql.sock

# Disabling symbolic-links is recommended to prevent assorted security risks
symbolic-links=0

log-error=/var/log/mysqld.log
pid-file=/var/run/mysqld/mysqld.pid

server_id=2
gtid_mode=ON
enforce_gtid_consistency=ON
master_info_repository=TABLE
relay_log_info_repository=TABLE
relay-log=gr-2-relay-bin
binlog_checksum=NONE
log_slave_updates=ON
log_bin=binlog
binlog_format=ROW

transaction_write_set_extraction=XXHASH64
group_replication_group_name="da7aba5e-dead-da7a-ba55-da7aba5e57ab"
group_replication_start_on_boot=on
super_read_only=on
group_replication_local_address= "gr-2:24901"
group_replication_group_seeds= "gr-1:24901,gr-2:24901,gr-3:24901"
group_replication_bootstrap_group= off
```
[2 Mar 2017 14:21] Alfranio Tavares Correia Junior
Hi Kenny,

There are two communication channels (i.e. sockets): inbound and
outbound.

         ------              ------
         |    | --- CH 1-->  |    |
         | S1 |              | S2 |
         |    | <---CH 2---  |    |
         ------              ------

If there is a glitch in one direction, the problem reported in the bug
will happen. In my environment, this only happened:

 1 - when DHCP assigned a different IP address to one of the servers.
 The node could not renew its lease because the interface was down,
 and it then got another address when the interface came back up.

 2 - when there was a routing issue in one direction.
 This could be emulated, for example, by dropping packets through
 "iptables" (see the sketch below).

The problem happens because the failure detector only looks at incoming
messages to decide whether a node might have failed. It must, however,
check both directions: whether it is able to both receive messages from
and send messages to a node. If either direction is broken, the node
should be declared unreachable.

Note, however, that I could not reproduce the problem as you have
reported here. Could you double check if one of the cases above is
happening in your environment?

Anyway, I will mark the bug as verified.
[7 Mar 2017 14:53] Kenny Gryp
Thanks for looking into it.

I have this problem when the whole network interface was down, so communication from both sides is gone.
[16 Apr 2018 4:42] Kenny Gryp
I am still seeing this with 5.7.21:

node1, which has its network disabled:

mysql> select * from replication_group_members;
+---------------------------+--------------------------------------+-------------+-------------+--------------+
| CHANNEL_NAME              | MEMBER_ID                            | MEMBER_HOST | MEMBER_PORT | MEMBER_STATE |
+---------------------------+--------------------------------------+-------------+-------------+--------------+
| group_replication_applier | bbbda443-3ea2-11e8-a9a3-08002789cd2e | node1       |        3306 | ONLINE       |
| group_replication_applier | be05eb9d-3ea2-11e8-a55b-08002789cd2e | node2       |        3306 | UNREACHABLE  |
| group_replication_applier | be7bc9fd-3ea2-11e8-a33f-08002789cd2e | node3       |        3306 | UNREACHABLE  |
+---------------------------+--------------------------------------+-------------+-------------+--------------+
3 rows in set (0.00 sec)

node2 and node3 are part of the primary partition, but they still think node1 is online:

mysql> select * from replication_group_members;
+---------------------------+--------------------------------------+-------------+-------------+--------------+
| CHANNEL_NAME              | MEMBER_ID                            | MEMBER_HOST | MEMBER_PORT | MEMBER_STATE |
+---------------------------+--------------------------------------+-------------+-------------+--------------+
| group_replication_applier | bbbda443-3ea2-11e8-a9a3-08002789cd2e | node1       |        3306 | ONLINE       |
| group_replication_applier | be05eb9d-3ea2-11e8-a55b-08002789cd2e | node2       |        3306 | ONLINE       |
| group_replication_applier | be7bc9fd-3ea2-11e8-a33f-08002789cd2e | node3       |        3306 | ONLINE       |
+---------------------------+--------------------------------------+-------------+-------------+--------------+
3 rows in set (0.00 sec)
[16 Apr 2018 4:45] Kenny Gryp
version 8.0.4 as well:

node1: 
mysql> select * from replication_group_members;
+---------------------------+--------------------------------------+-------------+-------------+--------------+-------------+----------------+
| CHANNEL_NAME              | MEMBER_ID                            | MEMBER_HOST | MEMBER_PORT | MEMBER_STATE | MEMBER_ROLE | MEMBER_VERSION |
+---------------------------+--------------------------------------+-------------+-------------+--------------+-------------+----------------+
| group_replication_applier | d84addc8-3f73-11e8-b28d-08002789cd2e | node1       |        3306 | ONLINE       | SECONDARY   | 8.0.4          |
| group_replication_applier | d8a4abfc-3f73-11e8-b4fe-08002789cd2e | node2       |        3306 | UNREACHABLE  | SECONDARY   | 8.0.4          |
| group_replication_applier | d8fe8a34-3f73-11e8-b753-08002789cd2e | node3       |        3306 | UNREACHABLE  | PRIMARY     | 8.0.4          |
+---------------------------+--------------------------------------+-------------+-------------+--------------+-------------+----------------+
3 rows in set (0.00 sec)

node2, node3:
mysql> select * from replication_group_members;
+---------------------------+--------------------------------------+-------------+-------------+--------------+-------------+----------------+
| CHANNEL_NAME              | MEMBER_ID                            | MEMBER_HOST | MEMBER_PORT | MEMBER_STATE | MEMBER_ROLE | MEMBER_VERSION |
+---------------------------+--------------------------------------+-------------+-------------+--------------+-------------+----------------+
| group_replication_applier | d84addc8-3f73-11e8-b28d-08002789cd2e | node1       |        3306 | ONLINE       | SECONDARY   | 8.0.4          |
| group_replication_applier | d8a4abfc-3f73-11e8-b4fe-08002789cd2e | node2       |        3306 | ONLINE       | SECONDARY   | 8.0.4          |
| group_replication_applier | d8fe8a34-3f73-11e8-b753-08002789cd2e | node3       |        3306 | ONLINE       | PRIMARY     | 8.0.4          |
+---------------------------+--------------------------------------+-------------+-------------+--------------+-------------+----------------+
3 rows in set (0.00 sec)
[16 Apr 2018 4:57] Kenny Gryp
Note: in both of these tests, on 5.7.21 and 8.0.4, I can confirm that Alfranio's explanation is exactly what is happening in my test environment.

Traffic is still possible from the partitioned node to the other nodes through another interface, but not the other way around.

Still, I consider this sub-optimal; debugging these situations is made more difficult because MySQL reports incorrect member status information.
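
A quick sketch for confirming this kind of asymmetric connectivity, assuming a netcat with -z support and the GR port 24901 from this setup:

```bash
# On node1: can we reach node2's GR port?
nc -z -w 2 node2 24901 && echo "node1 -> node2 reachable" \
                       || echo "node1 -> node2 broken"
# On node2: can we reach node1's GR port?
nc -z -w 2 node1 24901 && echo "node2 -> node1 reachable" \
                       || echo "node2 -> node1 broken"
```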
[4 Mar 2020 10:45] Margaret Fisher
Posted by developer:
 
Changelog entry added for MySQL 8.0.21:

Group Replication's tracking of connections to other group members only took into account the incoming connections, not the outgoing connections. This meant that if the outgoing connection from member A to member B was broken (for example, by a firewall configuration issue) but the incoming connection from member B to member A was intact, member A would display member B's status as ONLINE even though member A's messages were not reaching member B. Member B would display member A's status as UNREACHABLE.

Now, if a group member starts to receive pings from another group member to which it has an active connection (in this case, if member A received pings from member B), this is treated as an indicator of an issue with the connection. If sufficient pings are received, the connection is shut down by the recipient of the pings (in this case, member A), so that the status of the connection is consistent for both members.
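
An illustrative sketch of that rule, not the server's actual implementation; the threshold and handler names are hypothetical:

```bash
# If a peer we consider connected keeps pinging us, it evidently cannot
# hear us; after enough pings, close the connection so both sides agree.
MAX_PINGS=5                 # hypothetical threshold
declare -A ping_count
on_ping_from() {            # invoked whenever a ping arrives from $1
  local peer=$1
  ping_count[$peer]=$(( ${ping_count[$peer]:-0} + 1 ))
  if (( ping_count[$peer] >= MAX_PINGS )); then
    echo "$peer cannot hear us over the 'active' connection; closing it"
    # close_connection "$peer"   # placeholder for the real action
    ping_count[$peer]=0
  fi
}
```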