
Initially, the GR cluster status was healthy:
gr01 > select * from performance_schema.replication_group_members;
+---------------------------+--------------------------------------+-------------+-------------+--------------+-------------+----------------+
| CHANNEL_NAME              | MEMBER_ID                            | MEMBER_HOST | MEMBER_PORT | MEMBER_STATE | MEMBER_ROLE | MEMBER_VERSION |
+---------------------------+--------------------------------------+-------------+-------------+--------------+-------------+----------------+
| group_replication_applier | 1f912efe-c880-11e8-9592-525400cae48b | gr02        |        3306 | ONLINE       | SECONDARY   | 8.0.12         |
| group_replication_applier | 20c6a737-c880-11e8-bf58-525400cae48b | gr03        |        3306 | ONLINE       | SECONDARY   | 8.0.12         |
| group_replication_applier | 76df8268-c95e-11e8-b55d-525400cae48b | gr01        |        3306 | ONLINE       | PRIMARY     | 8.0.12         |
+---------------------------+--------------------------------------+-------------+-------------+--------------+-------------+----------------+
3 rows in set (0.00 sec)

After a few minutes of running test.sh, I broke the network on the primary with:
tc qdisc add dev eth1 root netem delay 20ms loss 40% 65% corrupt 5% 45%
The second node then went OFFLINE after being expelled from the group due to the network failure:
gr01 > select * from performance_schema.replication_group_members;
+---------------------------+--------------------------------------+-------------+-------------+--------------+-------------+----------------+
| CHANNEL_NAME              | MEMBER_ID                            | MEMBER_HOST | MEMBER_PORT | MEMBER_STATE | MEMBER_ROLE | MEMBER_VERSION |
+---------------------------+--------------------------------------+-------------+-------------+--------------+-------------+----------------+
| group_replication_applier | 20c6a737-c880-11e8-bf58-525400cae48b | gr03        |        3306 | ONLINE       | SECONDARY   | 8.0.12         |
| group_replication_applier | 76df8268-c95e-11e8-b55d-525400cae48b | gr01        |        3306 | ONLINE       | PRIMARY     | 8.0.12         |
+---------------------------+--------------------------------------+-------------+-------------+--------------+-------------+----------------+
2 rows in set (0.00 sec)

gr02 > select * from performance_schema.replication_group_members;
+---------------------------+-----------+-------------+-------------+--------------+-------------+----------------+
| CHANNEL_NAME              | MEMBER_ID | MEMBER_HOST | MEMBER_PORT | MEMBER_STATE | MEMBER_ROLE | MEMBER_VERSION |
+---------------------------+-----------+-------------+-------------+--------------+-------------+----------------+
| group_replication_applier |           |             |        NULL | OFFLINE      |             |                |
+---------------------------+-----------+-------------+-------------+--------------+-------------+----------------+
1 row in set (0.01 sec)

But the cluster was already inconsistent: the primary had a different gtid_executed set than the secondaries:
gr01 > select @@GLOBAL.gtid_executed;
+-----------------------------------------------+
| @@GLOBAL.gtid_executed                        |
+-----------------------------------------------+
| aaaaaaaa-aaaa-aaaa-aaaa-aaaaaaaaaaaa:1-302657 |
+-----------------------------------------------+
1 row in set (0.00 sec)

gr02 > select @@GLOBAL.gtid_executed;
+-----------------------------------------------+
| @@GLOBAL.gtid_executed                        |
+-----------------------------------------------+
| aaaaaaaa-aaaa-aaaa-aaaa-aaaaaaaaaaaa:1-302650 |
+-----------------------------------------------+
1 row in set (0.00 sec)

gr03 > select @@GLOBAL.gtid_executed;
+-----------------------------------------------+
| @@GLOBAL.gtid_executed                        |
+-----------------------------------------------+
| aaaaaaaa-aaaa-aaaa-aaaa-aaaaaaaaaaaa:1-302650 |
+-----------------------------------------------+
1 row in set (0.00 sec)
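The size of the divergence can be quantified with the GTID_SUBTRACT() function, using the two sets shown above (a sketch; run on any member):

```sql
-- GTIDs present on the primary (gr01) but missing on the secondaries:
SELECT GTID_SUBTRACT(
         'aaaaaaaa-aaaa-aaaa-aaaa-aaaaaaaaaaaa:1-302657',
         'aaaaaaaa-aaaa-aaaa-aaaa-aaaaaaaaaaaa:1-302650'
       ) AS missing_on_secondaries;
-- expected: aaaaaaaa-aaaa-aaaa-aaaa-aaaaaaaaaaaa:302651-302657
```

In a live check you would substitute @@GLOBAL.gtid_executed captured from each node for the literals.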

I then removed the network impairment:
[root@gr01 ~]# tc qdisc del dev eth1 root netem delay 20ms loss 40% 65% corrupt 5% 45%

and brought gr02 back into the cluster; it recovered without errors:
gr02 > START GROUP_REPLICATION;
Query OK, 0 rows affected (3.49 sec)

gr02 > select * from performance_schema.replication_group_members;
+---------------------------+--------------------------------------+-------------+-------------+--------------+-------------+----------------+
| CHANNEL_NAME              | MEMBER_ID                            | MEMBER_HOST | MEMBER_PORT | MEMBER_STATE | MEMBER_ROLE | MEMBER_VERSION |
+---------------------------+--------------------------------------+-------------+-------------+--------------+-------------+----------------+
| group_replication_applier | 1f912efe-c880-11e8-9592-525400cae48b | gr02        |        3306 | ONLINE       | SECONDARY   | 8.0.12         |
| group_replication_applier | 20c6a737-c880-11e8-bf58-525400cae48b | gr03        |        3306 | ONLINE       | SECONDARY   | 8.0.12         |
| group_replication_applier | 76df8268-c95e-11e8-b55d-525400cae48b | gr01        |        3306 | ONLINE       | PRIMARY     | 8.0.12         |
+---------------------------+--------------------------------------+-------------+-------------+--------------+-------------+----------------+
3 rows in set (0.00 sec)

Still, the GTID sets remained inconsistent:

gr01 > select @@GLOBAL.gtid_executed;
+-----------------------------------------------+
| @@GLOBAL.gtid_executed                        |
+-----------------------------------------------+
| aaaaaaaa-aaaa-aaaa-aaaa-aaaaaaaaaaaa:1-302658 |
+-----------------------------------------------+
1 row in set (0.00 sec)

gr02 > select @@GLOBAL.gtid_executed;
+-----------------------------------------------+
| @@GLOBAL.gtid_executed                        |
+-----------------------------------------------+
| aaaaaaaa-aaaa-aaaa-aaaa-aaaaaaaaaaaa:1-302651 |
+-----------------------------------------------+
1 row in set (0.00 sec)

gr03 > select @@GLOBAL.gtid_executed;
+-----------------------------------------------+
| @@GLOBAL.gtid_executed                        |
+-----------------------------------------------+
| aaaaaaaa-aaaa-aaaa-aaaa-aaaaaaaaaaaa:1-302651 |
+-----------------------------------------------+
1 row in set (0.00 sec)

gr01 > select * from performance_schema.replication_group_member_stats\G
*************************** 1. row ***************************
                              CHANNEL_NAME: group_replication_applier
                                   VIEW_ID: 15399414486931668:7
                                 MEMBER_ID: 1f912efe-c880-11e8-9592-525400cae48b
               COUNT_TRANSACTIONS_IN_QUEUE: 0
                COUNT_TRANSACTIONS_CHECKED: 0
                  COUNT_CONFLICTS_DETECTED: 0
        COUNT_TRANSACTIONS_ROWS_VALIDATING: 0
        TRANSACTIONS_COMMITTED_ALL_MEMBERS: aaaaaaaa-aaaa-aaaa-aaaa-aaaaaaaaaaaa:1-302651
            LAST_CONFLICT_FREE_TRANSACTION: 
COUNT_TRANSACTIONS_REMOTE_IN_APPLIER_QUEUE: 0
         COUNT_TRANSACTIONS_REMOTE_APPLIED: 0
         COUNT_TRANSACTIONS_LOCAL_PROPOSED: 0
         COUNT_TRANSACTIONS_LOCAL_ROLLBACK: 0
*************************** 2. row ***************************
                              CHANNEL_NAME: group_replication_applier
                                   VIEW_ID: 15399414486931668:7
                                 MEMBER_ID: 20c6a737-c880-11e8-bf58-525400cae48b
               COUNT_TRANSACTIONS_IN_QUEUE: 0
                COUNT_TRANSACTIONS_CHECKED: 7108
                  COUNT_CONFLICTS_DETECTED: 0
        COUNT_TRANSACTIONS_ROWS_VALIDATING: 0
        TRANSACTIONS_COMMITTED_ALL_MEMBERS: aaaaaaaa-aaaa-aaaa-aaaa-aaaaaaaaaaaa:1-302651
            LAST_CONFLICT_FREE_TRANSACTION: aaaaaaaa-aaaa-aaaa-aaaa-aaaaaaaaaaaa:302650
COUNT_TRANSACTIONS_REMOTE_IN_APPLIER_QUEUE: 0
         COUNT_TRANSACTIONS_REMOTE_APPLIED: 7110
         COUNT_TRANSACTIONS_LOCAL_PROPOSED: 0
         COUNT_TRANSACTIONS_LOCAL_ROLLBACK: 0
*************************** 3. row ***************************
                              CHANNEL_NAME: group_replication_applier
                                   VIEW_ID: 15399414486931668:7
                                 MEMBER_ID: 76df8268-c95e-11e8-b55d-525400cae48b
               COUNT_TRANSACTIONS_IN_QUEUE: 0
                COUNT_TRANSACTIONS_CHECKED: 7115
                  COUNT_CONFLICTS_DETECTED: 0
        COUNT_TRANSACTIONS_ROWS_VALIDATING: 18
        TRANSACTIONS_COMMITTED_ALL_MEMBERS: aaaaaaaa-aaaa-aaaa-aaaa-aaaaaaaaaaaa:1-302651
            LAST_CONFLICT_FREE_TRANSACTION: aaaaaaaa-aaaa-aaaa-aaaa-aaaaaaaaaaaa:302657
COUNT_TRANSACTIONS_REMOTE_IN_APPLIER_QUEUE: 0
         COUNT_TRANSACTIONS_REMOTE_APPLIED: 4
         COUNT_TRANSACTIONS_LOCAL_PROPOSED: 7115
         COUNT_TRANSACTIONS_LOCAL_ROLLBACK: 0
3 rows in set (0.00 sec)
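Note that TRANSACTIONS_COMMITTED_ALL_MEMBERS (1-302651) lags the primary's gtid_executed (1-302658). The gap between the two can be computed directly on a member (a sketch, assuming it is run on gr01):

```sql
-- GTIDs executed locally but not yet reported as committed on all members:
SELECT GTID_SUBTRACT(
         @@GLOBAL.gtid_executed,
         (SELECT transactions_committed_all_members
            FROM performance_schema.replication_group_member_stats
           WHERE member_id = @@GLOBAL.server_uuid)
       ) AS not_committed_on_all_members;
```

In a healthy group this set shrinks back to empty as certification catches up; here it exposes the diverged range that never reached the secondaries.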

Binary log events for GTID aaaaaaaa-aaaa-aaaa-aaaa-aaaaaaaaaaaa:302650 — the primary and the secondaries logged different transactions under the same GTID:

gr01:
SET @@SESSION.GTID_NEXT= 'aaaaaaaa-aaaa-aaaa-aaaa-aaaaaaaaaaaa:302650'/*!*/;
# at 73553
#181019  9:42:18 server id 10  end_log_pos 73632        Query   thread_id=172   exec_time=0     error_code=0
SET TIMESTAMP=1539942138/*!*/;
BEGIN
/*!*/;
# at 73632
#181019  9:42:18 server id 10  end_log_pos 73691        Table_map: `db1`.`sbtest10` mapped to number 74
# at 73691
#181019  9:42:18 server id 10  end_log_pos 74101        Update_rows: table id 74 flags: STMT_END_F
### UPDATE `db1`.`sbtest10`
### WHERE
###   @1=5033
###   @2=5017
###   @3='21759127669-63668217269-39797092749-18244152331-94364923428-04345446880-50083041598-25650719869-57244780767-32444021659'
###   @4='80389317033-41896291447-12980585023-24963262932-32167546745'
### SET
###   @1=5033
###   @2=5018

gr02 and gr03:
SET @@SESSION.GTID_NEXT= 'aaaaaaaa-aaaa-aaaa-aaaa-aaaaaaaaaaaa:302650'/*!*/;
# at 82620
#181019  9:42:18 server id 10  end_log_pos 82685        Query   thread_id=167   exec_time=3     error_code=0
SET TIMESTAMP=1539942138/*!*/;
BEGIN
/*!*/;
# at 82685
#181019  9:42:18 server id 10  end_log_pos 82743        Table_map: `db1`.`sbtest8` mapped to number 70
# at 82743
#181019  9:42:18 server id 10  end_log_pos 83153        Update_rows: table id 70 flags: STMT_END_F
### UPDATE `db1`.`sbtest8`
### WHERE
###   @1=5001
###   @2=4997
###   @3='91205097249-46887235882-00171649078-62986381756-52810912529-44290019545-39664818460-12881235408-15314683337-17143273718'
###   @4='07245384194-26692976721-69222596368-17450145796-02358063506'
### SET
###   @1=5001
###   @2=4998
