Description:
There is a common requirement to migrate a standalone MySQL server to a Group Replication cluster. The basic operation is:
1. Server A is the existing MySQL instance; take a full dump of A with mysqldump.
2. Server B1 will be the primary node of Group Replication; first load the mysqldump output into B1.
3. Bootstrap Group Replication on B1.
4. On B1, change master to A and start slave.
5. Join B2 and B3 to the group.
6. Throughout these steps the database keeps serving traffic; no downtime is needed.
Now the group acts as a backup of A, and we can switch the system's router to B1 once the group has caught up with A, so no long offline period is required.
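The "caught up with A" condition can be checked with MySQL's GTID functions; a minimal sketch (the literal GTID set below is the one from this report's example and would be read from A at switch time):

```sql
-- On A: note the executed GTID set at the moment you want to switch
SELECT @@GLOBAL.gtid_executed;

-- On B1: returns 1 once B1 has applied everything A had executed
-- (replace the literal set with the value read from A above)
SELECT GTID_SUBSET('67ec744f-14f6-11ec-84a5-ec5c6826bca3:1-3',
                   @@GLOBAL.gtid_executed);
```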
However, in step 5, B2 and B3 cannot join the group.
How to repeat:
1. First, initialize some data on A
mysql> create table t1 (c1 int primary key, c2 int) engine=innodb;
Query OK, 0 rows affected (0.04 sec)
mysql> insert into t1 values (1,1);
Query OK, 1 row affected (0.03 sec)
mysql> insert into t1 values (2,2);
Query OK, 1 row affected (0.02 sec)
mysql> show master status;
+--------------------------+----------+--------------+------------------+------------------------------------------+
| File | Position | Binlog_Do_DB | Binlog_Ignore_DB | Executed_Gtid_Set |
+--------------------------+----------+--------------+------------------+------------------------------------------+
| server-binary-log.000001 | 935 | | | 67ec744f-14f6-11ec-84a5-ec5c6826bca3:1-3 |
+--------------------------+----------+--------------+------------------+------------------------------------------+
1 row in set (0.00 sec)
2. Now, dump the data
mysqldump --user=root -P13000 -h127.0.0.1 test > t1.sql
3. Load the data into B1
mysql> source t1.sql;
mysql> show master status;
+--------------------------+----------+--------------+------------------+------------------------------------------+
| File | Position | Binlog_Do_DB | Binlog_Ignore_DB | Executed_Gtid_Set |
+--------------------------+----------+--------------+------------------+------------------------------------------+
| server-binary-log.000001 | 156 | | | 67ec744f-14f6-11ec-84a5-ec5c6826bca3:1-3 |
+--------------------------+----------+--------------+------------------+------------------------------------------+
1 row in set (0.00 sec)
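Note that B1's binlog position is only 156 even though Executed_Gtid_Set already contains A's transactions: with GTID mode on, mysqldump writes a SET @@GLOBAL.gtid_purged statement into the dump, so the rows exist on B1 but the transactions are not present in B1's binary logs. This can be confirmed with a quick check:

```sql
-- On B1 after loading the dump: the dump's GTIDs appear as purged,
-- i.e. they are not available in B1's binary logs for a joiner to fetch
SELECT @@GLOBAL.gtid_purged;
```

This is why, later in step 7, distributed recovery cannot use incremental (binlog-based) recovery and falls back to clone.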
4. Bootstrap group_replication on B1
mysql> set GLOBAL group_replication_bootstrap_group=ON;
Query OK, 0 rows affected (0.00 sec)
mysql> set persist group_replication_group_name='7f6f31bb-597c-11eb-a2d9-c8f7507e5000',
           persist group_replication_group_seeds='127.0.0.1:33061,127.0.0.1:33062,127.0.0.1:33063',
           persist group_replication_local_address='127.0.0.1:33061';
Query OK, 0 rows affected (0.00 sec)
mysql> CHANGE MASTER TO MASTER_USER="root" FOR CHANNEL "group_replication_recovery";
Query OK, 0 rows affected, 3 warnings (0.01 sec)
mysql> start group_replication;
Query OK, 0 rows affected (2.12 sec)
5. On B1, change master to A
mysql> change master to master_host='127.0.0.1',master_user='root',master_port=13000;
Query OK, 0 rows affected, 5 warnings (0.03 sec)
mysql> start slave;
Query OK, 0 rows affected, 1 warning (0.02 sec)
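A quick sanity check that the async channel from A is running (a sketch using performance_schema; the group channels are also listed here):

```sql
-- SERVICE_STATE should be ON for the default channel replicating from A
SELECT CHANNEL_NAME, SERVICE_STATE
  FROM performance_schema.replication_connection_status;
```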
6. Join B2 to the group
mysql> set persist group_replication_group_name='7f6f31bb-597c-11eb-a2d9-c8f7507e5000',
           persist group_replication_group_seeds='127.0.0.1:33061,127.0.0.1:33062,127.0.0.1:33063',
           persist group_replication_local_address='127.0.0.1:33062';
Query OK, 0 rows affected (0.00 sec)
mysql> CHANGE MASTER TO MASTER_USER="root" FOR CHANNEL "group_replication_recovery";
Query OK, 0 rows affected, 3 warnings (0.02 sec)
mysql> start group_replication;
Query OK, 0 rows affected (2.62 sec)
7. Although START GROUP_REPLICATION succeeds on B2, the member state changes to ERROR shortly afterwards
mysql> select * from performance_schema.replication_group_members;
+---------------------------+--------------------------------------+-------------+-------------+--------------+-------------+----------------+
| CHANNEL_NAME | MEMBER_ID | MEMBER_HOST | MEMBER_PORT | MEMBER_STATE | MEMBER_ROLE | MEMBER_VERSION |
+---------------------------+--------------------------------------+-------------+-------------+--------------+-------------+----------------+
| group_replication_applier | 67ff0cf3-14f6-11ec-bb46-ec5c6826bca3 | 127.0.0.1 | 13006 | ERROR | | 8.0.25 |
+---------------------------+--------------------------------------+-------------+-------------+--------------+-------------+----------------+
1 row in set (0.00 sec)
8. The following error log messages appear on B2
2021-09-14T01:08:33.543955Z 0 [Warning] [MY-013470] [Repl] Plugin group_replication reported: 'This member will start distributed recovery using clone. It is due to no ONLINE member has the missing data for recovering in its binary logs.'
2021-09-14T01:08:34.551750Z 0 [ERROR] [MY-013464] [Repl] Plugin group_replication reported: 'There was an issue when configuring the remote cloning process: The clone plugin is not present or active in this server.'
2021-09-14T01:08:34.551856Z 0 [ERROR] [MY-013467] [Repl] Plugin group_replication reported: 'No valid or ONLINE members exist to get the missing data from the group. For cloning check if donors of the same version and with clone plugin installed exist. For incremental recovery check if you have donors where the required data was not purged from the binary logs.'
2021-09-14T01:08:34.552084Z 0 [ERROR] [MY-011712] [Repl] Plugin group_replication reported: 'The server was automatically set into read only mode after an error was detected.'
2021-09-14T01:08:35.554841Z 0 [System] [MY-011503] [Repl] Plugin group_replication reported: 'Group membership changed to 127.0.0.1:13003, 127.0.0.1:13006 on view 16315816054725950:2.'
2021-09-14T01:08:38.638812Z 0 [System] [MY-011504] [Repl] Plugin group_replication reported: 'Group membership changed: This member has left the group.'
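The first ERROR above says the clone plugin is not present or active, so one possible workaround is to install the clone plugin on both the donor (B1) and the joiner (B2) before retrying; a sketch (the SONAME assumes a Linux build):

```sql
-- Run on B1 (donor) and B2 (recipient); on Windows the SONAME is mysql_clone.dll
INSTALL PLUGIN clone SONAME 'mysql_clone.so';

-- Then retry joining on B2
START GROUP_REPLICATION;
```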
Suggested fix:
Please provide a supported procedure for this requirement. When users migrate a normal master-slave topology to Group Replication, they want to run the group as a backup first; after a stabilization period, they can then choose a suitable time to switch the database router to the Group Replication cluster.
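As a possible workaround until then, the joiners could be seeded with the same dump before joining, so that distributed recovery has no missing transactions to fetch; a sketch on B2 (and B3):

```sql
-- On B2 before START GROUP_REPLICATION: load the same dump taken from A.
-- Its SET @@GLOBAL.gtid_purged statement makes the joiner's GTID set match
-- B1's, so recovery no longer needs the transactions purged from B1's binlogs.
source t1.sql;
```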