-- build used
cat docs/INFO_SRC
commit: c586d55f06bf915d6506e599deb87dbb89f2496a
date: 2020-12-10 06:56:16 +0100
build-date: 2020-12-11 07:43:19 +0000
short: c586d55f06b
branch: mysql-8.0.23-release
MySQL source 8.0.23

-- Set up a 3-node cluster, then follow the steps from the report
export PATH=$PATH:/home/umshastr/work/binaries/ga/mysql-8.0.23/bin:/home/umshastr/work/binaries/ga/mysql-shell-8.0.23/bin

bin/mysqlsh --log-level=debug3
MySQL Shell 8.0.23

Copyright (c) 2016, 2021, Oracle and/or its affiliates.
Oracle is a registered trademark of Oracle Corporation and/or its affiliates.
Other names may be trademarks of their respective owners.
Type '\help' or '\?' for help; '\quit' to exit.

MySQL JS > dba.deploySandboxInstance(3310)
A new MySQL sandbox instance will be created on this host in
/home/umshastr/mysql-sandboxes/3310

Warning: Sandbox instances are only suitable for deploying and
running on your local machine for testing purposes and are not
accessible from external networks.

Please enter a MySQL root password for the new instance:
Deploying new MySQL instance...

Instance localhost:3310 successfully deployed and started.
Use shell.connect('root@localhost:3310') to connect to the instance.

MySQL JS > dba.deploySandboxInstance(3320)
A new MySQL sandbox instance will be created on this host in
/home/umshastr/mysql-sandboxes/3320

Warning: Sandbox instances are only suitable for deploying and
running on your local machine for testing purposes and are not
accessible from external networks.

Please enter a MySQL root password for the new instance:
Deploying new MySQL instance...

Instance localhost:3320 successfully deployed and started.
Use shell.connect('root@localhost:3320') to connect to the instance.

MySQL JS > dba.deploySandboxInstance(3330)
A new MySQL sandbox instance will be created on this host in
/home/umshastr/mysql-sandboxes/3330

Warning: Sandbox instances are only suitable for deploying and
running on your local machine for testing purposes and are not
accessible from external networks.

Please enter a MySQL root password for the new instance:
Deploying new MySQL instance...

Instance localhost:3330 successfully deployed and started.
Use shell.connect('root@localhost:3330') to connect to the instance.

MySQL JS > \connect root@localhost:3310
Creating a session to 'root@localhost:3310'
Please provide the password for 'root@localhost:3310':
Save password for 'root@localhost:3310'? [Y]es/[N]o/Ne[v]er (default No):
Fetching schema names for autocompletion... Press ^C to stop.
Your MySQL connection id is 11
Server version: 8.0.23 MySQL Community Server - GPL
No default schema selected; type \use <schema> to set one.

MySQL localhost:3310 ssl JS > cluster = dba.createCluster("myCluster")
A new InnoDB cluster will be created on instance 'localhost:3310'.

Validating instance configuration at localhost:3310...
NOTE: Instance detected as a sandbox.
Please note that sandbox instances are only suitable for deploying test clusters for use within the same host.

This instance reports its own address as 127.0.0.1:3310

Instance configuration is suitable.
NOTE: Group Replication will communicate with other members using '127.0.0.1:33101'.
      Use the localAddress option to override.

Creating InnoDB cluster 'myCluster' on '127.0.0.1:3310'...

Adding Seed Instance...
Cluster successfully created. Use Cluster.addInstance() to add MySQL instances.
At least 3 instances are needed for the cluster to be able to withstand up to
one server failure.
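-- Optional sanity check, not part of the original run: before adding the other members,
-- the seed can be verified from a classic client session (e.g. bin/mysql -uroot -h127.0.0.1 -P3310).
-- These queries only assume the standard plugin and Group Replication tables.
SELECT PLUGIN_NAME, PLUGIN_STATUS
  FROM information_schema.plugins
 WHERE PLUGIN_NAME = 'group_replication';          -- expected: ACTIVE

SELECT MEMBER_HOST, MEMBER_PORT, MEMBER_STATE, MEMBER_ROLE
  FROM performance_schema.replication_group_members;   -- seed expected: ONLINE / PRIMARY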
MySQL localhost:3310 ssl JS > cluster.addInstance("root@localhost:3320")

NOTE: The target instance '127.0.0.1:3320' has not been pre-provisioned (GTID set is empty). The Shell is unable to decide whether incremental state recovery can correctly provision it.
The safest and most convenient way to provision a new instance is through automatic clone provisioning, which will completely overwrite the state of '127.0.0.1:3320' with a physical snapshot from an existing cluster member. To use this method by default, set the 'recoveryMethod' option to 'clone'.

The incremental state recovery may be safely used if you are sure all updates ever executed in the cluster were done with GTIDs enabled, there are no purged transactions and the new instance contains the same GTID set as the cluster or a subset of it. To use this method by default, set the 'recoveryMethod' option to 'incremental'.

Please select a recovery method [C]lone/[I]ncremental recovery/[A]bort (default Clone):
Validating instance configuration at localhost:3320...
NOTE: Instance detected as a sandbox.
Please note that sandbox instances are only suitable for deploying test clusters for use within the same host.

This instance reports its own address as 127.0.0.1:3320

Instance configuration is suitable.
NOTE: Group Replication will communicate with other members using '127.0.0.1:33201'.
      Use the localAddress option to override.

A new instance will be added to the InnoDB cluster. Depending on the amount of
data on the cluster this might take from a few seconds to several hours.

Adding instance to the cluster...

Monitoring recovery process of the new cluster member. Press ^C to stop monitoring and let it continue in background.
Clone based state recovery is now in progress.

NOTE: A server restart is expected to happen as part of the clone process. If the
server does not support the RESTART command or does not come back after a
while, you may need to manually start it back.

* Waiting for clone to finish...
NOTE: 127.0.0.1:3320 is being cloned from 127.0.0.1:3310
** Stage DROP DATA: Completed
** Clone Transfer
    FILE COPY  ############################################################  100%  Completed
    PAGE COPY  ############################################################  100%  Completed
    REDO COPY  ############################################################  100%  Completed

NOTE: 127.0.0.1:3320 is shutting down...

* Waiting for server restart... ready
* 127.0.0.1:3320 has restarted, waiting for clone to finish...
** Stage RESTART: Completed
* Clone process has finished: 72.20 MB transferred in about 1 second (~72.20 MB/s)

State recovery already finished for '127.0.0.1:3320'

The instance '127.0.0.1:3320' was successfully added to the cluster.

MySQL localhost:3310 ssl JS > cluster.addInstance("root@localhost:3330")

NOTE: The target instance '127.0.0.1:3330' has not been pre-provisioned (GTID set is empty). The Shell is unable to decide whether incremental state recovery can correctly provision it.
The safest and most convenient way to provision a new instance is through automatic clone provisioning, which will completely overwrite the state of '127.0.0.1:3330' with a physical snapshot from an existing cluster member. To use this method by default, set the 'recoveryMethod' option to 'clone'.

The incremental state recovery may be safely used if you are sure all updates ever executed in the cluster were done with GTIDs enabled, there are no purged transactions and the new instance contains the same GTID set as the cluster or a subset of it.
To use this method by default, set the 'recoveryMethod' option to 'incremental'.

Please select a recovery method [C]lone/[I]ncremental recovery/[A]bort (default Clone):
Validating instance configuration at localhost:3330...
NOTE: Instance detected as a sandbox.
Please note that sandbox instances are only suitable for deploying test clusters for use within the same host.

This instance reports its own address as 127.0.0.1:3330

Instance configuration is suitable.
NOTE: Group Replication will communicate with other members using '127.0.0.1:33301'.
      Use the localAddress option to override.

A new instance will be added to the InnoDB cluster. Depending on the amount of
data on the cluster this might take from a few seconds to several hours.

Adding instance to the cluster...

Monitoring recovery process of the new cluster member. Press ^C to stop monitoring and let it continue in background.
Clone based state recovery is now in progress.

NOTE: A server restart is expected to happen as part of the clone process. If the
server does not support the RESTART command or does not come back after a
while, you may need to manually start it back.

* Waiting for clone to finish...
NOTE: 127.0.0.1:3330 is being cloned from 127.0.0.1:3310
** Stage DROP DATA: Completed
** Clone Transfer
    FILE COPY  ============================================================    0%  In Progress
    PAGE COPY  ============================================================    0%  Not Started
    REDO COPY  ============================================================    0%  Not Started

NOTE: 127.0.0.1:3330 is shutting down...

* Waiting for server restart... ready
* 127.0.0.1:3330 has restarted, waiting for clone to finish...
** Stage RESTART: Completed
* Clone process has finished: 72.20 MB transferred in about 1 second (~72.20 MB/s)

Incremental state recovery is now in progress.

* Waiting for distributed recovery to finish...
NOTE: '127.0.0.1:3330' is being recovered from '127.0.0.1:3320'
* Distributed recovery has finished

The instance '127.0.0.1:3330' was successfully added to the cluster.

MySQL localhost:3310 ssl JS > cluster.status()
{
    "clusterName": "myCluster",
    "defaultReplicaSet": {
        "name": "default",
        "primary": "127.0.0.1:3310",
        "ssl": "REQUIRED",
        "status": "OK",
        "statusText": "Cluster is ONLINE and can tolerate up to ONE failure.",
        "topology": {
            "127.0.0.1:3310": {
                "address": "127.0.0.1:3310",
                "mode": "R/W",
                "readReplicas": {},
                "replicationLag": null,
                "role": "HA",
                "status": "ONLINE",
                "version": "8.0.23"
            },
            "127.0.0.1:3320": {
                "address": "127.0.0.1:3320",
                "mode": "R/O",
                "readReplicas": {},
                "replicationLag": null,
                "role": "HA",
                "status": "ONLINE",
                "version": "8.0.23"
            },
            "127.0.0.1:3330": {
                "address": "127.0.0.1:3330",
                "mode": "R/O",
                "readReplicas": {},
                "replicationLag": null,
                "role": "HA",
                "status": "ONLINE",
                "version": "8.0.23"
            }
        },
        "topologyMode": "Single-Primary"
    },
    "groupInformationSourceMember": "127.0.0.1:3310"
}

MySQL localhost:3310 ssl JS >

######### Follow steps from report

bin/mysql -uroot -S ./sandboxdata/mysqld.sock --prompt='Node1>'
Welcome to the MySQL monitor.  Commands end with ; or \g.
Your MySQL connection id is 56
Server version: 8.0.23 MySQL Community Server - GPL

Copyright (c) 2000, 2021, Oracle and/or its affiliates.
Oracle is a registered trademark of Oracle Corporation and/or its
affiliates. Other names may be trademarks of their respective
owners.

Type 'help;' or '\h' for help. Type '\c' to clear the current input statement.
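-- Not executed in the original transcript: because the -S socket path above is relative,
-- a quick check like this confirms the 'Node1' session really is the 3310 sandbox (output omitted here).
SELECT @@port, @@datadir;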
Node1>SELECT * FROM performance_schema.replication_group_members;
+---------------------------+--------------------------------------+-------------+-------------+--------------+-------------+----------------+
| CHANNEL_NAME              | MEMBER_ID                            | MEMBER_HOST | MEMBER_PORT | MEMBER_STATE | MEMBER_ROLE | MEMBER_VERSION |
+---------------------------+--------------------------------------+-------------+-------------+--------------+-------------+----------------+
| group_replication_applier | 9e7d3629-76a4-11eb-9e35-02001701fbd2 | 127.0.0.1   |        3310 | ONLINE       | PRIMARY     | 8.0.23         |
| group_replication_applier | b29c4821-76a4-11eb-b050-02001701fbd2 | 127.0.0.1   |        3320 | ONLINE       | SECONDARY   | 8.0.23         |
| group_replication_applier | be7ac0d1-76a4-11eb-b961-02001701fbd2 | 127.0.0.1   |        3330 | ONLINE       | SECONDARY   | 8.0.23         |
+---------------------------+--------------------------------------+-------------+-------------+--------------+-------------+----------------+
3 rows in set (0.00 sec)

Node1>show global variables where Variable_name in ('slave_preserve_commit_order','slave_parallel_workers');
+-----------------------------+-------+
| Variable_name               | Value |
+-----------------------------+-------+
| slave_parallel_workers      | 4     |
| slave_preserve_commit_order | ON    |
+-----------------------------+-------+
2 rows in set (0.01 sec)

Node1>STOP GROUP_REPLICATION;
Query OK, 0 rows affected (4.08 sec)

Node1>SELECT * FROM performance_schema.replication_group_members;
+---------------------------+--------------------------------------+-------------+-------------+--------------+-------------+----------------+
| CHANNEL_NAME              | MEMBER_ID                            | MEMBER_HOST | MEMBER_PORT | MEMBER_STATE | MEMBER_ROLE | MEMBER_VERSION |
+---------------------------+--------------------------------------+-------------+-------------+--------------+-------------+----------------+
| group_replication_applier | 9e7d3629-76a4-11eb-9e35-02001701fbd2 | 127.0.0.1   |        3310 | OFFLINE      |             |                |
+---------------------------+--------------------------------------+-------------+-------------+--------------+-------------+----------------+
1 row in set (0.00 sec)

Node1>SET GLOBAL slave_preserve_commit_order=0;
Query OK, 0 rows affected (0.00 sec)

Node1>show global variables where Variable_name in ('slave_preserve_commit_order','slave_parallel_workers');
+-----------------------------+-------+
| Variable_name               | Value |
+-----------------------------+-------+
| slave_parallel_workers      | 4     |
| slave_preserve_commit_order | OFF   |
+-----------------------------+-------+
2 rows in set (0.00 sec)

Node1>START GROUP_REPLICATION;
ERROR 3092 (HY000): The server is not configured properly to be an active member of the group. Please see more details on error log.

Node1>show errors;
+-------+------+------------------------------------------------------------------------------------------------------------------+
| Level | Code | Message                                                                                                          |
+-------+------+------------------------------------------------------------------------------------------------------------------+
| Error | 3092 | The server is not configured properly to be an active member of the group. Please see more details on error log. |
+-------+------+------------------------------------------------------------------------------------------------------------------+
1 row in set (0.00 sec)

Node1>show warnings;
+-------+------+------------------------------------------------------------------------------------------------------------------+
| Level | Code | Message                                                                                                          |
+-------+------+------------------------------------------------------------------------------------------------------------------+
| Error | 3092 | The server is not configured properly to be an active member of the group. Please see more details on error log. |
+-------+------+------------------------------------------------------------------------------------------------------------------+
1 row in set (0.00 sec)

-- excerpts from node1's error log
2021-02-24T13:41:13.456268Z 56 [Warning] [MY-011682] [Repl] Plugin group_replication reported: 'Group Replication requires slave-preserve-commit-order to be set to ON when using more than 1 applier threads.'
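-- Recovery sketch, not executed in this run: per the MY-011682 warning above, the member should
-- be able to rejoin once slave_preserve_commit_order is restored, or once only a single applier
-- thread is configured (the check only triggers with more than one applier thread). Offered as a
-- hedged suggestion, not something verified in this reproduction.
SET GLOBAL slave_preserve_commit_order = ON;   -- restore the required setting
START GROUP_REPLICATION;

-- or, alternatively:
SET GLOBAL slave_parallel_workers = 1;         -- single applier thread; takes effect since the applier is stopped
START GROUP_REPLICATION;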