## Setup: 3-node cluster, Router, then follow the steps from the report to reproduce

export PATH=$PATH:/export/home/tmp/ushastry/mysql-8.0.29/bin:/export/home/tmp/ushastry/mysql-shell-8.0.29GA/bin

bin/mysqlsh --log-level=debug3
MySQL Shell 8.0.29

Copyright (c) 2016, 2022, Oracle and/or its affiliates.
Oracle is a registered trademark of Oracle Corporation and/or its affiliates.
Other names may be trademarks of their respective owners.
Type '\help' or '\?' for help; '\quit' to exit.

MySQL JS > dba.deploySandboxInstance(3310)
A new MySQL sandbox instance will be created on this host in
/home/umshastr/mysql-sandboxes/3310

Warning: Sandbox instances are only suitable for deploying and
running on your local machine for testing purposes and are not
accessible from external networks.

Please enter a MySQL root password for the new instance:
Deploying new MySQL instance...

Instance localhost:3310 successfully deployed and started.
Use shell.connect('root@localhost:3310') to connect to the instance.

MySQL JS > dba.deploySandboxInstance(3320)
A new MySQL sandbox instance will be created on this host in
/home/umshastr/mysql-sandboxes/3320

Warning: Sandbox instances are only suitable for deploying and
running on your local machine for testing purposes and are not
accessible from external networks.

Please enter a MySQL root password for the new instance:
Deploying new MySQL instance...

Instance localhost:3320 successfully deployed and started.
Use shell.connect('root@localhost:3320') to connect to the instance.

MySQL JS > dba.deploySandboxInstance(3330)
A new MySQL sandbox instance will be created on this host in
/home/umshastr/mysql-sandboxes/3330

Warning: Sandbox instances are only suitable for deploying and
running on your local machine for testing purposes and are not
accessible from external networks.

Please enter a MySQL root password for the new instance:
Deploying new MySQL instance...

Instance localhost:3330 successfully deployed and started.
Use shell.connect('root@localhost:3330') to connect to the instance.

MySQL JS > \connect root@localhost:3310
Creating a session to 'root@localhost:3310'
Please provide the password for 'root@localhost:3310':
Save password for 'root@localhost:3310'? [Y]es/[N]o/Ne[v]er (default No):
Fetching schema names for autocompletion... Press ^C to stop.
Your MySQL connection id is 11
Server version: 8.0.29 MySQL Community Server - GPL
No default schema selected; type \use <schema> to set one.

MySQL localhost:3310 ssl JS > cluster = dba.createCluster("myCluster")
A new InnoDB cluster will be created on instance 'localhost:3310'.

Validating instance configuration at localhost:3310...
NOTE: Instance detected as a sandbox.
Please note that sandbox instances are only suitable for deploying test clusters for use within the same host.

This instance reports its own address as 127.0.0.1:3310

Instance configuration is suitable.
NOTE: Group Replication will communicate with other members using '127.0.0.1:33101'. Use the localAddress option to override.

Creating InnoDB cluster 'myCluster' on '127.0.0.1:3310'...

Adding Seed Instance...
Cluster successfully created. Use Cluster.addInstance() to add MySQL instances.
At least 3 instances are needed for the cluster to be able to withstand up to one server failure.

MySQL localhost:3310 ssl JS > cluster.addInstance("root@localhost:3320")
NOTE: The target instance '127.0.0.1:3320' has not been pre-provisioned (GTID set is empty). The Shell is unable to decide whether incremental state recovery can correctly provision it.
The safest and most convenient way to provision a new instance is through automatic clone provisioning, which will completely overwrite the state of '127.0.0.1:3320' with a physical snapshot from an existing cluster member. To use this method by default, set the 'recoveryMethod' option to 'clone'.
The incremental state recovery may be safely used if you are sure all updates ever executed in the cluster were done with GTIDs enabled, there are no purged transactions and the new instance contains the same GTID set as the cluster or a subset of it. To use this method by default, set the 'recoveryMethod' option to 'incremental'.

Please select a recovery method [C]lone/[I]ncremental recovery/[A]bort (default Clone):
Validating instance configuration at localhost:3320...
NOTE: Instance detected as a sandbox.
Please note that sandbox instances are only suitable for deploying test clusters for use within the same host.

This instance reports its own address as 127.0.0.1:3320

Instance configuration is suitable.
NOTE: Group Replication will communicate with other members using '127.0.0.1:33201'. Use the localAddress option to override.

A new instance will be added to the InnoDB cluster. Depending on the amount of data on the cluster this might take from a few seconds to several hours.

Adding instance to the cluster...

Monitoring recovery process of the new cluster member. Press ^C to stop monitoring and let it continue in background.
Clone based state recovery is now in progress.

NOTE: A server restart is expected to happen as part of the clone process. If the server does not support the RESTART command or does not come back after a while, you may need to manually start it back.

* Waiting for clone to finish...
NOTE: 127.0.0.1:3320 is being cloned from 127.0.0.1:3310
** Stage DROP DATA: Completed
** Clone Transfer
    FILE COPY  ############################################################  100%  Completed
    PAGE COPY  ############################################################  100%  Completed
    REDO COPY  ############################################################  100%  Completed

NOTE: 127.0.0.1:3320 is shutting down...

* Waiting for server restart... ready
* 127.0.0.1:3320 has restarted, waiting for clone to finish...
** Stage RESTART: Completed
* Clone process has finished: 72.61 MB transferred in about 1 second (~72.61 MB/s)

State recovery already finished for '127.0.0.1:3320'
The instance '127.0.0.1:3320' was successfully added to the cluster.

MySQL localhost:3310 ssl JS > cluster.addInstance("root@localhost:3330")
NOTE: The target instance '127.0.0.1:3330' has not been pre-provisioned (GTID set is empty). The Shell is unable to decide whether incremental state recovery can correctly provision it.
The safest and most convenient way to provision a new instance is through automatic clone provisioning, which will completely overwrite the state of '127.0.0.1:3330' with a physical snapshot from an existing cluster member. To use this method by default, set the 'recoveryMethod' option to 'clone'.

The incremental state recovery may be safely used if you are sure all updates ever executed in the cluster were done with GTIDs enabled, there are no purged transactions and the new instance contains the same GTID set as the cluster or a subset of it. To use this method by default, set the 'recoveryMethod' option to 'incremental'.

Please select a recovery method [C]lone/[I]ncremental recovery/[A]bort (default Clone):
Validating instance configuration at localhost:3330...
NOTE: Instance detected as a sandbox.
Please note that sandbox instances are only suitable for deploying test clusters for use within the same host.

This instance reports its own address as 127.0.0.1:3330

Instance configuration is suitable.
NOTE: Group Replication will communicate with other members using '127.0.0.1:33301'. Use the localAddress option to override.

A new instance will be added to the InnoDB cluster. Depending on the amount of data on the cluster this might take from a few seconds to several hours.

Adding instance to the cluster...

Monitoring recovery process of the new cluster member. Press ^C to stop monitoring and let it continue in background.
Clone based state recovery is now in progress.
NOTE: A server restart is expected to happen as part of the clone process. If the server does not support the RESTART command or does not come back after a while, you may need to manually start it back.

* Waiting for clone to finish...
NOTE: 127.0.0.1:3330 is being cloned from 127.0.0.1:3320
** Stage DROP DATA: Completed
** Clone Transfer
    FILE COPY  ############################################################  100%  Completed
    PAGE COPY  ############################################################  100%  Completed
    REDO COPY  ############################################################  100%  Completed

NOTE: 127.0.0.1:3330 is shutting down...

* Waiting for server restart... ready
* 127.0.0.1:3330 has restarted, waiting for clone to finish...
** Stage RESTART: Completed
* Clone process has finished: 72.61 MB transferred in 40 sec (1.82 MB/s)

State recovery already finished for '127.0.0.1:3330'
The instance '127.0.0.1:3330' was successfully added to the cluster.

MySQL localhost:3310 ssl JS > cluster.status()
{
    "clusterName": "myCluster",
    "defaultReplicaSet": {
        "name": "default",
        "primary": "127.0.0.1:3310",
        "ssl": "REQUIRED",
        "status": "OK",
        "statusText": "Cluster is ONLINE and can tolerate up to ONE failure.",
        "topology": {
            "127.0.0.1:3310": {
                "address": "127.0.0.1:3310",
                "memberRole": "PRIMARY",
                "mode": "R/W",
                "readReplicas": {},
                "replicationLag": null,
                "role": "HA",
                "status": "ONLINE",
                "version": "8.0.29"
            },
            "127.0.0.1:3320": {
                "address": "127.0.0.1:3320",
                "memberRole": "SECONDARY",
                "mode": "R/O",
                "readReplicas": {},
                "replicationLag": null,
                "role": "HA",
                "status": "ONLINE",
                "version": "8.0.29"
            },
            "127.0.0.1:3330": {
                "address": "127.0.0.1:3330",
                "memberRole": "SECONDARY",
                "mode": "R/O",
                "readReplicas": {},
                "replicationLag": null,
                "role": "HA",
                "status": "ONLINE",
                "version": "8.0.29"
            }
        },
        "topologyMode": "Single-Primary"
    },
    "groupInformationSourceMember": "127.0.0.1:3310"
}
MySQL localhost:3310 ssl JS >

######### Set Up the Router

bin/mysqlrouter --bootstrap
root@localhost:3310 --directory /export/home/tmp/ushastry/mysql-router-8.0.29/107651 --conf-use-sockets --force
Please enter MySQL password for root:
# Bootstrapping MySQL Router instance at '/export/home/tmp/ushastry/mysql-router-8.0.29/107651'...

- Creating account(s) (only those that are needed, if any)
- Verifying account (using it to run SQL queries that would be run by Router)
- Storing account in keyring
- Adjusting permissions of generated files
- Creating configuration /export/home/tmp/ushastry/mysql-router-8.0.29/107651/mysqlrouter.conf

# MySQL Router configured for the InnoDB Cluster 'myCluster'

After this MySQL Router has been started with the generated configuration

    $ bin/mysqlrouter -c /export/home/tmp/ushastry/mysql-router-8.0.29/107651/mysqlrouter.conf

InnoDB Cluster 'myCluster' can be reached by connecting to:

## MySQL Classic protocol

- Read/Write Connections: localhost:6446, /export/home/tmp/ushastry/mysql-router-8.0.29/107651/mysql.sock
- Read/Only Connections: localhost:6447, /export/home/tmp/ushastry/mysql-router-8.0.29/107651/mysqlro.sock

## MySQL X protocol

- Read/Write Connections: localhost:6448, /export/home/tmp/ushastry/mysql-router-8.0.29/107651/mysqlx.sock
- Read/Only Connections: localhost:6449, /export/home/tmp/ushastry/mysql-router-8.0.29/107651/mysqlxro.sock

---- start router

./start.sh
PID 6511 written to '/export/home/tmp/ushastry/mysql-router-8.0.29/107651/mysqlrouter.pid'
stopping to log to the console. Continuing to log to filelog

######## Follow steps from report

1. Connect to MySQL Router on port 6446
2. Create a new user

bin/mysql -u root -p -S /export/home/tmp/ushastry/mysql-router-8.0.29/107651/mysql.sock
Enter password:
Welcome to the MySQL monitor.  Commands end with ; or \g.
Your MySQL connection id is 627
Server version: 8.0.29 MySQL Community Server - GPL

Copyright (c) 2000, 2022, Oracle and/or its affiliates.

Oracle is a registered trademark of Oracle Corporation and/or its affiliates.
Other names may be trademarks of their respective owners.

Type 'help;' or '\h' for help. Type '\c' to clear the current input statement.

mysql> SELECT @@port;
+--------+
| @@port |
+--------+
|   3310 |
+--------+
1 row in set (0.00 sec)

mysql> CREATE USER IF NOT EXISTS 'connecttest'@'%' IDENTIFIED BY 'mysql123';
Query OK, 0 rows affected (0.01 sec)

mysql> show grants for 'connecttest'@'%';
+-----------------------------------------+
| Grants for connecttest@%                |
+-----------------------------------------+
| GRANT USAGE ON *.* TO `connecttest`@`%` |
+-----------------------------------------+
1 row in set (0.00 sec)

3. Connect via socket (the password is correct)

bin/mysql -u connecttest -p -S /export/home/tmp/ushastry/mysql-router-8.0.29/107651/mysql.sock
Enter password:
ERROR 1045 (28000): Access denied for user 'connecttest'@'localhost' (using password: YES)

4. Connect via TCP

bin/mysql -u connecttest -p -h localhost -P 6446 --protocol=tcp
Enter password:
Welcome to the MySQL monitor.  Commands end with ; or \g.
Your MySQL connection id is 1362
Server version: 8.0.29 MySQL Community Server - GPL

Copyright (c) 2000, 2022, Oracle and/or its affiliates.

Oracle is a registered trademark of Oracle Corporation and/or its affiliates.
Other names may be trademarks of their respective owners.

Type 'help;' or '\h' for help. Type '\c' to clear the current input statement.

mysql>

5. Connect via socket - at this point the connection starts working

bin/mysql -u connecttest -p -S /export/home/tmp/ushastry/mysql-router-8.0.29/107651/mysql.sock
Enter password:
Welcome to the MySQL monitor.  Commands end with ; or \g.
Your MySQL connection id is 1436
Server version: 8.0.29 MySQL Community Server - GPL

Copyright (c) 2000, 2022, Oracle and/or its affiliates.

Oracle is a registered trademark of Oracle Corporation and/or its affiliates.
Other names may be trademarks of their respective owners.

Type 'help;' or '\h' for help. Type '\c' to clear the current input statement.

mysql>

6.
Connect via RO socket - this never works

bin/mysql -u connecttest -p -S /export/home/tmp/ushastry/mysql-router-8.0.29/107651/mysqlro.sock
Enter password:
ERROR 1045 (28000): Access denied for user 'connecttest'@'localhost' (using password: YES)
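A plausible reading of steps 3-6 (a sketch of a hypothesis, not verified against the Router sources): 'connecttest' was created with the 8.0 default plugin, caching_sha2_password, which requires a secure transport (TLS) or an RSA key exchange for the first full authentication; once full authentication succeeds on a server, that server keeps an in-memory fast-auth cache that later connections can use. That would explain why the socket path starts working only after the TCP connection in step 4 (which reaches the primary, 3310), and why the RO socket keeps failing (it routes to the secondaries, 3320/3330, whose caches were never populated). The following SQL, run against the primary, checks the plugin; the commented ALTER USER line is a diagnostic assumption, not a suggested fix:

```sql
-- Confirm which authentication plugin the account uses
-- (expected on a default 8.0.29 install: caching_sha2_password).
SELECT user, host, plugin
FROM mysql.user
WHERE user = 'connecttest';

-- Diagnostic only (assumption, not part of the original report):
-- if the failures disappear after switching the plugin, the problem
-- is in the caching_sha2_password full-authentication path over the
-- Router's Unix sockets rather than in the credentials themselves.
-- ALTER USER 'connecttest'@'%' IDENTIFIED WITH mysql_native_password BY 'mysql123';
```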