Bug #90468  "Access denied" when executing cluster.forceQuorumUsingPartitionOf
Submitted: 17 Apr 2018 9:30 Modified: 9 May 2018 11:18
Reporter: chen chen
Status: Analyzing
Category: Shell General / Core Client    Severity: S3 (Non-critical)
Version: 1.0.11, 8.0.4    OS: Red Hat (7.4)
Assigned to:    CPU Architecture: x86
Tags: InnoDB Cluster

[17 Apr 2018 9:30] chen chen
Description:
Basic Info:
MySQL 5.7.21, MySQL Shell 1.0.11 and 8.0.4

Here is what happened:

mysql-js> n1
cluster_admin:cluster_pass@192.168.244.10:3306
mysql-js> shell.connect(n1) 
Creating a Session to 'cluster_admin@192.168.244.10:3306'
Your MySQL connection id is 210
No default schema selected; type \use <schema> to set one.
mysql-js> cluster=dba.getCluster()
<Cluster:mycluster>

I can use cluster_admin to log in to 192.168.244.10.

Now I execute "cluster.forceQuorumUsingPartitionOf". First I used n1, but it displayed "Access denied for user 'root'@'192.168.244.10'". Then I added the root password '123456', but it displayed "Access denied for user 'cluster_admin'@'192.168.244.10'":

mysql-js> cluster.forceQuorumUsingPartitionOf(n1)
Restoring replicaset 'default' from loss of quorum, by using the partition composed of [192.168.244.10:3306,192.168.244.30:3306]

Restoring the InnoDB cluster ...

Cluster.forceQuorumUsingPartitionOf: Access denied for user 'root'@'192.168.244.10' (using password: YES) (MySQL Error 1045)
mysql-js> cluster.forceQuorumUsingPartitionOf(n1,'123456')
Restoring replicaset 'default' from loss of quorum, by using the partition composed of [192.168.244.10:3306,192.168.244.30:3306]

Restoring the InnoDB cluster ...

Cluster.forceQuorumUsingPartitionOf: Access denied for user 'cluster_admin'@'192.168.244.10' (using password: YES) (MySQL Error 1045)

I tried another login method (a connection dictionary), but it showed the same error:

mysql-js> cluster.forceQuorumUsingPartitionOf({host:'192.168.244.10', user:'cluster_admin',port: 3306,password:'cluster_pass'})
Restoring replicaset 'default' from loss of quorum, by using the partition composed of [192.168.244.10:3306,192.168.244.30:3306]

Restoring the InnoDB cluster ...

Cluster.forceQuorumUsingPartitionOf: Access denied for user 'root'@'192.168.244.10' (using password: YES) (MySQL Error 1045)

Only the method below gets past the access-denied error, but it then fails with a different error:

mysql-js> cluster.forceQuorumUsingPartitionOf({host:'192.168.244.10', user:'root',port:3306,password:'123456'})
Restoring replicaset 'default' from loss of quorum, by using the partition composed of [192.168.244.10:3306,192.168.244.30:3306]

Restoring the InnoDB cluster ...

Cluster.forceQuorumUsingPartitionOf: Variable 'group_replication_force_members' can't be set to the value of '192.168.244.10:13306,192.168.244.30:13306' (MySQL Error 1231)

How to repeat:
Follow the session shown in the description above.
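
For reference, a condensed sketch of the failing calls from that session (the n1 URI and the passwords are the ones shown above; the comments summarize the output already pasted):

shell.connect('cluster_admin:cluster_pass@192.168.244.10:3306')
cluster = dba.getCluster()
cluster.forceQuorumUsingPartitionOf('cluster_admin:cluster_pass@192.168.244.10:3306')            // Access denied for 'root'@'192.168.244.10' (error 1045)
cluster.forceQuorumUsingPartitionOf('cluster_admin:cluster_pass@192.168.244.10:3306', '123456')  // Access denied for 'cluster_admin'@'192.168.244.10' (error 1045)
cluster.forceQuorumUsingPartitionOf({host:'192.168.244.10', user:'root', port:3306, password:'123456'})  // passes authentication, then fails with error 1231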
[5 May 2018 0:10] MySQL Verification Team
Hi,
Thanks for the report, verified as described.

All the best,
Bogdan
[9 May 2018 11:05] Miguel Araujo
Posted by developer:
 
Hi Bogdan,

I could not reproduce this issue using the latest version of the Shell: 8.0.11.

The steps were the following:

1) Deployed 3 instances and created the admin user by using dba.configureInstance("root@localhost:3310", {clusterAdmin: "foo", clusterAdminPassword: "bar"})
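
In sandbox terms, that step could look roughly like the following (a sketch, not the exact commands used; the ports 3310/3320/3330 are implied by the status output below, and the root password is arbitrary):

dba.deploySandboxInstance(3310, {password: "root"})
dba.deploySandboxInstance(3320, {password: "root"})
dba.deploySandboxInstance(3330, {password: "root"})
dba.configureInstance("root@localhost:3310", {clusterAdmin: "foo", clusterAdminPassword: "bar"})
// presumably similar configureInstance calls for localhost:3320 and localhost:3330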

2) Connected to the first instance using the new admin user: \c foo@localhost:3310

3) Created the cluster: dba.createCluster("testCluster")
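
The remaining two members were presumably added to the cluster at this point; a sketch of that (addInstance is the standard AdminAPI call, ports taken from the status output below):

var c = dba.createCluster("testCluster")   // 'c' is the cluster handle used in the calls below
c.addInstance("foo@localhost:3320")
c.addInstance("foo@localhost:3330")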

4) Simulated a quorum loss by killing 2 instances of the cluster:
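
If the members are sandbox instances, the kill step could be done along these lines (a sketch):

dba.killSandboxInstance(3320)
dba.killSandboxInstance(3330)

The cluster status then showed: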

c.status()
{
    "clusterName": "testCluster", 
    "defaultReplicaSet": {
        "name": "default", 
        "primary": "localhost:3310", 
        "ssl": "REQUIRED", 
        "status": "NO_QUORUM", 
        "statusText": "Cluster has no quorum as visible from 'localhost:3310' and cannot process write transactions. 2 members are not active", 
        "topology": {
            "localhost:3310": {
                "address": "localhost:3310", 
                "mode": "R/W", 
                "readReplicas": {}, 
                "role": "HA", 
                "status": "ONLINE"
            }, 
            "localhost:3320": {
                "address": "localhost:3320", 
                "mode": "R/O", 
                "readReplicas": {}, 
                "role": "HA", 
                "status": "UNREACHABLE"
            }, 
            "localhost:3330": {
                "address": "localhost:3330", 
                "mode": "R/O", 
                "readReplicas": {}, 
                "role": "HA", 
                "status": "UNREACHABLE"
            }
        }
    }, 
    "groupInformationSourceMember": "mysql://foo@localhost:3310"
}

5) Restored the quorum using <Cluster>.forceQuorumUsingPartitionOf()

c.forceQuorumUsingPartitionOf("foo@localhost:3310")
Restoring replicaset 'default' from loss of quorum, by using the partition composed of [localhost:3310]

Please provide the password for 'foo@localhost:3310': ***
Restoring the InnoDB cluster ...

The InnoDB cluster was successfully restored using the partition from the instance 'foo@localhost:3310'.

WARNING: To avoid a split-brain scenario, ensure that all other members of the replicaset are removed or joined back to the group that was restored.

6) Rejoined the instances to the cluster and simulated the quorum loss again:
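
A sketch of that step (startSandboxInstance/killSandboxInstance again assume sandbox instances; rejoinInstance is the standard AdminAPI call):

dba.startSandboxInstance(3320)
dba.startSandboxInstance(3330)
c.rejoinInstance("foo@localhost:3320")
c.rejoinInstance("foo@localhost:3330")
dba.killSandboxInstance(3320)
dba.killSandboxInstance(3330)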

7) Restored the quorum again, but using the 'root' user this time:

c.forceQuorumUsingPartitionOf("root@localhost:3310")
Restoring replicaset 'default' from loss of quorum, by using the partition composed of [localhost:3310]

Please provide the password for 'root@localhost:3310': ***
Restoring the InnoDB cluster ...

The InnoDB cluster was successfully restored using the partition from the instance 'root@localhost:3310'.

WARNING: To avoid a split-brain scenario, ensure that all other members of the replicaset are removed or joined back to the group that was restored.

...

How did you create the cluster? And how did you create the cluster admin user?

Please verify and provide all the relevant steps if the issue is still seen.