Bug #107061 Can't create ClusterSet
Submitted: 20 Apr 2022 2:31 Modified: 21 Apr 2022 15:12
Reporter: Marcos Albe (OCA)
Status: Duplicate
Category: Shell AdminAPI InnoDB Cluster / ReplicaSet
Severity: S2 (Serious)
Version: 8.0.27
OS: Any
Assigned to:
CPU Architecture: Any

[20 Apr 2022 2:31] Marcos Albe
Description:
After performing a rolling upgrade of my InnoDB Cluster from 8.0.26 to 8.0.27, I'm unable to convert it into a ClusterSet for the later adoption of a secondary cluster.

Once the upgrade was complete I ran dba.upgradeMetadata(), and then attempted:

c = dba.getCluster();
c.createClusterSet('cs1');
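
As a point of reference, the setting that the ClusterSet compliance check complains about can be inspected directly on each member (a sketch; the variable name is taken from the error message below, and on an unconfigured member it should show its default rather than a generated UUID):

```sql
-- Check whether the Group Replication view-change UUID is configured
-- on this member; an unconfigured value is what makes
-- createClusterSet() refuse to proceed.
SELECT @@GLOBAL.group_replication_view_change_uuid;
```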

This fails with:
=======================================================================
 MySQL  127.0.0.1:33060+ ssl  JS > c.createClusterSet('cs1');
A new ClusterSet will be created based on the Cluster 'cluster1'.

* Validating Cluster 'cluster1' for ClusterSet compliance.

ERROR: The cluster is not configured to use group_replication_view_change_uuid. Please use <Cluster>.rescan() to repair the issue.
Cluster.createClusterSet: group_replication_view_change_uuid not configured. (MYSQLSH 51609)
=======================================================================

So I run c.rescan():
=======================================================================
MySQL  127.0.0.1:33060+ ssl  JS > c.rescan()
Rescanning the cluster...

Result of the rescanning operation for the 'cluster1' cluster:
{
    "name": "cluster1",
    "newTopologyMode": null,
    "newlyDiscoveredInstances": [],
    "unavailableInstances": [],
    "updatedInstances": []
}

NOTE: The Cluster's group_replication_view_change_uuid is not set
Generating and setting a value for group_replication_view_change_uuid...
WARNING: The Cluster must be completely taken OFFLINE and restarted (dba.rebootClusterFromCompleteOutage()) for the settings to be effective
Updating group_replication_view_change_uuid in the Cluster's metadata...
=======================================================================
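
Since the warning says the new value only takes effect after a full restart, one way to confirm it was at least written out on each member before taking the cluster down is to look at performance_schema.persisted_variables (my assumption here is that rescan() persists the setting via SET PERSIST_ONLY, which is what that table reflects):

```sql
-- Confirm the generated view-change UUID was persisted on this member;
-- persisted_variables reflects the contents of mysqld-auto.cnf, i.e.
-- settings that will be applied on the next restart.
SELECT variable_name, variable_value
  FROM performance_schema.persisted_variables
 WHERE variable_name = 'group_replication_view_change_uuid';
```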

To take the cluster completely OFFLINE I run "STOP GROUP_REPLICATION" on every node, and once the last node is stopped, I run dba.rebootClusterFromCompleteOutage():
=======================================================================
MySQL  127.0.0.1:33060+ ssl  JS > \sql stop group_replication;
Query OK, 0 rows affected (4.1040 sec)

 MySQL  127.0.0.1:33060+ ssl  JS > dba.rebootClusterFromCompleteOutage()
Restoring the cluster 'cluster1' from complete outage...

The instance '10.124.33.180:3306' was part of the cluster configuration.
Would you like to rejoin it to the cluster? [y/N]: y

The instance '10.124.33.34:3306' was part of the cluster configuration.
Would you like to rejoin it to the cluster? [y/N]: y

y
* Waiting for seed instance to become ONLINE...
10.124.33.254:3306 was restored.
Rejoining '10.124.33.180:3306' to the cluster.
Rejoining instance '10.124.33.180:3306' to cluster 'cluster1'...

The instance '10.124.33.180:3306' was successfully rejoined to the cluster.

Rejoining '10.124.33.34:3306' to the cluster.
Rejoining instance '10.124.33.34:3306' to cluster 'cluster1'...

The instance '10.124.33.34:3306' was successfully rejoined to the cluster.

The cluster was successfully rebooted.

<Cluster:cluster1>
=======================================================================

That appears to have worked, so I attempt createClusterSet() again, but the same error occurs:
=======================================================================
MySQL  127.0.0.1:33060+ ssl  JS > c = dba.getCluster();
<Cluster:cluster1>
 MySQL  127.0.0.1:33060+ ssl  JS > c.status()
{
    "clusterName": "cluster1",
    "defaultReplicaSet": {
        "name": "default",
        "primary": "10.124.33.254:3306",
        "ssl": "REQUIRED",
        "status": "OK",
        "statusText": "Cluster is ONLINE and can tolerate up to ONE failure.",
        "topology": {
            "10.124.33.180": {
                "address": "10.124.33.180:3306",
                "memberRole": "SECONDARY",
                "mode": "R/O",
                "readReplicas": {},
                "replicationLag": null,
                "role": "HA",
                "status": "ONLINE",
                "version": "8.0.27"
            },
            "10.124.33.254:3306": {
                "address": "10.124.33.254:3306",
                "memberRole": "PRIMARY",
                "mode": "R/W",
                "readReplicas": {},
                "replicationLag": null,
                "role": "HA",
                "status": "ONLINE",
                "version": "8.0.27"
            },
            "10.124.33.34": {
                "address": "10.124.33.34:3306",
                "memberRole": "SECONDARY",
                "mode": "R/O",
                "readReplicas": {},
                "replicationLag": null,
                "role": "HA",
                "status": "ONLINE",
                "version": "8.0.27"
            }
        },
        "topologyMode": "Single-Primary"
    },
    "groupInformationSourceMember": "10.124.33.254:3306"
}

MySQL  127.0.0.1:33060+ ssl  JS > c.createClusterSet('cs1');
A new ClusterSet will be created based on the Cluster 'cluster1'.

* Validating Cluster 'cluster1' for ClusterSet compliance.

ERROR: The cluster is not configured to use group_replication_view_change_uuid. Please use <Cluster>.rescan() to repair the issue.
Cluster.createClusterSet: group_replication_view_change_uuid not configured. (MYSQLSH 51609)
======================================================================= 
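
To illustrate where the inconsistency might be, one could compare the value recorded by rescan() in the cluster metadata with what the restarted group is actually running (the metadata location is my assumption about the mysql_innodb_cluster_metadata schema, not something confirmed above):

```sql
-- Value rescan() claims to have written into the Cluster's metadata
-- (assumed to live in the clusters table's JSON attributes column).
SELECT cluster_name,
       attributes->>'$.group_replication_view_change_uuid' AS metadata_uuid
  FROM mysql_innodb_cluster_metadata.clusters;

-- Value the running group is actually using after the reboot.
SELECT @@GLOBAL.group_replication_view_change_uuid AS runtime_uuid;
```

A mismatch between the two (or a default runtime value despite a populated metadata entry) would be consistent with rebootClusterFromCompleteOutage() not applying the persisted setting.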

How to repeat:
See above

Suggested fix:
Make it possible to move to a ClusterSet without having to destroy the cluster (dissolve/drop metadata), which induces longer outages.

It would also be lovely to have an online method that avoids re-bootstrapping the Router when we transition to a ClusterSet.
[21 Apr 2022 0:12] MySQL Verification Team
Hi Marcos,

Thanks for the report. Verified.
[21 Apr 2022 15:12] MySQL Verification Team
Hi,

This is fixed in 8.0.29.
[21 Apr 2022 15:15] MySQL Verification Team
Duplicate of Bug #106442