Description:
Greetings,
Updates to the my.cnf settings made through the mycnf option of an InnoDB cluster do not take effect after the initial cluster provisioning. The option works when the cluster is first provisioned, but the <cluster-name>-initconf configmap is not updated if the manifest is changed later and re-applied with commands such as kubectl apply or kubectl edit.
How to repeat:
Create an InnoDB cluster using the manifest below:
apiVersion: mysql.oracle.com/v2
kind: InnoDBCluster
metadata:
  name: innodb-test
spec:
  secretName: mypwds
  instances: 3
  version: 8.2.0
  tlsUseSelfSigned: true
  datadirVolumeClaimTemplate:
    accessModes:
      - ReadWriteOnce
    resources:
      requests:
        storage: 10Gi
  router:
    instances: 1
    version: 8.2.0
  mycnf: |
    [mysqld]
    max_connections=10000
    log_error_verbosity=2
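Apply the manifest with kubectl, for example (the file name innodb-test.yaml is just an illustration):

kubectl apply -f innodb-test.yaml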
The cluster is provisioned fine and the resulting innodb-test-initconf configmap contains the custom my.cnf options:
kubectl get cm innodb-test-initconf -o yaml | grep -E 'max_connections|log_error_verbosity'
max_connections=10000
log_error_verbosity=2
Now update the manifest, changing the values under the mycnf option, and re-apply it (command shown after the manifest):
apiVersion: mysql.oracle.com/v2
kind: InnoDBCluster
metadata:
  name: innodb-test
spec:
  secretName: mypwds
  instances: 3
  version: 8.2.0
  tlsUseSelfSigned: true
  datadirVolumeClaimTemplate:
    accessModes:
      - ReadWriteOnce
    resources:
      requests:
        storage: 10Gi
  router:
    instances: 1
    version: 8.2.0
  mycnf: |
    [mysqld]
    max_connections=20000
    log_error_verbosity=1
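Re-apply the updated manifest the same way (again, the file name is only an example):

kubectl apply -f innodb-test.yaml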
Ideally, the operator should update the configmap and restart the cluster to bring the changes into effect, but nothing happens: the configmap remains unchanged. The only way to make the new settings take effect is to manually edit the configmap and then restart the cluster pods (a sketch of those commands follows the output below).
kubectl get cm innodb-test-initconf -o yaml | grep -E 'max_connections|log_error_verbosity'
max_connections=10000
log_error_verbosity=2
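For reference, a minimal sketch of the manual workaround, assuming the operator's server StatefulSet carries the cluster name (innodb-test); the exact resource names may differ in other setups:

# Edit the generated configmap by hand and change the my.cnf values
kubectl edit cm innodb-test-initconf

# Restart the server pods so they pick up the edited configmap
kubectl rollout restart statefulset innodb-test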