| Bug #116265 | Can't install helm chart with innodb-cluster | ||
|---|---|---|---|
| Submitted: | 28 Sep 2024 21:46 | Modified: | 30 Sep 2024 10:17 |
| Reporter: | Piotr G | Email Updates: | |
| Status: | Closed | Impact on me: | |
| Category: | Shell AdminAPI InnoDB Cluster / ReplicaSet | Severity: | S3 (Non-critical) |
| Version: | 2.2.1 | OS: | Ubuntu |
| Assigned to: | MySQL Verification Team | CPU Architecture: | x86 |
[28 Sep 2024 21:47]
Piotr G
Update helm chart version
[28 Sep 2024 22:42]
Piotr G
I think the following has to be added:
initdb-localroot.sql: |
  set sql_log_bin=0;
  INSTALL PLUGIN auth_socket SONAME 'auth_socket.so';
  # Create socket authenticated localroot@localhost account
  CREATE USER localroot@localhost IDENTIFIED WITH auth_socket AS 'mysql';
  GRANT ALL ON *.* TO localroot@localhost WITH GRANT OPTION;
  GRANT PROXY ON ''@'' TO localroot@localhost WITH GRANT OPTION;
  # Drop the default account created by the docker image
  DROP USER IF EXISTS healthchecker@localhost;
  # Create account for liveness probe
  CREATE USER mysqlhealthchecker@localhost IDENTIFIED WITH auth_socket AS 'mysql';
  set sql_log_bin=1;
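A quick way to confirm the plugin actually loaded after this change (a sketch: the pod name is taken from the logs in the description, the server container name 'mysql' is an assumption, and it assumes the container's default OS user is the 'mysql' user that the socket-authenticated account maps to):

```sh
# Verify that auth_socket is loaded and ACTIVE inside the server container
kubectl exec innodb-cluster-0 -c mysql -n default -- mysql -u localroot \
  -e "SELECT PLUGIN_NAME, PLUGIN_STATUS FROM INFORMATION_SCHEMA.PLUGINS WHERE PLUGIN_NAME = 'auth_socket';"
```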
[29 Sep 2024 10:03]
Piotr G
After fixing the auth_socket.so issue, I now get a different error: the operator does not create the InnoDB Cluster and does not create the MySQL Router.
Logs from mysql operator:
[2024-09-29 10:02:11,020] kopf.objects [INFO ] on_pod_create: cluster create time None
on_pod_create: first pod created
on_pod_created: probing cluster
on_pod_created: pod=my-mysql-innodbcluster-0 primary=None cluster_state=ClusterDiagStatus.INITIALIZING
Time to create the cluster
[2024-09-29 10:02:11,206] kopf.objects [INFO ] cluster probe: status=ClusterDiagStatus.INITIALIZING online=[]
[2024-09-29 10:02:11,209] kopf.objects [INFO ] Creating cluster at my-mysql-innodbcluster-0
[2024-09-29 10:02:11,210] kopf.objects [INFO ] Using PASSWORD GR authentication
[2024-09-29 10:02:11,332] kopf.objects [INFO ] server_id=1 server_uuid=657096f4-7e49-11ef-8c03-c295e64b59d5 report_host=None gtid_executed=None gtid_purged=None
[2024-09-29 10:02:11,335] kopf.objects [INFO ] CREATE CLUSTER: seed=my-mysql-innodbcluster-0, options={'gtidSetIsComplete': True, 'manualStartOnBoot': True, 'memberSslMode': 'REQUIRED', 'exitStateAction': 'ABORT_SERVER'}
A new InnoDB Cluster will be created on instance 'my-mysql-innodbcluster-0:3306'.
Validating instance configuration at my-mysql-innodbcluster-0.my-mysql-innodbcluster-instances.default.svc.cluster.local:3306...
This instance reports its own address as my-mysql-innodbcluster-0:3306
NOTE: Some configuration options need to be fixed:
+--------------------------+---------------+----------------+--------------------------------------------------+
| Variable                 | Current Value | Required Value | Note                                             |
+--------------------------+---------------+----------------+--------------------------------------------------+
| enforce_gtid_consistency | OFF           | ON             | Restart the server                               |
| gtid_mode                | OFF           | ON             | Restart the server                               |
| server_id                | 1             | <unique ID>    | Update read-only variable and restart the server |
+--------------------------+---------------+----------------+--------------------------------------------------+
Some variables need to be changed, but cannot be done dynamically on the server.
NOTE: Please use the dba.configure_instance() command to repair these issues.
ERROR: Instance must be configured and validated with dba.check_instance_configuration() and dba.configure_instance() before it can be used in an InnoDB cluster.
[2024-09-29 10:02:11,479] kopf.objects [ERROR ] Handler 'on_pod_create' failed with an exception. Will retry.
Traceback (most recent call last):
File "/usr/lib/mysqlsh/python-packages/kopf/_core/actions/execution.py", line 279, in execute_handler_once
result = await invoke_handler(
File "/usr/lib/mysqlsh/python-packages/kopf/_core/actions/execution.py", line 374, in invoke_handler
result = await invocation.invoke(
File "/usr/lib/mysqlsh/python-packages/kopf/_core/actions/invocation.py", line 139, in invoke
await asyncio.shield(future) # slightly expensive: creates tasks
File "/usr/lib64/python3.9/concurrent/futures/thread.py", line 58, in run
result = self.fn(*self.args, **self.kwargs)
File "/usr/lib/mysqlsh/python-packages/mysqloperator/controller/innodbcluster/operator_cluster.py", line 827, in on_pod_create
cluster_ctl.on_pod_created(pod, logger)
File "/usr/lib/mysqlsh/python-packages/mysqloperator/controller/innodbcluster/cluster_controller.py", line 755, in on_pod_created
shellutils.RetryLoop(logger).call(self.create_cluster, pod, logger)
File "/usr/lib/mysqlsh/python-packages/mysqloperator/controller/shellutils.py", line 93, in call
return f(*args)
File "/usr/lib/mysqlsh/python-packages/mysqloperator/controller/innodbcluster/cluster_controller.py", line 285, in create_cluster
self.dba_cluster = dba.create_cluster(
RuntimeError: Dba.create_cluster: Instance check failed
[2024-09-29 10:02:11,535] kopf.objects [WARNING ] Patching failed with inconsistencies: (('remove', ('status', 'kopf'), {'progress': {'on_pod_create': {'started': '2024-09-29T09:58:08.491889', 'stopped': None, 'delayed': '2024-09-29T10:03:11.479937', 'purpose': 'create', 'retries': 6, 'success': False, 'failure': False, 'message': 'Dba.create_cluster: Instance check failed\n', 'subrefs': None}}}, None),)
[2024-09-29 10:02:11,666] kopf.objects [INFO ] Handler 'on_pod_event' succeeded.
[2024-09-29 10:02:18,213] kopf.objects [WARNING ] Patching failed with inconsistencies: (('remove', ('status', 'kopf'), {'dummy': '2024-09-29T10:02:18.161294'}, None),)
[2024-09-29 10:02:18,342] kopf.objects [INFO ] Handler 'on_pod_event' succeeded.
on_pod_create: pod=my-mysql-innodbcluster-2 ContainersReady=True Ready=False gate[configured]=True
[2024-09-29 10:02:18,372] kopf.objects [INFO ] on_pod_create: cluster create time None
on_pod_created: probing cluster
on_pod_created: pod=my-mysql-innodbcluster-2 primary=None cluster_state=ClusterDiagStatus.INITIALIZING
[2024-09-29 10:02:18,461] kopf.objects [INFO ] cluster probe: status=ClusterDiagStatus.INITIALIZING online=[]
[2024-09-29 10:02:18,465] kopf.objects [ERROR ] Handler 'on_pod_create' failed temporarily: Cluster is not yet ready
[2024-09-29 10:02:18,535] kopf.objects [WARNING ] Patching failed with inconsistencies: (('remove', ('status', 'kopf'), {'progress': {'on_pod_create': {'started': '2024-09-29T09:58:08.600874', 'stopped': None, 'delayed': '2024-09-29T10:02:33.465453', 'purpose': 'create', 'retries': 16, 'success': False, 'failure': False, 'message': 'Cluster is not yet ready', 'subrefs': None}}}, None),)
[2024-09-29 10:02:18,665] kopf.objects [INFO ] Handler 'on_pod_event' succeeded.
[2024-09-29 10:02:23,271] kopf.objects [WARNING ] Patching failed with inconsistencies: (('remove', ('status', 'kopf'), {'dummy': '2024-09-29T10:02:23.206974'}, None),)
[2024-09-29 10:02:23,401] kopf.objects [INFO ] Handler 'on_pod_event' succeeded.
on_pod_create: pod=my-mysql-innodbcluster-1 ContainersReady=True Ready=False gate[configured]=True
[2024-09-29 10:02:23,430] kopf.objects [INFO ] on_pod_create: cluster create time None
on_pod_created: probing cluster
on_pod_created: pod=my-mysql-innodbcluster-1 primary=None cluster_state=ClusterDiagStatus.INITIALIZING
[2024-09-29 10:02:23,519] kopf.objects [INFO ] cluster probe: status=ClusterDiagStatus.INITIALIZING online=[]
[2024-09-29 10:02:23,522] kopf.objects [ERROR ] Handler 'on_pod_create' failed temporarily: Cluster is not yet ready
[2024-09-29 10:02:23,576] kopf.objects [WARNING ] Patching failed with inconsistencies: (('remove', ('status', 'kopf'), {'progress': {'on_pod_create': {'started': '2024-09-29T09:58:08.555028', 'stopped': None, 'delayed': '2024-09-29T10:02:38.523211', 'purpose': 'create', 'retries': 16, 'success': False, 'failure': False, 'message': 'Cluster is not yet ready', 'subrefs': None}}}, None),)
[2024-09-29 10:02:23,705] kopf.objects [INFO ] Handler 'on_pod_event' succeeded.
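For reference, the repair the operator log asks for can be run by hand from inside the seed pod (a sketch; the 'mysql' container name is an assumption, while dba.check_instance_configuration() and dba.configure_instance() are the AdminAPI calls named in the log itself):

```sh
# Open MySQL Shell in Python mode inside the seed pod (container name assumed)
kubectl exec -it my-mysql-innodbcluster-0 -c mysql -n default -- mysqlsh --py --uri root@localhost
# Then, at the mysqlsh prompt:
#   dba.check_instance_configuration()   # reports the gtid_mode / enforce_gtid_consistency / server_id issues above
#   dba.configure_instance()             # persists the required values; the server still needs a restart
```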
[29 Sep 2024 10:45]
Piotr G
This is a very strange issue, because it occurs only in a kind cluster (Kubernetes in Docker). Installing the Helm chart in the cluster emulator does not work, and I have tried different versions of kind.
[29 Sep 2024 11:10]
Piotr G
Issue resolved. There was a problem with mysql-router: I had installed mysql-router on the host. After I uninstalled mysql-router from the host, everything works fine. If you have a similar issue, please check AppArmor; more info: https://github.com/moby/moby/issues/7512#issuecomment-61787845
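For anyone debugging a similar setup, these standard AppArmor commands show whether a host profile is interfering (the usr.sbin.mysqld profile name is an assumption based on the linked issue):

```sh
# List loaded AppArmor profiles mentioning mysql (aa-status is in apparmor-utils)
sudo aa-status | grep -i mysql
# Look for recent AppArmor denials in the kernel log
sudo dmesg | grep -i 'apparmor="DENIED"'
# If a host profile such as usr.sbin.mysqld is the culprit, unloading it is an
# alternative to uninstalling the host package:
sudo apparmor_parser -R /etc/apparmor.d/usr.sbin.mysqld
```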

Description: The initmysql container in the innodb-cluster pod raises an error when it tries to create the localroot user, so the InnoDB Cluster cannot be installed with the Helm chart: the sidecar container fails while creating the user, and the cluster cannot be installed or run.

# Logs from innodb-cluster-0 initmysql container
Warning: Unable to load '/usr/share/zoneinfo/zone1970.tab' as time zone. Skipping it.
[Entrypoint] GENERATED ROOT PASSWORD: %+EL4z53_P#jz#8?vlz,f?x8Rq9r7Q6V
[Entrypoint] running /docker-entrypoint-initdb.d/initdb-localroot.sql
ERROR 1524 (HY000) at line 3: Plugin 'auth_socket' is not loaded
2024-09-28T21:38:42.790139Z 12 [System] [MY-013172] [Server] Received SHUTDOWN from user root. Shutting down mysqld (Version: 9.0.1).
2024-09-28T21:38:44.021780Z 0 [System] [MY-010910] [Server] /usr/sbin/mysqld: Shutdown complete (mysqld 9.0.1) MySQL Community Server - GPL.
2024-09-28T21:38:44.021798Z 0 [System] [MY-015016] [Server] MySQL Server - end.
[Entrypoint] Server shut down
[Entrypoint] MySQL init process done. Ready for start up.
[Entrypoint] MYSQL_INITIALIZE_ONLY is set, exiting without starting MySQL...

# Logs from container sidecar
[2024-09-28 21:40:40,100] sidecar [INFO ] MySQL Operator/sidecar_main.py=2.2.1 timestamp=2024-07-18T16:52:09 kopf=1.35.4 uid=27
[2024-09-28 21:40:40,116] sidecar [INFO ] My pod is innodb-cluster-0 in default
[2024-09-28 21:40:40,116] sidecar [INFO ] Bootstrapping
[2024-09-28 21:40:40,118] sidecar [CRITICAL] Unexpected MySQL error during connection: MySQL Error (1045): Shell.connect: Access denied for user 'localroot'@'localhost' (using password: NO)
Exception happened in entrypoint sidecar. The message is: MySQL Error (1045): Shell.connect: Access denied for user 'localroot'@'localhost' (using password: NO)

# Logs from mysql-operator
[2024-09-28 21:40:51,727] kopf.objects [WARNING ] Patching failed with inconsistencies: (('remove', ('status', 'kopf'), {'progress': {'on_pod_create': {'started': '2024-09-28T21:38:20.756294', 'stopped': None, 'delayed': '2024-09-28T21:41:21.684107', 'purpose': 'create', 'retries': 6, 'success': False, 'failure': False, 'message': 'Sidecar of innodb-cluster-2 is not yet configured', 'subrefs': None}}}, None),)
[2024-09-28 21:40:51,797] kopf.objects [INFO ] Handler 'on_pod_event' succeeded.
on_pod_create: pod=innodb-cluster-1 ContainersReady=False Ready=False gate[configured]=None
[2024-09-28 21:40:51,808] kopf.objects [ERROR ] Handler 'on_pod_create' failed temporarily: Sidecar of innodb-cluster-1 is not yet configured
[2024-09-28 21:40:51,840] kopf.objects [INFO ] Handler 'on_pod_event' succeeded.
[2024-09-28 21:40:51,857] kopf.objects [WARNING ] Patching failed with inconsistencies: (('remove', ('status', 'kopf'), {'progress': {'on_pod_create': {'started': '2024-09-28T21:38:20.768728', 'stopped': None, 'delayed': '2024-09-28T21:41:21.808934', 'purpose': 'create', 'retries': 6, 'success': False, 'failure': False, 'message': 'Sidecar of innodb-cluster-1 is not yet configured', 'subrefs': None}}}, None),)
[2024-09-28 21:40:51,858] kopf.objects [INFO ] Handler 'on_pod_event' succeeded.
[2024-09-28 21:40:51,974] kopf.objects [INFO ] Handler 'on_pod_event' succeeded.
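The sidecar's failing connection can also be reproduced by hand to confirm the root cause (a sketch; the 'mysql' container name is an assumption, and the passwordless socket login follows from the init script above):

```sh
# localroot is socket-authenticated and passwordless; while auth_socket is not
# loaded the account never gets created, so this fails with the same ERROR 1045
kubectl exec -it innodb-cluster-0 -c mysql -n default -- mysql -u localroot
```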
How to repeat: k8s kind cluster (cluster in Docker), kind.yaml:

```yaml
kind: Cluster
apiVersion: kind.x-k8s.io/v1alpha4
name: innodb
nodes:
- role: control-plane
- role: worker
- role: worker
- role: worker
```

# Create cluster
kind create cluster --config ./kind.yaml

# Install operator
helm install my-mysql-operator mysql-operator/mysql-operator \
  --namespace mysql-operator --create-namespace

# Create cluster
# in case the namespace doesn't exist, please pass --create-namespace
helm install innodb-cluster mysql-operator/mysql-innodbcluster -n default \
  --version 2.2.1 \
  --set tls.useSelfSigned=true \
  -f ./values.yaml

values.yaml:

```yaml
image:
  pullPolicy: IfNotPresent
  pullSecrets:
    enabled: false
    secretName:
credentials:
  root:
    user: root
    password: root
    host: "%"
tls:
  useSelfSigned: false
  # caSecretName:
  # serverCertAndPKsecretName:
  # routerCertAndPKsecretName: # or use router.certAndPKsecretName
#serverVersion: 8.0.31
serverInstances: 3
routerInstances: 2 # or use router.instances
baseServerId: 1000
```
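Once the charts are installed, a way to watch the failure unfold (standard kubectl; resource and pod names follow the helm release above):

```sh
# Watch the InnoDBCluster resource and its pods
kubectl get innodbcluster innodb-cluster -n default --watch
kubectl get pods -n default
# The auth_socket error appears in the init container's log
kubectl logs innodb-cluster-0 -c initmysql -n default
```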