Description:
We are using the MySQL Operator on a k3s cluster. We regularly see pods restarted with reason OOMKilled. I deployed a new mysql-innodbcluster pod with default values and no load, yet the pod still restarts about once a day: memory usage grows steadily until it reaches the container limit, at which point the pod is restarted.
Events:
Type     Reason             Age                 From     Message
----     ------             ---                 ----     -------
Warning  Unhealthy          13m (x21 over 19m)  kubelet  Liveness probe failed: command "/livenessprobe.sh" timed out
Normal   Killing            13m                 kubelet  Container mysql failed liveness probe, will be restarted
Warning  FailedPreStopHook  12m                 kubelet  Exec lifecycle hook ([sh -c sleep 60 && mysqladmin -ulocalroot shutdown]) for Container "mysql" in Pod "mysql-0_default(f5ac257c-cc85-4f53-945d-dd0adc74e556)" failed - error: command 'sh -c sleep 60 && mysqladmin -ulocalroot shutdown' exited with 137: , message: ""
Normal   Pulled             12m (x2 over 26h)   kubelet  Container image "container-registry.oracle.com/mysql/community-server:8.4.3" already present on machine
Normal   Created            12m (x2 over 26h)   kubelet  Created container mysql
Normal   Started            12m (x2 over 26h)   kubelet  Started container mysql
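For context on why a 1Gi limit is tight: the server's fixed allocations (buffer pool, per-connection buffers, performance schema) have to fit under the container limit with headroom, or the kernel OOM-kills the pod exactly as shown above. A minimal sketch of that arithmetic, with a hypothetical helper not taken from the report (the 40% headroom figure is an assumption, not a recommendation from the operator docs):

```python
# Hypothetical sizing helper (illustration only, not part of the report):
# convert a Kubernetes memory quantity into bytes and derive a buffer
# pool size that leaves headroom below the container memory limit.

UNITS = {"Ki": 1024, "Mi": 1024**2, "Gi": 1024**3, "Ti": 1024**4}

def k8s_quantity_to_bytes(q: str) -> int:
    """Parse quantities like '768Mi' or '1Gi' into bytes."""
    for suffix, factor in UNITS.items():
        if q.endswith(suffix):
            return int(q[: -len(suffix)]) * factor
    return int(q)  # plain byte count

def suggested_buffer_pool(limit: str, headroom: float = 0.4) -> int:
    """Reserve `headroom` (40% here, an assumed figure) of the limit
    for per-connection buffers and other server allocations."""
    return int(k8s_quantity_to_bytes(limit) * (1 - headroom))

print(suggested_buffer_pool("1Gi"))  # 644245094 bytes, ~614 MiB
```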
How to repeat:
Helm chart:
mysql-operator-2.2.2
mysql-innodbcluster-2.2.2
Deploy mysql-operator, then deploy mysql-innodbcluster with the following values.yaml:
credentials:
  root:
    user: root
    password: "xxxxx"
    host: "%"
serverVersion: 8.4.3
tls:
  useSelfSigned: true
serverInstances: 1
router:
  instances: 1
podSpec:
  containers:
    - name: mysql
      resources:
        requests:
          memory: "768Mi"
        limits:
          memory: "1Gi"
datadirVolumeClaimTemplate:
  storageClassName: standard
  resources:
    requests:
      storage: 10Gi
serverConfig:
  mycnf: |
    [mysqld]
    default-time-zone = 'Europe/Kiev'
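A possible mitigation while the leak is investigated (a sketch only; the option values below are illustrative assumptions, not settings confirmed by the report) is to cap the server's own memory consumers via serverConfig.mycnf so the process stays well under the 1Gi limit:

```yaml
serverConfig:
  mycnf: |
    [mysqld]
    default-time-zone = 'Europe/Kiev'
    # Illustrative values, not from the report: keep the buffer pool
    # well below the 1Gi container limit to leave room for
    # per-connection buffers and other allocations.
    innodb_buffer_pool_size = 512M
```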