Bug #118682 connection_memory_limit and global_connection_memory_limit don't work
Submitted: 18 Jul 3:52    Modified: 24 Jul 6:31
Reporter: Xiaocong Ding (OCA)
Status: Verified
Category: MySQL Server: Options    Severity: S2 (Serious)
Version: 8.0, 8.0.42    OS: Any
Assigned to:    CPU Architecture: Any
Tags: OOM

[18 Jul 3:52] Xiaocong Ding
Description:
When a non-root user executes a query with a very large IN clause, the MySQL server (mysqld) exceeds the cgroup memory limit and is killed by the kernel OOM killer, even though connection_memory_limit is set. Expected behavior: the server should reject the query early with error ER_CONN_LIMIT instead of consuming memory without bound.
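For reference, the expected vs. actual behavior, sketched with the account and SQL file from the repro below (the error text is the one the server reports later in this report; the point of this bug is that it arrives far too late, if at all):

# Expected: the statement fails fast once the session exceeds connection_memory_limit.
mysql -unormal -pabc1234! -h127.0.0.1 -P3306 -Dtestdb < ./huge_in_clause.sql
# ERROR 4082 (HY000): Connection closed. Connection memory limit 2097152 bytes exceeded. ...
# Actual: no early error; mysqld grows past the cgroup limit and is OOM-killed.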

How to repeat:
1. The InnoDB buffer pool is set to 1G so that the server layer has enough memory for the connection-memory-limit parameters to take effect. The key parameters in my.cnf are as follows:

...
innodb_buffer_pool_size = 1G
connection_memory_limit=2097152
global_connection_memory_limit=16777216
...
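After restarting mysqld with these settings, it is worth confirming that they are actually in effect (a quick check, not part of the original repro; root credentials are assumed to be available, adjust as needed):

mysql -uroot -h127.0.0.1 -P3306 -e "
SELECT @@global.innodb_buffer_pool_size,
       @@global.connection_memory_limit,
       @@global.global_connection_memory_limit\G"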

2. Prepare table and non-root account:

CREATE SCHEMA testdb;
USE testdb;
CREATE TABLE `your_table` (
  `id` int NOT NULL,
  PRIMARY KEY (`id`)
) ENGINE=InnoDB DEFAULT CHARSET=latin1;
CREATE USER 'normal'@'%' IDENTIFIED BY 'abc1234!';
GRANT SELECT ON testdb.your_table TO 'normal'@'%';
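As far as I can tell from the manual, these memory limits are not enforced for accounts holding the CONNECTION_ADMIN privilege, which is why an unprivileged account is used here. The grants can be double-checked with:

mysql -uroot -h127.0.0.1 -P3306 -e "SHOW GRANTS FOR 'normal'@'%';"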

3. Limit mysqld's max memory to 4GB through cgroup.

mkdir /sys/fs/cgroup/memory/rule8002
echo 4294967296 > /sys/fs/cgroup/memory/rule8002/memory.limit_in_bytes
echo "pid of mysqld" > /sys/fs/cgroup/memory/rule8002/memory.limit_in_bytes/cgroup.procs

4. Prepare a memory-hungry SQL statement, implemented here as a very large IN clause:

cat generate_huge_sql.sh
#!/bin/bash

OUTPUT_FILE="huge_in_clause.sql"
TARGET_SIZE_MB=100
RECORDS_PER_BATCH=1000

BATCHES=$(( (TARGET_SIZE_MB * 1024 * 1024) / (11 * RECORDS_PER_BATCH) ))

echo "SELECT * FROM your_table WHERE id IN (" > "$OUTPUT_FILE"

for ((i=1; i<=BATCHES; i++)); do
    seq -s, $(( (i-1)*RECORDS_PER_BATCH + 1 )) $(( i*RECORDS_PER_BATCH )) >> "$OUTPUT_FILE"
    echo -n "," >> "$OUTPUT_FILE"
    echo -ne "进度: $((i * 100 / BATCHES))% \r"
done

truncate -s-1 "$OUTPUT_FILE"
echo -e "\n);" >> "$OUTPUT_FILE"

bash ./generate_huge_sql.sh
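Optionally, sanity-check the generated file before running it (not part of the original repro):

ls -lh huge_in_clause.sql
head -c 80 huge_in_clause.sql; echo
tail -c 80 huge_in_clause.sql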

5. Execute the memory-consuming SQL; mysqld is OOM-killed, which is unexpected.

mysql -unormal -pabc1234! -h127.0.0.1 -P3306 -Dtestdb < ./huge_in_clause.sql
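While the query runs, the memory growth can be watched from a second terminal (a sketch; root credentials are assumed to come from a login path or ~/.my.cnf, and the Global_connection_memory status counter is only maintained when global_connection_memory_tracking is ON, which is not the case with the my.cnf from step 1):

while true; do
    ps -o rss= -p "$(pidof mysqld)"   # resident set size of mysqld, in KB
    mysql -uroot -h127.0.0.1 -P3306 -e \
        "SHOW GLOBAL STATUS LIKE 'Global_connection_memory';"
    sleep 1
done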
[18 Jul 13:30] Frederic Descamps
Hello, 

Instead of using cgroup for this, as you do, have you tried using GLOBAL_CONNECTION_MEMORY_TRACKING? See https://dev.mysql.com/doc/refman/8.4/en/server-system-variables.html#sysvar_global_connect...

SQL> set global global_connection_memory_tracking=1;

SQL> set <global/session> connection_memory_limit=2200000;
SQL> set global global_connection_memory_limit=536870912000;
[21 Jul 2:43] Xiaocong Ding
Hello Frederic,

Even if global_connection_memory_tracking is set, mysqld still gets OOM-killed. If the 4G cgroup memory limit is removed, top shows the memory rising to 8G, and the client receives a delayed error:

"ERROR 4082 (HY000) at line 8: Connection closed. Connection memory limit 2097152 bytes exceeded. Consumed 75166856 bytes."

This shows that connection_memory_limit and global_connection_memory_limit take effect only after a huge delay, so they usually fail to protect mysqld from OOM.

Key parameters in my.cnf:
...
connection_memory_limit=2097152
global_connection_memory_limit=16777216
global_connection_memory_tracking=ON
...

Executing the following SQL still results in mysqld being OOM-killed under the 4G cgroup limit:
mysql -unormal -pabc1234! -h127.0.0.1 -P3306 -Dtestdb < ./huge_in_clause.sql
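Once mysqld disappears, the OOM kill can be confirmed from the kernel log (a sketch):

dmesg -T | grep -iE 'out of memory|oom-kill|killed process'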
[21 Jul 2:55] Xiaocong Ding
The SQL file that consumes huge memory

Attachment: huge_in_clause.sql (application/octet-stream, text), 42.58 MiB.

[24 Jul 6:31] MySQL Verification Team
Hello Xiaocong Ding,

Thank you for the report and feedback.

regards,
Umesh