Bug #117104  MySQL 8.4 - High memory consumption
Submitted: 3 Jan 13:30    Modified: 2 Feb 15:59
Reporter: Charles Rodrigues    Status: Not a Bug
Category: MySQL Server    Severity: S3 (Non-critical)
Version: 8.4.3    OS: Ubuntu
Assigned to:    CPU Architecture: x86

[3 Jan 13:30] Charles Rodrigues
Description:
MySQL with only 33 connections is consuming 53 GB of RAM.
innodb_buffer_pool_size is configured at 34 GB.
The server has 72 GB of RAM.

Ubuntu 24.04.1 LTS
/usr/sbin/mysqld  Ver 8.4.3 for Linux on x86_64 (MySQL Community Server - GPL)

mysql> status
--------------
mysql  Ver 8.4.3 for Linux on x86_64 (MySQL Community Server - GPL)

Connection id:		52024
Current database:	
Current user:	       user@localhost
SSL:			Not in use
Current pager:		stdout
Using outfile:		''
Using delimiter:	;
Server version:		8.4.3 MySQL Community Server - GPL
Protocol version:	10
Connection:		Localhost via UNIX socket
Server characterset:	utf8mb4
Db     characterset:	utf8mb4
Client characterset:	utf8mb4
Conn.  characterset:	utf8mb4
UNIX socket:		/var/run/mysqld/mysqld.sock
Binary data as:		Hexadecimal
Uptime:			5 days 16 hours 30 min 46 sec

Threads: 32  Questions: 955098704  Slow queries: 170  Opens: 674859  Flush tables: 3  Open tables: 992  Queries per second avg: 1943.445
--------------

MY.INI

[mysqld]
# === Required Settings ===
user                            = mysql
pid-file                        = /var/run/mysqld/mysqld.pid
bind_address                    = 0.0.0.0
datadir                         = /dados/mysql-data
mysql_native_password           = ON
max_allowed_packet              = 256M
max_connect_errors              = 1000000
pid_file                        = /var/run/mysqld/mysqld.pid
port                            = 3307
skip_external_locking
skip_name_resolve
socket                          = /var/run/mysqld/mysqld.sock
tmpdir                          = /dados/mysql-data/tmp

# === SQL Compatibility Mode ===
sql_mode='STRICT_TRANS_TABLES,NO_ENGINE_SUBSTITUTION'

# === InnoDB Settings ===
innodb_buffer_pool_instances    = 8 
innodb_buffer_pool_size         = 34G
innodb_file_per_table           = 1
innodb_flush_log_at_trx_commit  = 2
innodb_flush_method             = O_DIRECT
innodb_log_buffer_size          = 64M
innodb_sort_buffer_size         = 1M
innodb_stats_on_metadata        = 0
innodb_use_fdatasync            = 1
transaction_isolation           = READ-COMMITTED

innodb_redo_log_capacity        = 536870912

#innodb_thread_concurrency      = 4

innodb_read_io_threads          = 8 
innodb_write_io_threads         = 8 
innodb_io_capacity              = 10000
innodb_io_capacity_max          = 20000

key_buffer_size                 = 4M   # UPD

# === Connection Settings ===
max_connections                 = 200

back_log                        = 512
thread_cache_size               = 100

interactive_timeout             = 3600
wait_timeout                    = 3600

# === Buffer Settings ===
join_buffer_size                = 8M
read_buffer_size                = 1M
read_rnd_buffer_size            = 1M
sort_buffer_size                = 4M

# === Table Settings ===
table_definition_cache          = 1000
table_open_cache                = 1000
open_files_limit                = 8000
                                        

max_heap_table_size             = 16M
tmp_table_size                  = 32M

# === Binary Logging ===
server_id                       = 6
log_bin                         = binlog
binlog_format                   = ROW
gtid_mode                       = ON
enforce_gtid_consistency        = ON
binlog_expire_logs_seconds      = 345600
replica_parallel_workers        = 10 
log_replica_updates             = 0
relay_log                       = relaybinlog

# === Error & Slow Query Logging ===
log_error                       = /var/log/mysql/error.log
log_queries_not_using_indexes   = 0
long_query_time                 = 1
slow_query_log                  = 1
slow_query_log_file             = /dados/mysql-data/mysql_slow.log
general_log                     = 0
general_log_file                = /dados/mysql-data/mysql_general.log
log_timestamps                  = SYSTEM

lower_case_table_names=1
local_infile=1
log_error_verbosity = 3

slave-skip-errors=1062,1032,1735,1452

[mysqldump]
quick
quote_names
max_allowed_packet              = 256M

+---------------------------+---------------+
| code_area                 | current_alloc |
+---------------------------+---------------+
| memory/innodb             | 34.92 GiB     |
| memory/sql                | 2.43 GiB      |
| memory/performance_schema | 267.69 MiB    |
| memory/temptable          | 13.00 MiB     |
| memory/mysys              | 6.56 MiB      |
| memory/mysqld_openssl     | 756.57 KiB    |
| memory/refcache           | 21.46 KiB     |
| memory/mysqlx             | 3.45 KiB      |
| memory/vio                | 3.34 KiB      |
| memory/myisam             |  728 bytes    |
| memory/csv                |  120 bytes    |
| memory/blackhole          |  120 bytes    |
+---------------------------+---------------+
12 rows in set (0.01 sec)
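(For context, per-area output in this shape can be produced by aggregating the sys schema memory view over the first two components of the instrument name; a minimal sketch, not necessarily the exact query used here:)

```
-- Aggregate current allocation by top-level instrument area (produces code_area / current_alloc)
SELECT SUBSTRING_INDEX(event_name, '/', 2) AS code_area,
       FORMAT_BYTES(SUM(current_alloc))    AS current_alloc
FROM sys.x$memory_global_by_current_bytes
GROUP BY SUBSTRING_INDEX(event_name, '/', 2)
ORDER BY SUM(current_alloc) DESC;
```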

It has already increased by 4 GB from yesterday to today. MySQL is now consuming 57.7 GB:

PID USER      PR  NI    VIRT    RES    SHR S  %CPU  %MEM     TIME+ COMMAND                                                                                                   
2140222 mysql     20   0   62.0g  57.7g  41856 S  47.8  81.6     34,43 mysqld

This did not occur in MySQL 5.7.

How to repeat:
I don't know how to reproduce.
[6 Jan 19:27] Charles Rodrigues
PID USER      PR  NI    VIRT    RES    SHR S  %CPU  %MEM     TIME+ COMMAND                                                                                                   
2140222 mysql     20   0   64.4g  62.8g  41856 S  15.0  88.9     44,48 mysqld
[9 Jan 15:50] MySQL Verification Team
Hi Mr. Rodrigues,

Thank you very much for your bug report.

However, this is not a bug.

That is (very much) expected behaviour with most operating systems, but especially with Linux.

Namely, the OS does not release memory after MySQL frees it with free() or similar calls. That is especially the case when the MySQL server is one of the daemons that takes most of the memory on the operating system.

That is how the malloc library on the OS works.

Instead of releasing memory, the malloc system library keeps it tied to the running process. Why? Because if it did not keep it with the process, the malloc library would have to ask the Linux kernel for memory, and that is a very expensive and (relatively) time-consuming operation. Instead, the memory is not returned to the OS, so when a new request for memory comes from mysqld, the malloc library simply takes unused memory already attached to the process and immediately makes it available.

You can check that easily.

Just write a very small program in C, containing only a calloc() call, that asks for more RAM than is currently available. Then the mysqld size will shrink.

Do note, however, that the MySQL server will run slower after that.

Not a bug.
[9 Jan 20:45] Charles Rodrigues
Hi,

I believe it is a bug; this did not occur in MySQL 5.7.

When the server runs out of memory, it will cause an OOM.

MySQL has a memory leak.

I migrated from MySQL 5.7 to 8.4, and now I have to restart MySQL every 7 days to avoid an OOM.
[10 Jan 11:08] MySQL Verification Team
Hi Mr. Rodrigues,

Thank you very much for your reply.

There is always a possibility that some MySQL release has a memory leak. Please also confirm that the kernel does kill the MySQL server process. How frequently does that happen?

However, let us inform you about this forum's rules.

Anyone who reports a bug must send a fully repeatable test case. Every such test case should consist of a set of SQL statements (plus the settings, which you have already provided). The bug processing team then runs that set of SQL statements and, using memory-checking tools, tries to repeat the problem.

In short, without a fully repeatable test case, we cannot proceed.

Can't repeat.
[10 Jan 13:09] Jean-François Gagné
> There is always a possibility that some MySQL release has a memory leak. Please, also confirm that kernel does kill MySQL server process. How frequently does that happen ??
>
> Anyone who reports a bug must send a fully repeatable test case.

I agree that this report is not easy to act on and does not have enough information.  But there might be a way to gather more data about this memory problem.

> Mysql with only 33 connections consuming 53GB of RAM.
> innodb_buffer_pool_size configured with 34 GB
> Server with 72 GB of RAM
>
> I believe it is a bug, this did not occur in MySQL 5.7.
> MySQL [8.4.3] has a memory leak

Maybe a way to investigate this would be to look at the Performance Schema memory instruments (link to the docs below). I would suggest taking a snapshot of the memory tables after MySQL startup, another after it has been in production for some time, and one a little later; and then doing a "diff analysis" to find what is allocating (and maybe not deallocating) memory. This might provide information for creating a repeatable test case, and if not, sharing it in this bug report might allow others, including people at Oracle, to figure something out.

https://dev.mysql.com/doc/refman/8.4/en/performance-schema-memory-summary-tables.html
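For illustration only, a snapshot query along these lines (standard Performance Schema table and columns; the 1 MiB threshold and the LIMIT are arbitrary) could be run at each of those points in time and the results diffed:

```
-- Top current memory consumers by instrument; run at startup, after production load, and later
SELECT EVENT_NAME,
       CURRENT_NUMBER_OF_BYTES_USED,
       HIGH_NUMBER_OF_BYTES_USED,
       CURRENT_COUNT_USED
FROM performance_schema.memory_summary_global_by_event_name
WHERE CURRENT_NUMBER_OF_BYTES_USED > 1024 * 1024   -- ignore instruments currently under 1 MiB
ORDER BY CURRENT_NUMBER_OF_BYTES_USED DESC
LIMIT 30;
```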
[10 Jan 16:26] Charles Rodrigues
Hi Jean-François Gagné,

Here's how to reproduce the problem.

Error:

2025-01-10T13:09:19.393929-03:00 teste kernel: oom-kill:constraint=CONSTRAINT_NONE,nodemask=(null),cpuset=user.slice,mems_allowed=0,global_oom,task_memcg=/system.slice/mysql.service,task=mysqld,pid=1518,uid=115
2025-01-10T13:09:19.393931-03:00 teste kernel: Out of memory: Killed process 1518 (mysqld) total-vm:4644648kB, anon-rss:3436040kB, file-rss:3584kB, shmem-rss:0kB, UID:115 pgtables:8064kB oom_score_adj:0
2025-01-10T13:09:19.669502-03:00 teste systemd[1]: mysql.service: A process of this unit has been killed by the OOM killer.
2025-01-10T13:09:20.493278-03:00 teste systemd[1]: mysql.service: Main process exited, code=killed, status=9/KILL
2025-01-10T13:09:20.493783-03:00 teste systemd[1]: mysql.service: Failed with result 'oom-kill'.
2025-01-10T13:09:20.493978-03:00 teste systemd[1]: mysql.service: Consumed 30min 17.510s CPU time.
2025-01-10T13:09:20.812248-03:00 teste systemd[1]: mysql.service: Scheduled restart job, restart counter is at 1.
2025-01-10T13:09:20.827854-03:00 teste systemd[1]: Starting mysql.service - MySQL Community Server...
2025-01-10T13:09:21.178386-03:00 teste kernel: audit: type=1400 audit(1736525361.176:144): apparmor="STATUS" operation="profile_replace" info="same as current profile, skipping" profile="unconfined" name="/usr/sbin/mysqld" pid=7855 comm="apparmor_parser"
2025-01-10T13:09:33.008478-03:00 teste systemd[1]: Started mysql.service - MySQL Community Server.

Test:
sysbench /usr/share/sysbench/oltp_read_write.lua --threads=48  --tables=10 --table-size=1000000 --db-driver=mysql --mysql-db=sbtest --mysql-user=sysbench --mysql-password=sysbenchtest prepare

Run the test several times until the OOM occurs:

sysbench /usr/share/sysbench/oltp_read_write.lua --threads=400 --time=300 --tables=10 --table-size=1000000 --db-driver=mysql --mysql-db=sbtest --mysql-user=sysbench --mysql-password='sysbenchtest' run

Info:

Linux teste 6.8.0-51-generic #52-Ubuntu SMP PREEMPT_DYNAMIC Thu Dec  5 13:09:44 UTC 2024 x86_64 x86_64 x86_64 GNU/Linux

Ubuntu 24.04.1 LTS
2vcpu Intel(R) Xeon(R) CPU E5-2660 v3 @ 2.60GHz
memory: 4GB

sysbench 1.0.20

/usr/sbin/mysqld  Ver 8.4.3 for Linux on x86_64 (MySQL Community Server - GPL)

my.ini

[mysqld]
# === Required Settings ===
user                            = mysql
pid-file                        = /var/run/mysqld/mysqld.pid
bind_address                    = 0.0.0.0
mysql_native_password           = ON
max_allowed_packet              = 256M
max_connect_errors              = 1000000
pid_file                        = /var/run/mysqld/mysqld.pid
port                            = 3306
skip_external_locking
skip_name_resolve
socket                          = /var/run/mysqld/mysqld.sock
max_prepared_stmt_count         = 65528

# === SQL Compatibility Mode ===
sql_mode='STRICT_TRANS_TABLES,NO_ENGINE_SUBSTITUTION'

# === InnoDB Settings ===
innodb_buffer_pool_instances    = 2 
innodb_buffer_pool_size         = 2G
innodb_file_per_table           = 1
innodb_flush_log_at_trx_commit  = 2
innodb_flush_method             = O_DIRECT
innodb_log_buffer_size          = 64M
innodb_sort_buffer_size         = 1M
innodb_stats_on_metadata        = 0
innodb_use_fdatasync            = 1
transaction_isolation           = READ-COMMITTED

innodb_redo_log_capacity        = 536870912

innodb_read_io_threads          = 4 
innodb_write_io_threads         = 4 
innodb_io_capacity              = 10000
innodb_io_capacity_max          = 20000

key_buffer_size                 = 2M   # UPD

# === Connection Settings ===
max_connections                 = 500

back_log                        = 512
thread_cache_size               = 100

interactive_timeout             = 3600
wait_timeout                    = 3600

# === Buffer Settings ===
join_buffer_size                = 8M
read_buffer_size                = 8k 
read_rnd_buffer_size            = 256k 
sort_buffer_size                = 512k 

# === Table Settings ===
table_definition_cache          = 40000
table_open_cache                = 40000
open_files_limit                = 60000
                                        

max_heap_table_size             = 8M
tmp_table_size                  = 8M

# === Binary Logging ===
server_id                       = 1
log_bin                         = binlog
binlog_format                   = ROW
gtid_mode                       = ON
enforce_gtid_consistency        = ON
binlog_expire_logs_seconds      = 345600

# === Error & Slow Query Logging ===
log_error                       = /var/log/mysql/error.log
log_queries_not_using_indexes   = 0
long_query_time                 = 1
slow_query_log                  = 1
general_log                     = 0
log_timestamps                  = SYSTEM

lower_case_table_names=1
local_infile=1
log_error_verbosity = 3

[mysqldump]
quick
quote_names
max_allowed_packet              = 256M
[10 Jan 16:40] Jean-François Gagné
> Hi Jean-François Gagné,
>
> Here's how to reproduce the problem.

Thanks Charles.  Just to be clear, I am not from Oracle, just a Community Member interested in memory consumption of MySQL 8.4.

Maybe Oracle will be able to reproduce the bug with the information you just provided, let's wait for the reply of MySQL Verification Team.

Thanks for providing a way to repro with sysbench.
[13 Jan 11:29] MySQL Verification Team
Hi Mr. Rodrigues,

We have some questions and  interim news for you.

According to our calculations, for 400 threads, when you sum up all the local (per-connection) buffers and multiply by 400 (threads), you get a MUCH larger memory usage than 4 GB. Hence, you should either consider increasing your 4 GB limit or drastically reduce many of the per-connection buffers and the InnoDB buffer pool size.

When we summed up the values of the per-connection buffers that you set, added the defaults for the remaining per-connection buffers, multiplied by 400 (threads), and then added 2 GB for the buffer pool plus the defaults for the other global buffers, the total was far larger than 4 GB.

Hence, running out of memory is actually expected behaviour.
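As a rough illustration of that arithmetic with the values from the test my.ini above: each of the 400 connections can allocate on the order of join_buffer_size (8M) + sort_buffer_size (512K) + read_rnd_buffer_size (256K) + read_buffer_size (8K) plus per-connection overhead, roughly 9 MB each, or about 3.5 GB across 400 connections, on top of the 2 GB buffer pool. The actual per-connection allocation can be checked from the sys schema, for example (a sketch; requires the memory/% instruments to be enabled):

```
-- Memory currently attributed to each connection thread (largest first)
SELECT thread_id,
       user,
       FORMAT_BYTES(current_allocated) AS allocated
FROM sys.x$memory_by_thread_by_current_bytes
ORDER BY current_allocated DESC
LIMIT 20;
```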
[13 Jan 14:04] Charles Rodrigues
hi,

But my server with 33 threads and 72 GB of RAM in PROD is running out of memory after 5 to 6 days.

I ran that test only to show the behaviour.

You can do the test yourself with 10 threads and you will notice that, after several runs, MySQL consumes all of the server's memory, and even without any further connections the memory already consumed is not released, as it was in 5.7.
[13 Jan 14:13] MySQL Verification Team
Hi Mr. Rodrigues,

We do not have that experience.

However, we use our official binary from https://dev.mysql.com.

We recommend that you use our official binary, as it is fully checked for memory leaks.

Also, we ran your sysbench test on a server with 120 GB and we did not hit any problem after 2 (two) days of running. The binary we used is our official binary, but built for memory-leak checking. It is not available to the public.

However, it is mandatory that you use our official binary.

If you run into problems with our official binary, please create a sysbench Lua file with your tables and your queries.

Thanks in advance.
[13 Jan 19:29] Charles Rodrigues
But I am using the official version of MySQL.

If you keep testing for several days, you will see that MySQL's memory consumption goes over 80%.

Stop the test and you will see that the memory is not returned.

See my server:

	PID USER PR NI VIRT RES SHR S %CPU %MEM TIME+ COMMAND                                                                                                   
2140222 mysql 20 0 64.0g 62.9g 21120 S 74.3 89.0 4d+11h mysqld

The temporary workaround I am using is to decrease innodb_buffer_pool_size, which is now dynamic, and then immediately set it back to the previous value.

This procedure frees up memory for the OS.

The only downside is that the change generates some slow queries.
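(For reference, that online resize is plain SQL; a minimal sketch, assuming the 34G production value from the my.ini above and an arbitrary intermediate size:)

```
-- Shrink the buffer pool so InnoDB hands pages back to the allocator...
SET GLOBAL innodb_buffer_pool_size = 16 * 1024 * 1024 * 1024;  -- 16 GB, arbitrary lower value
-- ...wait for the resize to report completion...
SHOW STATUS LIKE 'Innodb_buffer_pool_resize_status';
-- ...then restore the configured size.
SET GLOBAL innodb_buffer_pool_size = 34 * 1024 * 1024 * 1024;  -- back to 34 GB
```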

GitHub itself had problems with IN clauses in version 8.

There are other open bugs about memory leaks.

It's a serious bug but I see that MySQL doesn't want to look at it.

https://github.blog/engineering/upgrading-github-com-to-mysql-8-0/#queries-would-pass-ci-b...
[14 Jan 11:23] MySQL Verification Team
Hi Mr. Rodrigues,

First of all, your MySQL server simply runs out of memory. This is due to the size of the global variables, the local (per-connection) variables, and the number of connections. Simple arithmetic.

Second, the fact that the memory size of MySQL keeps increasing on Linux is a very well known phenomenon.
Namely, this is an optimisation in the malloc library. Simply put, when the MySQL server frees memory, the malloc library does not return that memory to the OS if the MySQL daemon is the largest consumer of memory. What the malloc library does (well documented behaviour on Linux) is keep that memory attached to the process. So, the next time MySQL allocates memory, the malloc library does NOT have to make a kernel call (which is quite expensive), but simply hands memory back to the MySQL server from the pool already attached to the process. In this way, the malloc library makes your MySQL server run faster.

This truly makes it look like the MySQL memory size is ever increasing. You do not have to reduce any variable in order to reduce the mysqld size. You can simply write a small program in C which does a single calloc() call asking for more memory than what is available in RAM. This will reduce MySQL's memory footprint, but it will also slow down MySQL's responses for some time.

Regarding memory leaks, we would welcome any repeatable test case which would show where memory leaks occur. We thoroughly check each of these reports.

Although you have not provided a test case with Valgrind or with ASAN/UBSAN, we have run those sysbench tests and both memory-checking methods returned without any reports of memory leaks.

We hope that we have answered all of your questions.
[14 Jan 13:48] Charles Rodrigues
Team MySQL,

There is a difference between 5.7 and 8, because this does not happen with 5.7.

What tool can I use to prove to you that MySQL is having a memory allocation problem?
[14 Jan 14:01] Charles Rodrigues
I ran the program in C and it still didn't release the memory.

gcc test_free_mem.c -o test_free_mem

./test_free_mem

Program:
#include <stdlib.h>
#include <stdio.h>

int main() {
  // Attempts to allocate a block of memory larger than the available RAM
  size_t size = (size_t)1024 * 1024 * 1024 * 2; // 2 GB
  void *ptr = calloc(1, size);

  if (ptr == NULL) {
    perror("Memory allocation error");
    return 1;
  }

  printf("Memory allocated successfully.\n");

  // Keeps allocated memory for a while
  getchar();

  // Frees allocated memory
  free(ptr);

  return 0;
}
[14 Jan 14:10] MySQL Verification Team
Hi Mr. Rodrigues,

You should calloc() more memory than what is currently available. Use the `free` utility to see the available memory. If it says that 1 GB is available, then allocate 1.1 GB.

No need to wait for a key to be pressed, nor should you free() any memory!

Regarding tools for memory leaks, use Valgrind, or ASAN with UBSAN.
[14 Jan 14:37] Charles Rodrigues
I made another program in C that allocates and uses the memory.

When I run it, Linux puts MySQL through an OOM kill.

So I will have to change the buffer pool dynamically once a day; what a bad situation.

I run several other databases, and now I have to do this only for MySQL, because I migrated to 8.4.
[14 Jan 14:43] Charles Rodrigues
With the first program, there were 121 MB free; I allocated 1 GB, and there were still 121 MB free.

The second program freed up memory, but Linux killed MySQL.
[14 Jan 14:47] MySQL Verification Team
Mr. Rodrigues,

If you have 121 MB free, please calloc just 122 MB.
[14 Jan 14:57] Charles Rodrigues
And how am I going to free up the extra 22GB that MySQL allocated unnecessarily?
[14 Jan 16:00] MySQL Verification Team
Has the size of the mysqld process decreased or not, immediately after the calloc()?
[14 Jan 17:16] Charles Rodrigues
> Has the size of the mysqld process decreased or not, immediately after the calloc()?

No.

See, I set the calloc to 1 GB.

Before:

```
Tasks: 165 total,   1 running, 164 sleeping,   0 stopped,   0 zombie
%Cpu(s):  0.0 us,  0.6 sy,  0.0 ni, 99.4 id,  0.0 wa,  0.0 hi,  0.0 si,  0.0 st 
MiB Mem :   3916.0 total,    557.3 free,   3205.8 used,    372.2 buff/cache     
MiB Swap:      0.0 total,      0.0 free,      0.0 used.    710.2 avail Mem 

    PID USER      PR  NI    VIRT    RES    SHR S  %CPU  %MEM     TIME+ COMMAND                                  
  29019 root      20   0       0      0      0 I   1.2   0.0   0:15.29 kworker/0:2-events                       
  34974 mysql     20   0 4490064   2.7g  39424 S   1.2  70.4   4:44.51 mysqld                                   
  37379 root      20   0   11912   5888   3712 R   1.2   0.1   0:00.03 top                                      

```

After running the program:

root@teste:/home/charles# ./test_free_mem 
Memory allocated successfully.

```
Tasks: 165 total,   1 running, 164 sleeping,   0 stopped,   0 zombie
%Cpu(s):  0.2 us,  0.3 sy,  0.0 ni, 99.5 id,  0.0 wa,  0.0 hi,  0.0 si,  0.0 st 
MiB Mem :   3916.0 total,    557.3 free,   3205.8 used,    372.3 buff/cache     
MiB Swap:      0.0 total,      0.0 free,      0.0 used.    710.2 avail Mem 

    PID USER      PR  NI    VIRT    RES    SHR S  %CPU  %MEM     TIME+ COMMAND                                  
  34974 mysql     20   0 4490064   2.7g  39424 S   1.0  70.4   4:45.17 mysqld                                   
  29019 root      20   0       0      0      0 I   0.7   0.0   0:15.60 kworker/0:2-events                       
  37386 root      20   0   11912   5888   3712 R   0.7   0.1   0:00.03 top                     
```
[15 Jan 11:34] MySQL Verification Team
Hi,

Please try with a slightly higher calloc(), without any free(), until it does.
[15 Jan 21:48] Charles Rodrigues
Same situation, memory is not released.

Another point:

I noticed something else: I have another server with 128 GB and a replica of it. The replica does not receive read queries; it is just a replica, yet its memory behaves the same way as on the primary node: it just keeps increasing. Since it is a replica, it increases at a slower rate, but it should not be increasing at all.

It is clear to me that MySQL has a problem.
[16 Jan 11:33] MySQL Verification Team
Hi Mr. Rodrigues,

No, it is not a problem.

That is how Linux's malloc library works.

Also, this is a forum only for reports with fully repeatable test cases and we could not repeat your test case with sysbench.

Hence, there is nothing we can do with this report.

However, we also have a forum for free support and a forum for paid support. If you wish, we can provide you with the URLs for both.
[16 Jan 13:20] Charles Rodrigues
Hi,

But if calloc() without free() does not free the memory, is that not a problem?

This has already been proven, and you can reproduce it.
[16 Jan 14:37] MySQL Verification Team
Hi,

No, we cannot reproduce it.

We always find the right amount to calloc(), and the mysqld size gets smaller.

Not a bug.
[17 Jan 7:21] MySQL Verification Team
Hi,

For overall better performance and memory management, I'd recommend installing libjemalloc and using that instead.

Search for "malloc" in here:
https://dev.mysql.com/doc/refman/8.4/en/using-systemd.html
https://dev.mysql.com/doc/refman/8.4/en/mysqld-safe.html#option_mysqld_safe_malloc-lib

and use pldd `pidof mysqld` | grep jemalloc

to verify it is loaded after restarting. See if there is any positive impact on memory usage after that.

-- 
Shane, MySQL Senior Principal Technical Support Engineer
Oracle Corporation
http://dev.mysql.com/
[17 Jan 17:49] Charles Rodrigues
Hi,

Now MySQL is freeing the memory.

I followed the Percona article below, but since it is from 2023, I changed some things.

I installed libjemalloc2: apt install libjemalloc2

I included this variable in /lib/systemd/system/mysql.service:

Environment="LD_PRELOAD=/usr/lib/x86_64-linux-gnu/libjemalloc.so.2"

Shouldn't MySQL 8 use libjemalloc by default?

Link: https://www.percona.com/blog/dont-start-using-your-mysql-server-until-youve-configured-you...

The tests were carried out in a test environment, now I will configure it in PROD.
[20 Jan 10:58] MySQL Verification Team
Hi Mr. Rodrigues,

As you can see, this is not a MySQL bug.

Also, we do not provide free support on this forum.
[2 Feb 15:59] Charles Rodrigues
Hi,

Using jemalloc, the problem was solved.
[3 Feb 5:57] Mark Callaghan
Charles - this kind of problem is hard to debug. But that doesn't excuse the tone of the replies that you were getting from MySQL support on this bug report.

I used to use InnoDB a lot, but then I switched to MyRocks. With MyRocks, there is much more malloc/free than with InnoDB, at least with IO-bound workloads, because MyRocks will do malloc for each page read from storage (and then call free when that page is evicted). And jemalloc is much better than glibc malloc at minimizing fragmentation -- and fragmentation can greatly increase RSS for the mysqld process.

So I was surprised that you needed to switch from glibc malloc to jemalloc while using InnoDB (I know jemalloc helps with MyRocks, I didn't know it helps with InnoDB). But perhaps things other than InnoDB are triggering the problem.

Again, these problems are hard to debug. Possible causes are:
1) memory leak in MySQL
2) fragmentation caused by the allocator
3) allocator uses too much memory in per-thread caches

Given that switching from glibc malloc to jemalloc solved the problem, I assume you can rule out 1) and debugging 2) and 3) will be difficult. But perhaps that isn't your problem given that jemalloc resolved this for you.