Bug #101779  MySQL Router memory leak
Submitted: 27 Nov 2020 3:28    Modified: 21 Apr 2021 9:34
Reporter: bin zhang
Status: Verified
Category: MySQL Router    Severity: S2 (Serious)
Version: 8.0.21    OS: CentOS
Assigned to:    CPU Architecture: Any
Tags: router memory leak

[27 Nov 2020 3:28] bin zhang
Description:
When I use MySQL Router with a MySQL InnoDB cluster, MySQL Router leaks memory:
it consumes roughly 1 MB more every few minutes and never stops growing until Linux kills it.
I did not even run any queries.
This happens every time, not occasionally.

How to repeat:
MySQL InnoDB cluster (one RW, two RO)
MySQL Router started with --bootstrap
All default options
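
For reference, a minimal sketch of such a setup (the account and host names below are placeholders, not from this report):

# bootstrap the router against the cluster, then start it with the generated config
mysqlrouter --bootstrap clusteradmin@primary-host:3306 --user=mysqlrouter
systemctl start mysqlrouter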
[27 Nov 2020 7:27] bin zhang
This problem was not present in the old version (8.0.16).
[27 Nov 2020 7:29] bin zhang
MySQL Router 8.0.22 has another problem: CPU usage climbs to 15%-16%.
[27 Nov 2020 9:45] MySQL Verification Team
Hi,

I just tested with 8.0.22 and I don't see any leak.
Can you retest with 8.0.22?

Thanks
Bogdan
[27 Nov 2020 9:56] bin zhang
Thank you for your reply
I've tested it many times, and it comes up every time.
Could the problem be caused by a different OS? Which OS did you test with?
[27 Nov 2020 10:09] MySQL Verification Team
Hi,

Testing with 8.0.22 MySQL Server and MySQL Router binaries from dev.mysql.com.
OS: CentOS 7 64-bit.

What OS/binaries are you using?

I see neither the memory issue nor the CPU issue.

thanks
Bogdan
[17 Dec 2020 11:57] MATTHEW GOTT
I am having a similar experience with increasing memory usage by mysqlrouter process.
We are running 8.0.21 on RHEL 6.10.
Not starting with --bootstrap.

Capturing changes to total process memory using pmap once per minute indicates the size of each increase is 64MB.
This occurs approx. 2 to 4 times per day on each instance:
- process running since Oct 4 is now 15.3GB
- process running since Nov 8 is now 11.0GB
- process restarted 15:41 GMT Dec 16 was initially 1153300K, now 1284372K ( 2 x 64M increases in about 20 hours).

No link established yet to events in the application. The level of activity on these instances does not appear to make much difference. A further instance running with just one client connection to the DB has grown to 7.8GB since Nov 8th.
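
The figure tracked is the "total" line of pmap's output; for illustration, something like this reads it (the PID lookup via pidof is an assumption):

pmap $(pidof mysqlrouter) | tail -1
#  total          1153300K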
[28 Dec 2020 1:00] Bugs System
No feedback was provided for this bug for over a month, so it is
being suspended automatically. If you are able to provide the
information that was originally requested, please do so and change
the status of the bug back to "Open".
[5 Jan 2021 12:01] MATTHEW GOTT
Memory has continued to leak in a very linear fashion since 17th December.
Step increases in memory usage are always 64MB.
Tracking this on two servers with significantly different numbers of MySQL clients: 
- one VM running 2 Jboss nodes with connection pool size of 40 each, plus one Java client. Memory increase is every 505 to 506 minutes. Server has 16GB RAM.
- another VM running 8 Jboss nodes with connection pool size of 40 each, plus one Java client. Memory increase is every 546 to 547 minutes. Server has 64GB RAM.
[6 Jan 2021 15:03] MySQL Verification Team
https://bugs.mysql.com/bug.php?id=102165 marked as duplicate of this one.
[7 Jan 2021 11:51] MATTHEW GOTT
Yesterday I shut down all application clients on the first of the two servers previously showing the issue. The 64MB increases in memory usage have continued unchanged, at the same intervals:
6/1/2021 13:41,1349908K
-> application clients shut down at 17:27
6/1/2021 22:06,1415444K
7/1/2021 06:30,1480980K
DEBUG logging level still set on MySQL router. /var/log/mysqlrouter/mysqlrouter.log confirms all routing DEBUG messages ceased after 17:27 yesterday, so definitely no connections made to the DB cluster via this router.
[8 Jan 2021 9:24] MATTHEW GOTT
Fault reproduced from scratch with no application involvement or any db connections made through the router at all:
- mysqlrouter stopped and re-started at 2021-01-07 11:53:03
- total mem usage according to pmap initially 606396KB
- increased to 671932KB at 2021-01-07 between 20:00 and 20:01
- increased to 727468KB at 2021-01-08 between 04:24 and 04:25
[8 Jan 2021 9:28] MATTHEW GOTT
mysqlrouter.conf file in use:

# File automatically generated during MySQL Router bootstrap
[DEFAULT]
name=system
user=mysqlrouter
keyring_path=/var/lib/mysqlrouter/keyring
master_key_path=/etc/mysqlrouter/mysqlrouter.key
connect_timeout=15
read_timeout=30
dynamic_state=/var/lib/mysqlrouter/state.json

[logger]
#level = INFO
level = DEBUG

[metadata_cache:UitCluster]
cluster_type=gr
router_id=2
user=mysql_router2_8snojl2jv0j9
metadata_cluster=UitCluster
ttl=0.5
auth_cache_ttl=-1
auth_cache_refresh_interval=2
use_gr_notifications=0

[routing:UitCluster_rw]
bind_address=0.0.0.0
bind_port=6446
socket=/tmp/mysql.sock
destinations=metadata-cache://UitCluster/?role=PRIMARY
routing_strategy=first-available
protocol=classic

[routing:UitCluster_ro]
bind_address=0.0.0.0
bind_port=6447
socket=/tmp/mysqlro.sock
destinations=metadata-cache://UitCluster/?role=SECONDARY
routing_strategy=round-robin-with-fallback
protocol=classic

[routing:UitCluster_x_rw]
bind_address=0.0.0.0
bind_port=64460
socket=/tmp/mysqlx.sock
destinations=metadata-cache://UitCluster/?role=PRIMARY
routing_strategy=first-available
protocol=x

[routing:UitCluster_x_ro]
bind_address=0.0.0.0
bind_port=64470
socket=/tmp/mysqlxro.sock
destinations=metadata-cache://UitCluster/?role=SECONDARY
routing_strategy=round-robin-with-fallback
protocol=x
[8 Jan 2021 9:42] MATTHEW GOTT
/etc/my.cnf from primary db server:

[mysqld]
server-id=300 # Change as required
character-set-server=utf8
collation-server=utf8_general_ci

binlog-format=ROW
log-slave-updates = 1
gtid-mode = ON
enforce-gtid-consistency = ON
master-info-repository=TABLE
relay-log-info-repository=TABLE
relay-log-recovery=1
sync-master-info=1
slave-parallel-workers=2
binlog-checksum = NONE
master-verify-checksum=1
slave-sql-verify-checksum=1
binlog-rows-query-log_events=1
report-port=3306
port=3306

#REPLICATION
binlog_checksum=NONE
enforce_gtid_consistency=ON
gtid_mode=ON
slave_preserve_commit_order=1
slave-parallel-type=LOGICAL_CLOCK
log_slave_updates=1
transaction_write_set_extraction=XXHASH64

innodb_flush_log_at_trx_commit=1
sync_binlog=1
autocommit=OFF
transaction_isolation=READ-COMMITTED
innodb_file_per_table=1
innodb_file_format=barracuda
innodb_large_prefix=1

socket=/usr/local/mysql/data/zdbkirdccd1.sock # Change as required
datadir=/usr/local/mysql/data/
report-host=zdbkirdccd1 # Change as required

sql_mode=REAL_AS_FLOAT,PIPES_AS_CONCAT,ANSI_QUOTES,IGNORE_SPACE,TRADITIONAL
#sql_mode=ANSI,TRADITIONAL   # Changed following upgrade to 5.7
safe-user-create
symbolic-links = 0

local_infile=0
secure_file_priv=/usr/local/mysql/
log_warnings=2

skip-grant-tables=FALSE
log-raw=OFF

#LOGGING
slow_query_log=ON
slow_query_log_file=/usr/local/mysql/logs/zdbkirdccd1_slow.log # Change as required
general_log=OFF
general_log_file=/usr/local/mysql/logs/zdbkirdccd1_general.log # Change as required
log_error=/usr/local/mysql/logs/zdbkirdccd1_error.log # Change as required

#BINARY LOGS
log-bin=/usr/local/mysql/binlogs/zdbkirdccd1 # Change as required
log-bin-index=/usr/local/mysql/binlogs/zdbkirdccd1.index # Change as required

#TUNING
innodb_buffer_pool_size=10G
innodb_log_file_size=256M

auto_increment_increment = 1
auto_increment_offset = 2
loose_group_replication_allow_local_disjoint_gtids_join = OFF
loose_group_replication_allow_local_lower_version_join = OFF
loose_group_replication_auto_increment_increment = 7
loose_group_replication_bootstrap_group = OFF
loose_group_replication_components_stop_timeout = 31536000
loose_group_replication_compression_threshold = 1000000
loose_group_replication_enforce_update_everywhere_checks = OFF
loose_group_replication_exit_state_action = READ_ONLY
loose_group_replication_flow_control_applier_threshold = 25000
loose_group_replication_flow_control_certifier_threshold = 25000
loose_group_replication_flow_control_mode = QUOTA
loose_group_replication_force_members =
loose_group_replication_group_name = 0be74118-04c9-11eb-9c19-001a4a34ea16
loose_group_replication_group_seeds = zdbkirdccd2:33061,zdbkirdccd3:33061
loose_group_replication_gtid_assignment_block_size = 1000000
loose_group_replication_ip_whitelist = AUTOMATIC
loose_group_replication_local_address = zdbkirdccd1:33061
loose_group_replication_member_weight = 60
loose_group_replication_poll_spin_loops = 0
loose_group_replication_recovery_complete_at = TRANSACTIONS_APPLIED
loose_group_replication_recovery_reconnect_interval = 60
loose_group_replication_recovery_retry_count = 10
loose_group_replication_recovery_ssl_ca =
loose_group_replication_recovery_ssl_capath =
loose_group_replication_recovery_ssl_cert =
loose_group_replication_recovery_ssl_cipher =
loose_group_replication_recovery_ssl_crl =
loose_group_replication_recovery_ssl_crlpath =
loose_group_replication_recovery_ssl_key =
loose_group_replication_recovery_ssl_verify_server_cert = OFF
loose_group_replication_recovery_use_ssl = ON
loose_group_replication_single_primary_mode = ON
loose_group_replication_ssl_mode = REQUIRED
loose_group_replication_start_on_boot = ON
loose_group_replication_transaction_size_limit = 0
loose_group_replication_unreachable_majority_timeout = 0
super_read_only = ON

[client]
port=3306
socket=/usr/local/mysql/data/zdbkirdccd1.sock # Change as required
[8 Jan 2021 9:44] MATTHEW GOTT
Please ask if you need any more details to replicate, or any other logs, evidence etc. from our environment.
[11 Jan 2021 14:01] MySQL Verification Team
Hi,

I'm still not able to reproduce this. Anyhow, due to some other bugs in the 8.0.22 router, a lot of the code related to where this leak might come from needs to be rewritten, so until 8.0.23 is out I'm not sure there's much we can/should do.

thanks
[12 Feb 2021 1:00] Bugs System
No feedback was provided for this bug for over a month, so it is
being suspended automatically. If you are able to provide the
information that was originally requested, please do so and change
the status of the bug back to "Open".
[12 Feb 2021 9:38] MATTHEW GOTT
Unfortunate timing :)
We are trying the 8.0.23 commercial GA release today, to see whether the leak persists. Please re-open the ticket for now; I will post our findings later today.
[16 Feb 2021 11:28] MATTHEW GOTT
MySQL Router upgraded in our environment from 8.0.21 to 8.0.23 GA.
Bootstrap config stage re-run to complete install. REST API disabled.
The memory leak is still occurring, still 64MB chunks, every 6.5 hours.
We had to upgrade the Connector/J we're using too; that is now 8.0.23 as well (previously 5.1.41).
I have not re-tested with no application connecting - but previously the leak occurred even when no clients ever connected.
OS and database versions remain unchanged.
Please let me know what other information you need from me to try to replicate the issue. Thanks :)
[17 Feb 2021 15:00] MySQL Verification Team
Hi,

> MySQL Router upgraded in our environment from 8.0.21 to 8.0.23 GA.
> Bootstrap config stage re-run to complete install. REST API disabled.
> The memory leak is still occurring, still 64MB chunks, every 6.5 hours.

Weird. I'm still not reproducing this.

> We had to upgrade the Connector/J we're using too; that is now 8.0.23 as well (previously 5.1.41).

Well, you really need to be using an 8.x (ideally the latest) connector with an 8.x MySQL Router/Server.

> I have not re-tested with no application connecting - 
> but previously the leak occurred even when no clients ever connected.
> OS and database versions remain unchanged.
> Please let me know what other information you need 
> from me to try to replicate the issue. Thanks :)

You said that you have a 3-node InnoDB cluster, you start the router, and every 6.5 hours its memory jumps by 64MB of RAM. I have had a router running since yesterday noon, so ~28 hours, and the RAM usage is identical to when it started.

Since we are doing something different (we are getting different results using the same CentOS and the same Oracle-provided binaries), I'd ask you to write down exactly, step by step, what you did to get to the first 64MB "jump". Just copy/paste the whole shell session into a txt file, please. We also have report 102165 about the same/similar issue, so the issue is definitely there; I just need to pinpoint it.

Thanks
Bogdan
[17 Feb 2021 16:17] MATTHEW GOTT
Hi Bogdan, thanks for your reply.
I think we still need to converge a few things between our environments:
1. see my comment from 17/12/2020 - we are on RHEL 6.10, not CentOS (that must have been the original reporter, I expect). Let me know if you need any specific package version numbers.
2. Our DB cluster is on 5.7.31 - what version is yours? (sorry if you've said that already and I've missed it).  
3. I expect the mysqlrouter.conf I sent on 8/1/2021 was altered during the 8.0.23 upgrade. Do you want the latest version?
Cheers,
Matt
[17 Feb 2021 17:09] MATTHEW GOTT
And apologies for not providing a full step-by-step shell for you.
We have a separate DBA team who did the install of all the mysql components, so I can't easily provide a script for what they did, only describe what they've told me:
- install RPM package for router 8.0.23
- bootstrap to configure, including disable the REST API
- service mysqlrouter start
- start up our client processes (quantity=3) which use Connector/J 8.0.23 now
- wait - capture memory usage (KB) every minute using the following script, run from root's crontab:

#!/bin/bash
# Append a timestamped sample of mysqlrouter's total memory (KB) to a log.
timestamp=$(date +"%d/%m/%Y %H:%M:%S")
pid=$(cat /var/run/mysqlrouter/mysqlrouter.pid)
# Take pmap's "total" line and strip the padding and the trailing "K".
mem=$(pmap $pid | grep total | cut -f3- -d' ' | sed 's/ //g' | sed 's/K//g')
logfile=/tmp/mysqlrouter_mem.log
echo "$timestamp,$pid,$mem" >> $logfile
exit 0
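
For completeness, the matching crontab entry would be something like this (the script path is an assumption):

* * * * * /root/bin/mysqlrouter_mem.sh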
[17 Feb 2021 17:11] MATTHEW GOTT
I will also see if I can get a clean re-start of the mysqlrouter service tonight, and run the test without starting our clients at all; and then leave overnight to see if the leak still happens as it did in these conditions with 8.0.21.
[17 Feb 2021 19:22] MySQL Verification Team
Hi,

Lots of differences; I'm using all 8.x components, will retest with 5.7.

all best
Bogdan
[18 Feb 2021 10:19] MATTHEW GOTT
Another full re-start of mysqlrouter after closing all clients, this time not making any client connections after the re-start.
First increment of 64MB after 6h9m. Second increment of 64MB after a further 8h24m.
So, while the timing is slightly longer than the 6.5h seen with the clients up and running, it's still there without any clients.
Are you able to amend your test environment to RHEL 6.10 and MySQL DB nodes to 5.7.31 so it fully matches what we are using?
[18 Feb 2021 12:53] MySQL Verification Team
Hi,

> Another full re-start of mysqlrouter after closing all clients, 
> this time not making any client connections after the re-start.
> First increment of 64MB after 6h9m. Second increment of 64MB 
> after a further 8h24m.

Are you sure nothing is trying to connect to that router?
The router is connected to 3 mysqld's in the back, right?

> Are you able to amend your test environment to RHEL 6.10 
> and MySQL DB nodes to 5.7.31 so it fully matches what we are using?

No, RHEL 6 is too old; I'm using a modern RHEL (CentOS, actually), but I dropped the mysqld's to 5.7 to see if that will help.

Let's see what happens after it runs for a while.
Bogdan
[18 Feb 2021 13:32] MATTHEW GOTT
> You are sure nothing is trying to connect to that router?
Yes I'm sure.

> Router is connected to 3 mysqld's in the back right?
Yes. Here are the mysqlrouter.log entries from it connecting successfully when I re-started the mysqlrouter service yesterday:
2021-02-17 17:12:53 routing INFO [7fdd32629700] [routing:UitCluster_ro] stopped
2021-02-17 17:12:53 routing INFO [7fdd13fff700] [routing:UitCluster_x_rw] stopped
2021-02-17 17:12:53 routing INFO [7fdd30f14700] [routing:UitCluster_x_ro] stopped
2021-02-17 17:12:53 routing INFO [7fdd31b27700] [routing:UitCluster_rw] stopped
2021-02-17 17:12:53 io INFO [7f87100747e0] starting 2 io-threads, using backend 'linux_epoll'
2021-02-17 17:12:53 metadata_cache INFO [7f870d789700] Starting Metadata Cache
2021-02-17 17:12:53 metadata_cache INFO [7f870d789700] Connections using ssl_mode 'PREFERRED'
2021-02-17 17:12:53 metadata_cache INFO [7f870c387700] Starting metadata cache refresh thread
2021-02-17 17:12:53 routing INFO [7f870cd88700] [routing:UitCluster_ro] started: listening on 0.0.0.0:6447, routing strategy = round-robin-with-fallback
2021-02-17 17:12:53 routing INFO [7f870cd88700] [routing:UitCluster_ro] started: listening using /tmp/mysqlro.sock
2021-02-17 17:12:53 routing INFO [7f86f75fe700] [routing:UitCluster_x_ro] started: listening on 0.0.0.0:64470, routing strategy = round-robin-with-fallback
2021-02-17 17:12:53 routing INFO [7f86f75fe700] [routing:UitCluster_x_ro] started: listening using /tmp/mysqlxro.sock
2021-02-17 17:12:53 routing INFO [7f86f7fff700] [routing:UitCluster_rw] started: listening on 0.0.0.0:6446, routing strategy = first-available
2021-02-17 17:12:53 routing INFO [7f86f7fff700] [routing:UitCluster_rw] started: listening using /tmp/mysql.sock
2021-02-17 17:12:53 routing INFO [7f86f6bfd700] [routing:UitCluster_x_rw] started: listening on 0.0.0.0:64460, routing strategy = first-available
2021-02-17 17:12:53 routing INFO [7f86f6bfd700] [routing:UitCluster_x_rw] started: listening using /tmp/mysqlx.sock
2021-02-17 17:12:53 metadata_cache INFO [7f870c387700] Potential changes detected in cluster 'UitCluster' after metadata refresh
2021-02-17 17:12:53 metadata_cache INFO [7f870c387700] Metadata for cluster 'UitCluster' has 1 replicasets:
2021-02-17 17:12:53 metadata_cache INFO [7f870c387700] 'default' (3 members, single-primary)
2021-02-17 17:12:53 metadata_cache INFO [7f870c387700]     zdbkirdccd1:3306 / 33060 - mode=RW
2021-02-17 17:12:53 metadata_cache INFO [7f870c387700]     zdbkirdccd2:3306 / 33060 - mode=RO
2021-02-17 17:12:53 metadata_cache INFO [7f870c387700]     zdbkirdccd3:3306 / 33060 - mode=RO
2021-02-17 17:12:53 routing INFO [7f870c387700] Routing routing:UitCluster_x_ro listening on 64470 and named socket /tmp/mysqlxro.sock got request to disconnect invalid connections: metadata change
2021-02-17 17:12:53 routing INFO [7f870c387700] Routing routing:UitCluster_x_rw listening on 64460 and named socket /tmp/mysqlx.sock got request to disconnect invalid connections: metadata change
2021-02-17 17:12:53 routing INFO [7f870c387700] Routing routing:UitCluster_rw listening on 6446 and named socket /tmp/mysql.sock got request to disconnect invalid connections: metadata change
2021-02-17 17:12:53 routing INFO [7f870c387700] Routing routing:UitCluster_ro listening on 6447 and named socket /tmp/mysqlro.sock got request to disconnect invalid connections: metadata change

> No, rhel6 is too old I'm using modern rhel (centos actually) but I dropped mysqld's to 5.7 to see if that will help.
Thanks for going to mysqld 5.7 - patch .31?
We have RHEL6 on extended support, I'm told. When you say "too old", does that mean you don't have any available or can't build a test VM with 6.10, or that you're not supposed to because the maintenance-support EOL date has already passed?
I don't know whether a more modern RHEL / CentOS is a fair comparison or not - outside my expertise :). Maybe it's something in a shared library, which might be resolved in RHEL 7? It will be some time before our application VMs are upgraded from RHEL 6 to 7.

> Let's see what will happen after a while of it running
> Bogdan
Agreed - hopefully the switch to mysqld 5.7 will show something. 
Looking forward to hearing what happens.
I've left our current setup running as per last night/this morning, and will check again tomorrow morning to confirm the increments are still happening at the same/similar intervals.
[18 Feb 2021 14:25] MySQL Verification Team
Hi,

> When you say "too old" does that mean you don't have any available
> or can't build a test VM with 6.10, or that you're not supposed to
> because maintenance support EOL date has already passed?

MySQL 5.7 is still supported on Oracle Linux 6 (RHEL 6, CentOS 6, ...).
It is just that I don't already have 4 of them in my VM arsenal at the
moment, and I doubt it's the OS that's the problem here, so I'm testing
first on CentOS 7; if I can't reproduce it there, I'll then spend the
time and install 4 RHEL 6 VMs for the test.

> Maybe something in a shared library, which might be resolved at
> rhel7? It will be some time before our application VMs are upgraded
> from rhel6 to 7.

Well, if you are using binaries from Oracle (our YUM repo or downloadable binaries), then I doubt any of the OS stuff should affect it, as it is built fairly statically. If you are using the RH binaries, then all bets are off, as they use an older version of MySQL, apply manually created patches to it, and build using their own procedures, so the binary is heavily dependent on the OS.
[19 Feb 2021 12:56] MATTHEW GOTT
Overnight, two further increments of 64MB, still with no clients connecting.
Intervals were 8h19m and 8h20m (checking every minute with pmap).

For OS comparison, is it worth me uploading a list of installed packages for you, e.g. the output of rpm -qa?
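For example, something like this would capture it (the output path is arbitrary):

rpm -qa | sort > /tmp/installed_packages.txt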
[19 Feb 2021 14:35] MySQL Verification Team
Hi,

I reproduced the problem: 5.7 servers, 8.0 router, no clients connected to the router, router connected to 3x 5.7 servers; every ~9 hours RAM usage jumps by 64MB.

I'm continuing to run it to see if it will cap after a few days, but these 64MB jumps are confirmed.

Thank you
Bogdan

P.S. When router 8 was connecting to mysqld 8, I was not reproducing this, btw. Did you try connecting to mysqld 8.0.23?
[19 Feb 2021 14:54] MATTHEW GOTT
Excellent news that you've reproduced it. I'm sure that will go a long way towards the fix in mysqlrouter. 

No, we haven't got an available environment with mysqld 8 at all, so we haven't tried that. Very interesting to hear that's what seems to make all the difference, though.
I will discuss whether a DB upgrade from 5.7.31 to 8.0.23 is something we can get into our planning, but I expect it won't happen for a few months at least. We only upgraded from Fabric to InnoDB cluster in November last year.
[19 Feb 2021 16:49] MySQL Verification Team
Hi,

An upgrade from 5 to 8 is a big deal and you have to plan and test it properly. Our MySQL Support team can help you with that, and I personally believe it is worth every penny (Enterprise support for a year costs less than a decent DBA for a month), but of course you can plan and test this yourself. I believe migration to 8 is worth the extra effort, so you should start checking it out ASAP (look through the documentation, see what's new and what's different, do a test migration of e.g. your dev server or your second-stage server... just start looking at it and testing it so you can make an informed decision soon), entirely independently of this bug.

all best
Bogdan
[16 Apr 2021 5:01] Paul Peterson
Hi Bogdan,

I believe I am seeing a similar memory leak with the following setup:

MySQL Router Server:

* mysqlrouter - 8.0.23 Commercial
* OS - RHEL 8.2 (Ootpa)

MySQL DB Servers - 3 Node InnoDB Group Replication Cluster

* mysqld - 8.0.23 Commercial
* OS - RHEL 7.9 (Maipo)

Is this likely to be resolved in the upcoming 8.0.24 release?

Regards,
Paul Peterson.
[16 Apr 2021 5:07] Paul Peterson
Note: I am collecting the MySQL Router memory usage using pmap every 5 minutes and have MySQL Router performing DEBUG-level logging. If I can confirm similar jumps in memory usage, I can provide these logs to assist with any investigations.
[16 Apr 2021 10:36] MySQL Verification Team
Hi,

Thanks. I did not reproduce it on 8.0, but the team will fix it on all versions once they find out how to fix it :)

I did see RAM usage increase on 8.0, but it capped and never went over a certain value.

all best
Bogdan
[21 Apr 2021 9:34] bin zhang
MySQL Router version 8.0.16 is OK (no memory leak); maybe you can compare the versions to find the problem. Thanks.
[23 Apr 2021 10:07] MATTHEW GOTT
Hi Bogdan
No change at our end with component versions, OS version, etc. We are re-starting our mysqlrouter processes monthly at the moment to work around the issue.
However, I have tried running mysqlrouter for a short while with no client connections under valgrind, to see if it spots any memory leaks. It came up with some small direct losses, but also some steadily increasing indirect ones. Please see the attached file for a 30-minute run on one server in our test environment. It may be that you and your colleagues have already done something like this in the environment where you replicated the issue, but hopefully it's still worthwhile sharing this with you.
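For anyone wanting to repeat this, an invocation along these lines should work (the flags shown are illustrative, not the exact ones used):

valgrind --leak-check=full mysqlrouter --config /etc/mysqlrouter/mysqlrouter.conf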
[23 Apr 2021 10:08] MATTHEW GOTT
valgrind output for 30 mins run of mysqlrouter 8.0.21 with no clients

Attachment: mysqlrouter.valgrind (application/octet-stream, text), 43.98 KiB.

[23 Apr 2021 10:16] MATTHEW GOTT
Correction to comment on file attachment - mysqlrouter version in use is 8.0.23
[23 Apr 2021 13:06] HICHEM BEDDIAF
Hello,

Same problem here: memory is still growing for us.
configuration:

RHEL 6.10
Mysqlrouter 8.0.21 (same thing if we upgrade to 8.0.23).
Mysql 5.6

No application connected (the project was stopped because of the memory leak).

Still following this post; hoping we will get a solution from your team.

thanks.
[23 Apr 2021 13:29] MySQL Verification Team
Hi,

The bug is not yet fixed (this bug report will be updated when it is), so it is not expected that the behavior changed on its own.
[1 Jun 2021 16:23] HICHEM BEDDIAF
Hello, is there any news about this issue?
Thanks.
[3 Sep 2021 16:34] Vasya Pupkin
I have MySQL router 8.0.26 (extracted from https://dev.mysql.com/get/Downloads/MySQL-Router/mysql-router-8.0.26-linux-glibc2.17-x86_6...) running under Ubuntu 20.04 LTS. Under normal load (web server serving a single WordPress site) MySQL router memory usage almost instantly grows to 500-700 MiB. This is nonsense for an app that simply forwards packets to a MySQL server running on the same machine.
[10 Sep 2021 9:46] Andrzej Religa
Posted by developer:
 
Hi,

I've spent some time trying to reproduce this issue, no luck so far.

Would it be possible to get the output of the following command:
cat /proc/`pidof mysqlrouter`/status

64MB chunks sound like virtual memory; it would be interesting to see
how that looks on the RSS memory side.
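
For illustration, the relevant fields of that output look like this (the values below are placeholders):

VmSize:  1349908 kB    <- virtual size; this is what pmap's "total" tracks
VmRSS:     45120 kB    <- resident memory actually backed by physical RAM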

Also, as for the last comment about 500MB+ memory usage on start: that
definitely sounds like virtual memory. Again, the /proc/.../status output
would tell more.

Thank you.
[10 Sep 2021 12:37] Jan Kneschke
Posted by developer:
 
The leak has been confirmed in combination with:

- openssl 1.0.1 (EL6, but not EL7 and later)
- mysql 8.0.20 and later

See Bug#33335046.
[10 Sep 2021 12:43] Jan Kneschke
Posted by developer:
 
A rough calculation:

- the leak (in combination with openssl 1.0.1) leaks 892 bytes with each connect to the mysql-server
- every 500ms (metadata_cache.ttl) mysqlrouter opens a new connection to the server

892 bytes * 2 connects/sec * 60 sec/min * 60 min/h * 10 h
= ~64 MB
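
A quick sanity check of that arithmetic:

echo $((892 * 2 * 60 * 60 * 10))
# 64224000 bytes, i.e. ~64 MB accumulated over 10 hours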

Workaround:

- use a build linked against openssl 1.0.2 or later
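
To check which OpenSSL a dynamically linked mysqlrouter binary uses, something like:

ldd $(which mysqlrouter) | grep -i ssl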
[10 Sep 2021 15:34] Jan Kneschke
Posted by developer:
 
Fixed by Bug#33335046
[30 Aug 2022 2:06] Tsubasa Tanaka
> Posted by developer:
>  
> Fixed by Bug#33335046

This commit says "Fix Bug#33335046"; should this report be closed as "Fixed in 8.0.27"?

https://github.com/mysql/mysql-server/commit/08f2ccde785eea422b2b4766af92e41bf9eec3a9