Bug #96525 | Huge malloc when open file limit is high | | |
---|---|---|---|
Submitted: | 13 Aug 2019 21:56 | Modified: | 4 Dec 2019 14:32 |
Reporter: | Andreas Hasenack | Email Updates: | |
Status: | Closed | Impact on me: | |
Category: | MySQL Server: Compiling | Severity: | S3 (Non-critical) |
Version: | 5.7 | OS: | Ubuntu |
Assigned to: | | CPU Architecture: | Any
[13 Aug 2019 21:56]
Andreas Hasenack
[14 Aug 2019 13:36]
Terje Røsten
Hi! Thanks for the report! Verified by code inspection.
[14 Aug 2019 13:41]
Andreas Hasenack
Thanks for the reply! It turns out this is not Arch Linux specific; it was triggered by an upstream change in systemd 240: https://github.com/systemd/systemd/commit/a8b627aaed409a15260c25988970c795bf963812

Ubuntu's systemd 240 (in the upcoming Eoan release) carries this change from Debian, though:

    systemd (240-2) unstable; urgency=medium
    ...
    * Don't bump fs.nr_open in PID 1.

In v240, systemd bumped fs.nr_open in PID 1 to the highest possible value. Processes that are spawned directly by systemd will have RLIMIT_NOFILE set to 512K (hard). So even with systemd 240, Debian and Ubuntu are not affected by this.
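For context, the two values discussed above can be inspected directly on a Linux box. This is just a sketch of how one might compare the kernel-wide ceiling that systemd 240 bumps against the hard fd limit the current process inherited; the actual numbers vary by distribution and systemd version:

```shell
# Kernel-wide ceiling on per-process file descriptors.
# systemd 240 raises this in PID 1 unless the Debian/Ubuntu patch reverts it.
cat /proc/sys/fs/nr_open

# Hard RLIMIT_NOFILE inherited by this shell; services spawned directly
# by systemd may see a much larger value, which is what triggers the bug.
ulimit -Hn
```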
[4 Dec 2019 14:32]
Paul DuBois
Posted by developer: Fixed in 8.0.20. Setting open_files_limit to a large value, or setting it when the operating system rlimit had a value that was large but not equal to RLIM_INFINITY, could cause the server to run out of memory. As part of this fix, the server now caps the effective open_files_limit value to the maximum unsigned integer value.
[16 Dec 2019 13:36]
Paul DuBois
Fixed in 8.0.19.