Bug #95328 | Crashes and out of memory Messages | |
---|---|---|---|
Submitted: | 10 May 2019 10:35 | Modified: | 23 May 2019 14:25 |
Reporter: | Frederic Steinfels | Email Updates: | |
Status: | Not a Bug | Impact on me: | |
Category: | MySQL Server | Severity: | S3 (Non-critical) |
Version: | 8.0.16 | OS: | Fedora (official rpm) |
Assigned to: | | CPU Architecture: | x86
Tags: | crash, out of memory |
[10 May 2019 10:35]
Frederic Steinfels
[10 May 2019 10:35]
Frederic Steinfels
mysqld error log
Attachment: mysql.log (text/plain), 11.01 KiB.
[11 May 2019 17:42]
Frederic Steinfels
I have reported similar bugs in the past. Some were my fault (an under-constrained WHERE clause producing too many results and exhausting memory), others were real bugs and have since been fixed. In 8.0.16 there seems to be a new problem eating away my 64 GB of memory at about 500 MB/hour until mysqld gets killed.

With 8.0.15 I had this error:

2019-03-17T11:29:37.087498Z 0 [ERROR] [MY-010283] [Server] Error in accept: Too many open files

You told me to change the temporary table engine to internal_tmp_mem_storage_engine=MEMORY, which I did. With 8.0.16 I changed it back, i.e. commented the line out again. Since then, it takes about 1.5 days for mysqld to be killed because of memory exhaustion:

2019-05-02T06:08:31.347848Z 0 [System] [MY-010232] [Server] Crash recovery finished.
2019-05-04T14:06:07.723600Z 0 [System] [MY-010229] [Server] Starting crash recovery...
2019-05-04T14:06:07.741418Z 0 [System] [MY-010232] [Server] Crash recovery finished.
2019-05-05T03:18:49.619089Z 0 [System] [MY-010229] [Server] Starting crash recovery...
2019-05-05T03:18:49.633238Z 0 [System] [MY-010232] [Server] Crash recovery finished.
2019-05-06T23:01:29.211684Z 0 [System] [MY-010229] [Server] Starting crash recovery...
2019-05-06T23:01:29.850448Z 0 [System] [MY-010232] [Server] Crash recovery finished.
2019-05-07T02:16:51.546462Z 0 [System] [MY-010229] [Server] Starting crash recovery...
2019-05-07T02:16:51.557097Z 0 [System] [MY-010232] [Server] Crash recovery finished.
2019-05-09T01:13:50.495064Z 0 [System] [MY-010229] [Server] Starting crash recovery...
2019-05-09T01:13:50.506618Z 0 [System] [MY-010232] [Server] Crash recovery finished.
2019-05-10T10:22:57.340767Z 0 [System] [MY-010229] [Server] Starting crash recovery...
2019-05-10T10:22:57.349472Z 0 [System] [MY-010232] [Server] Crash recovery finished.
2019-05-11T14:12:01.945310Z 0 [System] [MY-010229] [Server] Starting crash recovery...
2019-05-11T14:12:01.957274Z 0 [System] [MY-010232] [Server] Crash recovery finished.
[13 May 2019 14:22]
MySQL Verification Team
Hi, Thank you for your bug report. Since 8.0.16, you can no longer use the MEMORY engine for this; you have to set internal_tmp_disk_storage_engine to InnoDB only. If you continue to see problems after that, then you should seriously consider tuning your configuration variables, like internal_tmp_mem_storage_engine, tmp_table_size, etc.
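For reference, the temporary-table settings that are actually in effect can be checked from any client; a minimal sketch (assuming the 8.0.16 variable names):

    SHOW GLOBAL VARIABLES LIKE 'internal_tmp%';      -- engines used for internal temporary tables
    SHOW GLOBAL VARIABLES LIKE 'temptable_max_ram';  -- RAM limit of the TempTable engine before it spills to disk
    SHOW GLOBAL VARIABLES WHERE Variable_name IN ('tmp_table_size', 'max_heap_table_size');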
[15 May 2019 17:03]
Frederic Steinfels
Thanks for your feedback. When starting mysqld, it is using 30 GB. After 1.5 days it is using maybe 50 GB, and then it gets killed. So it seems to be a bug. I can certainly upload more debug files. I have been using older versions of mysqld with the same settings without running out of memory.
[15 May 2019 18:45]
Frederic Steinfels
I do not think it has anything to do with the settings, as older versions did not continually lose memory. The only relevant variables I am setting are these:

binlog_format = row
max_binlog_size = 100M
innodb_flush_log_at_trx_commit=2
innodb_thread_concurrency=8
innodb_file_per_table=1
innodb_buffer_pool_size=32G
innodb_log_file_size=2G
innodb_buffer_pool_instances=20
innodb_log_buffer_size=8M
innodb_flush_method=O_DIRECT
transaction-isolation=READ-COMMITTED
innodb_lock_wait_timeout=360
thread_stack=512K
join_buffer_size=10M
max_allowed_packet=16M
table_open_cache=2K
group_concat_max_len=512M
max_connections=200
max_heap_table_size=10G
tmp_table_size=10G
[16 May 2019 12:13]
MySQL Verification Team
Hi, As I wrote before, this is a simple matter of misconfiguration. Of course, 8.0 will itself use more memory, but these settings clearly explain the cause of your problem:

thread_stack=512K
join_buffer_size=10M
group_concat_max_len=512M
max_heap_table_size=10G
tmp_table_size=10G

Not a bug.
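As a rough worst-case sketch (assuming, purely for illustration, that several of those limits are reached at the same time):

      32 GB   innodb_buffer_pool_size
    + 3 x 10 GB   sessions materializing in-memory temporary tables (tmp_table_size / max_heap_table_size)
    = 62 GB   before counting any per-connection buffers

and on top of that each GROUP_CONCAT may hold up to 512 MB, and every join in each of up to 200 connections may allocate a 10 MB join buffer.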
[17 May 2019 22:08]
Frederic Steinfels
I understand you think those variables are excessively large and that at some point, because those buffers are allocated, mysqld will get killed. So if I lowered those buffers and mysqld still got killed on a regular basis because it was using up all available memory, would you then agree that we have a possible memory leak worth investigating?

I do not really want to lower these:

thread_stack=512K
join_buffer_size=10M
group_concat_max_len=512M

as I had some SQL statements failing at some point, and then I increased those buffers. However, group_concat_max_len could be lowered to maybe 64 MB. And for these, do you think 1G is OK instead of 10G?

max_heap_table_size=10G
tmp_table_size=10G
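If it helps, this is a sketch of what I could try at runtime before editing my.cnf (the numbers are only a first guess for testing, not final values):

    SET GLOBAL tmp_table_size       = 1073741824;  -- 1 GB instead of 10G, affects new sessions only
    SET GLOBAL max_heap_table_size  = 1073741824;  -- 1 GB instead of 10G
    SET GLOBAL group_concat_max_len = 67108864;    -- 64 MB instead of 512M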
[20 May 2019 12:39]
MySQL Verification Team
Hi, You should truly look at the session variables that define how much memory is allocated. Our Reference Manual has all that you need in order to do the tuning. Regarding memory leaks: we use many, many tools in many tests to discover whether there is a leak, memory corruption, and the like, and we do not publish a new release without all those tests. So, yes, this report could be treated as a memory leak if you come up with test results from such a tool. Otherwise, this looks just like a request for free support, and this is not a forum for free support.
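Short of an external tool, a first look can come from the server's own memory instrumentation while the usage grows; a minimal sketch (assuming the performance_schema memory instruments are enabled, which is the default in 8.0):

    SELECT event_name, current_number_of_bytes_used
      FROM performance_schema.memory_summary_global_by_event_name
     ORDER BY current_number_of_bytes_used DESC
     LIMIT 10;

The sys.memory_global_by_current_bytes view shows the same data in a more readable form.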
[21 May 2019 12:50]
MySQL Verification Team
Hi, Everything that I wrote to you still stands. If you come up with a repeatable test case that always leads to an OOM situation, or if you come up with a memory leak report from some appropriate tool, we would welcome your bug report and we would verify it, provided that we can repeat it. Thank you for your contribution.
[23 May 2019 14:25]
Frederic Steinfels
I have to add that earlier this year I already posted a similar bug, which is closed now, and that bug contains some logging. I found that I had an SQL statement containing an error, which therefore produced an excessive number of results and allocated too much memory, and I attributed the crash to that statement. However, I no longer have such long-running and excessive statements that could explain the crashes, therefore I made a new report.