Bug #114955 | MySQL 8.0.27 - Possible memory leak | |
---|---|---|---
Submitted: | 10 May 11:16 | Modified: | 24 Jun 12:34
Reporter: | momo momo | Email Updates: |
Status: | Not a Bug | Impact on me: |
Category: | MySQL Server | Severity: | S3 (Non-critical)
Version: | 8.0.27 | OS: | Debian (9)
Assigned to: | | CPU Architecture: | x86
[10 May 11:16]
momo momo
[10 May 12:00]
MySQL Verification Team
Hi Mr. momo,

Thank you for your bug report. However, this is not a bug. MySQL is not leaking anything. When memory is allocated, it is not immediately added to the memory that the process is using; on any Unix, it is counted against the process only when it is written to. Also, most malloc libraries, and all malloc libraries on Linux, keep memory that is freed by the process attached to the process itself if that process is allocating frequently. In that manner, the number of calls to the OS kernel is vastly reduced. Hence, what you see with such tools is not the true amount of memory used.

Also, let us inform you that this is a forum for bugs with fully repeatable test cases. A test case should consist of a set of SQL statements that always leads to the bug that is reported. In the case of a memory leak, a test case must contain proof that a memory-checking program, like Valgrind or a sanitizer, has diagnosed the leak.

Since your report does not contain anything of what is required by this forum, we cannot consider it a bug report.

Not a bug.
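As an aside to illustrate the allocation-versus-resident-memory point above, here is a minimal, Linux-only C++ sketch (not part of the report; the `read_vm_rss_kb` helper is hypothetical and written just for this illustration). Pages returned by malloc() do not count toward the process's resident set until they are written to, which is one reason process-level tools can disagree with what the allocator has actually handed out.

// Minimal Linux-only demonstration: RSS grows only when allocated pages are
// actually written to, not when malloc() returns them.
#include <cstdio>
#include <cstdlib>
#include <cstring>
#include <fstream>
#include <string>

// Hypothetical helper: parse VmRSS (in kB) from /proc/self/status.
static long read_vm_rss_kb() {
  std::ifstream status("/proc/self/status");
  std::string line;
  while (std::getline(status, line)) {
    if (line.rfind("VmRSS:", 0) == 0)
      return std::strtol(line.c_str() + 6, nullptr, 10);
  }
  return -1;
}

int main() {
  std::printf("RSS before malloc : %ld kB\n", read_vm_rss_kb());

  const std::size_t size = 512UL * 1024 * 1024;  // 512 MiB
  char *buf = static_cast<char *>(std::malloc(size));
  if (buf == nullptr) return 1;
  std::printf("RSS after  malloc : %ld kB\n", read_vm_rss_kb());  // barely changes

  std::memset(buf, 0x5a, size);                  // touch every page
  std::printf("RSS after  memset : %ld kB\n", read_vm_rss_kb());  // jumps by ~512 MiB

  std::free(buf);
  // Large mmap-backed chunks are usually returned to the OS here; many small
  // allocations may instead stay attached to the process, as noted above.
  std::printf("RSS after  free   : %ld kB\n", read_vm_rss_kb());
  return 0;
}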
[10 May 20:10]
Roy Lyseng
8.0.27 is quite an old release. Can you try your load with 8.0.37?
[11 May 9:26]
momo momo
1. The same behavior as with 8.0.36; memory/innodb/memory is slowly growing.
================
MySQL [sys]> select event_name,current_alloc from memory_global_by_current_bytes limit 10;
+---------------------------------------------------------------+---------------+
| event_name                                                     | current_alloc |
+---------------------------------------------------------------+---------------+
| memory/innodb/buf_buf_pool                                     | 40.96 GiB     |
| memory/innodb/os0event                                         | 997.57 MiB    |
| memory/innodb/memory                                           | 360.77 MiB    |
| memory/innodb/log_buffer_memory                                | 128.00 MiB    |
| memory/sql/String::value                                       | 101.83 MiB    |
| memory/sql/query_cache_map                                     | 63.98 MiB     |
| memory/innodb/log0log                                          | 48.04 MiB     |
| memory/sql/TABLE                                               | 47.14 MiB     |
| memory/performance_schema/events_statements_summary_by_digest | 40.28 MiB     |
| memory/sql/THD::main_mem_root                                  | 36.28 MiB     |
+---------------------------------------------------------------+---------------+
[13 May 9:55]
MySQL Verification Team
Hi Mr. momo,

The explanation is provided in our previous comment. Besides ....... InnoDB uses memory besides the buffer pool. There are many other allocations for InnoDB, as explained in our Reference Manual.
[13 May 10:13]
MySQL Verification Team
Hi Mr. momo,

We would like to take your report seriously. However, we should inform you that this is a forum for reports with fully repeatable test cases. A test case should consist of a set of SQL statements that always leads to the bug that is reported. In your case, you should send us a set of SQL statements that will, in every run, constantly and reliably lead to the increase in the memory used by InnoDB.

Also, InnoDB keeps track of the memory it uses. These are not leaks. InnoDB adds the memory to the stats each time it allocates it and removes it from the stats each time it frees it. Leaks would be memory that InnoDB lost track of but did not release.

We are waiting on your repeatable test case.
[14 May 12:13]
momo momo
The following steps can be taken to reproduce it.

1. create table t2 (f1 bigint, f2 longblob, f3 longblob, index(f2(10), f3(10)))
   engine=innodb row_format=COMPACT DEFAULT CHARSET=utf8mb3;

2. DELIMITER $$
   CREATE PROCEDURE execute_queries()
   BEGIN
     DECLARE i INT DEFAULT 0;
     WHILE i < 10000000000 DO
       DELETE FROM t2 WHERE f1 = 1;
       INSERT INTO t2 VALUES (1, REPEAT('%', 7000), REPEAT('+', 30));
       UPDATE t2 SET f3 = REPEAT('[', 6000) WHERE f1 = 1;
       SET i = i + 1;
     END WHILE;
   END$$
   DELIMITER ;

3. call execute_queries();

4. select * from sys.memory_global_by_current_bytes where event_name = 'memory/innodb/memory';

Then you can find that `current_alloc` is growing.
[14 May 12:27]
MySQL Verification Team
Hi,

Sorry, but which one of the following buffers is increasing?

memory/innodb/buf_buf_pool
memory/innodb/memory
memory/innodb/os0event
memory/sql/String::value
memory/innodb/hash0hash
memory/sql/THD::main_mem_root
memory/innodb/buf0rea
memory/sql/query_cache_map
memory/sql/TABLE
memory/performance_schema/events_statements_summary_by_digest

Regarding InnoDB's buffer pool, it will increase continuously until it reaches the maximum size as configured. InnoDB's memory usage will increase as the number of concurrent transactions increases, and then it will stop. os0event will increase with the increase of EVENT usage ....... it is not capped. Class String's memory usage is managed by the C++ STL and is out of our control. hash0hash memory usage will increase as long as the number of reads and writes is increasing; it is not capped. main_mem_root will increase as the number of threads increases, until the number of threads reaches the value that you have configured. buf0rea will increase as long as the result size increases; it is not capped. query_cache should not be used, since it is deprecated. TABLE will increase as long as the configured size of the table caches is not reached. P_S will use memory any time it is called.

Hence, we do not understand what bug you are reporting here?
[14 May 12:31]
MySQL Verification Team
Hi,

Most important of all, you are using an old release that had many bugs. The bug that you are reporting is already fixed in 8.0.37:

ERROR 1146 (42S02): Table 'performance_schema.memory_global_by_current_bytes' doesn't exist

Not a bug.
[14 May 12:39]
momo momo
。。。。
[15 May 3:02]
momo momo
The increase in memory/innodb/memory is mainly caused by the additional memory allocated by `dfield_t::clone` inside `dtuple_convert_big_rec`. With row_format=COMPACT it requires an additional 788 bytes; with DYNAMIC it requires an additional 20 bytes. This memory remains in `prebuilt->heap` until the table is closed, so it keeps increasing. It appears to be an issue with the mechanism rather than an actual leak. The problem still exists in the latest 8.0.37 version.

I tried a modification like the one below, after which `prebuilt->heap` no longer continuously increases.

8.0.37:
dfield_copy(&upd_field.new_val, dfield->clone(upd->per_stmt_heap));
// dfield_copy(&upd_field.new_val, dfield->clone(upd->heap));
[15 May 10:02]
MySQL Verification Team
Thank you, Mr. momo. We have presented your findings to our Development team.
[20 May 10:10]
Jakub Lopuszanski
Thanks for the report! Indeed, it looks like `dtuple_convert_big_rec` sometimes adds new fields to the `upd_t` object, and when it does so, it uses this object's `upd_t::heap` to allocate the memory for the cloned field. This is perhaps the right thing to do in some sense (use the heap whose lifetime is tied to the `upd_t`'s lifetime), but the problem is that this `upd_t` is the `ha_innobase::m_prebuilt->upd_node->update`, whose lifetime is apparently tied to the whole handler instance, i.e. very long (could be hours). This leads to OOM killing mysqld if you keep executing UPDATEs on this table which require `dtuple_convert_big_rec` to allocate a field, as the field's clone is allocated from a heap which is never freed.

The reporter suggests instead using a different heap, `upd->per_stmt_heap`, but it's not clear to me whether that would be a correct thing to do: after all, the `ha_innobase::m_prebuilt->upd_node->update` will be reused in subsequent queries, and it would be a disaster if someone dereferenced `ha_innobase::m_prebuilt->upd_node->update->field[i]` which was already freed together with `per_stmt_heap`.

I hope Annamalai, who was last to touch this code area, can shed more light on this.
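To make the lifetime problem concrete, here is a minimal, self-contained C++ sketch. It is not InnoDB code: `Arena`, `UpdateNode` and `run_statement` are hypothetical stand-ins for `mem_heap_t`, the prebuilt `upd_t` and one UPDATE that goes through `dtuple_convert_big_rec`. When the cloned field is allocated from an arena tied to the handler's lifetime, retained memory grows with every statement; with a per-statement arena it is released after each statement.

#include <cstddef>
#include <cstdio>
#include <memory>
#include <vector>

// Hypothetical arena: memory is reclaimed only when the whole arena is
// cleared, roughly how a mem_heap_t behaves.
class Arena {
 public:
  void *alloc(std::size_t n) {
    blocks_.push_back(std::make_unique<char[]>(n));
    used_ += n;
    return blocks_.back().get();
  }
  void clear() { blocks_.clear(); used_ = 0; }
  std::size_t used() const { return used_; }

 private:
  std::vector<std::unique_ptr<char[]>> blocks_;
  std::size_t used_ = 0;
};

// Stand-in for m_prebuilt->upd_node->update: lives as long as the handler.
struct UpdateNode {
  Arena heap;           // lifetime tied to the handler (possibly hours)
  Arena per_stmt_heap;  // intended to be emptied after every statement
};

// One UPDATE that needs an external (big-rec) field: a clone of the field is
// allocated from whichever arena is passed in.
void run_statement(UpdateNode &node, Arena &clone_arena) {
  clone_arena.alloc(788);      // e.g. the extra bytes seen with COMPACT rows
  node.per_stmt_heap.clear();  // per-statement memory is released here
}

int main() {
  UpdateNode node;

  for (int i = 0; i < 100'000; ++i)
    run_statement(node, node.heap);            // clone tied to the handler heap
  std::printf("handler-lifetime heap: %zu bytes retained\n", node.heap.used());

  for (int i = 0; i < 100'000; ++i)
    run_statement(node, node.per_stmt_heap);   // clone tied to the statement heap
  std::printf("per-statement heap:    %zu bytes retained\n",
              node.per_stmt_heap.used());
  return 0;
}

The trade-off discussed above is visible here: clearing the per-statement arena after every statement is exactly what would make it dangerous for a later statement to dereference a pointer that was allocated from it.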
[20 May 10:43]
MySQL Verification Team
Hi Mr. momo,

We have managed to repeat your test case with 8.0.37:

event_name            current_count  current_alloc  current_avg_alloc  high_count  high_alloc  high_avg_alloc
memory/innodb/memory  2018           10.19 MiB      5.17 KiB           2086        10.47 MiB   5.14 KiB
memory/innodb/memory  3129           18.81 MiB      6.15 KiB           3245        19.23 MiB   6.07 KiB
memory/innodb/memory  4240           27.43 MiB      6.62 KiB           4368        27.94 MiB   6.55 KiB

However, this does not mean that this is a bug. We will therefore run the tests again with the full memory-testing tools ...... If there is a leak, there will be a report.
[20 May 11:26]
MySQL Verification Team
Hi,

This is just to inform you that we have run your tests with a mysqld binary built with ALL possible memory-checking tools, like ASAN and UBSAN, with a full range of ASAN_SCOPE and so on ....... As proof, there is an error message that is always displayed at the start and the shutdown of a mysqld server built with all these tools:

mysqld(1902,0x7ff85870c100) malloc: nano zone abandoned due to inability to reserve vm space.

However, before, during and after we ran your test case, which took hours, there was not A SINGLE warning or error from the memory-checking tools. Hence, we cannot verify this bug report as a memory leak .........
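This outcome is consistent with the behaviour described earlier in the thread: memory parked in `prebuilt->heap` is still reachable and is released when the table is eventually closed, so leak detectors have nothing to report even though the footprint grows while the handler stays open. A minimal C++ sketch of that distinction, using plain new/delete rather than InnoDB's heaps (the file name in the build comment is made up):

// Build with: g++ -fsanitize=address -g growth_vs_leak.cpp
// LeakSanitizer reports nothing here, because every allocation stays reachable
// through `retained` and is freed before exit -- yet peak memory usage grows
// with every iteration, which matches the growth reported above.
#include <vector>

int main() {
  std::vector<char *> retained;  // stand-in for a long-lived heap (prebuilt->heap)

  for (int i = 0; i < 100000; ++i)
    retained.push_back(new char[788]);  // per-statement clone that is never reused

  // "Closing the table": everything is released, so this is growth, not a leak.
  for (char *p : retained) delete[] p;
  return 0;
}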
[21 May 3:08]
momo momo
Hi Jakub Lopuszanski, I understand what you mean. The change only releases the additionally allocated memory here; the original `ha_innobase::m_prebuilt->upd_node->update->field[i]` won't be released along with `per_stmt_heap`.
[24 Jun 12:34]
momo momo
Hi, Any updates from Annamalai?
[25 Jun 10:21]
MySQL Verification Team
Hi Mr. momo, Not yet .... This is a very busy period for our Development team.