Bug #77715 | table_open_cache_instances Does not Really Split table_open_cache | | |
---|---|---|---|
Submitted: | 14 Jul 2015 9:03 | Modified: | 21 May 2018 6:33 |
Reporter: | Jervin R | Email Updates: | |
Status: | Verified | Impact on me: | |
Category: | MySQL Server: DML | Severity: | S3 (Non-critical) |
Version: | 5.6.24 | OS: | Any |
Assigned to: | | CPU Architecture: | Any
[14 Jul 2015 9:03]
Jervin R
[21 May 2018 6:33]
MySQL Verification Team
I'm setting this to Verified based on an observation I made last week. Here's my finding: I'm on 5.7.22 here. I have 19644 tables on my instance, and table_open_cache=100000. Running SHOW TABLE STATUS numerous times lets "Open tables" climb to 100000 eventually. Single-threaded... I'm not sure how 100K tables can be "open".

mysql> select count(*) from information_schema.tables;
+----------+
| count(*) |
+----------+
|    19644 |
+----------+
1 row in set (0.22 sec)

mysql> select count(*) from information_schema.columns;
+----------+
| count(*) |
+----------+
|   180990 |
+----------+
1 row in set, 1245 warnings (1.51 sec)

mysql> select event_name,CURRENT_COUNT_USED,CURRENT_NUMBER_OF_BYTES_USED from performance_schema.memory_summary_global_by_event_name order by CURRENT_NUMBER_OF_BYTES_USED desc limit 20;
+-----------------------------------------------------------------------------+--------------------+------------------------------+
| event_name                                                                  | CURRENT_COUNT_USED | CURRENT_NUMBER_OF_BYTES_USED |
+-----------------------------------------------------------------------------+--------------------+------------------------------+
| memory/sql/TABLE                                                            |             812989 |                   3568158076 |
| memory/performance_schema/table_handles                                     |                 98 |                    931266560 |
| memory/innodb/mem0mem                                                       |             828256 |                    914556860 |
| memory/sql/TABLE_SHARE::mem_root                                            |             100108 |                    372569424 |
| memory/innodb/ha_innodb                                                     |              62106 |                    150321133 |
| memory/innodb/buf_buf_pool                                                  |                  1 |                    137297920 |
| memory/innodb/os0event                                                      |             579239 |                     55606944 |
| memory/performance_schema/file_instances                                    |                 18 |                     51904512 |
| memory/performance_schema/table_io_waits_summary_by_index_usage             |                 14 |                     40370176 |
| memory/innodb/log0log                                                       |                  9 |                     33563208 |
| memory/innodb/fil0fil                                                       |             103835 |                     33226163 |
| memory/performance_schema/rwlock_instances                                  |                240 |                     31457280 |
| memory/performance_schema/table_shares                                      |                  5 |                     20971520 |
| memory/performance_schema/events_statements_history_long                    |                  1 |                     14320000 |
| memory/innodb/std                                                           |             159686 |                     10304368 |
| memory/performance_schema/events_statements_history_long.tokens             |                  1 |                     10240000 |
| memory/performance_schema/events_statements_history_long.sqltext            |                  1 |                     10240000 |
| memory/performance_schema/events_statements_summary_by_digest.tokens        |                  1 |                     10240000 |
| memory/performance_schema/events_statements_summary_by_thread_by_event_name |                  1 |                      9091072 |
| memory/mysys/KEY_CACHE                                                      |                  3 |                      8390744 |
+-----------------------------------------------------------------------------+--------------------+------------------------------+
20 rows in set (0.00 sec)

Aha, now I understand more of why: there seem to be 16 handles for each table!?

mysql> select count(*),object_name from performance_schema.table_handles group by object_name order by count(*) desc limit 10;
+----------+-------------+
| count(*) | object_name |
+----------+-------------+
|       16 | t5016       |
|       16 | t4152       |
|       16 | t8184       |
|       16 | t4144       |
|       16 | t7798       |
|       16 | t4039       |
|       16 | t4422       |
|       16 | t5994       |
|       16 | t5053       |
|       16 | t8334       |
+----------+-------------+
10 rows in set (0.16 sec)

mysql> show global variables like '%table%cache%instances%';
+----------------------------+-------+
| Variable_name              | Value |
+----------------------------+-------+
| table_open_cache_instances | 16    |
+----------------------------+-------+
1 row in set (0.00 sec)

Maybe that is related? So I restarted with --table_open_cache_instances=1 and... memory is in check again. It is also significantly faster to query SHOW TABLE STATUS (gut feeling).
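A quick sanity check for this correlation, assuming performance_schema instrumentation is enabled as in the session above, is to compare the average number of cached handles per table directly against the variable:

select count(*) / count(distinct object_schema, object_name) as handles_per_table,
       @@global.table_open_cache_instances as cache_instances
  from performance_schema.table_handles;

If handles_per_table is close to cache_instances, each cache instance is holding its own handle for every opened table, which is exactly what the 16-per-table counts show. The post-restart numbers bear this out: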
mysql> select event_name,CURRENT_COUNT_USED,CURRENT_NUMBER_OF_BYTES_USED from performance_schema.memory_summary_global_by_event_name order by CURRENT_NUMBER_OF_BYTES_USED desc limit 20;
+-----------------------------------------------------------------------------+--------------------+------------------------------+
| event_name                                                                  | CURRENT_COUNT_USED | CURRENT_NUMBER_OF_BYTES_USED |
+-----------------------------------------------------------------------------+--------------------+------------------------------+
| memory/sql/TABLE                                                            |             157963 |                    677450412 |
| memory/innodb/mem0mem                                                       |             747326 |                    520449242 |
| memory/sql/TABLE_SHARE::mem_root                                            |              99290 |                    369791360 |
| memory/performance_schema/table_handles                                     |                 19 |                    180551680 |
| memory/innodb/ha_innodb                                                     |              73098 |                    150630612 |
| memory/innodb/buf_buf_pool                                                  |                  1 |                    137297920 |
| memory/innodb/os0event                                                      |             579067 |                     55590432 |
| memory/performance_schema/file_instances                                    |                 18 |                     51904512 |
| memory/performance_schema/table_io_waits_summary_by_index_usage             |                 12 |                     34603008 |
| memory/innodb/log0log                                                       |                  9 |                     33563208 |
| memory/innodb/fil0fil                                                       |             103835 |                     33226163 |
| memory/performance_schema/rwlock_instances                                  |                240 |                     31457280 |
| memory/performance_schema/table_shares                                      |                  5 |                     20971520 |
| memory/performance_schema/events_statements_history_long                    |                  1 |                     14320000 |
| memory/innodb/std                                                           |             159594 |                     10298912 |
| memory/performance_schema/events_statements_history_long.tokens             |                  1 |                     10240000 |
| memory/performance_schema/events_statements_history_long.sqltext            |                  1 |                     10240000 |
| memory/performance_schema/events_statements_summary_by_digest.tokens        |                  1 |                     10240000 |
| memory/performance_schema/events_statements_summary_by_thread_by_event_name |                  1 |                      9091072 |
| memory/mysys/KEY_CACHE                                                      |                  3 |                      8390744 |
+-----------------------------------------------------------------------------+--------------------+------------------------------+
20 rows in set (0.00 sec)

A more reasonable number of open tables:

Threads: 1  Questions: 50  Slow queries: 0  Opens: 19395  Flush tables: 1  Open tables: 19388  Queries per second avg: 0.200
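Comparing the two runs: memory/sql/TABLE dropped from 812989 allocations (~3.57 GB) with 16 instances to 157963 allocations (~0.68 GB) with 1 instance, about a 5x difference for the same set of tables. Either way it works out to roughly 35 KB per open table handle (3568158076 / 100000 vs. 677450412 / 19388), so the difference is purely in how many handles exist. A query along these lines can confirm the per-handle cost on a live server (a sketch; it assumes 5.7, where the global status counters live in performance_schema.global_status):

select CURRENT_NUMBER_OF_BYTES_USED /
       (select VARIABLE_VALUE
          from performance_schema.global_status
         where VARIABLE_NAME = 'Open_tables') as bytes_per_open_table
  from performance_schema.memory_summary_global_by_event_name
 where EVENT_NAME = 'memory/sql/TABLE';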
[21 May 2018 6:40]
MySQL Verification Team
So, what is the solution to this? If a table is stored only once in the cache, it can become a "hot" contention area too. How should the cache be split then: by user, by connection ID, or by something else?
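For illustration, the "connection ID" option can be sketched right in SQL. This is a hypothetical mapping, not the server's actual assignment logic:

-- hypothetical: a round-robin split of sessions across cache instances
select connection_id() % @@global.table_open_cache_instances as my_cache_instance;

A per-session split like that lets a DML statement lock only one instance, at the cost seen in this bug: each instance eventually caches its own handle for every hot table.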