Bug #77715 table_open_cache_instances Does not Really Split table_open_cache
Submitted: 14 Jul 2015 9:03          Modified: 21 May 2018 6:33
Reporter: Jervin R                   Email Updates:
Status: Verified                     Impact on me: None
Category: MySQL Server: DML          Severity: S3 (Non-critical)
Version: 5.6.24                      OS: Any
Assigned to:                         CPU Architecture: Any

[14 Jul 2015 9:03] Jervin R
Description:
The manual says "To improve scalability by reducing contention among sessions, the open tables cache can be partitioned into several smaller cache instances of size table_open_cache / table_open_cache_instances".

However, in our tests this does not seem to be the case. For example, with 100 sysbench tables, table_open_cache = 1156, and table_open_cache_instances = 10, I end up with the following:

| Open_table_definitions   | 170  |
| Open_tables              | 1035 |
| Opened_files             | 233  |
| Opened_table_definitions | 170  |
| Opened_tables            | 2711 |

With table_open_cache_instances = 1:

| Open_table_definitions   | 170 |
| Open_tables              | 477 |
| Opened_files             | 232 |
| Opened_table_definitions | 170 |
| Opened_tables            | 484 |
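For scale: if the cache were genuinely partitioned, each of the 10 instances would hold at most 1156 / 10 = 115 entries, and Open_tables for ~100 distinct tables should stay near 100. The observed 1035 is instead consistent with each instance holding its own copy of roughly every table in use. A quick sanity check of that arithmetic (the counts come from the numbers above; the "one copy per instance" reading is my interpretation, not confirmed server behavior):

```python
# Sanity check: does Open_tables look like one split cache, or like
# 10 independent full caches? Figures come from the report above.
table_open_cache = 1156
instances = 10
distinct_tables = 103  # ~100 sysbench tables plus a few mysql-schema tables (assumed)

per_instance_capacity = table_open_cache // instances  # capacity if truly split
expected_if_split = distinct_tables                    # each table cached once
expected_if_duplicated = distinct_tables * instances   # a copy in every instance

print(per_instance_capacity)   # 115
print(expected_if_split)       # 103
print(expected_if_duplicated)  # 1030 -- close to the observed Open_tables of 1035
```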

How to repeat:
my.cnf:

table_open_cache=1156
table_open_cache_instances=10

sysbench --test=/usr/share/doc/sysbench/tests/db/select.lua \
  --num-threads=16 --oltp-tables-count=100 --oltp-table-size=1000 \
  --max-requests=0 --max-time=86400 \
  --mysql-user=sysbench --mysql-password=sysbench --mysql-db=sysbench \
  --mysql-host=127.0.0.1 --mysql-port=56240 \
  --rand-init=on --report-interval=1 --oltp-read-only=on run

Suggested fix:
Not sure of the implementation, but the expectation is that Open_tables should not approach the limit when only a handful of tables are actually in use and should fit in the cache (including, of course, those from the mysql schema).
[21 May 2018 6:33] MySQL Verification Team
I'm setting this to Verified based on an observation I made last week. Here's my finding:

I'm on 5.7.22 here.

I have 19,644 tables on my instance, and table_open_cache=100000.
Running SHOW TABLE STATUS numerous times eventually drives "Open tables"
up to 100,000.

Single threaded...
I'm not sure how 100K tables can be "open".

mysql> select count(*) from information_schema.tables;
+----------+
| count(*) |
+----------+
|    19644 |
+----------+
1 row in set (0.22 sec)

mysql> select count(*) from information_schema.columns;
+----------+
| count(*) |
+----------+
|   180990 |
+----------+
1 row in set, 1245 warnings (1.51 sec)

mysql> select event_name,CURRENT_COUNT_USED,CURRENT_NUMBER_OF_BYTES_USED from  performance_schema.memory_summary_global_by_event_name order by CURRENT_NUMBER_OF_BYTES_USED desc limit 20 ;
+------------------------------------------------------------------------------+--------------------+------------------------------+
| event_name                                                                   | CURRENT_COUNT_USED | CURRENT_NUMBER_OF_BYTES_USED |
+------------------------------------------------------------------------------+--------------------+------------------------------+
| memory/sql/TABLE                                                             |             812989 |                   3568158076 |
| memory/performance_schema/table_handles                                      |                 98 |                    931266560 |
| memory/innodb/mem0mem                                                        |             828256 |                    914556860 |
| memory/sql/TABLE_SHARE::mem_root                                             |             100108 |                    372569424 |
| memory/innodb/ha_innodb                                                      |              62106 |                    150321133 |
| memory/innodb/buf_buf_pool                                                   |                  1 |                    137297920 |
| memory/innodb/os0event                                                       |             579239 |                     55606944 |
| memory/performance_schema/file_instances                                     |                 18 |                     51904512 |
| memory/performance_schema/table_io_waits_summary_by_index_usage              |                 14 |                     40370176 |
| memory/innodb/log0log                                                        |                  9 |                     33563208 |
| memory/innodb/fil0fil                                                        |             103835 |                     33226163 |
| memory/performance_schema/rwlock_instances                                   |                240 |                     31457280 |
| memory/performance_schema/table_shares                                       |                  5 |                     20971520 |
| memory/performance_schema/events_statements_history_long                     |                  1 |                     14320000 |
| memory/innodb/std                                                            |             159686 |                     10304368 |
| memory/performance_schema/events_statements_history_long.tokens              |                  1 |                     10240000 |
| memory/performance_schema/events_statements_history_long.sqltext             |                  1 |                     10240000 |
| memory/performance_schema/events_statements_summary_by_digest.tokens         |                  1 |                     10240000 |
| memory/performance_schema/events_statements_summary_by_thread_by_event_name  |                  1 |                      9091072 |
| memory/mysys/KEY_CACHE                                                       |                  3 |                      8390744 |
+------------------------------------------------------------------------------+--------------------+------------------------------+
20 rows in set (0.00 sec)

Aha - now I understand more of the why: it seems there are 16x the handles for each table!

mysql> select count(*),object_name from performance_schema.table_handles group by object_name order by count(*) desc limit 10;
+----------+-------------+
| count(*) | object_name |
+----------+-------------+
|       16 | t5016       |
|       16 | t4152       |
|       16 | t8184       |
|       16 | t4144       |
|       16 | t7798       |
|       16 | t4039       |
|       16 | t4422       |
|       16 | t5994       |
|       16 | t5053       |
|       16 | t8334       |
+----------+-------------+
10 rows in set (0.16 sec)

mysql> show global variables like '%table%cache%instances%';
+----------------------------+-------+
| Variable_name              | Value |
+----------------------------+-------+
| table_open_cache_instances | 16    |
+----------------------------+-------+
1 row in set (0.00 sec)

Maybe related ^^^^^^^^^ ????
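Assuming each of the 16 instances can end up holding its own TABLE object per table (my reading of the 16-handles-per-table counts above), the arithmetic explains why repeated scans can fill the cache:

```python
# With 19644 tables and 16 cache instances, the potential number of
# cached TABLE objects far exceeds the table_open_cache setting.
tables = 19644
instances = 16
table_open_cache = 100000

potential_entries = tables * instances
print(potential_entries)                     # 314304
print(potential_entries > table_open_cache)  # True -- the cache can fill to its cap
```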

So I restarted with --table_open_cache_instances=1 and ...

Memory is in check again. It also feels significantly faster to query SHOW TABLE STATUS (gut feeling).

mysql> select event_name,CURRENT_COUNT_USED,CURRENT_NUMBER_OF_BYTES_USED from  performance_schema.memory_summary_global_by_event_name order by CURRENT_NUMBER_OF_BYTES_USED desc limit 20 ;
+------------------------------------------------------------------------------+--------------------+------------------------------+
| event_name                                                                   | CURRENT_COUNT_USED | CURRENT_NUMBER_OF_BYTES_USED |
+------------------------------------------------------------------------------+--------------------+------------------------------+
| memory/sql/TABLE                                                             |             157963 |                    677450412 |
| memory/innodb/mem0mem                                                        |             747326 |                    520449242 |
| memory/sql/TABLE_SHARE::mem_root                                             |              99290 |                    369791360 |
| memory/performance_schema/table_handles                                      |                 19 |                    180551680 |
| memory/innodb/ha_innodb                                                      |              73098 |                    150630612 |
| memory/innodb/buf_buf_pool                                                   |                  1 |                    137297920 |
| memory/innodb/os0event                                                       |             579067 |                     55590432 |
| memory/performance_schema/file_instances                                     |                 18 |                     51904512 |
| memory/performance_schema/table_io_waits_summary_by_index_usage              |                 12 |                     34603008 |
| memory/innodb/log0log                                                        |                  9 |                     33563208 |
| memory/innodb/fil0fil                                                        |             103835 |                     33226163 |
| memory/performance_schema/rwlock_instances                                   |                240 |                     31457280 |
| memory/performance_schema/table_shares                                       |                  5 |                     20971520 |
| memory/performance_schema/events_statements_history_long                     |                  1 |                     14320000 |
| memory/innodb/std                                                            |             159594 |                     10298912 |
| memory/performance_schema/events_statements_history_long.tokens              |                  1 |                     10240000 |
| memory/performance_schema/events_statements_history_long.sqltext             |                  1 |                     10240000 |
| memory/performance_schema/events_statements_summary_by_digest.tokens         |                  1 |                     10240000 |
| memory/performance_schema/events_statements_summary_by_thread_by_event_name  |                  1 |                      9091072 |
| memory/mysys/KEY_CACHE                                                       |                  3 |                      8390744 |
+------------------------------------------------------------------------------+--------------------+------------------------------+
20 rows in set (0.00 sec)

More reasonable open tables:

Threads: 1  Questions: 50  Slow queries: 0  Opens: 19395  Flush tables: 1  Open tables: 19388  Queries per second avg: 0.200
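Comparing the two memory snapshots above for memory/sql/TABLE gives a rough measure of the duplication (ratios computed from the reported figures; the explanation for why it is less than 16x is my assumption):

```python
# memory/sql/TABLE with table_open_cache_instances=16 vs =1 (from the report above).
bytes_16 = 3568158076
bytes_1 = 677450412
objects_16 = 812989
objects_1 = 157963

print(round(bytes_16 / bytes_1, 1))      # 5.3 -- ~5x the memory
print(round(objects_16 / objects_1, 1))  # 5.1 -- ~5x the TABLE objects
# Less than a full 16x, presumably because not every table had been opened
# in all 16 instances when the snapshot was taken (assumption).
```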
[21 May 2018 6:40] MySQL Verification Team
So, what is the solution to this?

If a table is stored only once in the cache, that single entry can become a "hot" contention area too.
How should the cache be split, then: by user, by connection id, or by something else?
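To make that trade-off concrete, here is an illustrative sketch (not MySQL's actual implementation; the function names and both partitioning strategies are hypothetical) of two ways to split a cache across n instances: pinning sessions to instances, which duplicates hot tables, versus hashing by table name, which stores each table once but funnels every session touching a hot table into the same instance:

```python
# Two hypothetical ways to split a table cache across n instances.

def session_pinned(accesses, n):
    """Each session uses one instance; any instance may cache any table.
    No cross-instance contention, but a hot table can be cached n times."""
    caches = [set() for _ in range(n)]
    for session, table in accesses:
        caches[session % n].add(table)
    return sum(len(c) for c in caches)  # total cached entries

def hash_partitioned(accesses, n):
    """Each table lives in exactly one instance. No duplication, but every
    session touching a hot table contends on the same instance."""
    caches = [set() for _ in range(n)]
    for _session, table in accesses:
        caches[hash(table) % n].add(table)
    return sum(len(c) for c in caches)

# 4 sessions all touching the same 3 tables, 4 cache instances:
accesses = [(s, t) for s in range(4) for t in ("t1", "t2", "t3")]
print(session_pinned(accesses, 4))    # 12 -- 3 tables x 4 copies
print(hash_partitioned(accesses, 4))  # 3  -- each table cached exactly once
```

Under the first strategy, the total entry count can approach distinct_tables x instances, which matches the behavior reported in this bug; the second keeps it at distinct_tables, at the cost of exactly the hot-table contention the comment above worries about.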