Bug #70768 Persistent optimizer statistics often causes LOCK_open stalls
Submitted: 29 Oct 2013 22:41 Modified: 24 Jan 2014 19:42
Reporter: Yoshinori Matsunobu (OCA) Email Updates:
Status: Closed Impact on me:
Category:MySQL Server: InnoDB storage engine Severity:S5 (Performance)
Version:5.6.14 OS:Any
Assigned to: Vasil Dimov CPU Architecture:Any

[29 Oct 2013 22:41] Yoshinori Matsunobu
Persistent optimizer statistics often causes stalls, in the following scenario:

1. dict_stats_thread holds an s_lock on dict_table_stats_lock while doing disk reads
2. A client holds the LOCK_open mutex and is blocked trying to acquire an x_lock on dict_table_stats_lock
3. Other clients are blocked on the LOCK_open mutex

Example stack traces:


Relevant source code:

dict0stats.ic, dict_stats_deinit():

void
dict_stats_deinit(
        dict_table_t*   table)  /*!< in/out: table */
{
        ut_a(table->n_ref_count == 0);

        dict_table_stats_lock(table, RW_X_LATCH);
        ...

sql/table_cache.cc:

void Table_cache::free_unused_tables_if_necessary(THD *thd)
{
  /*
    We have too many TABLE instances around let us try to get rid of them.

    Note that we might need to free more than one TABLE object, and thus
    need the below loop, in case when table_cache_size is changed dynamically,
    at server run time.
  */
  if (m_table_count > table_cache_size_per_instance && m_unused_tables)
  {
    mysql_mutex_lock(&LOCK_open);  /* LOCK_open is held from here on */
    while (m_table_count > table_cache_size_per_instance &&
           m_unused_tables)
    {
      TABLE *table_to_free= m_unused_tables;
      ...
      intern_close_table(table_to_free); <---- calling dict_stats_deinit
    }
    mysql_mutex_unlock(&LOCK_open);
  }
}

How to repeat:
Read the relevant source code above.

Suggested fix:
I think dict_stats_deinit should be refactored. An x_lock is always taken here, but why is the x_lock needed?
[30 Oct 2013 15:00] Vasil Dimov

First a minor correction to your analysis: the dict stats thread acquires an X lock on the table stats latch, not an S one (see dict_stats_process_entry_from_recalc_pool() -> dict_stats_update() -> dict_stats_update_persistent() -> dict_table_stats_lock(RW_X_LATCH)). Whether it is X or S is irrelevant in this case, because either way it is incompatible with another X lock.

We need the stats deinit when closing a table in order to flag the stats in the table object for re-read from disk the next time the table is opened. The dict_stats_deinit() function does that, and it acquires an X lock because it modifies one of the table->stat_* variables. I do not think this can be changed without breaking the behavior with respect to FLUSH TABLE, after which we expect the stats to be re-read from disk upon the next table open.

Now, what I find strange is that, as I see it, dict_stats_deinit() cannot execute concurrently with a background dict_stats_update() on the same table. dict_stats_deinit() is only called from dict_table_close() if n_ref_count has reached 0, but the background stats thread keeps the table open, so n_ref_count cannot become 0 while it is executing. What am I missing?
[31 Oct 2013 9:00] Vasil Dimov

As per Dmitry: Table_cache::free_unused_tables_if_necessary() should not free any unused tables under normal operation. If it does, chances are that the number of open table instances needed by the workload is higher than the size of the open tables cache, which is a bad situation anyway. So if this happens often, one needs to check whether the table_open_cache value (and other related settings) is high enough for the workload.

So what are the values of:

mysql> show variables like 'table%cache%';

mysql> show status like 'Table_open_cache%'; (during or after you have experienced the problem described in this bug report)

and roughly how many tables does your workload involve?

Further clarification from Dmitry:
"Having said that I should also note that there can be also situations in which table_open_cache value is sufficient for normal workload, but occasionally some event happens (e.g. global I_S scan, or backup of all tables), which forces to open much more table instances than during normal operation. And, indeed, in this case the above function will free some unused table instances."

Thank you!
[31 Oct 2013 20:30] Yoshinori Matsunobu

table_open_cache is already set high enough in my environment, but table_open_cache_misses/table_open_cache_overflows still happen sometimes, and it is not realistic to reduce them to zero since, as you point out, batch processing (backups/ETL/etc.) may scan many tables.

I think closing tables should be treated as a normal occurrence. So being blocked while closing a table, just because another background thread is reading multiple pages from disk, is problematic.
[1 Nov 2013 6:39] Yoshinori Matsunobu
The below stacks contain dict_table address information.

client thread:
#4  0x00000000009c5953 in rw_lock_x_lock_func (lock=lock@entry=0x13cd2c0 <dict_table_stats_latches+3584>, pass=pass@entry=0, file_name=file_name@entry=0xc84e48 "dict0dict.cc", line=line@entry=360) at sync0rw.cc:681
#5  0x0000000000a3b5ed in dict_table_stats_lock (latch_mode=2, table=0x7f852bf70390) at dict0dict.cc:360
#6  dict_stats_deinit (table=0x7f852bf70390) at dict0stats.ic:190

bg thread:
#12 0x0000000000a5e7c8 in dict_stats_update_persistent (table=0x7f857e7b2a90) at dict0stats.cc:2069

dict0dict.cc:360 is "rw_lock_s_lock(GET_TABLE_STATS_LATCH(table));", which is the following.
#define GET_TABLE_STATS_LATCH(table) \
        (&dict_table_stats_latches[ut_fold_ull((ib_uint64_t) table) \
                                   % DICT_TABLE_STATS_LATCHES_SIZE])

Taking each table object's address mod 64 (0x7f852bf70390 and 0x7f857e7b2a90) yields 16 in both cases. Even though these are different tables (with different names), they can conflict with each other because the latches are assigned mod 64.
[3 Feb 2014 11:28] Laurynas Biveinis
5.6$ bzr log -r 5682
revno: 5682
committer: Vasil Dimov <vasil.dimov@oracle.com>
branch nick: mysql-5.6
timestamp: Wed 2013-12-11 16:13:33 +0200
  Backport from mysql-trunk to mysql-5.6:
    ** revision-id: vasil.dimov@oracle.com-20131203163459-tlkyqdq93jysk9z0
    ** committer: Vasil Dimov <vasil.dimov@oracle.com>
    ** branch nick: mysql-trunk
    ** timestamp: Tue 2013-12-03 18:34:59 +0200
    ** message:
    **   Fix Bug#70768 Persistent optimizer statistics often causes LOCK_open stalls
    **   Protect each table's dict_table_t::stat* members with a latch dedicated
    **   for this table instead of using a global pool of 64 shared latches.
    **   With 6 tables, the chances of at least two ending up with the same latch
    **   is 23.9%. With a lots of tables, there are tons of collisions.
    **   Reviewed by:	Kevin (rb:3805)
[13 Feb 2014 15:48] Laurynas Biveinis
This fix has introduced a performance regression, see bug 71708.
[18 Feb 2014 14:07] Laurynas Biveinis
See also bug 71727.
[28 Feb 2014 6:39] Laurynas Biveinis
Sorry, wrong bug, ignore my "See also bug 71727" comment above.
[15 Feb 2018 5:32] MySQL Verification Team
*** DTPRICE  daniel.t.price 01/24/14 11:42 am *** 
Fixed as of 5.6.16, 5.7.4 and here's the changelog entry: 

Persistent optimizer statistics would cause stalls due to latch contention.