Bug #93746 MySQL Crashes with deadlock at Mutex DICT_SYS created dict0dict.cc:1172
Submitted: 27 Dec 2018 10:14 Modified: 22 May 2019 15:05
Reporter: Anton Ravich
Status: Not a Bug
Category: MySQL Server: InnoDB storage engine
Severity: S2 (Serious)
Version: 5.7.23
OS: Linux
CPU Architecture: x86
Tags: deadlock crash

[27 Dec 2018 10:14] Anton Ravich
Description:
We are facing a bug similar to bug #80919.
Bug #80919 was closed as a duplicate, and no further information about it has been published over the last year.
We are on the latest version available (in the AWS RDS environment) and still hit the bug.

All of our InnoDB status reports contain the same information:

--Thread 9999999999999 has waited at dict0dict.cc line 1238 for XXX.00 seconds the semaphore:
Mutex at 0x2b0e021f3718, Mutex DICT_SYS created dict0dict.cc:1172, lock var 1
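
For context, the DICT_SYS mutex protects the InnoDB data dictionary, and one commonly cited contributor to contention on it (not confirmed as the cause here) is index statistics being recalculated by metadata queries. A minimal check, assuming a stock 5.7 configuration:

-- innodb_stats_on_metadata = ON makes queries against tables such as
-- INFORMATION_SCHEMA.TABLES recalculate index statistics under the
-- dictionary mutex; it is OFF by default in 5.7.
SHOW GLOBAL VARIABLES LIKE 'innodb_stats_on_metadata';

-- The variable is dynamic, so if it turns out to be ON it can be switched off at runtime:
SET GLOBAL innodb_stats_on_metadata = OFF;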

Sometimes MySQL recovers on its own; sometimes it stays deadlocked for a while, the number of connections grows constantly, and it all ends in a crash.
We are investigating these incidents, but workarounds such as setting innodb_purge_threads to 1 or disabling innodb_adaptive_hash_index do not work for us.
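
For reference, the two workarounds mentioned above are applied roughly as follows (a sketch; on RDS the static variable has to go through the DB parameter group instead of my.cnf):

-- The adaptive hash index can be toggled at runtime:
SET GLOBAL innodb_adaptive_hash_index = OFF;

-- innodb_purge_threads is not dynamic; it must be set in the option file
-- (or RDS parameter group) and needs a restart to take effect:
--   [mysqld]
--   innodb_purge_threads = 1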

Please give us some information about the "internal-only bug" mentioned in bug #80919.

How to repeat:
WIP
[29 Dec 2018 2:42] zhai weixiang
Providing the stack trace may help to investigate/verify the problem (e.g. by using pt-pmp or pstack).
[4 Jan 2019 12:23] Anton Ravich
We use the Amazon RDS service, so we cannot run any debug tooling on the underlying OS; we can only gather information directly from the running MySQL server.
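
One thing that can still be done from inside MySQL is to look at the Performance Schema instead of an OS-level stack. A hedged sketch, assuming the wait/synch mutex instruments can be enabled (they are off by default in 5.7) and that the InnoDB dictionary mutex instrument follows the usual naming:

-- Enable InnoDB mutex instrumentation (runtime change, affects new events only):
UPDATE performance_schema.setup_instruments
   SET ENABLED = 'YES', TIMED = 'YES'
 WHERE NAME LIKE 'wait/synch/mutex/innodb/%';

-- During a stall, see which threads are currently waiting on the dictionary mutex:
SELECT THREAD_ID, EVENT_NAME, TIMER_WAIT
  FROM performance_schema.events_waits_current
 WHERE EVENT_NAME LIKE '%dict_sys%';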
[9 Jan 2019 4:26] Anton Ravich
Any update here? We suspect it is somehow related to the large number of prepared statement queries we use. I am requesting the "internal bug" information one more time - has any work been done on it?
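
If prepared statements are a suspect, their pressure can be checked directly from a client session; both items below exist in stock 5.7 (a sketch, not a confirmed diagnosis):

-- Number of prepared statements currently held server-wide:
SHOW GLOBAL STATUS LIKE 'Prepared_stmt_count';

-- Configured upper limit on that number:
SHOW GLOBAL VARIABLES LIKE 'max_prepared_stmt_count';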
[16 Jan 2019 15:22] MySQL Verification Team
Hi,

Your bug is not a duplicate of the bug you are mentioning. What you have is, most likely, an ordinary semaphore wait, which is expected behaviour in most situations.

Next time you experience it, send us the output from SHOW ENGINE INNODB STATUS and the last few pages from your error log.
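
For the record, the requested output can be captured from an ordinary client session (a sketch; on RDS the error log is read through the AWS console or API rather than from the file named by log_error):

-- Full InnoDB monitor output, including the SEMAPHORES section quoted above:
SHOW ENGINE INNODB STATUS\G

-- Location of the error log on a self-managed server:
SHOW GLOBAL VARIABLES LIKE 'log_error';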
[17 Feb 2019 1:00] Bugs System
No feedback was provided for this bug for over a month, so it is
being suspended automatically. If you are able to provide the
information that was originally requested, please do so and change
the status of the bug back to "Open".
[20 Feb 2019 15:11] MySQL Verification Team
Definitely not a bug .....
[22 May 2019 15:05] Anton Ravich
Sorry for the long wait. I attached two examples of our InnoDB status output from the same MySQL instance as the "22th of May example [12]" and "22th of May error log" files. We run into this issue periodically on different MySQL instances in AWS RDS with version 5.7.23, but cannot find the workload that causes it. MySQL just freezes for 3-4 minutes with no activity: no binlog data is written, and there is effectively a global MySQL lock during this time. All the graphs show that MySQL performs no disk/network/CPU activity. Even a simple single query can stall in the closing state.
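
A hedged sketch for the next freeze: snapshotting the process list from a client makes statements stuck in states such as "closing tables" easy to spot (the LIMIT and the column trimming are arbitrary choices):

SELECT ID, USER, TIME, STATE, LEFT(INFO, 80) AS query
  FROM information_schema.PROCESSLIST
 WHERE COMMAND <> 'Sleep'
 ORDER BY TIME DESC
 LIMIT 20;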
[23 May 2019 12:33] MySQL Verification Team
Hi,

Thank you for your attachments.

I have analysed them all and my previous diagnosis still stands. This is a matter of tuning the InnoDB storage engine. Everything you need to know about tuning is in our Reference Manual. In your case, it is about updating statistics on InnoDB indexes.
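
A minimal sketch of what that statistics tuning usually involves, assuming stock 5.7 variables (the table name below is a placeholder):

-- Current statistics-related settings:
SHOW GLOBAL VARIABLES LIKE 'innodb_stats%';

-- Persistent statistics with automatic recalculation turned off move the work
-- out of the hot path; statistics are then refreshed explicitly:
SET GLOBAL innodb_stats_persistent = ON;    -- default ON in 5.7
SET GLOBAL innodb_stats_auto_recalc = OFF;
ANALYZE TABLE mydb.mytable;                 -- placeholder table name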

Also, the code in that area has changed significantly between the release that you are using and the latest 5.7 release .... There are some improvements in the speed of those operations. Tuning is still the better option, but upgrading to the latest release is recommended as well.

Not a bug.