Bug #91977 Dropping Large Table Causes Semaphore Waits; No Other Work Possible
Submitted: 11 Aug 2018 3:57
Reporter: Jesper Wisborg Krogh
Status: Verified
Impact on me: None
Category: MySQL Server: InnoDB storage engine
Severity: S2 (Serious)
Version: 5.7.17
OS: Any
Assigned to:
CPU Architecture: Any

[11 Aug 2018 3:57] Jesper Wisborg Krogh
Description:
Dropping a large table can cause semaphore waits so long that no other queries can run while the DROP TABLE is in progress.

Examples of the semaphore waits:

--Thread 139622553683712 has waited at ha_innodb.cc line 5515 for 242.00 seconds the semaphore:
Mutex at 0xd8c28ca8, Mutex DICT_SYS created dict0dict.cc:1173, lock var 1

--Thread 139622584301312 has waited at dict0dict.cc line 1239 for 245.00 seconds the semaphore:
Mutex at 0xd8c28ca8, Mutex DICT_SYS created dict0dict.cc:1173, lock var 1

--Thread 139622453278464 has waited at dict0dict.cc line 496 for 245.00 seconds the semaphore:
Mutex at 0xd8c28ca8, Mutex DICT_SYS created dict0dict.cc:1173, lock var 1

--Thread 139652621719296 has waited at srv0srv.cc line 1968 for 243.00 seconds the semaphore:
X-lock on RW-latch at 0xd8c28c08 created in file dict0dict.cc line 1184
a writer (thread id 139624819394304) has reserved it in mode  exclusive
number of readers 0, waiters flag 1, lock_word: 0
Last time read locked in file row0undo.cc line 319
Last time write locked in file /export/home/pb2/build/sb_0-21378219-1480364621.63/rpm/BUILD/mysql-5.7.17/mysql-5.7.17/storage/innobase/row/row0mysql.cc line 4295
--Thread 139622422660864 has waited at ha_innodb.cc line 5515 for 243.00 seconds the semaphore:
Mutex at 0xd8c28ca8, Mutex DICT_SYS created dict0dict.cc:1173, lock var 1

--Thread 139622575515392 has waited at ha_innopart.cc line 277 for 245.00 seconds the semaphore:
Mutex at 0xd8c28ca8, Mutex DICT_SYS created dict0dict.cc:1173, lock var 1

How to repeat:
The workflow in this particular case was:

1. Use pt-online-schema-change to create a compressed copy of the table.
2. At completion, the two tables are swapped.
3. The old copy of the table (non-compressed and not partitioned) is dropped, causing the issue (the final swap and drop are sketched below).
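
For reference, the final statements pt-online-schema-change issues in this workflow are roughly of this form (a sketch only; the exact table naming depends on the tool version and options, and t1 is illustrative):

RENAME TABLE t1 TO _t1_old, _t1_new TO t1;  -- atomic swap of the compressed copy into place
DROP TABLE IF EXISTS _t1_old;               -- dropping the old, non-compressed copy triggers the waits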
[17 Aug 2018 4:06] Jesper Wisborg Krogh
Posted by developer:
 
I can easily reproduce in 5.7.17, 5.7.23, and 8.0.12:

1. Create a large table (mine is 230-255 GB).
2. Start a load of the employees sample database.
3. While step 2 is executing, drop the table from step 1.
4. Monitor in a third connection with SHOW ENGINE INNODB STATUS.

The size of the table is not all that important; it just needs to be large enough that dropping it takes a while.
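
In SQL terms the reproduction boils down to the following sketch; large_table and the employees load are only stand-ins for any sufficiently large table and any concurrent workload:

-- session 1: any concurrent workload, e.g. loading the employees sample database
SOURCE employees.sql;

-- session 2: while session 1 is running, drop the large table
DROP TABLE large_table;

-- session 3: watch for DICT_SYS semaphore waits while the DROP TABLE runs
SHOW ENGINE INNODB STATUS\G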

My my.cnf:

[mysqld]
# Paths
basedir                      = /mysql/mysql
datadir                      = /mysql/data
socket                       = /mysql/run/mysql.sock
pid_file                     = /mysql/run/mysql.pid

# Binary Log and Replication
log_bin
server_id                    = 80123708

# InnoDB
innodb_buffer_pool_size      = 135420790784
innodb_buffer_pool_instances = 8
innodb_page_cleaners         = 8
innodb_write_io_threads      = 16
innodb_flush_method          = O_DIRECT
innodb_stats_persistent      = OFF
innodb_stats_auto_recalc     = OFF
innodb_log_file_size         = 1G

----------
SEMAPHORES
----------
OS WAIT ARRAY INFO: reservation count 1217717
--Thread 140216876447488 has waited at ha_innodb.cc line 6080 for 205.00 seconds the semaphore:
Mutex at 0x7f86cc429748, Mutex DICT_SYS created dict0dict.cc:1019, lock var 1

--Thread 140171929843456 has waited at srv0srv.cc line 1863 for 207.00 seconds the semaphore:
X-lock on RW-latch at 0x7f86cc4296a8 created in file dict0dict.cc line 1028
a writer (thread id 140216922081024) has reserved it in mode  exclusive
number of readers 0, waiters flag 1, lock_word: 0
Last time read locked in file not yet reserved line 0
Last time write locked in file ../../../mysqlcom-pro-8.0.12/storage/innobase/row/row0mysql.cc line 3739
--Thread 140171552880384 has waited at fts0opt.cc line 2703 for 208.00 seconds the semaphore:
Mutex at 0x7f86cc429748, Mutex DICT_SYS created dict0dict.cc:1019, lock var 1

OS WAIT ARRAY INFO: signal count 2852691
RW-shared spins 3949247, rounds 4964579, OS waits 1019
RW-excl spins 4021701, rounds 16398794, OS waits 1641
RW-sx spins 309, rounds 9191, OS waits 304
Spin rounds per wait: 1.26 RW-shared, 4.08 RW-excl, 29.74 RW-sx
------------
TRANSACTIONS
------------
Trx id counter 3321
Purge done for trx's n:o < 3300 undo n:o < 0 state: running but idle
History list length 10
LIST OF TRANSACTIONS FOR EACH SESSION:
---TRANSACTION 421691969498768, not started
0 lock struct(s), heap size 1136, 0 row lock(s)
---TRANSACTION 421691969497848, not started
0 lock struct(s), heap size 1136, 0 row lock(s)
---TRANSACTION 3309, ACTIVE 208 sec dropping table
0 lock struct(s), heap size 1136, 0 row lock(s)
MySQL thread id 8, OS thread handle 140216922081024, query id 3640 localhost 127.0.0.1 root checking permissions
DROP TABLE large_table
[20 Sep 2018 8:17] jia liu
drop or truncate large table will crash after long semaphore wait

Attachment: drop or truncate large table will crash after long semaphore wait.txt (text/plain), 200.84 KiB.

[20 Sep 2018 8:30] jia liu
In my case, this long semaphore wait eventually causes an InnoDB crash.
An example of the long semaphore wait and the crash can be seen in the file I uploaded, "drop or truncate large table will crash after long semaphore wait.txt".

It may also end up with some .ibd files that cannot be found:

2018-09-20T14:48:31.895480+08:00 0 [ERROR] InnoDB: Tablespace 44422 was not found at ./db249/customer_quan_1.ibd.
2018-09-20T14:48:31.895487+08:00 0 [ERROR] InnoDB: Set innodb_force_recovery=1 to ignore this and to permanently lose all changes to the tablespace.
2018-09-20T14:48:32.279791+08:00 0 [ERROR] InnoDB: Cannot continue operation.

Then I have to use innodb_force_recovery=1 to start the mysqld server. After that it reports a primary-key problem with that table; the message repeats very quickly and the error log grows to gigabytes within a few minutes. I had to recreate a new instance to solve this problem.

2018-09-20T15:06:38.921766+08:00 0 [ERROR] InnoDB: Trying to load index `PRIMARY` for table `db249`.`customer_quan_1`, but the index tree has been freed!
2018-09-20T15:06:38.921754+08:00 0 [ERROR] InnoDB: Trying to load index `PRIMARY` for table `db249`.`customer_quan_1`, but the index tree has been freed!
[20 Sep 2018 8:43] MySQL Verification Team
I've also filed this before:
https://bugs.mysql.com/bug.php?id=92044
(FR: configurable timeout for long semaphore wait until server dies)
[27 Sep 2018 14:16] Tibor Korocz
Running ALTER TABLE on a large table can also trigger this issue. When the ALTER TABLE finishes and drops the old table, it blocks every other query.

A lot of semaphore waits on the dictionary:
--Thread 139484881417984 has waited at dict0dict.cc line 1239 for 25.00 seconds the semaphore:
Mutex at 0x7f2000c55798, Mutex DICT_SYS created dict0dict.cc:1173, lock var 1

--Thread 139486302754560 has waited at dict0dict.cc line 1239 for 25.00 seconds the semaphore:
Mutex at 0x7f2000c55798, Mutex DICT_SYS created dict0dict.cc:1173, lock var 1
[27 Sep 2018 16:25] jia liu
A supplement for my (liu jia) case:

How to trigger: delete a very large number of rows from a large table, then truncate or drop the table.
More precisely, delete 180 million rows from a 600-million-row table using 128 threads, then drop that table.

Recently I tried waiting 2 hours after the delete operation before dropping the table, and it seemed less likely to trigger the problem.

MySQL version is 5.7.19.
I am using innodb_purge_threads=1. It is a single table in my case, so setting innodb_purge_threads to a larger value does not seem to help.

I will try waiting until all purge work is done before dropping the table, to see whether that works around the problem.
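
To check whether the purge backlog has drained before issuing the drop, a sketch using the INNODB_METRICS table (the trx_rseg_history_len counter is enabled by default and mirrors the "History list length" line of SHOW ENGINE INNODB STATUS):

SELECT NAME, COUNT
FROM information_schema.INNODB_METRICS
WHERE NAME = 'trx_rseg_history_len';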
[10 Oct 2018 3:31] jia liu
I have tried waiting until all purge work is done before dropping the table, and it still triggers this problem.

It seems that a drop/truncate updates some rows in the InnoDB system tables, and then the purge thread kicks in; if the table is large enough, the MySQL server will crash.

There is no workaround for this problem, at least I can't find one.

The DROP TABLE gets stuck:
| 240 | hotdb_datasource | 10.10.0.206:50194 | db249 | Query   |   424 | checking permissions            | DROP TABLE IF EXISTS `CUSTOMER_VT_1`                                                                 |

InnoDB reports a long semaphore wait:
2018-10-10T02:07:12.928132+08:00 0 [Warning] InnoDB: A long semaphore wait:
--Thread 140334474446592 has waited at row0purge.cc line 862 for 241.00 seconds the semaphore:
S-lock on RW-latch at 0x4690b4d8 created in file dict0dict.cc line 1184
a writer (thread id 140333909505792) has reserved it in mode  exclusive
number of readers 0, waiters flag 1, lock_word: 0
Last time read locked in file row0purge.cc line 862
Last time write locked in file /export/home2/pb2/build/sb_1-23948522-1498141788.79/rpm/BUILD/mysql-5.7.19/mysql-5.7.19/storage/innobase/row/row0mysql.cc line 4304
InnoDB: ###### Starts InnoDB Monitor for 30 secs to print diagnostic info:
InnoDB: Pending preads 0, pwrites 0

=====================================
2018-10-10 02:07:14 0x7fa22b7fe700 INNODB MONITOR OUTPUT
=====================================
Per second averages calculated from the last 60 seconds
-----------------
BACKGROUND THREAD
-----------------
srv_master_thread loops: 11030 srv_active, 0 srv_shutdown, 14222 srv_idle
srv_master_thread log flush and writes: 25251
----------
SEMAPHORES
----------
OS WAIT ARRAY INFO: reservation count 337377730
--Thread 140334474446592 has waited at row0purge.cc line 862 for 243.00 seconds the semaphore:
S-lock on RW-latch at 0x4690b4d8 created in file dict0dict.cc line 1184
a writer (thread id 140333909505792) has reserved it in mode  exclusive
number of readers 0, waiters flag 1, lock_word: 0
Last time read locked in file row0purge.cc line 862
Last time write locked in file /export/home2/pb2/build/sb_1-23948522-1498141788.79/rpm/BUILD/mysql-5.7.19/mysql-5.7.19/storage/innobase/row/row0mysql.cc line 4304
--Thread 140334482839296 has waited at srv0srv.cc line 1982 for 241.00 seconds the semaphore:
X-lock on RW-latch at 0x4690b4d8 created in file dict0dict.cc line 1184
a writer (thread id 140333909505792) has reserved it in mode  exclusive
number of readers 0, waiters flag 1, lock_word: 0
Last time read locked in file row0purge.cc line 862
Last time write locked in file /export/home2/pb2/build/sb_1-23948522-1498141788.79/rpm/BUILD/mysql-5.7.19/mysql-5.7.19/storage/innobase/row/row0mysql.cc line 4304
OS WAIT ARRAY INFO: signal count 235449349
RW-shared spins 0, rounds 216151404, OS waits 91840888
RW-excl spins 0, rounds 8826855278, OS waits 104926215
RW-sx spins 20126930, rounds 563228313, OS waits 15719619
Spin rounds per wait: 216151404.00 RW-shared, 8826855278.00 RW-excl, 27.98 RW-sx

I have tried waiting for the purge to finish before the drop, but there were always another 2 trx:
Trx id counter 15022221402
Purge done for trx's n:o < 15022221400 undo n:o < 0 state: running
History list length 2

And after waiting long enough, the crash:
InnoDB: ###### Diagnostic info printed to the standard error stream
2018-10-10T02:19:04.946209+08:00 0 [ERROR] [FATAL] InnoDB: Semaphore wait has lasted > 600 seconds. We intentionally crash the server because it appears to be hung.
2018-10-10 02:19:04 0x7fa22bfff700  InnoDB: Assertion failure in thread 140334499624704 in file ut0ut.cc line 916
InnoDB: We intentionally generate a memory trap.
InnoDB: Submit a detailed bug report to http://bugs.mysql.com.
InnoDB: If you get repeated assertion failures or crashes, even
InnoDB: immediately after the mysqld startup, there may be
InnoDB: corruption in the InnoDB tablespace. Please refer to
InnoDB: http://dev.mysql.com/doc/refman/5.7/en/forcing-innodb-recovery.html
InnoDB: about forcing recovery.
18:19:04 UTC - mysqld got signal 6 ;

After crash recovery, the table is in a wrong state:
2018-10-10T09:55:25.456884+08:00 14 [ERROR] InnoDB: Trying to load index `PRIMARY` for table `db249`.`customer_vt_1`, but the index tree has been freed!
2018-10-10T09:55:25.456914+08:00 14 [Warning] InnoDB: Cannot open table db249/customer_vt_1 from the internal data dictionary of InnoDB though the .frm file for the table exists. 

And the table can't be dropped or recreated normally:
The .frm file is still there, and the table shows up in SHOW TABLES, but it can't be accessed normally.
mysql> show tables;
+------------------+
| Tables_in_db249  |
+------------------+
...
| customer_vt_1    |
...
+------------------+
11 rows in set (0.00 sec)

mysql> drop table customer_vt_1;
ERROR 1051 (42S02): Unknown table 'db249.customer_vt_1'
Warning (Code 1146): Table 'db249.customer_vt_1' doesn't exist
Error (Code 1051): Unknown table 'db249.customer_vt_1'

The .frm file is deleted by the drop statement, but the .ibd file is not:
mysql> CREATE TABLE customer_vt_1(....)ENGINE=InnoDB DEFAULT CHARSET=utf8;
ERROR 1813 (HY000): Tablespace '`db249`.`customer_vt_1`' exists.
Error (Code 1813): Tablespace '`db249`.`customer_vt_1`' exists.
Error (Code 1030): Got error 184 from storage engine

After removing the .ibd file manually:
mysql> CREATE TABLE customer_vt_1(...)ENGINE=InnoDB DEFAULT CHARSET=utf8;
ERROR 1030 (HY000): Got error 168 from storage engine

The MySQL instance is broken; the only way to create customer_vt_1 again is to recreate a new instance.

This is a very severe bug; I hope it will be fixed soon.
[2 Nov 2018 2:55] Jesper Wisborg Krogh
Posted by developer:
 
The workaround is to disable the InnoDB adaptive hash index when dropping the table. This at least greatly reduces the time it takes for the DROP TABLE to complete (in my tests on 8.0.13, to less than 0.2 seconds). In fact, the DROP TABLE became so quick that I couldn't tell whether it still blocked at all.

To temporarily disable the InnoDB adaptive hash index while dropping the table:

SET GLOBAL innodb_adaptive_hash_index = OFF;
DROP TABLE ...
SET GLOBAL innodb_adaptive_hash_index = ON;

Or if your workload doesn't benefit from it, disable it permanently. Read more at http://dev.mysql.com/doc/refman/en/innodb-adaptive-hash.html .
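
On 8.0 the permanent setting can also be applied and persisted from SQL (a sketch; on 5.7, put innodb_adaptive_hash_index = OFF in my.cnf instead):

SET PERSIST innodb_adaptive_hash_index = OFF;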
[2 Nov 2018 12:06] Ivan Groenewold
I don't think the "workaround" provided is a good idea. Disabling AHI on a live server without any testing is not to be done lightly. It could affect response times of existing queries that benefit from it and eventually bring the server down. I've seen this.
[9 Dec 2018 3:40] monty solomon
InnoDB crashed during a long running ALTER TABLE statement while running Percona Server 5.7.24-26.

2018-12-08T05:21:52.665720Z 0 [ERROR] [FATAL] InnoDB: Semaphore wait has lasted > 600 seconds. We intentionally crash the server because it appears to be hung.
2018-12-08 05:21:52 0x7f4c65ce4700  InnoDB: Assertion failure in thread 139966102259456 in file ut0ut.cc line 947
InnoDB: We intentionally generate a memory trap.
InnoDB: Submit a detailed bug report to http://bugs.mysql.com.
InnoDB: If you get repeated assertion failures or crashes, even
InnoDB: immediately after the mysqld startup, there may be
InnoDB: corruption in the InnoDB tablespace. Please refer to
InnoDB: http://dev.mysql.com/doc/refman/5.7/en/forcing-innodb-recovery.html
InnoDB: about forcing recovery.
05:21:52 UTC - mysqld got signal 6 ;
This could be because you hit a bug. It is also possible that this binary
or one of the libraries it was linked against is corrupt, improperly built,
or misconfigured. This error can also be caused by malfunctioning hardware.
Attempting to collect some information that could help diagnose the problem.
As this is a crash and something is definitely wrong, the information
collection process might fail.
Please help us make Percona Server better by reporting any
bugs at http://bugs.percona.com/

key_buffer_size=67108864
read_buffer_size=1048576
max_used_connections=12
max_threads=1001
thread_count=9
connection_count=7
It is possible that mysqld could use up to 
key_buffer_size + (read_buffer_size + sort_buffer_size)*max_threads = 1359714 K  bytes of memory
Hope that's ok; if not, decrease some variables in the equation.

Thread pointer: 0x0
Attempting backtrace. You can use the following information to find out
where mysqld died. If you see no messages after this, something went
terribly wrong...
stack_bottom = 0 thread_stack 0x40000
/usr/sbin/mysqld(my_print_stacktrace+0x2c)[0xedf27c]
/usr/sbin/mysqld(handle_fatal_signal+0x461)[0xd206f1]
/lib64/libpthread.so.0(+0xf7e0)[0x7f59fb3b87e0]
/lib64/libc.so.6(gsignal+0x35)[0x7f59f95f1625]
/lib64/libc.so.6(abort+0x175)[0x7f59f95f2e05]
/usr/sbin/mysqld[0x777028]
/usr/sbin/mysqld(_ZN2ib5fatalD1Ev+0xee)[0x1136dbe]
/usr/sbin/mysqld(srv_error_monitor_thread+0xaf0)[0x10d18e0]
/lib64/libpthread.so.0(+0x7aa1)[0x7f59fb3b0aa1]
/lib64/libc.so.6(clone+0x6d)[0x7f59f96a793d]
[9 Dec 2018 4:13] monty solomon
Events status:
LLA = Last Locked At  LUA = Last Unlocked At
WOC = Waiting On Condition  DL = Data Locked

Event scheduler status:
State      : INITIALIZED
Thread id  : 0
LLA        : n/a:0
LUA        : n/a:0
WOC        : NO
Workers    : 0
Executed   : 0
Data locked: NO

Event queue status:
Element count   : 0
Data locked     : NO
Attempting lock : NO
LLA             : drop_schema_events:342
LUA             : drop_schema_events:344
WOC             : NO
Next activation : never
2018-12-08T05:10:00.660102Z 0 [Warning] InnoDB: A long semaphore wait:
--Thread 139965939672832 has waited at row0ins.cc line 2953 for 241.00 seconds the semaphore:
S-lock on RW-latch at 0x7f4bbc1a9018 created in file dict0dict.cc line 2751
a writer (thread id 139965938075392) has reserved it in mode  exclusive
number of readers 0, waiters flag 1, lock_word: ffffffffdffffffd
Last time read locked in file row0ins.cc line 2953
Last time write locked in file /mnt/workspace/percona-server-5.7-redhat-binary-rocks-new/label_exp/min-centos-6-x64/test/rpmbuild/BUILD/percona-server-5.7.24-26/percona-server-5.7.24-26/storage/innobase/btr/btr0bulk.cc line 53
InnoDB: ###### Starts InnoDB Monitor for 30 secs to print diagnostic info:
InnoDB: Pending preads 1, pwrites 1
[9 Dec 2018 4:17] monty solomon
----------
SEMAPHORES
----------
OS WAIT ARRAY INFO: reservation count 32325
--Thread 139965939672832 has waited at row0ins.cc line 2953 for 256.00 seconds the semaphore:
S-lock on RW-latch at 0x7f4bbc1a9018 created in file dict0dict.cc line 2751
a writer (thread id 139965938075392) has reserved it in mode  exclusive
number of readers 0, waiters flag 1, lock_word: ffffffffdffffffe
Last time read locked in file row0ins.cc line 2953
Last time write locked in file /mnt/workspace/percona-server-5.7-redhat-binary-rocks-new/label_exp/min-centos-6-x64/test/rpmbuild/BUILD/percona-server-5.7.24-26/percona-server-5.7.24-26/storage/innobase/btr/btr0bulk.cc line 53
OS WAIT ARRAY INFO: signal count 63932
RW-shared spins 0, rounds 133592, OS waits 23111
RW-excl spins 0, rounds 191153, OS waits 1431
RW-sx spins 29392, rounds 163477, OS waits 980
Spin rounds per wait: 133592.00 RW-shared, 191153.00 RW-excl, 5.56 RW-sx
[9 Dec 2018 5:09] MySQL Verification Team
Monty - your semaphore wait is closer to the one reported in
https://bugs.mysql.com/bug.php?id=82940
[9 Dec 2018 5:20] monty solomon
INNODB MONITOR OUTPUT

Attachment: monitor.txt (text/plain), 11.48 KiB.

[9 Jan 2019 11:56] MySQL Verification Team
Bug #93753 marked as duplicate of this one
[4 Nov 2019 8:53] xyz xyz
When will this be fixed?
[19 Mar 2020 14:13] MySQL Verification Team
The following bug:

https://bugs.mysql.com/bug.php?id=98995

is a duplicate of this one.
[23 Mar 2020 11:12] MySQL Verification Team
Disabling the AHI is not a solution in all cases. If the purge thread wakes up, or the master thread checks whether tables need to be evicted from the cache, this will still lead to long semaphore waits on the data dictionary lock.
[6 Aug 2020 9:47] WANG GUANGYOU
Hi, I have an idea to solve this problem, and I would like to discuss it. BTW, in my tests it only blocks DML for a few microseconds.
A) First, let's look at the drop table process:
A1 mysql_rm_table  # delete the table entry
A2 ha_delete_table  # delete the table handler
A3 ha_innobase::delete_table  # InnoDB's delete table implementation
A4 row_drop_table_for_mysql
A4.1 row_mysql_lock_data_dictionary  # acquire dict_sys->mutex
A4.2 check_foreigns  # foreign key check
A4.3 que_eval_sql  # maintain internal InnoDB metadata such as SYS_FIELDS, SYS_TABLESPACES
A4.3.1 dict_drop_index_tree (most time consuming)  # iterates every page of the indexes and removes the corresponding AHI entries; it is also blocked by buf_load while the buffer pool is being loaded
A4.4 dict_table_remove_from_cache  # update the dictionary cache
A4.5 fil_delete_tablespace  # drop the tablespace
A4.5.1 buf_LRU_flush_or_remove_pages (time consuming)  # iterates the buffer pool flush_list and removes the table's pages
A4.6 os_file_delete (time consuming)  # delete the data file
A4.7 row_mysql_unlock_data_dictionary  # release dict_sys->mutex
A5 finish

B) The proposed solution
From the analysis above, A4.3.1, A4.5.1, and A4.6 are time consuming, especially A4.3.1.
I think we can solve the problem in this way:
B1. Move the table to a dedicated database called recycle_bin (a cross-database RENAME TABLE; see the SQL sketch after this list).
B2. Check whether the buffer pool load has completed; if not, wait and check again.
B3. Iterate the buffer pool LRU list and remove the table's pages and the corresponding AHI entries gradually, in batches of 1000 pages (it is important to do this gradually because it holds the buffer pool mutex).
B4. Create a hard link to the .ibd file.
B5. Run the normal drop table process but skip A4.3.1 from the analysis above, since that work was already done in step B3.
B6. Shrink the .ibd file by truncating it step by step.
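
As SQL, step B1 is just a cross-database rename, which is cheap; a minimal sketch (the database name is from the proposal, the table name is illustrative):

CREATE DATABASE IF NOT EXISTS recycle_bin;
-- a cross-database RENAME TABLE is a quick metadata/file rename, not a copy
RENAME TABLE mydb.big_table TO recycle_bin.big_table;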
[10 Aug 2020 2:37] WANG GUANGYOU
Any update on this bug?
[10 Sep 2020 6:45] Shlomi Noach
+1 on the request for a progress update.
Also +1 that disabling AHI is in itself a dangerous operation: when we tried it, it locked up a production `master` to the point of outage. I think the `set global innodb_adaptive_hash_index=0` operation took many minutes to run, IIRC some 20 minutes.

Keep up the good work.
[30 Oct 2020 15:30] Sunny Bains
The fix has been pushed and should be in the next 8.x release. All truncate/drop table operations will be instant. The underlying issue is the same as Bug #98869.
[30 Oct 2020 21:07] Jean-François Gagné
Thanks Sunny, good to know this will be fixed in 8.0.

Can we hope to have this also fixed in 5.7?
[3 Dec 2020 0:39] Lucas Nicodemus
Sunny,

Can you confirm if the fix is pending release in 8.0.23 given that https://bugs.mysql.com/bug.php?id=98869 has been closed and resolved?
[2 Sep 2022 9:34] chengqing hu
There is a question that has been bothering me for a long time. Since the fix in 5.5.23, DROP TABLE only needs to scan the flush list and delete the pages of the corresponding table, and no longer needs to remove pages from the LRU list. Why does DROP TABLE still cause semaphore waits in version 5.7?

The code:
buf_LRU_flush_or_remove_pages(
...
switch (buf_remove) {
		case BUF_REMOVE_ALL_NO_WRITE:
			buf_LRU_drop_page_hash_for_tablespace(buf_pool, id);
			break;

		case BUF_REMOVE_FLUSH_NO_WRITE:
			/* It is a DROP TABLE for a single table
			tablespace. No AHI entries exist because
			we already dealt with them when freeing up
			extents. */
		case BUF_REMOVE_FLUSH_WRITE:
			/* We allow read-only queries against the
			table, there is no need to drop the AHI entries. */
			break;
		}

		buf_LRU_remove_pages(buf_pool, id, buf_remove, trx);

When dropping a table, BUF_REMOVE_FLUSH_NO_WRITE is passed in instead of BUF_REMOVE_ALL_NO_WRITE.
So it should no longer be necessary to scan the LRU list to remove AHI entries?

And the Reference Manual says:
On a system with a large InnoDB buffer pool and innodb_adaptive_hash_index enabled, TRUNCATE TABLE operations may cause a temporary drop in system performance due to an LRU scan that occurs when removing an InnoDB table's adaptive hash index entries. The problem was addressed for DROP TABLE in MySQL 5.5.23 (Bug #13704145, Bug #64284) but remains a known issue for TRUNCATE TABLE (Bug #68184).

Doesn't this mean that DROP TABLE already solved the performance problem caused by scanning the LRU list to delete AHI entries?
[2 Dec 2022 16:14] Vinicius Malvestio Grippa
Flamegraph of an idle database when running a drop (btr_drop_ahi_for_table hot), 8.0.30

Attachment: perf.report.out.svg (image/svg+xml, text), 675.12 KiB.

[2 Dec 2022 16:15] Vinicius Malvestio Grippa
I have attached the flamegraph for MySQL 8.0.30 when it is idle and a drop is executed. We can see function btr_drop_ahi_for_table being hot in the graph.
[15 Dec 2022 19:48] Online Ops
We've got the same request for clarity on this. Is this fixed in 8.0.x? Is there a reference bug on the 8.x branch side?

We just sank production on this under 8.0.27, so pretty sure it's not fixed unless it's in the 8.0.30 release.
[7 Sep 2023 4:48] Jervin Real
This is still reproducible on 8.0.33 and 8.0.34.
[1 Dec 2023 22:52] Daniel Lenski
> This is still reproducible on 8.0.33 and 8.0.34.

New bug report with a reproduction of this issue on 8.0.35: https://bugs.mysql.com/bug.php?id=113312
[1 Dec 2023 22:54] Mershad Irani
This is also reproducible on 8.1 and 8.0.35. I created https://bugs.mysql.com/bug.php?id=113312 with detailed steps to reproduce the issue.
[6 Dec 2023 21:36] Nuno P
I wonder if this is what I have been battling with, for the past weeks.

I'm on 8.0.33, and I have a cronjob that updates the contents of a table from a CSV file.

The cronjob runs a "LOAD DATA LOCAL INFILE" into a new table, then a set of "RENAME TABLE" to swap the tables, and then "DROP TABLE" to delete the renamed old table.

The "DROP TABLE" is taking 3 seconds, and during that time, I get a number of logs saying that a number of concurrent "SELECT" queries took too long. Those "SELECT" tables are on another database (on the same MySQL server), and have absolutely nothing to do with this table.

I've been trying a number of things, such as using "TRUNCATE/OPTIMIZE/DROP" instead (but it looks like "TRUNCATE" is effectively a "DROP/CREATE" anyway), adding "SLEEP(5)" between the queries, etc... nothing improves.

The table isn't even that big... just 350MB and 1 secondary index.

Is this the same issue described here?

I've recently migrated from MariaDB to MySQL, and I don't recall ever having this issue with MariaDB.

Thank you very much.
[9 Dec 2023 19:04] Nuno P
Looks like `innodb_adaptive_hash_index = OFF` resolved the issue for me !!