Bug #88096: MySQL crashes in os_aio_wait_until_no_pending_writes()
Submitted: 14 Oct 2017 8:38
Modified: 14 Nov 2017 22:22
Reporter: kfpanda kf
Status: No Feedback
Category: MySQL Server: InnoDB storage engine
Severity: S2 (Serious)
Version: 5.6.37
OS: Linux
CPU Architecture: Any

[14 Oct 2017 8:38] kfpanda kf
Description:
#0  0x00007ffb99e55741 in pthread_kill () from /lib64/libpthread.so.0
#1  0x0000000000ab68b6 in my_write_core (sig=6) at /sda/src/rds-mysql/mysys/stacktrace.c:424
#2  0x000000000073403c in handle_fatal_signal (sig=6) at /sda/src/rds-mysql/sql/signal_handler.cc:230
#3  <signal handler called>
#4  0x00007ffb98e615f7 in raise () from /lib64/libc.so.6
#5  0x00007ffb98e62ce8 in abort () from /lib64/libc.so.6
#6  0x0000000000c0c3f5 in os_aio_wait_until_no_pending_writes () at /sda/src/rds-mysql/storage/innobase/os/os0file.cc:4092
#7  0x0000000000d4ebfb in buf_dblwr_sync_datafiles () at /sda/src/rds-mysql/storage/innobase/buf/buf0dblwr.cc:116
#8  0x0000000000d503f6 in buf_dblwr_flush_buffered_writes () at /sda/src/rds-mysql/storage/innobase/buf/buf0dblwr.cc:834
#9  0x0000000000d58200 in buf_flush_common (flush_type=BUF_FLUSH_LIST, page_count=0) at /sda/src/rds-mysql/storage/innobase/buf/buf0flu.cc:1650
#10 0x0000000000d5867f in buf_flush_list (min_n=18446744073709551614, lsn_limit=18446744073709551615, n_processed=0x0) at /sda/src/rds-mysql/storage/innobase/buf/buf0flu.cc:1820
#11 0x0000000000ba46f5 in buf_flush_list_now_set (thd=0x2d988a0, var=0x1508a80 <mysql_sysvar_buf_flush_list_now>, var_ptr=0x1508543 <innodb_buf_flush_list_now>, save=0x7ffb54004fb8) at /sda/src/rds-mysql/storage/innobase/handler/ha_innodb.cc:15676
#12 0x00000000007fa821 in sys_var_pluginvar::global_update (this=0x2da9b38, thd=0x2d988a0, var=0x7ffb54004f98) at /sda/src/rds-mysql/sql/sql_plugin.cc:3268
#13 0x0000000000731b35 in sys_var::update (this=0x2da9b38, thd=0x2d988a0, var=0x7ffb54004f98) at /sda/src/rds-mysql/sql/set_var.cc:193
#14 0x0000000000732c5e in set_var::update (this=0x7ffb54004f98, thd=0x2d988a0) at /sda/src/rds-mysql/sql/set_var.cc:670
#15 0x0000000000732836 in sql_set_variables (thd=0x2d988a0, var_list=0x2d9b3f0) at /sda/src/rds-mysql/sql/set_var.cc:573
#16 0x00000000007e58bb in mysql_execute_command (thd=0x2d988a0) at /sda/src/rds-mysql/sql/sql_parse.cc:3782
#17 0x00000000007ecb68 in mysql_parse (thd=0x2d988a0, rawbuf=0x7ffb54004e00 "set global innodb_buf_flush_list_now = ON", length=41, parser_state=0x7ffb7311a5e0) at /sda/src/rds-mysql/sql/sql_parse.cc:6489
#18 0x00000000007df8e2 in dispatch_command (command=COM_QUERY, thd=0x2d988a0, packet=0x38b0401 "set global innodb_buf_flush_list_now = ON;", packet_length=42) at /sda/src/rds-mysql/sql/sql_parse.cc:1377
#19 0x00000000007de914 in do_command (thd=0x2d988a0) at /sda/src/rds-mysql/sql/sql_parse.cc:1040
#20 0x00007ffb7ec40930 in threadpool_process_request (thd=0x2d988a0) at /sda/src/rds-mysql/plugin/threadpool/threadpool_common.cc:321
#21 0x00007ffb7ec435c8 in handle_event (connection=0x3984230) at /sda/src/rds-mysql/plugin/threadpool/threadpool_unix.cc:1611
#22 0x00007ffb7ec43825 in worker_main (param=0x7ffb7ee48e00 <all_groups>) at /sda/src/rds-mysql/plugin/threadpool/threadpool_unix.cc:1664
#23 0x0000000000b64db5 in pfs_spawn_thread (arg=0x7ffb4c001160) at /sda/src/rds-mysql/storage/perfschema/pfs.cc:1860
#24 0x00007ffb99e50dc5 in start_thread () from /lib64/libpthread.so.0
#25 0x00007ffb98f2221d in clone () from /lib64/libc.so.6

How to repeat:
It's not yet known how to reproduce it.
[14 Oct 2017 22:22] MySQL Verification Team
Thank you for the bug report. Please re-open this bug report when you can provide a repeatable test case. Thanks.
[15 Oct 2017 7:08] MySQL Verification Team
The crash happened on this statement:

set global innodb_buf_flush_list_now = ON;

This is a debug-only variable. It probably malfunctions somewhere along this code path...
[15 Nov 2017 1:00] Bugs System
No feedback was provided for this bug for over a month, so it is being suspended automatically. If you are able to provide the information that was originally requested, please do so and change the status of the bug back to "Open".