Bug #96099 show processlist crash with error of __cxa_pure_virtual
Submitted: 5 Jul 2019 7:07    Modified: 9 Aug 2019 5:21
Reporter: andy zhang
Status: No Feedback
Category: MySQL Server: Information schema    Severity: S2 (Serious)
Version: 8.0    OS: Any
Assigned to:    CPU Architecture: Any

[5 Jul 2019 7:07] andy zhang
Description:
Under heavy sysbench load, if there are concurrent "show processlist" statements, MySQL can crash with a stack like the following (taken from the core file):

#0  0x00007f55ffe329b1 in pthread_kill () from /lib64/libpthread.so.0
#1  0x0000000000ebc0bc in handle_fatal_signal (sig=6) at /home/admin/137_20190626151930927_92578811_code/rpm_workspace/sql/signal_handler.cc:249
#2  <signal handler called>
#3  0x00007f55fe1e11f7 in raise () from /lib64/libc.so.6
#4  0x00007f55fe1e28e8 in abort () from /lib64/libc.so.6
#5  0x00007f55fe8d29d5 in __gnu_cxx::__verbose_terminate_handler() () from /lib64/libstdc++.so.6
#6  0x00007f55fe8d0946 in ?? () from /lib64/libstdc++.so.6
#7  0x00007f55fe8d0973 in std::terminate() () from /lib64/libstdc++.so.6
#8  0x00007f55fe8d14df in __cxa_pure_virtual () from /lib64/libstdc++.so.6
#9  0x0000000000df0e57 in Fill_process_list::operator() (this=0x7f55d40c8990, inspect_thd=0x7f1d945317f0) at /home/admin/137_20190626151930927_92578811_code/rpm_workspace/sql/sql_show.cc:2137
#10 0x0000000000ca5405 in operator() (thd=<optimized out>, this=<synthetic pointer>) at /home/admin/137_20190626151930927_92578811_code/rpm_workspace/sql/mysqld_thd_manager.cc:83
#11 for_each<THD**, Do_THD> (__f=..., __last=0x7f55d40c8878, __first=0x7f55d40c8800) at /usr/include/c++/4.8.2/bits/stl_algo.h:4417
#12 Global_THD_manager::do_for_all_thd_copy (this=0x5ad5a20, func=func@entry=0x7f55d40c8990) at /home/admin/137_20190626151930927_92578811_code/rpm_workspace/sql/mysqld_thd_manager.cc:313
#13 0x0000000000de94fc in fill_schema_processlist (thd=<optimized out>, tables=<optimized out>) at /home/admin/137_20190626151930927_92578811_code/rpm_workspace/sql/sql_show.cc:2247
#14 0x0000000000de975b in do_fill_table (thd=thd@entry=0x7f1f5800fc30, table_list=table_list@entry=0x7f1f5801ba98, qep_tab=qep_tab@entry=0x7f1f5801d6d0) at /home/admin/137_20190626151930927_92578811_code/rpm_workspace/sql/sql_show.cc:4715
#15 0x0000000000df20c1 in get_schema_tables_result (join=join@entry=0x7f1f5801cc60, executed_place=executed_place@entry=PROCESSED_BY_JOIN_EXEC) at /home/admin/137_20190626151930927_92578811_code/rpm_workspace/sql/sql_show.cc:4828
#16 0x0000000000dd662e in JOIN::prepare_result (this=this@entry=0x7f1f5801cc60) at /home/admin/137_20190626151930927_92578811_code/rpm_workspace/sql/sql_select.cc:1438
#17 0x0000000000d45bc2 in JOIN::exec (this=0x7f1f5801cc60) at /home/admin/137_20190626151930927_92578811_code/rpm_workspace/sql/sql_executor.cc:204
#18 0x0000000000dd727d in Sql_cmd_dml::execute_inner (this=0x7f1f5801c050, thd=0x7f1f5800fc30) at /home/admin/137_20190626151930927_92578811_code/rpm_workspace/sql/sql_select.cc:699
#19 0x0000000000de19c4 in Sql_cmd_dml::execute (this=0x7f1f5801c050, thd=0x7f1f5800fc30) at /home/admin/137_20190626151930927_92578811_code/rpm_workspace/sql/sql_select.cc:597
#20 0x0000000000d7ccae in mysql_execute_command (thd=thd@entry=0x7f1f5800fc30, first_level=first_level@entry=true) at /home/admin/137_20190626151930927_92578811_code/rpm_workspace/sql/sql_parse.cc:4644
#21 0x0000000000d82650 in mysql_parse (thd=thd@entry=0x7f1f5800fc30, parser_state=parser_state@entry=0x7f55d40ca3e0, force_primary_storage_engine=force_primary_storage_engine@entry=false) at /home/admin/137_20190626151930927_92578811_code/rpm_workspace/sql/sql_parse.cc:5396
#22 0x0000000000d85cb0 in dispatch_command (thd=thd@entry=0x7f1f5800fc30, com_data=com_data@entry=0x7f55d40caba0, command=COM_QUERY) at /home/admin/137_20190626151930927_92578811_code/rpm_workspace/sql/sql_parse.cc:1794
#23 0x0000000000d86790 in do_command (thd=thd@entry=0x7f1f5800fc30) at /home/admin/137_20190626151930927_92578811_code/rpm_workspace/sql/sql_parse.cc:1288
#24 0x0000000000eabf58 in handle_connection (arg=arg@entry=0xfc657e0) at /home/admin/137_20190626151930927_92578811_code/rpm_workspace/sql/conn_handler/connection_handler_per_thread.cc:316
#25 0x00000000022d4930 in pfs_spawn_thread (arg=0xf9abbb0) at /home/admin/137_20190626151930927_92578811_code/rpm_workspace/storage/perfschema/pfs.cc:2836
#26 0x00007f55ffe2de25 in start_thread () from /lib64/libpthread.so.0
#27 0x00007f55fe2a434d in clone () from /lib64/libc.so.6

How to repeat:
Run a high-volume sysbench workload such as oltp_write_only, and issue SHOW PROCESSLIST concurrently.
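A minimal sketch of that reproduction attempt is below. The host, credentials, table counts, thread counts, and durations are illustrative assumptions, not values from the report; any sufficiently heavy write workload plus concurrent SHOW PROCESSLIST should exercise the same code path.

```shell
# Hypothetical repro sketch; connection details and load parameters
# are assumptions, not values taken from the report.
SB_OPTS="--mysql-host=127.0.0.1 --mysql-user=sbtest --mysql-password=secret \
         --mysql-db=sbtest --tables=8 --table-size=100000 --threads=64"

# Prepare test tables, then run a sustained write-only workload.
sysbench oltp_write_only $SB_OPTS prepare
sysbench oltp_write_only $SB_OPTS --time=300 run &

# Concurrently hammer the server with SHOW PROCESSLIST, e.g. via mysqlslap.
mysqlslap --host=127.0.0.1 --user=sbtest --password=secret \
  --concurrency=16 --number-of-queries=100000 --query="SHOW PROCESSLIST"
```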

Suggested fix:
void Global_THD_manager::do_for_all_thd_copy(Do_THD_Impl *func) {
  Do_THD doit(func);

  for (int i = 0; i < NUM_PARTITIONS; i++) {
    MUTEX_LOCK(lock_remove, &LOCK_thd_remove[i]);
    mysql_mutex_lock(&LOCK_thd_list[i]);

    /* Take copy of global_thread_list. */
    THD_array thd_list_copy(thd_list[i]);

    /*
      Allow inserts to global_thread_list. Newly added thd
      will not be accounted for when executing func.
    */
    mysql_mutex_unlock(&LOCK_thd_list[i]);

    /* Execute func for all existing threads. */
    std::for_each(thd_list_copy.begin(), thd_list_copy.end(), doit);

    DEBUG_SYNC_C("inside_do_for_all_thd_copy");
  }
}

do_for_all_thd_copy releases LOCK_thd_list[i] too early: the target THD may already have been freed, or be in the process of being freed, while the "doit" logic is executing. But we cannot simply switch to do_for_all_thd here either, because holding LOCK_thd_list for the whole iteration would cause a performance regression for new thread creation.

So we might need a new lock that serializes THD removal only?
[9 Jul 2019 5:21] Umesh Shastry
Hello Andy Zhang,

Thank you for the report and feedback.
I tried to reproduce this at my end on MySQL Server 8.0.16, using mysqlslap for concurrent SHOW PROCESSLIST statements and sysbench 1.1.0, but was not able to reproduce it. Could you please provide exact details, such as the MySQL version (if built from source, the exact cmake options used; if a binary, whether it is a release or debug build), configuration details (my.cnf), and the exact sysbench command, so we can reproduce this issue at our end? Thank you!

regards,
Umesh
[10 Aug 2019 1:00] Bugs System
No feedback was provided for this bug for over a month, so it is
being suspended automatically. If you are able to provide the
information that was originally requested, please do so and change
the status of the bug back to "Open".