Bug #96113 | mysqld got signal 11 | |
---|---|---|---
Submitted: | 6 Jul 2019 16:14 | Modified: | 27 Dec 2019 12:48 |
Reporter: | Mikael P HOUNDEGNON GBAI | Email Updates: | |
Status: | Can't repeat | Impact on me: | |
Category: | MySQL Server: Group Replication | Severity: | S1 (Critical) |
Version: | 5.7.26 | OS: | Red Hat (Amazon Linux AMI 2018.03 - rhel fedora) |
Assigned to: | MySQL Verification Team | CPU Architecture: | Other (36) |
[6 Jul 2019 16:14]
Mikael P HOUNDEGNON GBAI
[6 Jul 2019 21:09]
MySQL Verification Team
Hi,

There is not enough data to go on here. Can you share:
- Why do you think this is group replication related?
- Your config
- Which binary are you using?
- Is there a core dump?
- What did you do to crash mysqld?
- Can you reproduce this issue?

Thanks
[6 Jul 2019 21:49]
Mikael P HOUNDEGNON GBAI
MySQL Config File
Attachment: my.cnf (application/octet-stream, text), 4.25 KiB.
[6 Jul 2019 22:08]
Mikael P HOUNDEGNON GBAI
- Why do you think this is group replication related? I'm using MySQL InnoDB Cluster, which is based on Group Replication (3 MySQL servers). The issue happened on the R/W node of the cluster twice on the same day.
- Your config: attached to this report (my.cnf).
- Which binary are you using? MySQL Community Edition Ver 14.14 Distrib 5.7.26, for Linux (x86_64) using EditLine wrapper.
- Is there a core dump? I will update soon.
- What did you do to crash mysqld? Still investigating.
- Can you reproduce this issue? Not yet, still investigating; will update ASAP.
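Since the my.cnf contents are only available as an attachment and not visible inline, here is a minimal sketch of statements that would capture the Group Replication state most relevant to this crash, assuming standard 5.7 variable and Performance Schema table names (nothing below is taken from the attachment):

```sql
-- Group Replication settings, including the LZ4 compression threshold.
SHOW VARIABLES LIKE 'group_replication%';

-- Group membership and per-member statistics for the 3-node InnoDB Cluster.
SELECT * FROM performance_schema.replication_group_members;
SELECT * FROM performance_schema.replication_group_member_stats;
```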
[9 Jul 2019 18:55]
MySQL Verification Team
Hi,

If you can get us the core dump, or if you can figure out what you were doing when this crash occurred, that would help a lot. Just from this report I'm having trouble seeing what crashed and why. It looks like group replication messaging crashed during compression:

stack_bottom = 7f90d3ad6e68 thread_stack 0x40000
/usr/sbin/mysqld(my_print_stacktrace+0x35)[0xf54535]
/usr/sbin/mysqld(handle_fatal_signal+0x4a4)[0x7d3b54]
/lib64/libpthread.so.0(+0xf5e0)[0x7f9f221995e0]
/usr/sbin/mysqld[0x120bae6]
/usr/sbin/mysqld(LZ4_compress_fast_extState+0xce)[0x120c10e]
/usr/sbin/mysqld(LZ4_compress_fast+0x28)[0x120c218]
/usr/lib64/mysql/plugin/group_replication.so(_ZN21Gcs_message_stage_lz45applyER10Gcs_packet+0x3e3)[0x7f90f1b97423]

But I have never seen this on bare metal.

Thanks,
Bogdan
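A hedged note on the trace above: per the documented behaviour, the LZ4_compress_fast call inside Gcs_message_stage_lz4::apply is only made when an outgoing Group Replication message exceeds group_replication_compression_threshold (1000000 bytes by default), so one quick check on the crashing member is whether unusually large transactions were being committed at that time. A minimal sketch, assuming standard variable names:

```sql
-- The LZ4 compression stage only compresses messages larger than this threshold
-- (documented default: 1000000 bytes).
SHOW VARIABLES LIKE 'group_replication_compression_threshold';

-- Group Replication requires row-based logging; with full row images, large
-- multi-row transactions translate into large messages sent to the group.
SHOW VARIABLES LIKE 'binlog_format';
SHOW VARIABLES LIKE 'binlog_row_image';
```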
[16 Jul 2019 11:09]
MySQL Verification Team
One additional question: was this 5.7.26 installed from scratch, or were there some upgrades/downgrades along the way? Thanks
[10 Aug 2019 1:00]
Bugs System
No feedback was provided for this bug for over a month, so it is being suspended automatically. If you are able to provide the information that was originally requested, please do so and change the status of the bug back to "Open".
[27 Dec 2019 12:10]
ta fan
10:55:07 UTC - mysqld got signal 11 ;
This could be because you hit a bug. It is also possible that this binary or one of the libraries it was linked against is corrupt, improperly built, or misconfigured. This error can also be caused by malfunctioning hardware.
Attempting to collect some information that could help diagnose the problem. As this is a crash and something is definitely wrong, the information collection process might fail.
Please help us make Percona Server better by reporting any bugs at http://bugs.percona.com/

key_buffer_size=16777216
read_buffer_size=8388608
max_used_connections=95
max_threads=50001
thread_count=31
connection_count=28
It is possible that mysqld could use up to key_buffer_size + (read_buffer_size + sort_buffer_size)*max_threads = 819273023 K bytes of memory
Hope that's ok; if not, decrease some variables in the equation.

Thread pointer: 0x7fdb62de6000
Attempting backtrace. You can use the following information to find out where mysqld died. If you see no messages after this, something went terribly wrong...
stack_bottom = 7fdb417fed48 thread_stack 0x100000
/opt/tiger/app/percona-5.7.23/bin/mysqld(my_print_stacktrace+0x2c)[0xe9f89c]
/opt/tiger/app/percona-5.7.23/bin/mysqld(handle_fatal_signal+0x479)[0x7a4269]
/lib/x86_64-linux-gnu/libpthread.so.0(+0x110c0)[0x7fdeec49f0c0]
/opt/tiger/app/percona-5.7.23/bin/mysqld[0x121f83e]
/opt/tiger/app/percona-5.7.23/bin/mysqld(LZ4_compress_fast_extState+0xf9)[0x121fac9]
/opt/tiger/app/percona-5.7.23/bin/mysqld(LZ4_compress_fast+0x28)[0x12202d8]
/opt/tiger/app/percona-5.7.23/lib/mysql/plugin/group_replication.so(_ZN21Gcs_message_stage_lz45applyER10Gcs_packet+0x118)[0x7fdb6db84a28]
/opt/tiger/app/percona-5.7.23/lib/mysql/plugin/group_replication.so(_ZN20Gcs_message_pipeline8outgoingER10Gcs_packet+0x7d)[0x7fdb6db744bd]
/opt/tiger/app/percona-5.7.23/lib/mysql/plugin/group_replication.so(_ZN22Gcs_xcom_communication20send_binding_messageERK11Gcs_messagePyN27Gcs_internal_message_header15enum_cargo_typeE+0x5d6)[0x7fdb6db3d976]
/opt/tiger/app/percona-5.7.23/lib/mysql/plugin/group_replication.so(_ZN22Gcs_xcom_communication12send_messageERK11Gcs_message+0x228)[0x7fdb6db3cf28]
/opt/tiger/app/percona-5.7.23/lib/mysql/plugin/group_replication.so(_ZN14Gcs_operations12send_messageERK18Plugin_gcs_messageb+0x1b7)[0x7fdb6db9bb87]
/opt/tiger/app/percona-5.7.23/lib/mysql/plugin/group_replication.so(_Z37group_replication_trans_before_commitP11Trans_param+0xf51)[0x7fdb6dba5271]
/opt/tiger/app/percona-5.7.23/bin/mysqld(_ZN14Trans_delegate13before_commitEP3THDbP11st_io_cacheS3_y+0x148)[0xbd3188]
/opt/tiger/app/percona-5.7.23/bin/mysqld(_ZN13MYSQL_BIN_LOG6commitEP3THDb+0x19e)[0xe37c9e]
/opt/tiger/app/percona-5.7.23/bin/mysqld(_Z15ha_commit_transP3THDbb+0x1d2)[0x801742]
/opt/tiger/app/percona-5.7.23/bin/mysqld(_Z17trans_commit_stmtP3THD+0x2e)[0xd1baae]
/opt/tiger/app/percona-5.7.23/bin/mysqld(_Z21mysql_execute_commandP3THDb+0x83c)[0xc696ac]
/opt/tiger/app/percona-5.7.23/bin/mysqld(_Z11mysql_parseP3THDP12Parser_state+0x5dd)[0xc6f6ad]
/opt/tiger/app/percona-5.7.23/bin/mysqld(_Z16dispatch_commandP3THDPK8COM_DATA19enum_server_command+0x8a7)[0xc70017]
/opt/tiger/app/percona-5.7.23/bin/mysqld(_Z10do_commandP3THD+0x1b7)[0xc717a7]
/opt/tiger/app/percona-5.7.23/bin/mysqld(_Z26threadpool_process_requestP3THD+0xc7)[0xd1b367]
/opt/tiger/app/percona-5.7.23/bin/mysqld[0xd29dce]
/opt/tiger/app/percona-5.7.23/bin/mysqld(pfs_spawn_thread+0x1b1)[0xeb9a81]
/lib/x86_64-linux-gnu/libpthread.so.0(+0x7494)[0x7fdeec495494]
/lib/x86_64-linux-gnu/libc.so.6(clone+0x3f)[0x7fdeeb0dcacf]

Trying to get some variables. Some pointers may be invalid and cause the dump to abort.
Query (7fdb5d65b030): is an invalid pointer
Connection ID (thread ID): 8168625
Status: NOT_KILLED

You may download the Percona Server operations manual by visiting http://www.percona.com/software/percona-server/. You may find information in the manual which will help you identify the cause of the crash.
The "--memlock" argument, which was enabled, uses system calls that are unreliable and unstable on some operating systems and operating-system versions (notably, some versions of Linux). This crash could be due to use of those buggy OS calls. You should consider whether you really need the "--memlock" parameter and/or consult the OS distributer about "mlockall" bugs.

---------------

I hit this bug's error and figured out how to trigger the problem with the following steps (a condensed SQL sketch of these steps follows this comment):

1. Use sysbench to generate the test data and table structure; in my test case sbtest1 has 5 million (500W) records.
   sysvar: group_replication_compression_threshold | 1000000

   show create table sbtest1\G;
   *************************** 1. row ***************************
   Table: sbtest1
   Create Table: CREATE TABLE `sbtest1` (
     `id` int(10) unsigned NOT NULL AUTO_INCREMENT,
     `k` int(10) unsigned NOT NULL DEFAULT '0',
     `c` char(120) NOT NULL DEFAULT '',
     `pad` char(60) NOT NULL DEFAULT '',
     `d` int(10) unsigned DEFAULT NULL,
     PRIMARY KEY (`id`)
   ) ENGINE=InnoDB AUTO_INCREMENT=5160965 DEFAULT CHARSET=utf8 MAX_ROWS=1000000
   1 row in set (0.00 sec)

2. Add another column d by: alter table sbtest1 add column d int(10) unsigned;
3. Finally execute the full-table update SQL: update sbtest1 set d=id;
4. mysqld went down.

Is there anything else I need to provide?
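For reference, a condensed SQL sketch of the reproduction steps reported above. The sysbench invocation is not given in the report, so the data load is left as a placeholder comment; the table definition is taken from the SHOW CREATE TABLE output, minus the column d added in step 2:

```sql
-- Step 1 (data load is an assumption): sysbench creates and fills sbtest1
-- with roughly 5,000,000 rows; the exact command line is not in the report.
CREATE TABLE `sbtest1` (
  `id` int(10) unsigned NOT NULL AUTO_INCREMENT,
  `k` int(10) unsigned NOT NULL DEFAULT '0',
  `c` char(120) NOT NULL DEFAULT '',
  `pad` char(60) NOT NULL DEFAULT '',
  PRIMARY KEY (`id`)
) ENGINE=InnoDB DEFAULT CHARSET=utf8 MAX_ROWS=1000000;
-- ... populate ~5,000,000 rows here (sysbench prepare) ...

-- group_replication_compression_threshold was at its default of 1000000 bytes.

-- Step 2: add the extra column.
ALTER TABLE sbtest1 ADD COLUMN d int(10) unsigned;

-- Step 3: full-table update committed as one transaction; this is where the
-- crash was reported.
UPDATE sbtest1 SET d = id;
```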
[27 Dec 2019 12:48]
MySQL Verification Team
Hi, I followed your test and I cannot reproduce this on either 5.7 or 8.0 MySQL Server available from https://dev.mysql.com/downloads/
[29 Dec 2019 12:37]
ta fan
1. I used the 5.7.23 that I built in my local environment. It's weird that when I set group_replication_compression_threshold=100000000 to keep GCS from compressing the packet with the LZ4 function, it doesn't work: the backtrace still shows the panic happening where the packet is compressed via Gcs_message_stage_lz4::apply / LZ4_compress_fast. Is there anything I can provide to help you trace the problem? My mysqld binary or the core dump?
2. I'll try the newest 5.7.28 later in the same environment.
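A hedged aside on why raising the threshold may not have helped: with row-based logging and full row images, an UPDATE touching roughly 5,000,000 rows that each carry char(120) and char(60) columns can easily produce well over 100000000 bytes of replicated events in a single transaction, so the message would still cross even the raised threshold (this is an estimate, not a measurement from this report). Per the documentation, setting group_replication_compression_threshold to 0 is what disables the compression stage entirely; a minimal sketch:

```sql
-- Rough estimate (assumption, not measured here): ~5,000,000 rows x a few
-- hundred bytes per before/after row image is on the order of a gigabyte in
-- one transaction, which still exceeds a 100000000-byte threshold.

-- Per the documentation, 0 disables Group Replication message compression;
-- the variable is dynamic, so it can be changed at runtime on each member.
SET GLOBAL group_replication_compression_threshold = 0;
SHOW VARIABLES LIKE 'group_replication_compression_threshold';
```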