Description:
Error message from the slave replication mysqld:
070417 21:49:15 - mysqld got signal 11;
This could be because you hit a bug. It is also possible that this binary
or one of the libraries it was linked against is corrupt, improperly built,
or misconfigured. This error can also be caused by malfunctioning hardware.
We will try our best to scrape up some info that will hopefully help diagnose
the problem, but since we have already crashed, something is definitely wrong
and this may fail.
key_buffer_size=8388600
read_buffer_size=131072
max_used_connections=1
max_threads=151
threads_connected=1
It is possible that mysqld could use up to
key_buffer_size + (read_buffer_size + sort_buffer_size)*max_threads = 338095 K
bytes of memory
Hope that's ok; if not, decrease some variables in the equation.
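The estimate above is just the printed formula evaluated with the listed variables. A minimal sketch of the arithmetic, assuming a sort_buffer_size of about 2 MB (the log does not print that variable, so the assumed value makes the total come out slightly below the log's 338095 K):

```python
# Sketch of mysqld's crash-time memory estimate, using the values
# printed in the log above. sort_buffer_size is NOT in the log; the
# 2 MB value below is an assumption, so the total differs slightly
# from the log's 338095 K figure.
key_buffer_size = 8388600
read_buffer_size = 131072
sort_buffer_size = 2097152   # assumed default, not from the log
max_threads = 151

total_bytes = key_buffer_size + (read_buffer_size + sort_buffer_size) * max_threads
print(total_bytes // 1024, "K")  # close to, but not exactly, 338095 K
```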
thd: 0xc67ec0
Attempting backtrace. You can use the following information to find out
where mysqld died. If you see no messages after this, something went
terribly wrong...
Cannot determine thread, fp=0x40999a80, backtrace may not be correct.
Stack range sanity check OK, backtrace follows:
0x5a8587
New value of fp=0xc67ec0 failed sanity check, terminating stack trace!
Please read http://dev.mysql.com/doc/mysql/en/using-stack-trace.html and follow instructions on how to resolve the stack trace. Resolved
stack trace is much more helpful in diagnosing the problem, so please do
resolve it
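Resolving the trace means mapping each hex return address to the nearest preceding symbol in a symbols file produced with `nm -n mysqld`, which is what resolve_stack_dump does. A minimal sketch of that lookup — the symbol table below is purely hypothetical, illustrative addresses and names rather than anything from this mysqld binary:

```python
import bisect

# Hypothetical symbol table, as `nm -n mysqld` would produce, sorted by
# address. These names and addresses are illustrative only.
symbols = [
    (0x5a0000, "handle_slave_sql"),
    (0x5a8000, "mysql_parse"),
    (0x5b0000, "do_command"),
]

def resolve(addr):
    """Find the nearest symbol at or below addr, like resolve_stack_dump."""
    addrs = [a for a, _ in symbols]
    i = bisect.bisect_right(addrs, addr) - 1
    if i < 0:
        return "??"
    base, name = symbols[i]
    return "%s + 0x%x" % (name, addr - base)

# 0x5a8587 is the one frame the backtrace above managed to print.
print(resolve(0x5a8587))
```

Against a real symbols file from the crashing binary, the same lookup would name the function containing 0x5a8587.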
Trying to get some variables.
Some pointers may be invalid and cause the dump to abort...
thd->query at (nil) is invalid pointer
thd->thread_id=8
The manual page at http://www.mysql.com/doc/en/Crashing.html contains
information that should help you find out what is causing the crash.
Number of processes running now: 0
SHOW SLAVE STATUS output (on the slave) after restart:
Master_Host: XX.XXX.XX.XXX
Master_User: replication02
Master_Port: 3306
Connect_Retry: 60
Master_Log_File: replication01-bin.000004
Read_Master_Log_Pos: 698035705
Relay_Log_File: replication01-relay-bin.000007
Relay_Log_Pos: 28019012
Relay_Master_Log_File: replication01-bin.000002
Slave_IO_Running: Yes
Slave_SQL_Running: Yes
Replicate_Do_DB:
Replicate_Ignore_DB:
Replicate_Do_Table:
Replicate_Ignore_Table:
Replicate_Wild_Do_Table:
Replicate_Wild_Ignore_Table:
Last_Errno: 0
Last_Error:
Skip_Counter: 0
Exec_Master_Log_Pos: 157456272
Relay_Log_Space: 2716169748
Until_Condition: None
Until_Log_File:
Until_Log_Pos: 0
Master_SSL_Allowed: No
Master_SSL_CA_File:
Master_SSL_CA_Path:
Master_SSL_Cert:
Master_SSL_Cipher:
Master_SSL_Key:
Seconds_Behind_Master: 42688
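One way to quantify how far behind the slave was: compare the binlog the I/O thread is reading (Master_Log_File) with the one the SQL thread is executing (Relay_Master_Log_File), and look at the pending relay-log bytes. The values below are taken from the status output above; the small parsing helper is my own sketch, not a MySQL API:

```python
# Rough backlog check from the SHOW SLAVE STATUS fields above.
status = {
    "Master_Log_File": "replication01-bin.000004",
    "Relay_Master_Log_File": "replication01-bin.000002",
    "Relay_Log_Space": 2716169748,
    "Seconds_Behind_Master": 42688,
}

def binlog_index(name):
    # "replication01-bin.000004" -> 4
    return int(name.rsplit(".", 1)[1])

file_gap = (binlog_index(status["Master_Log_File"])
            - binlog_index(status["Relay_Master_Log_File"]))
print("SQL thread is", file_gap, "binlog files behind the I/O thread")
print("Pending relay log: %.1f GiB" % (status["Relay_Log_Space"] / 2**30))
print("Seconds behind master:", status["Seconds_Behind_Master"])  # ~11.9 hours
```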
How to repeat:
The crash occurred during a replication load test (80% updates, 20% inserts, ~1000 TPS total). The slave cluster was lagging substantially behind the master (see the SHOW SLAVE STATUS output above). The master cluster is 4 x dual 3.2 GHz CPU data nodes; the slave cluster is less powerful, with 2 x dual 1.8 GHz CPU data nodes. After restarting the slave replication mysqld and re-synchronising with the master binlog, the error did not recur.