Description:
We recently set up replication from our existing MySQL 4.1 server to a new 5.0 server (4.1 as master) to ease migration. While creating a new user on the master earlier today, I noticed that I could no longer reach the slave server. Investigation showed that mysqld_safe was restarting mysqld about every 2 seconds. I stopped mysqld and restarted it with --skip-slave-start, which allowed me to skip past several GRANT statements in the relay log; after skipping them, operation continued normally. Here is the relevant section of the MySQL error log, which also shows the statement that appears to have caused the crash:
070710 14:34:36 - mysqld got signal 11;
This could be because you hit a bug. It is also possible that this binary
or one of the libraries it was linked against is corrupt, improperly built,
or misconfigured. This error can also be caused by malfunctioning hardware.
We will try our best to scrape up some info that will hopefully help diagnose
the problem, but since we have already crashed, something is definitely wrong
and this may fail.
key_buffer_size=67108864
read_buffer_size=1044480
max_used_connections=2
max_connections=200
threads_connected=1
It is possible that mysqld could use up to
key_buffer_size + (read_buffer_size + sort_buffer_size)*max_connections = 474334 K
bytes of memory
Hope that's ok; if not, decrease some variables in the equation.
thd=0x61a45c0
Attempting backtrace. You can use the following information to find out
where mysqld died. If you see no messages after this, something went
terribly wrong...
Cannot determine thread, fp=0xe23e20, backtrace may not be correct.
Bogus stack limit or frame pointer, fp=0xe23e20, stack_bottom=0x45150000, thread_stack=262144, aborting backtrace.
Trying to get some variables.
Some pointers may be invalid and cause the dump to abort...
thd->query at 0x619f4d6 = GRANT USAGE, SELECT (`Host`, `Db`, `User`, `Table_name`, `Table_priv`, `Column_priv`) ON `mysql`.`tables_priv` TO 'foo'@'bar.net'
thd->thread_id=11
The manual page at http://www.mysql.com/doc/en/Crashing.html contains
information that should help you find out what is causing the crash.
Number of processes running now: 0
070710 14:34:36 mysqld restarted
070710 14:34:39 InnoDB: Started; log sequence number 0 1433984653
070710 14:34:39 [Note] Recovering after a crash using /var/lib/mysqllogs/bin-log
070710 14:34:39 [Note] Starting crash recovery...
070710 14:34:39 [Note] Crash recovery finished.
The MySQL version on the master is:
mysql Ver 14.7 Distrib 4.1.16, for pc-linux-gnu (i686) using readline 4.3
and on the slave, it's:
mysql Ver 14.12 Distrib 5.0.41, for unknown-linux-gnu (x86_64) using readline 5.0
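For reference, the report does not spell out how the GRANT statements were skipped; assuming the standard SQL_SLAVE_SKIP_COUNTER mechanism was used after restarting mysqld with --skip-slave-start, the sequence run from the mysql client on the slave would look roughly like this:

  -- Inspect the next event to be executed (Relay_Log_File / Relay_Log_Pos):
  SHOW SLAVE STATUS\G
  -- Skip one event (the offending GRANT) and resume; repeat if the next GRANT also crashes:
  SET GLOBAL SQL_SLAVE_SKIP_COUNTER = 1;
  START SLAVE;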
How to repeat:
The problem is occurring in production and I don't have a ready test environment, but this seems like a reasonable recipe:
1) Set up MySQL 4.1 master on x86 Linux
2) Set up MySQL 5.0 slave on x86_64 Linux
3) Establish replication
4) Issue a GRANT statement on the master (see the sketch after this list)
5) Duck and cover
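A rough sketch of steps 3 and 4; the host name, replication credentials, and binlog coordinates below are placeholders, and the GRANT is the statement captured in the error log above:

  -- Step 3, on the 5.0 slave (values here are placeholders):
  CHANGE MASTER TO
      MASTER_HOST='master.example.com',
      MASTER_USER='repl',
      MASTER_PASSWORD='secret',
      MASTER_LOG_FILE='bin-log.000001',
      MASTER_LOG_POS=4;
  START SLAVE;

  -- Step 4, on the 4.1 master: a column-level GRANT like the one in the log
  GRANT USAGE, SELECT (`Host`, `Db`, `User`, `Table_name`, `Table_priv`, `Column_priv`)
      ON `mysql`.`tables_priv` TO 'foo'@'bar.net';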
Suggested fix:
My workaround will probably be to exclude the mysql.% tables from replication. This is a short-term setup anyway, so I may not even do that.
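A minimal sketch of that workaround, assuming the filter goes in the slave's my.cnf (replicate-wild-ignore-table is the usual option for filtering replication by table name pattern such as mysql.%, and takes effect after restarting the slave's mysqld):

  # Slave's my.cnf
  [mysqld]
  replicate-wild-ignore-table = mysql.%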