Bug #68619 InnoDB: Warning: purge reached the head of the history list but its length is...
Submitted: 9 Mar 2013 21:59    Modified: 1 Dec 2013 19:01
Reporter: Jim Ockers           Status: No Feedback
Category: MySQL Server: InnoDB storage engine    Severity: S1 (Critical)
Version: 5.1.67                OS: Linux
Assigned to:                   CPU Architecture: Any
Tags: crash

[9 Mar 2013 21:59] Jim Ockers
Description:
This was right after a CentOS 6.3 "yum update" to CentOS 6.4.  The main client is zarafa-server.

130309 14:51:14  InnoDB: Assertion failure in thread 140163025987328 in file fil/fil0fil.c line 4135
InnoDB: We intentionally generate a memory trap.
InnoDB: Submit a detailed bug report to http://bugs.mysql.com.
InnoDB: If you get repeated assertion failures or crashes, even
InnoDB: immediately after the mysqld startup, there may be
InnoDB: corruption in the InnoDB tablespace. Please refer to
InnoDB: http://dev.mysql.com/doc/refman/5.1/en/forcing-innodb-recovery.html
InnoDB: about forcing recovery.
21:51:14 UTC - mysqld got signal 6 ;
This could be because you hit a bug. It is also possible that this binary
or one of the libraries it was linked against is corrupt, improperly built,
or misconfigured. This error can also be caused by malfunctioning hardware.
We will try our best to scrape up some info that will hopefully help
diagnose the problem, but since we have already crashed,
something is definitely wrong and this may fail.

key_buffer_size=8384512
read_buffer_size=131072
max_used_connections=6
max_threads=151
thread_count=6
connection_count=6
It is possible that mysqld could use up to
key_buffer_size + (read_buffer_size + sort_buffer_size)*max_threads = 338332 K  bytes of memory
Hope that's ok; if not, decrease some variables in the equation.

Thread pointer: 0x0
Attempting backtrace. You can use the following information to find out
where mysqld died. If you see no messages after this, something went
terribly wrong...
stack_bottom = 0 thread_stack 0x40000
/usr/libexec/mysqld(my_print_stacktrace+0x29) [0x84e269]
/usr/libexec/mysqld(handle_fatal_signal+0x483) [0x6a2d73]
/lib64/libpthread.so.0(+0xf500) [0x7f7a510a9500]
/lib64/libc.so.6(gsignal+0x35) [0x7f7a4f7388a5]
/lib64/libc.so.6(abort+0x175) [0x7f7a4f73a085]
/usr/libexec/mysqld(fil_io+0x36e) [0x76607e]
/usr/libexec/mysqld() [0x74df33]
/usr/libexec/mysqld(buf_read_page+0x225) [0x74e975]
/usr/libexec/mysqld(buf_page_get_gen+0x393) [0x747e53]
/usr/libexec/mysqld() [0x7cadb9]
/usr/libexec/mysqld() [0x7cb425]
/usr/libexec/mysqld(trx_purge_fetch_next_rec+0x218) [0x7cc488]
/usr/libexec/mysqld(row_purge_step+0x40) [0x7b1080]
/usr/libexec/mysqld(que_run_threads+0x55b) [0x7a04bb]
/usr/libexec/mysqld(trx_purge+0x332) [0x7ca6c2]
/usr/libexec/mysqld(srv_master_thread+0x58d) [0x7c2efd]
/lib64/libpthread.so.0(+0x7851) [0x7f7a510a1851]
/lib64/libc.so.6(clone+0x6d) [0x7f7a4f7ee90d]
The manual page at http://dev.mysql.com/doc/mysql/en/crashing.html contains
information that should help you find out what is causing the crash.
130309 14:51:14 mysqld_safe Number of processes running now: 0
130309 14:51:14 mysqld_safe mysqld restarted
130309 14:51:14  InnoDB: Initializing buffer pool, size = 8.0M
130309 14:51:14  InnoDB: Completed initialization of buffer pool
InnoDB: Log scan progressed past the checkpoint lsn 0 892828389
130309 14:51:14  InnoDB: Database was not shut down normally!
InnoDB: Starting crash recovery.
InnoDB: Reading tablespace information from the .ibd files...
InnoDB: Restoring possible half-written data pages from the doublewrite
InnoDB: buffer...
InnoDB: Doing recovery: scanned up to log sequence number 0 892829350
130309 14:51:14  InnoDB: Starting an apply batch of log records to the database...
InnoDB: Progress in percents: 50 51 52 53 54 55 56 57 58 59 60 61 62 63 64 65 66 67 68 69 70 71 72 73 74 75 76 77 78 79 80 81 82 83 84 85 86 87 88 89 90 91 92 93 94 95 96 97 98 99
InnoDB: Apply batch completed
130309 14:51:15  InnoDB: Started; log sequence number 0 892829350
130309 14:51:15 [Note] Event Scheduler: Loaded 0 events
130309 14:51:15 [Note] /usr/libexec/mysqld: ready for connections.
Version: '5.1.67-log'  socket: '/var/lib/mysql/mysql.sock'  port: 3306  Source distribution
130309 14:51:16  InnoDB: Warning: purge reached the head of the history list,
InnoDB: but its length is still reported as 168187! Make a detailed bug
InnoDB: report, and submit it to http://bugs.mysql.com
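
The forcing-recovery procedure that the log points to amounts to starting the server with innodb_force_recovery set just long enough to dump the data. In outline (a sketch only; the level, paths, and dump filename are illustrative, and higher levels risk further damage):

# In /etc/my.cnf, under [mysqld], add (start at 1 and raise only if needed):
#   innodb_force_recovery = 1
service mysqld start
mysqldump --all-databases > /root/rescue-dump.sql
service mysqld stop
# Remove innodb_force_recovery again before rebuilding and reloading the dump.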

How to repeat:
I'm sorry but I'm not sure how to repeat this.

I was using mysql+zarafa as a mail archive server for old emails. I did a system update to CentOS 6.4 using "yum update". No clients were connected to the zarafa server. I then issued a "reboot", which should have stopped the zarafa and mysql processes cleanly. On reboot, zarafa complained about problems connecting to the database. I checked, and mysqld (under mysqld_safe) was constantly restarting itself and logging the above errors to the logfile.

Suggested fix:
I'm not sure what to do.
[9 Mar 2013 23:40] Jim Ockers
Changed the category to InnoDB
[20 Mar 2013 18:51] Sveta Smirnova
Thank you for the report.

All errors complain about IO failures. Did yum reboot your operating system? Could it be that some files in the MySQL data directory were touched? Please check the operating system log files.

But I assume the MySQL server restored itself successfully after this failure? Am I correct?
[20 Mar 2013 19:50] Jim Ockers
Hi Sveta, yum did not reboot the OS but after the yum updates completed I issued the "reboot" command in a terminal.  It was a clean reboot with no filesystem corruption or recovery events.  I expected MySQL to shut itself down cleanly as part of the normal shutdown process on CentOS 6.3.  The MySQL server restarted but the InnoDB database would not load.  The "use zarafa" command on the mysql> monitor would fail.

I had a backup from 3:30 in the morning (I run automysqlbackup as a daily cron job), and I wound up having to delete all databases (drop the tables, stop the server, delete all files) and then restart mysql as if it were a fresh install, then restore the database from the backup on the command line with "mysql < backup.sql".
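
In outline, that rebuild is roughly the following (a sketch; paths and the dump filename are illustrative and assume the stock CentOS layout):

service mysqld stop
rm -f /var/lib/mysql/ibdata1 /var/lib/mysql/ib_logfile*
rm -rf /var/lib/mysql/zarafa                 # the per-database directory
service mysqld start                         # InnoDB recreates its data and log files
mysql < backup.sql                           # reload the automysqlbackup dump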
[21 Mar 2013 18:42] Sveta Smirnova
Thank you for the feedback.

This still looks more like an OS bug than a MySQL bug to me. The only thing that concerns me on the MySQL side is that its init script should stop the server gracefully.

Please send us the /var/log/messages file, the /var/log/yum.log file, the output of `ls -la /etc/init.d/mysql*`, the output of `ls -la /var/lock/subsys/mysql`, the MySQL configuration file, and the full error log file.
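
For convenience, the requested output can be collected in one pass, e.g. (the /tmp destination is illustrative):

{ ls -la /etc/init.d/mysql*; ls -la /var/lock/subsys/mysql*; } > /tmp/mysql-diag.txt
cp /var/log/messages /var/log/yum.log /var/log/mysqld.log /etc/my.cnf /tmp/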
[22 Mar 2013 19:58] Jim Ockers
/var/log/messages-20130310 (filtered with egrep -v 'nagios|freshclam|clamd' to remove cruft)

Attachment: messages-20130310 (application/octet-stream, text), 63.97 KiB.

[22 Mar 2013 20:02] Jim Ockers
/var/log/yum.log

Attachment: yum.log (application/octet-stream, text), 29.45 KiB.

[22 Mar 2013 20:03] Jim Ockers
/etc/my.cnf

Attachment: my.cnf (application/octet-stream, text), 275 bytes.

[22 Mar 2013 20:08] Jim Ockers
/var/log/maillog shows Zarafa errors trying to connect to MySQL

Attachment: maillog-20130310 (application/octet-stream, text), 91.00 KiB.

[22 Mar 2013 20:09] Jim Ockers
/var/log/mysqld.log

Attachment: mysqld.log (application/octet-stream, text), 323.68 KiB.

[22 Mar 2013 20:10] Jim Ockers
/etc/init.d/mysqld Red Hat service control for MySQL

Attachment: mysqld (application/octet-stream, text), 5.50 KiB.

[22 Mar 2013 20:13] Jim Ockers
Hi Sveta,

I uploaded what I think might be all of the relevant logfiles and configuration files.  Here is the command output you requested:

[root@webserver ~]# ls -la /etc/init.d/mysql*
-rwxr-xr-x 1 root root 5631 Mar  9 15:21 /etc/init.d/mysqld
[root@webserver ~]# ls -la /var/lock/subsys/mysql
ls: cannot access /var/lock/subsys/mysql: No such file or directory
[root@webserver ~]# ls -la /var/lock/subsys/mysqld
-rw-r--r-- 1 root root 0 Mar  9 15:21 /var/lock/subsys/mysqld

I still do not think this is an OS bug. It looks a lot like a MySQL bug to me. I noticed that lots of other people are complaining about this. By the way, the InnoDB data file is fairly large, about 550 MB:

[root@webserver ~]# ls -als /var/lib/mysql/ib*
550916 -rw-rw---- 1 mysql mysql 564133888 Mar 22 14:12 /var/lib/mysql/ibdata1
  5120 -rw-rw---- 1 mysql mysql   5242880 Mar 22 14:12 /var/lib/mysql/ib_logfile0
  5120 -rw-rw---- 1 mysql mysql   5242880 Mar 22 06:05 /var/lib/mysql/ib_logfile1

Thank you for looking at this.  Please let me know if you have further questions or need more information.

Regards,
Jim
[12 Apr 2013 20:00] Sveta Smirnova
Thank you for the feedback.

>  By the way the InnoDB file is relatively large, maybe 550MB:

This could explain the error. Note the following in the error log file:

InnoDB: Error: trying to access page number 2936698501 in space 0,
InnoDB: space name ./ibdata1,
InnoDB: which is outside the tablespace bounds.
InnoDB: Byte offset 0, len 16384, i/o type 10.
InnoDB: If you get this error at mysqld startup, please check that
InnoDB: your my.cnf matches the ibdata files that you have in the
InnoDB: MySQL server.
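
A quick sanity check on those numbers, given the file sizes reported earlier: ibdata1 is 564,133,888 bytes, which at 16,384 bytes per page is 564133888 / 16384 = 34,432 pages, while purge is asking for page 2,936,698,501, roughly 85,000 times past the end of the file. The stored page reference itself therefore looks like garbage, which points to corruption rather than to a sizing problem.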

Although the default value for innodb_data_file_path is ibdata1:10M:autoextend, so the file size alone should not matter.
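
One can double-check that the configured layout matches the file on disk (the default shown applies when the option is not set in my.cnf):

grep -i innodb_data_file_path /etc/my.cnf
# default when unset: innodb_data_file_path = ibdata1:10M:autoextend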

I also noticed you removed the InnoDB files at the end. Do you still have the broken datadir?

It probably makes sense to block zarafa's access to MySQL and check whether the server really crashes at startup or whether it is some query that zarafa sends to it.
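
A sketch of that isolation test (the service names assume the CentOS packaging):

service zarafa-server stop           # keep zarafa's queries out of the picture
service mysqld restart
mysql -e 'USE zarafa; SHOW TABLES;'  # does touching the schema alone crash mysqld?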

I also noticed that yum upgraded the MySQL server. I assume this is your OS vendor's package. Did you run mysql_upgrade after it?
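
If the package scripts do not run it, it can be done by hand after the upgrade, roughly (credentials illustrative):

mysql_upgrade -u root -p
service mysqld restart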
[13 May 2013 1:00] Bugs System
No feedback was provided for this bug for over a month, so it is
being suspended automatically. If you are able to provide the
information that was originally requested, please do so and change
the status of the bug back to "Open".
[14 May 2013 18:16] Jim Ockers
Hi Sveta,

I did remove the InnoDB files (with the unix rm command) so I could recreate them. I do not have a backup of the directory in its broken state, because it was unusable to me and I needed to get my database working again. If this happens again, I could try to keep a tarball of the database directory in its bad state.
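
A sketch of preserving such a datadir for later analysis (archive path illustrative; the server should be stopped first so the files are quiescent):

service mysqld stop
tar czf /root/mysql-datadir-broken.tar.gz /var/lib/mysql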

With Zarafa stopped, mysql would start, run, and accept commands at the mysql> monitor command line. However, whenever I typed the "use zarafa" command, the database backend would crash, the monitor would lose its connection, and the crash dump would be logged to the logfile. Then mysqld_safe would restart mysqld, and we could repeat the process.

The yum update to mysql was all automatic. I'm not sure whether the MySQL update RPM from CentOS runs mysql_upgrade as part of its post-install scripts. I would think it does run mysql_upgrade automatically, because I've upgraded mysql on other zarafa servers the same way with no crashing or corruption.

Jim
[10 Jun 2013 19:07] Sveta Smirnova
Thank you for the feedback.

Yes, please collect all data if this crash happens next time.
[11 Jul 2013 1:00] Bugs System
No feedback was provided for this bug for over a month, so it is
being suspended automatically. If you are able to provide the
information that was originally requested, please do so and change
the status of the bug back to "Open".
[13 Oct 2013 1:57] Ernie Rojas
Hi, I had the same problem. I work on a server with other sysadmins, and I saw that one of them rebooted the server, after which the database crashed.
The server is ppc64 with Red Hat 6 and MySQL 5.1.69.

- mysqld.log:
131012 11:34:34 mysqld_safe Starting mysqld daemon with databases from /var/lib/mysql
131012 11:34:34 [Warning] '--default-character-set' is deprecated and will be removed in a future release. Please use '--character-set-server' instead.
131012 11:34:34 [Warning] '--default-collation' is deprecated and will be removed in a future release. Please use '--collation-server' instead.
131012 11:34:34  InnoDB: Initializing buffer pool, size = 15.0G
131012 11:34:36  InnoDB: Completed initialization of buffer pool
InnoDB: Log scan progressed past the checkpoint lsn 41 1848404067
131012 11:34:36  InnoDB: Database was not shut down normally!

131012 11:35:49  InnoDB: Completed initialization of buffer pool
InnoDB: Unable to lock ./ibdata1, error: 11
InnoDB: Check that you do not already have another mysqld process
InnoDB: using the same InnoDB data or log files.
131012 11:35:49  InnoDB: Retrying to lock the first data file
InnoDB: Unable to lock ./ibdata1, error: 11
InnoDB: Check that you do not already have another mysqld process
InnoDB: using the same InnoDB data or log files.
InnoDB: Unable to lock ./ibdata1, error: 11
InnoDB: Check that you do not already have another mysqld process
InnoDB: using the same InnoDB data or log files.
InnoDB: Unable to lock ./ibdata1, error: 11
InnoDB: Check that you do not already have another mysqld process
InnoDB: using the same InnoDB data or log files.
InnoDB: Unable to lock ./ibdata1, error: 11
InnoDB: Check that you do not already have another mysqld process
InnoDB: using the same InnoDB data or log files.
InnoDB: Unable to lock ./ibdata1, error: 11
InnoDB: Check that you do not already have another mysqld process
InnoDB: using the same InnoDB data or log files.
InnoDB: Unable to lock ./ibdata1, error: 11
InnoDB: Check that you do not already have another mysqld process
InnoDB: using the same InnoDB data or log files.

messages:
Oct 12 11:31:56 bdeva2 init: serial (hvc0) main process (1826) killed by TERM signal
Oct 12 11:31:56 bdeva2 init: tty (/dev/tty1) main process (1827) killed by TERM signal
Oct 12 11:31:56 bdeva2 init: tty (/dev/tty2) main process (1829) killed by TERM signal
Oct 12 11:31:56 bdeva2 init: tty (/dev/tty3) main process (1831) killed by TERM signal
Oct 12 11:31:56 bdeva2 init: tty (/dev/tty4) main process (1833) killed by TERM signal
Oct 12 11:31:56 bdeva2 init: tty (/dev/tty5) main process (1835) killed by TERM signal
Oct 12 11:31:56 bdeva2 init: tty (/dev/tty6) main process (1837) killed by TERM signal
Oct 12 11:32:58 bdeva2 auditd[1244]: The audit daemon is exiting.
Oct 12 11:32:58 bdeva2 kernel: type=1305 audit(1381595578.120:75249): audit_pid=0 old=1244 auid=4294967295 ses=4294967295 res=1
Oct 12 11:32:58 bdeva2 kernel: type=1305 audit(1381595578.220:75250): audit_enabled=0 old=1 auid=4294967295 ses=4294967295 res=1
Oct 12 11:32:58 bdeva2 kernel: Kernel logging (proc) stopped.
Oct 12 11:32:58 bdeva2 rsyslogd: [origin software="rsyslogd" swVersion="5.8.10" x-pid="1260" x-info="http://www.rsyslog.com"] exiting on signal 15.
Oct 12 11:34:33 bdeva2 kernel: imklog 5.8.10, log source = /proc/kmsg started.
Oct 12 11:34:33 bdeva2 rsyslogd: [origin software="rsyslogd" swVersion="5.8.10" x-pid="1263" x-info="http://www.rsyslog.com"] start
Oct 12 11:34:33 bdeva2 kernel: Reserving 1024MB of memory at 64MB for crashkernel (System RAM: 43008MB)
Oct 12 11:34:33 bdeva2 kernel: Using pSeries machine description
Oct 12 11:34:33 bdeva2 kernel: Using 1TB segments
Oct 12 11:34:33 bdeva2 kernel: Found initrd at 0xc000000003880000:0xc000000004738400
[1 Nov 2013 19:01] Sveta Smirnova
Thank you for the feedback, but your issue is different. Error:

131012 11:35:49  InnoDB: Completed initialization of buffer pool
InnoDB: Unable to lock ./ibdata1, error: 11

means another process has locked the MySQL datadir. Most likely you have two instances of the MySQL server started. Check the output of ps.
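
For example:

ps -ef | grep '[m]ysqld'    # the [m] pattern keeps the grep itself out of the list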
[2 Dec 2013 1:00] Bugs System
No feedback was provided for this bug for over a month, so it is
being suspended automatically. If you are able to provide the
information that was originally requested, please do so and change
the status of the bug back to "Open".