Bug #52111 32-bit wraparound on buf_pool->ulint_clock causes high read ahead random
Submitted: 16 Mar 2010 18:47  Modified: 2 Apr 2010 7:54
Reporter: Harrison Fisk
Status: Won't fix
Category: MySQL Server: InnoDB storage engine  Severity: S3 (Non-critical)
Version: 5.0, 5.1 built-in  OS: Any
Assigned to: Marko Mäkelä  CPU Architecture: Any
Tags: innodb, read ahead, read ahead random, ulint_clock

[16 Mar 2010 18:47] Harrison Fisk
Description:
After running for a very long time, InnoDB random read ahead can go haywire for a while, causing a lot of disk I/O and heavy use of the read ahead thread.

It recovers after a short time, and everything then continues as normal.

This is due to a 32-bit counter, buf_pool->ulint_clock, that is used in the calculation of whether or not to do random read ahead.  When this counter wraps, the random read ahead logic thinks there are far more recently accessed pages than there really are and decides to read in lots of pages.
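
As a rough sketch of the failure mode (simplified and with illustrative names; the real check compares each page's LRU position against a limit derived from buf_pool->ulint_clock, and the window size here is hypothetical):

#include <stdio.h>
#include <stdint.h>

#define RECENT_WINDOW 800u   /* hypothetical "recent" window, in clock ticks */

/* Non-wrap-safe form of the recency test: once the clock wraps back to
   small values, positions stamped before the wrap are numerically huge
   and always pass the comparison. */
static int is_recent(uint32_t clock_now, uint32_t page_lru_position)
{
    return page_lru_position > clock_now - RECENT_WINDOW;
}

int main(void)
{
    /* The clock has wrapped past 2^32 and climbed back up to 100000. */
    uint32_t clock_now = 100000;

    /* A page last touched weeks ago, long before the wrap, still carries
       a huge pre-wrap position... */
    uint32_t stale_page = 3000000000u;

    /* ...which compares greater than 100000 - 800, so the page is
       misclassified as recently accessed.  Right after the wrap nearly
       every page in the pool is in this state, so the recent-page count
       is inflated and random read ahead fires almost constantly. */
    printf("stale page recent? %d\n", is_recent(clock_now, stale_page));  /* prints 1 */
    return 0;
}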

Reading all of these pages eventually replaces enough of the buffer pool with freshly stamped pages that the storm stops on its own.

Typically this lasts from a few seconds to a minute or two, depending on the speed of your disks and the size of your buffer pool.

Normally it takes several weeks to a month or longer of uptime for the wraparound to occur.
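
For a rough sense of scale (the rate here is an assumption for illustration, not a measurement): the clock advances on buffer pool page accesses, so at a sustained 2,000 increments per second the counter needs 2^32 / 2,000 ≈ 2.1 million seconds, or about 25 days, to wrap.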

How to repeat:
1.  Start MySQL with innodb_buffer_pool_size=100M, innodb_flush_log_at_trx_commit=0, innodb_log_file_size=100M
  *  These aren't required, but they make the wrap occur faster, since you can do more operations per second and hence advance the clock faster
2.  Set up a sysbench test with a table of 1000000 rows:

./sysbench --test=oltp --oltp-table-size=1000000 --mysql-table-engine=innodb prepare

3.  Run the test to warm up the buffers:

./sysbench --test=oltp --oltp-table-size=1000000 --mysql-table-engine=innodb --num-threads=4 run

4.  Use gdb to view the value:

gdb -ex "print buf_pool->ulint_clock" --batch -p $(pidof mysqld)

5.  Use gdb to set the value to just under 2^32 (4294867296 = 2^32 - 100000; see the note after step 7):

gdb -ex "set buf_pool->ulint_clock=4294867296" --batch -p $(pidof mysqld)

6.  Watch disk I/O, Innodb_read_ahead_rnd from SHOW GLOBAL STATUS, and SHOW INNODB STATUS

7.  Launch another sysbench run:

./sysbench --test=oltp --oltp-table-size=1000000 --mysql-table-engine=innodb --num-threads=4 --max-requests=1000000 run
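
A quick check on the numbers in step 5 (the sysbench rate is an assumption): 2^32 - 4294867296 = 100000, so after the gdb poke only 100,000 more clock increments remain before the wrap; at even a few thousand increments per second of sysbench load the counter wraps well within a minute, at which point the read ahead storm should begin and Innodb_read_ahead_rnd should climb rapidly for a short period before settling back.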

Suggested fix:
Handle the wrapping in the comparisons, or widen the counter so that it is always 64-bit.  (Both approaches are sketched below.)
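
A sketch of both options, with illustrative names (only buf_pool->ulint_clock is real; the window constant and function names are hypothetical):

#include <stdint.h>

#define RECENT_WINDOW 800u   /* hypothetical "recent" window, in clock ticks */

/* Option (a): widen the clock to 64 bits.  Even at one increment per
   microsecond it would take ~584,000 years to wrap, so plain comparisons
   stay safe for any realistic uptime. */
static int is_recent_64(uint64_t clock_now, uint64_t page_lru_position)
{
    return clock_now - page_lru_position < RECENT_WINDOW;
}

/* Option (b): keep 32 bits but measure distance with unsigned (modular)
   subtraction.  The result is the true distance modulo 2^32, so a page
   stamped just before the wrap still measures as close to a clock value
   taken just after it, and pages from the previous epoch measure as far. */
static int is_recent_32(uint32_t clock_now, uint32_t page_lru_position)
{
    return (uint32_t)(clock_now - page_lru_position) < RECENT_WINDOW;
}
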
[16 Mar 2010 19:03] Harrison Fisk
To summarize the symptoms of this bug:

1.  Requires a 32-bit server
2.  Requires a month or so of uptime on a busy server
3.  Requires a particular query pattern
4.  I/O increases dramatically for ~30 seconds to 1 minute
[18 Mar 2010 14:12] Marko Mäkelä
In InnoDB Plugin 1.0, the buffer-pool-wide ulint_clock has been replaced with a per-block 32-bit access_time that holds ut_time_ms().  There is minimal risk of overflow, and the impact would be very low: an n*49-days-old block could be mistaken for one less than innodb_old_blocks_time old.  (The 32-bit millisecond stamp wraps every 2^32 ms, about 49.7 days.)

Side note: the access_time has to be compared with 32-bit arithmetic, because the field in buf_page_t is 32 bits.  That was not the case until this morning, and thus the "index scan resistance" on 64-bit platforms did not work quite as intended.
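
A sketch of what that comparison looks like; the function shape and parameter names are assumptions modeled on the description above, not the plugin source:

#include <stdint.h>

/* access_time stores only the low 32 bits of ut_time_ms(), so the current
   clock must be truncated to 32 bits before subtracting. */
static int old_enough(uint64_t now_ms,              /* full ut_time_ms() value */
                      uint32_t access_time,         /* 32-bit field in buf_page_t */
                      uint32_t old_blocks_time_ms)  /* innodb_old_blocks_time */
{
    /* Broken on 64-bit: subtracting the truncated stamp from the full
       64-bit clock inflates the age by the truncated high bits, so every
       block looks ancient and scan resistance never holds blocks back. */
    /* return now_ms - access_time >= old_blocks_time_ms; */

    /* Correct: truncate first, then subtract; unsigned 32-bit arithmetic
       yields the true age modulo 2^32 ms (~49.7 days). */
    return (uint32_t)((uint32_t)now_ms - access_time) >= old_blocks_time_ms;
}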