Description:
I expect SUM_TIMER_WAIT and similar P_S counters to be monotonically increasing.
Normally this seems to be the case, but sometimes I see situations where it is not.
This wreaks havoc with pstop (https://github.com/sjmudd/pstop), which gets very confused
by the unexpected decreases in these values.
Anyway, it's best to show the problem by pulling data straight out of MySQL without using any special tool.
How to repeat:
A sample script:
#!/bin/sh
# Print the column headers once, then poll SUM_TIMER_WAIT every second.
mysql -Be "select @@VERSION, NOW(), FILE_NAME, SUM_TIMER_WAIT from file_summary_by_instance where FILE_NAME like '%ibdata%'" performance_schema
while true; do
    sleep 1
    # -N suppresses the header row on the repeated queries.
    mysql -BNe "select @@VERSION, NOW(), FILE_NAME, SUM_TIMER_WAIT from file_summary_by_instance where FILE_NAME like '%ibdata%'" performance_schema
done
shows:
$ sh test_script
@@VERSION NOW() FILE_NAME SUM_TIMER_WAIT
5.6.22-log 2015-05-17 09:15:24 /mysql/i1/data/ibdata1 21632302123191070
5.6.22-log 2015-05-17 09:15:25 /mysql/i1/data/ibdata1 21632302123191070
5.6.22-log 2015-05-17 09:15:26 /mysql/i1/data/ibdata1 21632568840760775
5.6.22-log 2015-05-17 09:15:27 /mysql/i1/data/ibdata1 21633150295197185
5.6.22-log 2015-05-17 09:15:28 /mysql/i1/data/ibdata1 21633742865951700
5.6.22-log 2015-05-17 09:15:29 /mysql/i1/data/ibdata1 21632955556991135 <======= drops down
5.6.22-log 2015-05-17 09:15:30 /mysql/i1/data/ibdata1 21632955556991135
5.6.22-log 2015-05-17 09:15:31 /mysql/i1/data/ibdata1 21632955556991135
5.6.22-log 2015-05-17 09:15:32 /mysql/i1/data/ibdata1 21633141394207775
5.6.22-log 2015-05-17 09:15:33 /mysql/i1/data/ibdata1 21630179946712480 <======= drops down
5.6.22-log 2015-05-17 09:15:34 /mysql/i1/data/ibdata1 21630179946712480
^C
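For scale: the P_S timer columns are in picoseconds, so the first flagged drop above is under a second on the timer's own scale. A quick sanity calculation (values copied from the 09:15:28 and 09:15:29 samples above):

```shell
# Convert the first observed decrease from picoseconds to microseconds.
# Assumes the SUM_TIMER_WAIT column is in picoseconds, as documented for
# P_S timer columns.
before=21633742865951700
after=21632955556991135
echo "drop: $(( (before - after) / 1000000 )) microseconds"
# prints: drop: 787308 microseconds
```

So the counter steps back by roughly 0.8 seconds of accumulated wait time, then resumes climbing.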
I've seen this several times over the last few months on this server (my home PC) but nowhere else. I am not touching or resetting the P_S tables
during this run, just reading data out of them.
I can't explain the sudden small drops in SUM_TIMER_WAIT. Since pstop collects values and calculates diffs, expecting ever-increasing
values, these sudden small drops completely confuse it.
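To show it isn't pstop doing anything odd, the same monotonicity check can be done with awk alone. A sketch (the function name is mine; it assumes one SUM_TIMER_WAIT value per line on stdin):

```shell
# Report every sample that is lower than its predecessor.
check_monotonic() {
  awk 'NR > 1 && $1 + 0 < prev { printf "sample %d: %s -> %s (decrease)\n", NR, prev, $1 }
       { prev = $1 }'
}

# Feeding it the 09:15:28 - 09:15:33 values from the output above reports
# the same two decreases flagged there.
printf '%s\n' 21633742865951700 21632955556991135 21632955556991135 \
              21633141394207775 21630179946712480 | check_monotonic
```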
$ rpm -q MySQL-server centos-release
MySQL-server-5.6.22-1.el6.x86_64
centos-release-6-6.el6.centos.12.2.x86_64
I also modified the script to show all the file_summary_by_instance (FSBI) columns, in case something stands out there:
@@VERSION NOW() FILE_NAME EVENT_NAME OBJECT_INSTANCE_BEGIN COUNT_STAR SUM_TIMER_WAIT MIN_TIMER_WAIT AVG_TIMER_WAIT MAX_TIMER_WAIT COUNT_READ SUM_TIMER_READ MIN_TIMER_READ AVG_TIMER_READ MAX_TIMER_READ SUM_NUMBER_OF_BYTES_READ COUNT_WRITE SUM_TIMER_WRITE MIN_TIMER_WRITE AVG_TIMER_WRITE MAX_TIMER_WRITE SUM_NUMBER_OF_BYTES_WRITE COUNT_MISC SUM_TIMER_MISC MIN_TIMER_MISC AVG_TIMER_MISC MAX_TIMER_MISC
5.6.22-log 2015-05-17 09:40:35 /mysql/i1/data/ibdata1 wait/io/file/innodb/innodb_data_file 140147900565056 6571929 21574686269019455 6260540 3282854245 18446744073708638206 16869 18446677928399881421 20858185 1088061550014564315 18446743991931173241 278446080 6044409 886800235663725 30540530 146713745 18446744072166232986 214618357760 510651 20754031343025925 6260540 40642300335 18446744073708638206
5.6.22-log 2015-05-17 09:40:36 /mysql/i1/data/ibdata1 wait/io/file/innodb/innodb_data_file 140147900565056 6571981 21574429496642905 6260540 3282788575 18446744073708638206 16869 18446677928399881421 20858185 1088061550014564315 18446743991931173241 278446080 6044458 886805602392240 30540530 146713745 18446744072166232986 214619979776 510654 20753769203920860 6260540 40641548115 18446744073708638206
5.6.22-log 2015-05-17 09:40:37 /mysql/i1/data/ibdata1 wait/io/file/innodb/innodb_data_file 140147900565056 6571992 21576736332937720 6260540 3283134835 18446744073708638206 16869 18446677928399881421 20858185 1088061550014564315 18446743991931173241 278446080 6044465 886806595598255 30540530 146713745 18446744072166232986 214620192768 510658 20756075047009660 6260540 40645745025 18446744073708638206
5.6.22-log 2015-05-17 09:40:38 /mysql/i1/data/ibdata1 wait/io/file/innodb/innodb_data_file 140147900565056 6571993 21576737021250860 6260540 3283133840 18446744073708638206 16869 18446677928399881421 20858185 1088061550014564315 18446743991931173241 278446080 6044466 886807283911395 30540530 146713745 18446744072166232986 214620487680 510658 20756075047009660 6260540 40645745025 18446744073708638206
5.6.22-log 2015-05-17 09:40:39 /mysql/i1/data/ibdata1 wait/io/file/innodb/innodb_data_file 140147900565056 6572027 21577043483136385 6260540 3283163690 18446744073708638206 16869 18446677928399881421 20858185 1088061550014564315 18446743991931173241 278446080 6044497 886810821817970 30540530 146713745 18446744072166232986 214621388800 510661 20756377970988610 6260540 40646099245 18446744073708638206
5.6.22-log 2015-05-17 09:40:40 /mysql/i1/data/ibdata1 wait/io/file/innodb/innodb_data_file 140147900565056 6572032 21576738268098295 6260540 3283114935 18446744073708638206 16869 18446677928399881421 20858185 1088061550014564315 18446743991931173241 278446080 6044499 886811002990555 30540530 146712750 18446744072166232986 214621421568 510664 20756072574777935 6260540 40645262450 18446744073708638206
5.6.22-log 2015-05-17 09:40:41 /mysql/i1/data/ibdata1 wait/io/file/innodb/innodb_data_file 140147900565056 6572033 21576739337298430 6260540 3283113940 18446744073708638206 16869 18446677928399881421 20858185 1088061550014564315 18446743991931173241 278446080 6044500 886812072190690 30540530 146713745 18446744072166232986 214621863936 510664 20756072574777935 6260540 40645262450 18446744073708638206
5.6.22-log 2015-05-17 09:40:42 /mysql/i1/data/ibdata1 wait/io/file/innodb/innodb_data_file 140147900565056 6572060 21576335453967905 6260540 3283039315 18446744073708638206 16869 18446677928399881421 20858185 1088061550014564315 18446743991931173241 278446080 6044525 886813739160955 30540530 146712750 18446744072166232986 214622289920 510666 20755667024477145 6260540 40644309240 18446744073708638206
5.6.22-log 2015-05-17 09:40:43 /mysql/i1/data/ibdata1 wait/io/file/innodb/innodb_data_file 140147900565056 6572093 21577587827990110 6260540 3283213440 18446744073708638206 16869 18446677928399881421 20858185 1088061550014564315 18446743991931173241 278446080 6044554 886817029402080 30540530 146712750 18446744072166232986 214623223808 510670 20756916108258225 6260540 40646436550 18446744073708638206
5.6.22-log 2015-05-17 09:40:44 /mysql/i1/data/ibdata1 wait/io/file/innodb/innodb_data_file 140147900565056 6572129 21578730584048355 6260540 3283369655 18446744073708638206 16869 18446677928399881421 20858185 1088061550014564315 18446743991931173241 278446080 6044589 886821862918055 30540530 146712750 18446744072166232986 214624518144 510671 20758054030800495 6260540 40648585750 18446744073708638206
5.6.22-log 2015-05-17 09:40:45 /mysql/i1/data/ibdata1 wait/io/file/innodb/innodb_data_file 140147900565056 6572130 21577714445563895 6260540 3283214435 18446744073708638206 16869 18446677928399881421 20858185 1088061550014564315 18446743991931173241 278446080 6044589 886821862918055 30540530 146712750 18446744072166232986 214624518144 510672 20757037892316035 6260540 40646516150 18446744073708638206
5.6.22-log 2015-05-17 09:40:46 /mysql/i1/data/ibdata1 wait/io/file/innodb/innodb_data_file 140147900565056 6572183 21578054690256570 6260540 3283239310 18446744073708638206 16869 18446677928399881421 20858185 1088061550014564315 18446743991931173241 278446080 6044641 886828749550860 30540530 146712750 18446744072166232986 214626369536 510673 20757371250375905 6260540 40647089270 18446744073708638206
5.6.22-log 2015-05-17 09:40:47 /mysql/i1/data/ibdata1 wait/io/file/innodb/innodb_data_file 140147900565056 6572196 21574533832787670 6260540 3282697035 18446744073708638206 16869 18446677928399881421 20858185 1088061550014564315 18446743991931173241 278446080 6044648 886829419757985 30540530 146712750 18446744072166232986 214626549760 510679 20753849722699880 6260540 40639716320 18446744073708638206
^C
Suggested fix:
It would be good to document clearly whether the timer and counter values provided by P_S are expected to be monotonically increasing, excluding the moment when they are explicitly reset (for example by truncating the P_S tables).
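Until that is documented or fixed, the only defensive option I can see for a collector is to treat a negative delta as a reset or glitch. A sketch of what I mean (safe_delta is my own name, not part of pstop or MySQL):

```shell
# Emit per-interval deltas from a stream of counter samples, clamping any
# negative delta to 0 rather than passing a nonsensical negative rate on.
safe_delta() {
  awk 'NR > 1 { d = $1 - prev; print (d < 0 ? 0 : d) } { prev = $1 }'
}

printf '%s\n' 10 15 12 20 | safe_delta   # prints 5, 0, 8 (one per line)
```

This hides the symptom rather than explaining it, of course.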
This looks like a bug in MySQL's performance_schema. I'm not sure whether it might be related to how the timer values are read from the hardware.
I'm running MySQL on this hardware: AMD Athlon(tm) 64 X2 Dual Core Processor 5600+, stepping: 3, cpu MHz: 1000.000
(so it's old hardware).
I guess that it really depends on how MySQL reads the timing values from the server, and whether it can end up adding a negative increment when two successive reads of the timer return a lower value the second time.
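On that theory: I believe the P_S wait timer defaults to the CPU cycle counter in 5.6, and on older multi-core AMD parts the per-core TSCs are not guaranteed to stay in sync, which could produce exactly this kind of occasional backwards step. A quick Linux-specific check of whether the kernel itself trusts the TSC (standard sysfs path, though availability depends on the kernel):

```shell
# If the kernel has demoted the TSC as a clocksource (e.g. to hpet or
# acpi_pm), that is a hint the TSC on this box is not stable across cores.
src=/sys/devices/system/clocksource/clocksource0/current_clocksource
if [ -r "$src" ]; then
  echo "kernel clocksource: $(cat "$src")"
else
  echo "clocksource info not available here"
fi
```

This is only circumstantial, since P_S reads the cycle counter itself rather than going through the kernel clock.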
Either way I'm confused: I see no other issues with this server, but it still looks to me like a bug in P_S.
Thoughts and feedback welcome.