Bug #81961 sum_rows_affected populated for simple selects
Submitted: 21 Jun 2016 21:39 Modified: 19 Sep 21:18
Reporter: Trey Raymond Email Updates:
Status: Can't repeat Impact on me: None
Category: MySQL Server: Performance Schema  Severity: S3 (Non-critical)
Version: 5.6.29  OS: Any
Assigned to:  CPU Architecture: Any
Tags: diagnostics, performance schema

[21 Jun 2016 21:39] Trey Raymond
Description:
Simple select queries are reporting a positive value for rows affected in performance schema.

See the attached records from performance_schema.events_statements_summary_by_digest - I took hourly snapshots for quite some time, and this issue has persisted without any clear pattern.

Even very simple queries with no temp tables or anything similar are affected, so the cause is unclear:
SELECT `c` FROM `sbtest3` WHERE `id` = ? 
SELECT @@`version_comment` LIMIT ? 

Most digests increment it only rarely, though some do so on nearly every execution/row.

How to repeat:
run a traffic generator for a while, then:
select schema_name,digest_text,count_star,sum_rows_examined,sum_rows_affected from performance_schema.events_statements_summary_by_digest where sum_rows_affected>0 and digest_text like 'select%';
I don't know the exact cause and the patterns seem somewhat random, but this is happening to varying extents across many production DBs, benchmark setups, lab hosts...

Suggested fix:
Need to find out where these numbers are coming from and where they should be going.  I'll dig through the code and see if anything's obvious.
[21 Jun 2016 21:39] Trey Raymond
snapshots

Attachment: bad_rows_affected.csv (text/csv), 61.93 KiB.

[21 Jun 2016 21:50] Trey Raymond
It's looking like this goes into the diagnostics area, which is the data source for p_s.  My first guess at a cause: the diagnostics area can return -1 for the affected-row count, but p_s expects a uint64.  Not sure why it isn't happening 100% of the time in that case, though.
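That theory can be sketched with some quick arithmetic (an illustrative Python sketch of the hypothesis, not MySQL's actual code): a -1 reinterpreted as an unsigned 64-bit value becomes 2^64-1, and mixing negative deltas into an unsigned accumulator yields huge, seemingly random totals instead of an obvious sentinel.

```python
# Hypothetical sketch of the theory above: the diagnostics area can report
# -1 for rows affected, while the p_s accumulators are unsigned 64-bit.

MASK = (1 << 64) - 1  # wrap arithmetic to uint64, like a C ulonglong


def as_uint64(n):
    """Reinterpret a (possibly negative) integer as an unsigned 64-bit value."""
    return n & MASK


# A single -1 stored into a uint64 counter becomes the maximum value:
print(as_uint64(-1))  # 18446744073709551615

# Accumulating a mix of real counts and stray negative deltas in uint64
# arithmetic produces a huge, random-looking total:
total = 0
for delta in [3, -5]:
    total = as_uint64(total + delta)
print(total)  # (3 - 5) mod 2**64 = 18446744073709551614
```

This would explain the enormous values seen later in the thread; it does not by itself explain the small positive counts like 2 or 64, which is consistent with Trey's note that the cast can't be the whole story.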
[22 Jun 2016 6:56] Shane Bester
It could be expected behavior; for example, this query:

select 1 into @a;
[22 Jun 2016 9:45] Umesh Shastry
Hello Trey,

Thank you for the report.
I ran a sysbench load for a few hours on 5.6.31/5.6.30 without any luck.

- tried with different oltp modes (simple, complex, nontrx), and with different oltp_*_ranges

sysbench/sysbench --test=oltp.lua --oltp-table-size=1000000 --oltp_tables_count=5 --db-driver=mysql --mysql-socket=/tmp/mysql_ushastry.sock --mysql-db=sbtest2 --mysql-user=root prepare

 sysbench/sysbench --test=oltp.lua --oltp-test-mode=complex  --num-threads=100 --max-time=3600 --max-requests=0 --db-driver=mysql --mysql-socket=/tmp/mysql_ushastry.sock --mysql-db=sbtest1 --mysql-user=root run

Could you please share your exact sysbench prepare/run details to repeat this issue at our end?

Thanks,
Umesh
[23 Jun 2016 17:42] Trey Raymond
in this env, there are a bunch of sysbench setups running in pairs of rw and ro:

--max-requests=0
--max-time=0
--report-interval=60
--test=/path/to/sysbench/db/oltp.lua
--oltp-test-mode=complex
--oltp-table-size=1000000
--oltp_tables_count=4
--oltp_auto_inc=off
--mysql-user=[user]
--mysql-password=[password]
--mysql-db=sbtest2
--num-threads=4
--oltp-read-only=off
--mysql-host=master.hostname.com

--max-requests=0
--max-time=0
--report-interval=60
--test=/path/to/sysbench/db/oltp.lua
--oltp-test-mode=complex
--oltp-table-size=1000000
--oltp_tables_count=4
--oltp_auto_inc=off
--mysql-user=[user]
--mysql-password=[password]
--mysql-db=sbtest2
--num-threads=8
--oltp-read-only=on
--mysql-host=slave.hostname.com

You should only need the second (RO) instance to hit the issue.  Boost the threads/table size if running only one instance, to generate more traffic.

I worried this might be hard to reproduce.  The attached sample was collected over a period of weeks, with no reliable pattern.
This also happens on multiple production systems (with very different workloads).  The ones I found it on were all 5.6.27-5.6.29, though that might just be a coincidence.  I checked a few that are on 5.6.24 and found nothing, but there just aren't many DBs on that version, and without a good way to reproduce it on prod, that absence is more likely a fluke.
[30 Jun 2016 8:27] Umesh Shastry
Still no luck at my end, with 2 sysbench sessions running for the entire uptime of the instance.

mysql> select schema_name,digest_text,count_star,sum_rows_examined,sum_rows_affected from performance_schema.events_statements_summary_by_digest where sum_rows_affected>0 and digest_text like 'select%';
Empty set (0.01 sec)

mysql> \s
--------------
bin/mysql  Ver 14.14 Distrib 5.6.31, for linux-glibc2.5 (x86_64) using  EditLine wrapper

Connection id:          202
Current database:
Current user:           root@localhost
SSL:                    Not in use
Current pager:          more
Using outfile:          ''
Using delimiter:        ;
Server version:         5.6.31-enterprise-commercial-advanced MySQL Enterprise Server - Advanced Edition (Commercial)
Protocol version:       10
Connection:             Localhost via UNIX socket
Server characterset:    latin1
Db     characterset:    latin1
Client characterset:    utf8
Conn.  characterset:    utf8
UNIX socket:            /tmp/mysql_ushastry.sock
Uptime:                 6 days 3 hours 29 min 22 sec

Threads: 101  Questions: 10720767842  Slow queries: 37  Opens: 723  Flush tables: 2  Open tables: 427  Queries per second avg: 20191.214
--------------
[12 Jul 2016 4:50] Umesh Shastry
Still no luck after ~18 days

mysql> select schema_name,digest_text,count_star,sum_rows_examined,sum_rows_affected from performance_schema.events_statements_summary_by_digest where sum_rows_affected>0 and digest_text like 'select%';
Empty set (0.01 sec)

mysql> \s
--------------
bin/mysql  Ver 14.14 Distrib 5.6.31, for linux-glibc2.5 (x86_64) using  EditLine wrapper

Connection id:          409
Current database:
Current user:           root@localhost
SSL:                    Not in use
Current pager:          more
Using outfile:          ''
Using delimiter:        ;
Server version:         5.6.31-enterprise-commercial-advanced MySQL Enterprise Server - Advanced Edition (Commercial)
Protocol version:       10
Connection:             Localhost via UNIX socket
Server characterset:    latin1
Db     characterset:    latin1
Client characterset:    utf8
Conn.  characterset:    utf8
UNIX socket:            /tmp/mysql_ushastry.sock
Uptime:                 17 days 23 hours 53 min 48 sec

Threads: 124  Questions: 86185276362  Slow queries: 5889  Opens: 982  Flush tables: 2  Open tables: 505  Queries per second avg: 55430.746
--------------

mysql> \q
Bye
[12 Jul 2016 20:14] Trey Raymond
Damn.  I'll dig a bit deeper for differences between our affected and unaffected systems.
[13 Jul 2016 8:49] Umesh Shastry
Thank you.  For now I'll mark this report as 'Can't repeat'; if you can provide more information, please feel free to add it to this bug and change the status back to 'Open'.  Once we have a repeatable test case, we can proceed further.

--

mysql> select schema_name,digest_text,count_star,sum_rows_examined,sum_rows_affected from performance_schema.events_statements_summary_by_digest where sum_rows_affected>0 and digest_text like 'select%';
Empty set (0.00 sec)

mysql> \s
--------------
bin/mysql  Ver 14.14 Distrib 5.6.31, for linux-glibc2.5 (x86_64) using  EditLine wrapper

Connection id:          413
Current database:       test
Current user:           root@localhost
SSL:                    Not in use
Current pager:          more
Using outfile:          ''
Using delimiter:        ;
Server version:         5.6.31-enterprise-commercial-advanced MySQL Enterprise Server - Advanced Edition (Commercial)
Protocol version:       10
Connection:             Localhost via UNIX socket
Server characterset:    latin1
Db     characterset:    latin1
Client characterset:    utf8
Conn.  characterset:    utf8
UNIX socket:            /tmp/mysql_ushastry.sock
Uptime:                 19 days 3 hours 47 min 40 sec

Threads: 124  Questions: 96490569672  Slow queries: 5889  Opens: 1013  Flush tables: 2  Open tables: 533  Queries per second avg: 58293.301
--------------
[27 Jul 2016 22:08] Trey Raymond
Okay, did some more digging.  If I trigger heavy swapping, I get weird results.  I also see that on some prod hosts where the data was weird, but it's not really consistent.  It might be that an attempt to compute a proper digest gets abandoned if it takes too long, or it might be something as crazy as memory addressing getting screwy.  Or something else; this is unusual.

I found some other odd data in p_s digests, example:

mysql> select * from performance_schema.events_statements_summary_by_digest where digest='861a058c4dde327cfdf9881705ef893d' and schema_name='performance_schema'\G
*************************** 1. row ***************************
                SCHEMA_NAME: performance_schema
                     DIGEST: 861a058c4dde327cfdf9881705ef893d
                DIGEST_TEXT: SHOW VARIABLES LIKE ?
                 COUNT_STAR: 1157583
             SUM_TIMER_WAIT: 783868560779000
             MIN_TIMER_WAIT: 291686000
             AVG_TIMER_WAIT: 677159000
             MAX_TIMER_WAIT: 84985136000
              SUM_LOCK_TIME: 430421084000000
                 SUM_ERRORS: 0
               SUM_WARNINGS: 0
          SUM_ROWS_AFFECTED: 0
              SUM_ROWS_SENT: 1805589
          SUM_ROWS_EXAMINED: 9027941
SUM_CREATED_TMP_DISK_TABLES: 0
     SUM_CREATED_TMP_TABLES: 1
       SUM_SELECT_FULL_JOIN: 0
 SUM_SELECT_FULL_RANGE_JOIN: 0
           SUM_SELECT_RANGE: 0
     SUM_SELECT_RANGE_CHECK: 0
            SUM_SELECT_SCAN: 1
      SUM_SORT_MERGE_PASSES: 0
             SUM_SORT_RANGE: 0
              SUM_SORT_ROWS: 0
              SUM_SORT_SCAN: 0
          SUM_NO_INDEX_USED: 1
     SUM_NO_GOOD_INDEX_USED: 0
                 FIRST_SEEN: 2016-07-27 14:45:09
                  LAST_SEEN: 2016-07-27 14:45:09
1 row in set (0.00 sec)

>1M executions, all within one second, alongside all that other data, doesn't really make sense.  The only thing running a query like that is a health check script that runs every minute (historically we see 60-61 executions per hour in saved p_s samples, which matches up).

So, try bogging down a system terribly, make it swap, give it high load, and see if anything gets screwy then.  Nondeterministic behavior is a pain to diagnose, but that's what I've seen so far.
[27 Jul 2016 23:15] Trey Raymond
This seemed to be a different issue, but see:
http://bugs.mysql.com/bug.php?id=79009
That one was repeatable, which might help.
[13 Sep 2016 18:45] Trey Raymond
can't reopen from that state...
[13 Sep 2016 18:45] Trey Raymond
there we go
[1 Mar 2018 2:24] Trey Raymond
More weirdness from a 5.6.29 server; look particularly at SUM_ROWS_EXAMINED (and this is just for INSERT ... VALUES statements).
It even overflows bigint when you SUM() it!

mysql> select * from events_statements_summary_by_digest where digest='ec0277b763e7e88f4bfe934cef4759ef'\G
*************************** 1. row ***************************
                SCHEMA_NAME: dhpatrol
                     DIGEST: ec0277b763e7e88f4bfe934cef4759ef
                DIGEST_TEXT: INSERT INTO `track_host_traffic_min` ( `track_id` , `host_id` , SQL_TSI_MINUTE , `events_emitted` , `events_received` ) VALUES (...) /* , ... */ ON DUPLICATE KEY UPDATE `events_emitted` = VALUES ( `events_emitted` ) , `events_received` = VALUES ( `events_received` )
                 COUNT_STAR: 3960
             SUM_TIMER_WAIT: 1119441730488000
             MIN_TIMER_WAIT: 337820000
             AVG_TIMER_WAIT: 282687305000
             MAX_TIMER_WAIT: 5133291070000
              SUM_LOCK_TIME: 7869110000000
                 SUM_ERRORS: 0
               SUM_WARNINGS: 0
          SUM_ROWS_AFFECTED: 1567342
              SUM_ROWS_SENT: 0
          SUM_ROWS_EXAMINED: 10646517816578210032
SUM_CREATED_TMP_DISK_TABLES: 0
     SUM_CREATED_TMP_TABLES: 0
       SUM_SELECT_FULL_JOIN: 0
 SUM_SELECT_FULL_RANGE_JOIN: 0
           SUM_SELECT_RANGE: 0
     SUM_SELECT_RANGE_CHECK: 0
            SUM_SELECT_SCAN: 0
      SUM_SORT_MERGE_PASSES: 0
             SUM_SORT_RANGE: 0
              SUM_SORT_ROWS: 0
              SUM_SORT_SCAN: 0
          SUM_NO_INDEX_USED: 0
     SUM_NO_GOOD_INDEX_USED: 0
                 FIRST_SEEN: 2018-02-25 09:01:09
                  LAST_SEEN: 2018-02-25 09:27:49
*************************** 2. row ***************************
                SCHEMA_NAME: dhpatrol
                     DIGEST: ec0277b763e7e88f4bfe934cef4759ef
                DIGEST_TEXT: INSERT INTO `track_host_traffic_min` ( `track_id` , `host_id` , SQL_TSI_MINUTE , `events_emitted` , `events_received` ) VALUES (...) /* , ... */ ON DUPLICATE KEY UPDATE `events_emitted` = VALUES ( `events_emitted` ) , `events_received` = VALUES ( `events_received` )
                 COUNT_STAR: 5242
             SUM_TIMER_WAIT: 1526638487102000
             MIN_TIMER_WAIT: 357096000
             AVG_TIMER_WAIT: 291232065000
             MAX_TIMER_WAIT: 5380710801000
              SUM_LOCK_TIME: 10557132000000
                 SUM_ERRORS: 0
               SUM_WARNINGS: 0
          SUM_ROWS_AFFECTED: 2088874
              SUM_ROWS_SENT: 0
          SUM_ROWS_EXAMINED: 9043241997653254464
SUM_CREATED_TMP_DISK_TABLES: 0
     SUM_CREATED_TMP_TABLES: 0
       SUM_SELECT_FULL_JOIN: 0
 SUM_SELECT_FULL_RANGE_JOIN: 0
           SUM_SELECT_RANGE: 0
     SUM_SELECT_RANGE_CHECK: 0
            SUM_SELECT_SCAN: 0
      SUM_SORT_MERGE_PASSES: 0
             SUM_SORT_RANGE: 0
              SUM_SORT_ROWS: 0
              SUM_SORT_SCAN: 0
          SUM_NO_INDEX_USED: 0
     SUM_NO_GOOD_INDEX_USED: 0
                 FIRST_SEEN: 2018-02-25 09:31:06
                  LAST_SEEN: 2018-02-25 10:07:35
*************************** 3. row ***************************
                SCHEMA_NAME: dhpatrol
                     DIGEST: ec0277b763e7e88f4bfe934cef4759ef
                DIGEST_TEXT: INSERT INTO `track_host_traffic_min` ( `track_id` , `host_id` , SQL_TSI_MINUTE , `events_emitted` , `events_received` ) VALUES (...) /* , ... */ ON DUPLICATE KEY UPDATE `events_emitted` = VALUES ( `events_emitted` ) , `events_received` = VALUES ( `events_received` )
                 COUNT_STAR: 697386
             SUM_TIMER_WAIT: 180236416397726000
             MIN_TIMER_WAIT: 235782000
             AVG_TIMER_WAIT: 258445704000
             MAX_TIMER_WAIT: 12839381895000
              SUM_LOCK_TIME: 1354210618000000
                 SUM_ERRORS: 20
               SUM_WARNINGS: 0
          SUM_ROWS_AFFECTED: 274675735
              SUM_ROWS_SENT: 0
          SUM_ROWS_EXAMINED: 16407818723921975191
SUM_CREATED_TMP_DISK_TABLES: 0
     SUM_CREATED_TMP_TABLES: 0
       SUM_SELECT_FULL_JOIN: 0
 SUM_SELECT_FULL_RANGE_JOIN: 0
           SUM_SELECT_RANGE: 0
     SUM_SELECT_RANGE_CHECK: 0
            SUM_SELECT_SCAN: 0
      SUM_SORT_MERGE_PASSES: 0
             SUM_SORT_RANGE: 0
              SUM_SORT_ROWS: 0
              SUM_SORT_SCAN: 0
          SUM_NO_INDEX_USED: 0
     SUM_NO_GOOD_INDEX_USED: 0
                 FIRST_SEEN: 2018-02-25 10:11:07
                  LAST_SEEN: 2018-03-01 02:21:24
3 rows in set (0.01 sec)
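A quick arithmetic check on those SUM_ROWS_EXAMINED values (an illustrative Python sketch, not anything from the server itself): two of the three totals exceed 2^63, so reinterpreted as signed 64-bit integers they come out negative, consistent with an underflowing signed counter exposed through an unsigned column; and their sum exceeds the unsigned BIGINT maximum, matching the overflow noted above.

```python
# SUM_ROWS_EXAMINED from the three rows above.
vals = [10646517816578210032, 9043241997653254464, 16407818723921975191]


def as_signed64(u):
    """Reinterpret an unsigned 64-bit value as two's-complement signed."""
    return u - (1 << 64) if u >= (1 << 63) else u


# Two of the three values fall above 2**63, so as signed 64-bit integers
# they would be negative -- consistent with an underflowing counter:
print([as_signed64(v) for v in vals])

# And their total exceeds the unsigned BIGINT maximum (2**64 - 1), so
# SUM() over the column overflows, as noted above:
print(sum(vals) > (1 << 64) - 1)  # True
```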
[19 Sep 21:18] Trey Raymond
We see the behavior on servers ranging from 5.7.21 to 5.7.26 (not enough 8.x is deployed yet to have a good sample size).
Unfortunately there is still no consistent pattern, so it's not readily reproducible.  Consider the following two queries, one a very simple select and the other just fetching a variable value; both show this behavior.

digest_text	count_star	sum_rows_affected	sum_rows_sent	sum_rows_examined
SELECT `Id` , `BodyMd5` , `IsDelete` FROM DATA WHERE `Path` = ? 	33943	2	33940	33940
SELECT @@`version_comment` LIMIT ?	34	64	2	0