Bug #80989 Severe performance degradation from 5.6 to 5.7 with a long query on a large table
Submitted: 7 Apr 2016 8:09 Modified: 8 Apr 2016 1:29
Reporter: Paolo Scaglione Email Updates:
Status: Duplicate Impact on me:
None 
Category:MySQL Server: Options Severity:S2 (Serious)
Version: OS:Any
Assigned to: CPU Architecture:Any
Tags: large query, partitioning, range_optimizer_max_mem_size

[7 Apr 2016 8:09] Paolo Scaglione
Description:
We migrated a simple database (5 tables) from 5.6.25 to 5.7.11 and saw a severe performance impact on our application. We had to downgrade back to 5.6.

The problem occurs on UPDATEs to a very large partitioned table (350M rows, 40GB).

How to repeat:
CREATE TABLE `dspam_token_data` (
  `uid` int(10) UNSIGNED NOT NULL,
  `token` bigint(20) UNSIGNED NOT NULL,
  `spam_hits` bigint(20) UNSIGNED NOT NULL,
  `innocent_hits` bigint(20) UNSIGNED NOT NULL,
  `last_hit` date NOT NULL
) ENGINE=InnoDB
PARTITION BY RANGE (uid)
(
PARTITION p1 VALUES LESS THAN (500) ENGINE=InnoDB,
PARTITION p2 VALUES LESS THAN (1000) ENGINE=InnoDB,
PARTITION p3 VALUES LESS THAN (1500) ENGINE=InnoDB,
...
PARTITION p39 VALUES LESS THAN (19500) ENGINE=InnoDB
);
ALTER TABLE `dspam_token_data`
  ADD UNIQUE KEY `id_token_data_01` (`uid`,`token`);

Load the data...

Concurrently execute statements like:
UPDATE dspam_token_data SET last_hit=CURRENT_DATE(),innocent_hits=innocent_hits+1
WHERE uid=182 AND token IN ('817826770129840413','7032436664721219860','9296280635907542312','8084589742373015488','5598310934478947250','14923068003140115985','14948497202667263291','2631869130431364401','12988664423538783304','1424316543056514634','10938872930176794680','15588712953827092836','15527758387176677','5576155117189757872','14302861856647165440','10466073322814215855', ...

On 5.6 these statements take about 0.03 s to complete, while on 5.7 they sometimes take much longer (30, 90 or more seconds).
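One way to confirm what is happening on 5.7 (a diagnostic sketch, not part of the original report; the uid/token values are placeholders): when the range optimizer's memory limit is exceeded, 5.7 skips range optimization for the statement and reports a capacity warning, which on a partitioned table like this degenerates into scanning instead of using the unique key.

```sql
-- Run EXPLAIN on one of the slow statements (shortened token list here),
-- then inspect the warnings the optimizer produced.
EXPLAIN UPDATE dspam_token_data
SET last_hit=CURRENT_DATE(), innocent_hits=innocent_hits+1
WHERE uid=182 AND token IN ('817826770129840413','7032436664721219860');

-- If range_optimizer_max_mem_size was exceeded, 5.7 emits a warning along
-- the lines of "Memory capacity of N bytes for
-- 'range_optimizer_max_mem_size' exceeded" and range optimization is not done.
SHOW WARNINGS;
```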
[7 Apr 2016 9:59] Paolo Scaglione
Slow update into a partitioned table

Attachment: innoDB-status.txt (text/plain), 23.06 KiB.

[7 Apr 2016 10:06] MySQL Verification Team
Hi,

Can you please try setting the following variable on 5.7 and see if it has any effect:

range_optimizer_max_mem_size=128*1024*1024;  # the size depends on how many items are in the IN list.
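For reference, a sketch of applying this at runtime (128*1024*1024 = 134217728 bytes; setting the variable to 0 removes the limit entirely):

```sql
-- Check the current per-query limit for the range optimizer
SHOW VARIABLES LIKE 'range_optimizer_max_mem_size';

-- Raise it for new connections (134217728 = 128*1024*1024)
SET GLOBAL range_optimizer_max_mem_size = 134217728;

-- Or only for the current session, to test a single statement
SET SESSION range_optimizer_max_mem_size = 134217728;
```

To make the change survive a restart, the same value can be placed under the [mysqld] section of my.cnf.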
[7 Apr 2016 10:12] Paolo Scaglione
The statement length is variable. Sometimes there are thousands of tokens; we use max_allowed_packet=128M.

I'll try the suggested parameter.
[7 Apr 2016 14:46] Paolo Scaglione
With the suggested value (range_optimizer_max_mem_size=128M) the problem is solved. We also tested the value that should become the default in 5.7.12 (range_optimizer_max_mem_size=8388608), and the large UPDATEs we use work fine.

The problem is the default value in 5.7.11, which is too low for our query.
[7 Apr 2016 15:23] MySQL Verification Team
The default will be increased when
https://bugs.mysql.com/bug.php?id=78973 is fixed/released.
[7 Apr 2016 22:34] Paolo Scaglione
Thanks. I think that this bug can be closed as a duplicate of 78973.