Bug #96919: Performance drops with different workload and concurrency of sysbench
Submitted: 18 Sep 2019 7:15    Modified: 24 Oct 2019 6:31
Reporter: zhai weixiang (OCA)
Status: No Feedback
Category: MySQL Server    Severity: S5 (Performance)
Version: 8.0.17    OS: Any
CPU Architecture: Any

[18 Sep 2019 7:15] zhai weixiang
Description:
When running sysbench with varying workloads and concurrency, repeating the same oltp-point-select test can show a regression. For example, after a clean shutdown:

oltp-point-select: 256 threads
    transactions:                        4684752 (78039.86 per sec.)
    queries:                             46847520 (780398.64 per sec.)

oltp-read-write: 512 threads
    transactions:                        339946 (5649.53 per sec.)
    queries:                             6798920 (112990.63 per sec.)

sleep 60

oltp-point-select: 256 threads
    transactions:                        3777655 (62934.01 per sec.)
    queries:                             37776550 (629340.13 per sec.)

If every run uses the same concurrency (for example, 256 threads), there is no regression.

How to repeat:
run sysbench 

src/sysbench  oltp_point_select  --mysql-host=$2 --mysql-port=$1 --mysql-user='xx' --mysql-db=sb1 --tables=100 --table_size=25000 --threads=256 --max-time=60 --report_interval=5 run
src/sysbench  oltp_read_write  --mysql-host=$2 --mysql-port=$1 --mysql-user='xx' --mysql-db=sb1 --tables=100 --table_size=25000 --threads=512 --max-time=60 --report_interval=5 run
sleep 60
src/sysbench  oltp_point_select  --mysql-host=$2 --mysql-port=$1 --mysql-user='xx' --mysql-db=sb1 --tables=100 --table_size=25000 --threads=256 --max-time=60 --report_interval=5 run

buffer pool: 128GB,  64 cores

Suggested fix:
Still investigating the issue.
[18 Sep 2019 7:16] zhai weixiang
correct the title
[18 Sep 2019 13:30] zhai weixiang
correct title
[18 Sep 2019 14:46] zhai weixiang
Possibly an issue in glibc... On a fresh restart, the point-select workload performs very well, but once the regression appears, perf shows a hot __lll_lock_wait_private called from malloc(), and _spin_lock accounts for almost 50% of CPU time.
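A minimal diagnostic sketch along the lines of the observation above (the mysqld start command, library paths, and sampling duration are assumptions, not from the report): sample call stacks with perf to confirm that the lock time sits under malloc(), then restart the server with an alternative allocator preloaded to test whether glibc malloc lock contention is the cause.

```shell
# Hypothetical diagnostic steps; adjust paths and options for the actual setup.

# 1. While the regressed oltp-point-select run is in progress, sample the
#    server's call stacks for 30 seconds:
perf record -g -p "$(pidof mysqld)" -- sleep 30
# 2. Check whether __lll_lock_wait_private shows up under malloc():
perf report --stdio | grep -E '__lll_lock_wait_private|malloc' | head
# 3. If it does, restarting mysqld with another allocator (e.g. jemalloc,
#    assuming it is installed at this path) isolates glibc malloc:
LD_PRELOAD=/usr/lib64/libjemalloc.so mysqld --defaults-file=/etc/my.cnf &
```

If the regression disappears under jemalloc, that would support the glibc-malloc hypothesis; if it persists, the contention is likely elsewhere in the server.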
[24 Sep 2019 6:31] Umesh Shastry
Hello Zhai,

Thank you for the report and feedback.
May I request you to please provide configuration details used for these tests from your environment? Please mark it as private if you prefer after posting here. Thank you.

regards,
Umesh
[25 Oct 2019 1:00] Bugs System
No feedback was provided for this bug for over a month, so it is
being suspended automatically. If you are able to provide the
information that was originally requested, please do so and change
the status of the bug back to "Open".