Bug #46893 Performance degradation on SuSE-10/RHEL in dbt2 test
Submitted: 24 Aug 2009 13:02 Modified: 21 Nov 2009 15:46
Reporter: Nirbhay Choubey
Status: No Feedback
Category: MySQL Server
Severity: S3 (Non-critical)
Version: 5.1.38
OS: Linux
Assigned to: Georgi Kodinov
CPU Architecture: Any

[24 Aug 2009 13:02] Nirbhay Choubey
Description:
While testing the latest MySQL server release (5.1.38) with the dbt2 benchmark
on SuSE 10 (x86_64), a performance degradation was observed. We got the same degraded results when we ran the test with the same binaries on the same hardware, but on RHEL.

Results for 16, 40 and 80 threads, in transactions per minute:

RHEL 
-----
Threads          5.1.37        5.1.38pre     5.1.38
16               43059.1900    43413.5700    42154.0600
40               41157.6900    41278.0900    40077.5600
80               40231.2900    40190.8300    39394.4600

SLES 10   
------
Threads         5.1.37        5.1.38pre     5.1.38
16              44931.1500    44732.8700    43971.4100
40              40739.3100    40732.2100    40277.3600  
80              39506.8700    39523.3000    38299.9300

DBT2 package used:
dbt2-0.37

Machine details:
Hardware:
8 Cores - dl360-g5-b, 16 GB RAM, SLES 10/RHEL52, kernel 2.6.16.21-0.8-smp, ext3, RAID 10

OSes:
RHEL (2.6.18-92.1.18)
SLES (2.6.16.60) 

How to repeat:
Run DBT2-0.37 on similar hardware on SLES10/RHEL.
[24 Aug 2009 13:13] Nirbhay Choubey
Configurations used were:

dbt2
----
Number of warehouses = 10 (scale factor)

mysqld
------
user=root
max_connections=200
table_cache=2048
transaction_isolation=REPEATABLE-READ
skip-locking
innodb_status_file=0
innodb_data_file_path=ibdata1:100M:autoextend
innodb_buffer_pool_size=2G
innodb_additional_mem_pool_size=20M
innodb_log_file_size=650M
innodb_log_files_in_group=2
innodb_log_buffer_size=16M
innodb_support_xa=0
innodb_doublewrite=0
innodb_thread_concurrency=0
innodb_flush_log_at_trx_commit=1
innodb_flush_method=O_DIRECT
[25 Aug 2009 20:01] Nirbhay Choubey
When run under the same configuration (as above), a ~1% drop in tpm was observed for the InnoDB plugin between 5.1.38 and 5.1.38pre on SuSE 10.
[26 Aug 2009 13:10] Alexey Stroganov
Most likely this performance degradation was caused by the fix for BUG#43435 - LOCK_open does not use MY_MUTEX_INIT_FAST.

In the results from BUG#43435, initializing with NULL is the worse case, but in our tests on our benchmark platform we observe degradation when LOCK_open is initialized with MY_MUTEX_INIT_FAST.
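
For context, the difference between the two cases comes down to the pthread mutex type that LOCK_open ends up with. The sketch below is illustrative only and not taken from the server tree: it assumes that on Linux/glibc MY_MUTEX_INIT_FAST resolves to a mutex attribute of type PTHREAD_MUTEX_ADAPTIVE_NP (which is what my_pthread.h appears to do), while NULL selects the default mutex kind; the function names are made up for the example.

#define _GNU_SOURCE            /* PTHREAD_MUTEX_ADAPTIVE_NP is a glibc extension */
#include <pthread.h>

static pthread_mutex_t LOCK_open;   /* stands in for the server's real LOCK_open */

/* "LOCK_open initialized with NULL": default mutex kind; a contending
   thread blocks in the kernel right away. */
static void init_lock_open_null(void)
{
  pthread_mutex_init(&LOCK_open, NULL);
}

/* "LOCK_open initialized with MY_MUTEX_INIT_FAST": assuming the adaptive
   kind, a contending thread spins briefly in user space before blocking. */
static void init_lock_open_fast(void)
{
  pthread_mutexattr_t fast_attr;

  pthread_mutexattr_init(&fast_attr);
  pthread_mutexattr_settype(&fast_attr, PTHREAD_MUTEX_ADAPTIVE_NP);
  pthread_mutex_init(&LOCK_open, &fast_attr);
  pthread_mutexattr_destroy(&fast_attr);
}

int main(void)
{
  init_lock_open_null();
  pthread_mutex_destroy(&LOCK_open);

  init_lock_open_fast();
  pthread_mutex_destroy(&LOCK_open);
  return 0;
}

Whether the brief user-space spin of the adaptive kind helps or hurts depends on how long LOCK_open is typically held and on the core count, which could explain why different benchmark platforms disagree about which initialization is faster.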

Results for the dbt2 test for the same binary (5.1.38pre), with the only difference being the
initialization of LOCK_open:

# Server 1: LOCK_open initialized with NULL
# Server 2: LOCK_open initialized with MY_MUTEX_INIT_FAST
#
#            Server 1 Server 2
# Thread       INNODB   INNODB
         16  44773.01 43817.91
         40  40781.30 40054.13
         80  39377.48 38372.21
[21 Oct 2009 15:46] Georgi Kodinov
I've tried a fresh 5.1-bugteam binary with and without the change from bug #43435.
I'm not getting drastic performance differences: e.g. 121 NOTPM (without the fix) vs 123 NOTPM (with the fix).
Can you please test with the latest 5.1 binary to see whether the performance degradation still exists?
I can prepare a set of Linux binaries with and without the fix for your testing if need be.

My configuration:
Core 2 Extreme 6600 (2.4 GHz), 8 GB RAM, SATA disk.
I've tested as follows:
1. Compiled the server with BUILD/pentium64-max.
2. Deleted the database.
3. Started the mysql server from the mysql-test directory (using mysql-test-run.pl --start), but with the config parameters from the bug added to the generated my.cnf.
4. Loaded the data for 10 warehouses through the driver.
5. Ran the test with the following command line:
./run_mysql.sh -t 1200 -w 10 -n test -o <path-to-mysqld-sock-file> -u root -c 16

I repeated steps 2-5 for each separate run.
[22 Oct 2009 12:10] Alexey Stroganov
Joro,

I've recompiled 5.1.40 with NULL and with MY_MUTEX_INIT_FAST for LOCK_open and reran the tests:

# Test: dbt2 w10
#
# Server 1 - 5.1.40_lock_open_null
# Server 2 - 5.1.40_lock_open_fast
#
# Results are average values (3 runs)

#            Server 1 Server 2
# Thread       INNODB   INNODB
         16  44618.08 43681.61
[22 Nov 2009 0:00] Bugs System
No feedback was provided for this bug for over a month, so it is
being suspended automatically. If you are able to provide the
information that was originally requested, please do so and change
the status of the bug back to "Open".