Bug #46893 | Performance degradation on SuSE-10/RHEL in dbt2 test | |
---|---|---|---
Submitted: | 24 Aug 2009 13:02 | Modified: | 21 Nov 2009 15:46 |
Reporter: | Nirbhay Choubey | Email Updates: | |
Status: | No Feedback | Impact on me: | |
Category: | MySQL Server | Severity: | S3 (Non-critical) |
Version: | 5.1.38 | OS: | Linux |
Assigned to: | Georgi Kodinov | CPU Architecture: | Any |
[24 Aug 2009 13:02]
Nirbhay Choubey
[24 Aug 2009 13:13]
Nirbhay Choubey
Configurations used were:

dbt2:
  Number of warehouses = 10 (scale factor)

mysqld:
  user=root
  max_connections=200
  table_cache=2048
  transaction_isolation=REPEATABLE-READ
  skip-locking
  innodb_status_file=0
  innodb_data_file_path=ibdata1:100M:autoextend
  innodb_buffer_pool_size=2G
  innodb_additional_mem_pool_size=20M
  innodb_log_file_size=650M
  innodb_log_files_in_group=2
  innodb_log_buffer_size=16M
  innodb_support_xa=0
  innodb_doublewrite=0
  innodb_thread_concurrency=0
  innodb_flush_log_at_trx_commit=1
  innodb_flush_method=O_DIRECT
[25 Aug 2009 20:01]
Nirbhay Choubey
When run under the same configuration (as above), a ~1% drop in TPM was observed for the InnoDB plugin between 5.1.38 and 5.1.38pre on SUSE10.
[26 Aug 2009 13:10]
Alexey Stroganov
Most likely this performance degradation was caused by the fix for BUG#43435 - "LOCK_open does not use MY_MUTEX_INIT_FAST".

In the results from BUG#43435, initializing with NULL is the worst case, but in our tests on our benchmark platform we observe degradation when LOCK_open is initialized with MY_MUTEX_INIT_FAST.

Results for the dbt2 test for the same binary (5.1.38pre), the only difference being the initialization of LOCK_open:

# Server 1: LOCK_open initialized with NULL
# Server 2: LOCK_open initialized with MY_MUTEX_INIT_FAST
#
#           Server 1    Server 2
# Thread    INNODB      INNODB
   16       44773.01    43817.91
   40       40781.30    40054.13
   80       39377.48    38372.21
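For readers comparing the two builds: below is a minimal C sketch (not the actual MySQL source) of what the two LOCK_open initializations amount to, assuming MY_MUTEX_INIT_FAST maps to glibc's adaptive mutex type on Linux, as it does in the server's mysys layer when PTHREAD_ADAPTIVE_MUTEX_INITIALIZER_NP is available.

```c
/* Editorial sketch, not MySQL code: illustrates the difference between the
 * two LOCK_open initializations benchmarked above. With NULL the mutex gets
 * default attributes; with an adaptive attribute (what MY_MUTEX_INIT_FAST
 * amounts to on Linux/glibc), contended threads spin briefly before blocking. */
#define _GNU_SOURCE
#include <pthread.h>

static pthread_mutex_t LOCK_open_null;   /* "Server 1" style */
static pthread_mutex_t LOCK_open_fast;   /* "Server 2" style */

int main(void)
{
  /* Server 1: default attributes -- contended threads block immediately. */
  pthread_mutex_init(&LOCK_open_null, NULL);

  /* Server 2: adaptive ("fast") mutex -- contended threads spin for a short
   * while before sleeping, which changes behaviour when LOCK_open is heavily
   * contended, as in the dbt2 workload. */
  pthread_mutexattr_t fast_attr;
  pthread_mutexattr_init(&fast_attr);
  pthread_mutexattr_settype(&fast_attr, PTHREAD_MUTEX_ADAPTIVE_NP);
  pthread_mutex_init(&LOCK_open_fast, &fast_attr);
  pthread_mutexattr_destroy(&fast_attr);

  pthread_mutex_destroy(&LOCK_open_null);
  pthread_mutex_destroy(&LOCK_open_fast);
  return 0;
}
```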
[21 Oct 2009 15:46]
Georgi Kodinov
I've tried a fresh 5.1-bugteam binary with and without the change in bug #43435. I'm not getting drastic performance differences: e.g. 121 NOTPM (without the fix) vs 123 NOTPM (with the fix). Can you please test with the latest 5.1 binary to see whether the performance degradation still exists? I can prepare a set of Linux binaries with and without the fix for your testing if need be.

My configuration: Core 2 Extreme 6600 (2.4 GHz) / 8G RAM / SATA disk.

I've tested as follows:
1. Compiled the server with BUILD/pentium64-max.
2. Deleted the database.
3. Started the mysql server from the mysql-test directory (using mysql-test-run.pl --start), but with the config parameters from the bug added to the generated my.cnf.
4. Loaded the data for 10 warehouses through the driver.
5. Ran the test with the following command line:
   ./run_mysql.sh -t 1200 -w 10 -n test -o <path-to-mysqld-sock-file> -u root -c 16

I repeated steps 2-5 for each separate run.
[22 Oct 2009 12:10]
Alexey Stroganov
Joro,

I've recompiled 5.1.40 with NULL and MY_MUTEX_INIT_FAST for LOCK_open and reran the tests:

# Test: dbt2 w10
#
# Server 1 - 5.1.40_lock_open_null
# Server 2 - 5.1.40_lock_open_fast
#
# Results are average values (3 runs)
#
#           Server 1    Server 2
# Thread    INNODB      INNODB
   16       44618.08    43681.61
[22 Nov 2009 0:00]
Bugs System
No feedback was provided for this bug for over a month, so it is being suspended automatically. If you are able to provide the information that was originally requested, please do so and change the status of the bug back to "Open".