Bug #70252 How much does SUM_TIMER_WAIT from performance schema overstate waits?
Submitted: 5 Sep 2013 17:36    Modified: 6 Sep 2013 18:07
Reporter: Mark Callaghan
Status: Not a Bug
Category: MySQL Server: Performance Schema    Severity: S3 (Non-critical)
Version: 5.6.13    OS: Any
Assigned to:    CPU Architecture: Any

[5 Sep 2013 17:36] Mark Callaghan
Description:
The value of SUM_TIMER_WAIT appears to greatly overstate the wait times.

Querying events_waits_summary_global_by_event_name.sum_timer_wait immediately after startup shows high wait times for a host that is idle and did not need crash recovery. The values also grow dramatically during perf tests.

How to repeat:
1) restart the server
2) immediately afterward, query events_waits_summary_global_by_event_name.sum_timer_wait

My query is:
mysql -h${dbh} -uroot -ppw -A performance_schema -e 'select timer_frequency into @tf from performance_timers where timer_name = "CYCLE"; SELECT EVENT_NAME, COUNT_STAR FROM events_waits_summary_global_by_event_name where event_name like "wait/sync%" ORDER BY COUNT_STAR DESC LIMIT 10; SELECT EVENT_NAME, SUM_TIMER_WAIT / @tf as secs, SUM_TIMER_WAIT FROM events_waits_summary_global_by_event_name where event_name like "wait/sync%" ORDER BY SUM_TIMER_WAIT DESC LIMIT 10; select * from information_schema.global_status where variable_name = "Uptime"'

Or

select timer_frequency into @tf from performance_timers where timer_name = "CYCLE";

SELECT EVENT_NAME, COUNT_STAR FROM events_waits_summary_global_by_event_name where event_name like "wait/sync%" ORDER BY COUNT_STAR DESC LIMIT 10; 

SELECT EVENT_NAME, SUM_TIMER_WAIT / @tf as secs, SUM_TIMER_WAIT FROM events_waits_summary_global_by_event_name where event_name like "wait/sync%" ORDER BY SUM_TIMER_WAIT DESC LIMIT 10; 

select * from information_schema.global_status where variable_name = "Uptime";

mysql> select * from setup_timers;
+-----------+-------------+
| NAME      | TIMER_NAME  |
+-----------+-------------+
| idle      | MICROSECOND |
| wait      | CYCLE       |
| stage     | NANOSECOND  |
| statement | NANOSECOND  |
+-----------+-------------+
[5 Sep 2013 17:39] Mark Callaghan
Am I doing something wrong? This is immediately after startup (about 10 seconds in), and I don't see how the cumulative wait time for 3126 lock attempts on index_tree_rw_lock could add up to 31.5 seconds.

+--------------------------------------------------+------------+
| EVENT_NAME                                       | COUNT_STAR |
+--------------------------------------------------+------------+
| wait/synch/mutex/innodb/os_mutex                 |   25192952 |
| wait/synch/mutex/innodb/mutex_list_mutex         |    4199328 |
| wait/synch/mutex/innodb/rw_lock_list_mutex       |    4194650 |
| wait/synch/rwlock/innodb/hash_table_locks        |      37154 |
| wait/synch/mutex/innodb/fil_system_mutex         |      33418 |
| wait/synch/rwlock/sql/CRYPTO_dynlock_value::lock |      24921 |
| wait/synch/mutex/innodb/buf_pool_mutex           |      19011 |
| wait/synch/mutex/innodb/log_sys_mutex            |       7779 |
| wait/synch/mutex/innodb/flush_list_mutex         |       4272 |
| wait/synch/rwlock/innodb/index_tree_rw_lock      |       3126 |
+--------------------------------------------------+------------+
+---------------------------------------------+----------+----------------+
| EVENT_NAME                                  | secs     | SUM_TIMER_WAIT |
+---------------------------------------------+----------+----------------+
| wait/synch/mutex/innodb/os_mutex            | 370.7285 |   815005207835 |
| wait/synch/mutex/innodb/mutex_list_mutex    |  51.8328 |   113948540160 |
| wait/synch/mutex/innodb/rw_lock_list_mutex  |  51.4753 |   113162604100 |
| wait/synch/rwlock/innodb/index_tree_rw_lock |  31.5052 |    69260721075 |
| wait/synch/cond/sql/COND_server_started     |   5.2886 |    11626326530 |
| wait/synch/rwlock/innodb/hash_table_locks   |   2.1822 |     4797409435 |
| wait/synch/rwlock/innodb/checkpoint_lock    |   2.1612 |     4751091800 |
| wait/synch/mutex/innodb/fil_system_mutex    |   1.4990 |     3295285630 |
| wait/synch/mutex/innodb/buf_pool_mutex      |   0.8987 |     1975724660 |
| wait/synch/mutex/innodb/ibuf_mutex          |   0.6034 |     1326463320 |
+---------------------------------------------+----------+----------------+
+---------------+----------------+
| VARIABLE_NAME | VARIABLE_VALUE |
+---------------+----------------+
| UPTIME        | 11             |
+---------------+----------------+
[5 Sep 2013 17:40] Mark Callaghan
It gets even worse: 46 seconds after startup the cumulative wait time for index_tree_rw_lock is more than 445 seconds.

+---------------------------------------------+------------+
| EVENT_NAME                                  | COUNT_STAR |
+---------------------------------------------+------------+
| wait/synch/mutex/innodb/os_mutex            |   26189285 |
| wait/synch/mutex/innodb/mutex_list_mutex    |    4353304 |
| wait/synch/mutex/innodb/rw_lock_list_mutex  |    4194650 |
| wait/synch/mutex/innodb/fil_system_mutex    |     551895 |
| wait/synch/rwlock/innodb/hash_table_locks   |     532019 |
| wait/synch/mutex/innodb/buf_pool_mutex      |     365740 |
| wait/synch/mutex/innodb/flush_list_mutex    |     231005 |
| wait/synch/mutex/innodb/log_sys_mutex       |     172608 |
| wait/synch/rwlock/innodb/index_tree_rw_lock |      72235 |
| wait/synch/mutex/innodb/lock_mutex          |      40976 |
+---------------------------------------------+------------+
+---------------------------------------------+----------+----------------+
| EVENT_NAME                                  | secs     | SUM_TIMER_WAIT |
+---------------------------------------------+----------+----------------+
| wait/synch/rwlock/innodb/index_tree_rw_lock | 445.7520 |   979935931975 |
| wait/synch/mutex/innodb/os_mutex            | 413.1724 |   908313259035 |
| wait/synch/mutex/innodb/mutex_list_mutex    |  55.8163 |   122705903775 |
| wait/synch/mutex/innodb/rw_lock_list_mutex  |  51.4753 |   113162604100 |
| wait/synch/mutex/innodb/ibuf_mutex          |  25.2167 |    55436057495 |
| wait/synch/mutex/innodb/fil_system_mutex    |  20.7036 |    45514594125 |
| wait/synch/rwlock/innodb/hash_table_locks   |  20.0763 |    44135582400 |
| wait/synch/mutex/innodb/buf_pool_mutex      |  18.4113 |    40475099210 |
| wait/synch/mutex/innodb/flush_list_mutex    |   8.2184 |    18067179130 |
| wait/synch/mutex/innodb/log_sys_mutex       |   8.0687 |    17738181370 |
+---------------------------------------------+----------+----------------+
+---------------+----------------+
| VARIABLE_NAME | VARIABLE_VALUE |
+---------------+----------------+
| UPTIME        | 46             |
+---------------+----------------+
[5 Sep 2013 17:43] Mark Callaghan
I assume that part of the problem is that sum_timer_wait is incremented whenever the mutex is locked, whether or not the thread really waited. This is then likely made worse by any measurement code that overstates the wait time (what is the overhead of rdtsc?).

But in the last measurement the wait time is ~445 seconds from 72235 lock attempts on index_tree_rw_lock. That is roughly 6,170 microseconds per lock attempt on an idle server that should have no contention. I don't understand this.
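For reference, the per-attempt average can be pulled straight from the summary table; a minimal sketch, still in the raw timer units before any unit conversion:

select event_name, count_star, sum_timer_wait / count_star as avg_timer_per_use from events_waits_summary_global_by_event_name where event_name = "wait/synch/rwlock/innodb/index_tree_rw_lock";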
[5 Sep 2013 18:00] Mark Callaghan
my.cnf has these PS variables:

performance_schema
performance-schema-instrument='wait/synch/%=1'

and all settings:
performance_schema      ON
performance_schema_accounts_size        100
performance_schema_digests_size 10000
performance_schema_events_stages_history_long_size      10000
performance_schema_events_stages_history_size   10
performance_schema_events_statements_history_long_size  10000
performance_schema_events_statements_history_size       10
performance_schema_events_waits_history_long_size       10000
performance_schema_events_waits_history_size    10
performance_schema_hosts_size   100
performance_schema_max_cond_classes     80
performance_schema_max_cond_instances   10100
performance_schema_max_file_classes     50
performance_schema_max_file_handles     32768
performance_schema_max_file_instances   15385
performance_schema_max_mutex_classes    200
performance_schema_max_mutex_instances  23000
performance_schema_max_rwlock_classes   30
performance_schema_max_rwlock_instances 10400
performance_schema_max_socket_classes   10
performance_schema_max_socket_instances 4020
performance_schema_max_stage_classes    150
performance_schema_max_statement_classes        168
performance_schema_max_table_handles    4000
performance_schema_max_table_instances  12500
performance_schema_max_thread_classes   50
performance_schema_max_thread_instances 4100
performance_schema_session_connect_attrs_size   512
performance_schema_setup_actors_size    100
performance_schema_setup_objects_size   100
performance_schema_users_size   100
[5 Sep 2013 18:26] Mark Callaghan
Another view, 200 seconds after startup on an idle server; crash recovery was not needed. All of the mutexes have large values for usecs_per_use (10+ microseconds).

+--------------------------------------------------+------------+------------------+------------------+-------------------------------+
| event_name                                       | count_star | cycles_per_use   | usecs_per_use    | a.sum_timer_wait / 1000000000 |
+--------------------------------------------------+------------+------------------+------------------+-------------------------------+
| wait/synch/mutex/innodb/os_mutex                 |   29708867 |       48805.1172 |      22.19438313 |                     1449.9447 |
| wait/synch/rwlock/innodb/index_tree_rw_lock      |     400305 |     3329349.5294 |    1514.03916689 |                     1332.7553 |
| wait/synch/rwlock/innodb/hash_table_locks        |    2578253 |      101195.2904 |      46.01908924 |                      260.9071 |
| wait/synch/mutex/innodb/fil_system_mutex         |    2441453 |       89686.6975 |      40.78549625 |                      218.9659 |
| wait/synch/mutex/innodb/buf_pool_mutex           |    1614074 |      121044.9433 |      55.04582305 |                      195.3755 |
| wait/synch/mutex/innodb/mutex_list_mutex         |    4875236 |       36236.5849 |      16.47877710 |                      176.6619 |
| wait/synch/mutex/innodb/log_sys_mutex            |     777565 |      181847.5863 |      82.69614398 |                      141.3983 |
| wait/synch/mutex/innodb/rw_lock_list_mutex       |    4194650 |       27517.0862 |      12.51353932 |                      115.4245 |
| wait/synch/mutex/innodb/ibuf_mutex               |      25220 |     4385952.0593 |    1994.53471106 |                      110.6137 |
| wait/synch/mutex/innodb/flush_list_mutex         |    1068267 |       91345.6799 |      41.53992722 |                       97.5816 |
| wait/synch/mutex/innodb/log_flush_order_mutex    |     190492 |      232747.1466 |     105.84298609 |                       44.3365 |
| wait/synch/rwlock/innodb/checkpoint_lock         |        344 |   124384319.5203 |   56564.42191860 |                       42.7882 |
| wait/synch/mutex/innodb/lock_mutex               |     172453 |      104738.7320 |      47.63048793 |                       18.0625 |
| wait/synch/cond/sql/COND_server_started          |          1 | 11928889700.0000 | 5424725.17900000 |                       11.9289 |
| wait/synch/rwlock/sql/CRYPTO_dynlock_value::lock |      24923 |       43833.7174 |      19.93361297 |                        1.0925 |
| wait/synch/rwlock/innodb/dict_table_stats        |        854 |      400173.5656 |     181.98102810 |                        0.3417 |
| wait/synch/mutex/innodb/rseg_mutex               |       6271 |       54390.2400 |      24.73424701 |                        0.3411 |
| wait/synch/cond/sql/COND_thread_count            |          1 |   317648240.0000 |  144452.20300000 |                        0.3176 |
| wait/synch/rwlock/innodb/btr_search_latch        |       1465 |      214140.0819 |      97.38132560 |                        0.3137 |
| wait/synch/mutex/innodb/trx_mutex                |       2172 |      128345.4029 |      58.36574494 |                        0.2788 |
+--------------------------------------------------+------------+------------------+------------------+-------------------------------+
[5 Sep 2013 19:18] Mark Callaghan
About 30 minutes into a stress test, check out the avg lock time for buf_pool_mutex (32,000 usecs) and index_tree_rw_lock (11,000 usecs) -- average lock waits of 32 milliseconds and 11 milliseconds seem kind of high.

+-----------------------------------------------+-------------+----------------+----------------+-------------------------------+
| event_name                                    | count_star  | cycles_per_use | usecs_per_use  | a.sum_timer_wait / 1000000000 |
+-----------------------------------------------+-------------+----------------+----------------+-------------------------------+
| wait/synch/mutex/innodb/buf_pool_mutex        |   320421562 |  70816803.6402 | 32204.31301581 |                 22691230.8382 |
| wait/synch/rwlock/innodb/index_tree_rw_lock   |   859992667 |  24432834.0064 | 11110.95945822 |                 21012058.0796 |
| wait/synch/rwlock/innodb/btr_search_latch     |  1741557480 |   2177629.7042 |   990.28853352 |                  3792467.3000 |
| wait/synch/rwlock/innodb/hash_table_locks     | 11675185763 |    129543.6414 |    58.91065061 |                  1512446.0775 |
| wait/synch/mutex/innodb/log_sys_mutex         |   174749854 |   1314668.0674 |   597.85220144 |                   229738.0528 |
| wait/synch/mutex/innodb/os_mutex              |   397312477 |    363076.0478 |   165.11073770 |                   144254.6439 |
| wait/synch/mutex/innodb/fil_system_mutex      |   376491966 |    217094.8509 |    98.72502250 |                    81734.4672 |
| wait/synch/rwlock/sql/MDL_lock::rwlock        |   128843601 |    383164.3198 |   174.24598481 |                    49368.2707 |
| wait/synch/mutex/sql/THD::LOCK_thd_data       |   239383982 |    170545.5809 |    77.55649772 |                    40825.8803 |
| wait/synch/mutex/innodb/trx_sys_mutex         |   117090912 |    278931.8235 |   126.84571025 |                    32660.3816 |
| wait/synch/mutex/mysys/THR_LOCK::mutex        |    81461197 |    330055.8707 |   150.09463894 |                    26886.7463 |
| wait/synch/mutex/sql/LOCK_table_cache         |    81461522 |    283535.5602 |   128.93928367 |                    23097.2383 |
| wait/synch/mutex/innodb/lock_mutex            |   118160342 |    192475.9716 |    87.52945798 |                    22743.0266 |
| wait/synch/mutex/sql/MDL_map::mutex           |    38198430 |    528623.8002 |   240.39444672 |                    20192.5992 |
| wait/synch/mutex/innodb/rseg_mutex            |    58635508 |    306828.2369 |   139.53175058 |                    17991.0295 |
| wait/synch/rwlock/innodb/fil_space_latch      |     2137803 |   7755505.8792 |  3526.85699003 |                    16579.7437 |
| wait/synch/rwlock/sql/LOCK_grant              |    40701104 |    292701.7429 |   133.10765331 |                    11913.2841 |
| wait/synch/mutex/innodb/trx_mutex             |   109916810 |     66864.2098 |    30.40685025 |                     7349.5006 |
| wait/synch/mutex/innodb/flush_list_mutex      |    39165741 |    110210.6647 |    50.11887802 |                     4316.4823 |
| wait/synch/mutex/innodb/log_flush_order_mutex |    16526336 |    167972.5494 |    76.38639817 |                     2775.9708 |
+-----------------------------------------------+-------------+----------------+----------------+-------------------------------+
[6 Sep 2013 8:33] Mark Leith
Hey Mark,

Could you also include all the queries for the latter results please?

You're slightly on the wrong path with this:

select timer_frequency into @tf from performance_timers where timer_name = "CYCLE";

...

SELECT EVENT_NAME, SUM_TIMER_WAIT / @tf as secs, SUM_TIMER_WAIT FROM events_waits_summary_global_by_event_name where event_name like "wait/sync%" ORDER BY SUM_TIMER_WAIT DESC LIMIT 10; 

All timers in P_S are exposed in picoseconds, so you don't divide by timer frequencies; you just do a pure picosecond-to-second conversion. For example, here's ps_helper's function for this:

DROP FUNCTION IF EXISTS format_time;
 
DELIMITER $$
 
CREATE FUNCTION format_time(picoseconds BIGINT)
  RETURNS VARCHAR(16) CHARSET UTF8 DETERMINISTIC
BEGIN
  IF picoseconds IS NULL THEN RETURN NULL;
  ELSEIF picoseconds >= 60000000000000 THEN RETURN SEC_TO_TIME(picoseconds/1000000000000);
  ELSEIF picoseconds >= 1000000000000 THEN RETURN CONCAT(ROUND(picoseconds / 1000000000000, 2), ' s');
  ELSEIF picoseconds >= 1000000000 THEN RETURN CONCAT(ROUND(picoseconds / 1000000000, 2), ' ms');
  ELSEIF picoseconds >= 1000000 THEN RETURN CONCAT(ROUND(picoseconds / 1000000, 2), ' µs');
  ELSEIF picoseconds >= 1000 THEN RETURN CONCAT(ROUND(picoseconds / 1000, 2), ' ns');
  ELSE RETURN CONCAT(picoseconds, ' ps');
  END IF;
END $$
 
DELIMITER ;
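
A quick usage sketch (assuming format_time() has been created in a schema on your current path):

SELECT EVENT_NAME, format_time(SUM_TIMER_WAIT) AS total_wait
FROM performance_schema.events_waits_summary_global_by_event_name
WHERE EVENT_NAME LIKE 'wait/synch%'
ORDER BY SUM_TIMER_WAIT DESC LIMIT 10;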

So in the very first example, instead of os_mutex being 370 seconds, it really is around 0.81 seconds. In the second comment, the index_tree_rw_lock is really only around 0.99 seconds, etc.

Can you re-run using the format_time() function above instead, and see if that matches your expectations a little better? I'm tempted to call this "Not a bug" based on some of my own math from some of your output, but would like to confirm first how you came to the latter results as well.
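For a quick sanity check without the helper function, the conversion is just a division by 10^12 -- e.g. 815005207835 ps / 10^12 is about 0.8 seconds -- so an inline sketch would be:

SELECT EVENT_NAME, SUM_TIMER_WAIT / 1000000000000 AS secs
FROM events_waits_summary_global_by_event_name
WHERE EVENT_NAME = 'wait/synch/mutex/innodb/os_mutex';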
[6 Sep 2013 17:47] Mark Callaghan
Thanks. The data looks better at startup; I will post more when linkbench is running.
[6 Sep 2013 17:48] Mark Callaghan
Data at startup:
+--------------------------------------------------+------------+----------------+---------------+-------------------------------+
| event_name                                       | count_star | cycles_per_use | usecs_per_use | a.sum_timer_wait / 1000000000 |
+--------------------------------------------------+------------+----------------+---------------+-------------------------------+
| wait/synch/mutex/innodb/os_mutex                 |   27340371 |     40745.7740 |    0.04074577 |                     1114.0046 |
| wait/synch/rwlock/innodb/hash_table_locks        |   12906293 |     76894.3686 |    0.07689437 |                      992.4213 |
| wait/synch/mutex/innodb/mutex_list_mutex         |    4607582 |     32139.4474 |    0.03213945 |                      148.0851 |
| wait/synch/mutex/innodb/fil_system_mutex         |    4317528 |     83296.2160 |    0.08329622 |                      359.6337 |
| wait/synch/mutex/innodb/rw_lock_list_mutex       |    4194618 |     26388.9620 |    0.02638896 |                      110.6916 |
| wait/synch/mutex/innodb/rseg_mutex               |    2539260 |     39502.6964 |    0.03950270 |                      100.3076 |
| wait/synch/mutex/innodb/buf_pool_mutex           |    2506450 |     76614.6157 |    0.07661462 |                      192.0307 |
| wait/synch/mutex/innodb/purge_sys_bh_mutex       |    1265737 |     30542.6973 |    0.03054270 |                       38.6590 |
| wait/synch/mutex/innodb/log_sys_mutex            |    1165574 |    958798.8971 |    0.95879890 |                     1117.5511 |
| wait/synch/rwlock/innodb/index_tree_rw_lock      |     892642 |   1248908.6760 |    1.24890868 |                     1114.8283 |
| wait/synch/mutex/innodb/trx_mutex                |     656868 |     40263.9800 |    0.04026398 |                       26.4481 |
| wait/synch/mutex/innodb/flush_list_mutex         |     628797 |     76590.0660 |    0.07659006 |                       48.1596 |
| wait/synch/mutex/innodb/dict_sys_mutex           |     536197 |     31927.7779 |    0.03192778 |                       17.1196 |
| wait/synch/rwlock/innodb/trx_purge_latch         |     411943 |     66239.7572 |    0.06623976 |                       27.2870 |
| wait/synch/mutex/innodb/lock_mutex               |     399472 |     75730.7168 |    0.07573071 |                       30.2523 |
| wait/synch/mutex/innodb/log_flush_order_mutex    |     304267 |    105330.4186 |    0.10533042 |                       32.0486 |
| wait/synch/rwlock/innodb/dict_operation_lock     |     268144 |    946558.3561 |    0.94655835 |                      253.8139 |
| wait/synch/rwlock/innodb/dict_table_stats        |     268053 |    151610.4409 |    0.15161044 |                       40.6396 |
| wait/synch/rwlock/innodb/fil_space_latch         |      87842 |    155261.8975 |    0.15526189 |                       13.6385 |
| wait/synch/rwlock/sql/CRYPTO_dynlock_value::lock |      24922 |     43382.1030 |    0.04338207 |                        1.0812 |
+--------------------------------------------------+------------+----------------+---------------+-------------------------------+
[6 Sep 2013 17:54] Mark Callaghan
Fixed queries:
select * from performance_timers;

SELECT EVENT_NAME, COUNT_STAR FROM events_waits_summary_global_by_event_name where event_name like "wait/sync%" ORDER BY COUNT_STAR DESC LIMIT 10; 

select * from information_schema.global_status where variable_name = "Uptime"; 

select b.event_name, b.count_star, a.sum_timer_wait / b.count_star as cycles_per_use, (a.sum_timer_wait / 1000 / b.count_star) nsecs_per_use, a.sum_timer_wait / 1000000000 from events_waits_summary_global_by_event_name b, events_waits_summary_global_by_event_name a where a.event_name = b.event_name and a.event_name like "%wait/sync%" ORDER BY a.SUM_TIMER_WAIT DESC LIMIT 20;
[6 Sep 2013 18:07] Mark Callaghan
While running linkbench the corrected results look much better, but there are still many long lock waits on buf_pool_mutex and index_tree_rw_lock -- the average lock-get times are 104 and 88 usecs.

+-----------------------------------------------+------------+----------------+-----------------+-------------------------------+
| event_name                                    | count_star | cycles_per_use | nsecs_per_use   | a.sum_timer_wait / 1000000000 |
+-----------------------------------------------+------------+----------------+-----------------+-------------------------------+
| wait/synch/mutex/innodb/buf_pool_mutex        |  113423605 | 104016924.4906 | 104016.92449061 |                 11797974.5567 |
| wait/synch/rwlock/innodb/index_tree_rw_lock   |   62221362 |  88731252.0521 |  88731.25205211 |                  5520979.3546 |
| wait/synch/rwlock/innodb/hash_table_locks     | 3338035690 |     88201.7352 |     88.20173520 |                   294420.5400 |
| wait/synch/mutex/innodb/log_sys_mutex         |   63114139 |   1356244.8077 |   1356.24480770 |                    85598.2233 |
| wait/synch/mutex/innodb/os_mutex              |  175068668 |    477157.6641 |    477.15766409 |                    83535.3567 |
| wait/synch/mutex/innodb/fil_system_mutex      |  144570263 |    217861.5501 |    217.86155015 |                    31496.3016 |
| wait/synch/rwlock/sql/MDL_lock::rwlock        |   46726374 |    331503.5951 |    331.50359512 |                    15489.9610 |
| wait/synch/mutex/sql/THD::LOCK_thd_data       |   85451930 |    171398.0441 |    171.39804414 |                    14646.2937 |
| wait/synch/mutex/innodb/trx_sys_mutex         |   39959368 |    282041.0679 |    282.04106792 |                    11270.1828 |
| wait/synch/mutex/innodb/rseg_mutex            |   43602719 |    247329.0096 |    247.32900962 |                    10784.2173 |
| wait/synch/rwlock/innodb/fil_space_latch      |    1651942 |   6020944.1179 |   6020.94411794 |                     9946.2505 |
| wait/synch/mutex/sql/LOCK_table_cache         |   29050395 |    321047.1344 |    321.04713437 |                     9326.5461 |
| wait/synch/mutex/mysys/THR_LOCK::mutex        |   29049999 |    316706.3865 |    316.70638646 |                     9200.3202 |
| wait/synch/mutex/innodb/lock_mutex            |   41536696 |    203479.6960 |    203.47969605 |                     8451.8743 |
| wait/synch/mutex/sql/MDL_map::mutex           |   13673604 |    484816.7098 |    484.81670978 |                     6629.1917 |
| wait/synch/rwlock/sql/LOCK_grant              |   14506567 |    280734.7423 |    280.73474231 |                     4072.4973 |
| wait/synch/mutex/innodb/trx_mutex             |   39437661 |     62875.5108 |     62.87551085 |                     2479.6631 |
| wait/synch/mutex/innodb/flush_list_mutex      |   18488094 |     94145.5025 |     94.14550245 |                     1740.5709 |
| wait/synch/mutex/innodb/log_flush_order_mutex |    7602220 |    148044.2606 |    148.04426055 |                     1125.4650 |
| wait/synch/rwlock/innodb/dict_operation_lock  |    3621565 |    280263.5363 |    280.26353627 |                     1014.9926 |
+-----------------------------------------------+------------+----------------+-----------------+-------------------------------+