| Bug #54815 | reduce the overhead from buf_flush_free_margin | | |
|---|---|---|---|
| Submitted: | 25 Jun 2010 17:32 | Modified: | 14 Jun 2013 8:39 |
| Reporter: | Mark Callaghan | Email Updates: | |
| Status: | Closed | Impact on me: | |
| Category: | MySQL Server: InnoDB Plugin storage engine | Severity: | S5 (Performance) |
| Version: | 5.1.47 | OS: | Any |
| Assigned to: | Inaam Rana | CPU Architecture: | Any |
| Tags: | innodb, performance | | |
[25 Jun 2010 17:32]
Mark Callaghan
[25 Jun 2010 17:38]
Mark Callaghan
Details are in http://www.facebook.com/MySQLatFacebook#!/notes/mysqlfacebook/using-pmp-to-double-mysql-th...

In my branch, buf_flush_LRU_recommendation is:

```c
buf_flush_LRU_recommendation(
/*=========================*/
	ulint	n_needed)	/*!< in: number of free pages needed */
{
	buf_page_t*	bpage;
	ulint		n_replaceable;
	ulint		distance = 0;

	if (UT_LIST_GET_LEN(buf_pool->free) >= n_needed) {
		/* This does a dirty read of buf_pool->free. That is
		good enough and reduces mutex contention on
		buf_pool->mutex. */
		return 0;
	}

	buf_pool_mutex_enter();

	n_replaceable = UT_LIST_GET_LEN(buf_pool->free);

	bpage = UT_LIST_GET_LAST(buf_pool->LRU);

	while ((bpage != NULL)
	       && (n_replaceable < n_needed)
	       && (distance < BUF_LRU_FREE_SEARCH_LEN)) {

		mutex_t*	block_mutex = buf_page_get_mutex(bpage);

		mutex_enter(block_mutex);

		if (buf_flush_ready_for_replace(bpage)) {
			n_replaceable++;
		}

		mutex_exit(block_mutex);

		distance++;

		bpage = UT_LIST_GET_PREV(LRU, bpage);
	}

	buf_pool_mutex_exit();

	if (n_replaceable >= n_needed) {
		return(0);
	}

	return(n_needed - n_replaceable);
}
```
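The key change is the unlocked length check at the top of the function. Below is a minimal standalone sketch (not InnoDB code) of that dirty-read fast path, assuming a plain counter stands in for the free-list length and a pthread mutex stands in for buf_pool->mutex; the names free_len, pool_mutex and pages_still_needed are hypothetical:

```c
/* Sketch of the "dirty read before locking" pattern described above.
   A stale read of free_len can only cause an occasional unnecessary
   (or skipped) check; it never corrupts state, so hot callers can skip
   the mutex entirely when the free list already looks large enough.
   A real implementation might prefer an atomic load to avoid a formal
   data race. */
#include <pthread.h>
#include <stdio.h>

static pthread_mutex_t	pool_mutex = PTHREAD_MUTEX_INITIALIZER;
static unsigned long	free_len   = 512;	/* stands in for UT_LIST_GET_LEN(buf_pool->free) */

static unsigned long
pages_still_needed(unsigned long pages_needed)
{
	unsigned long	n_free;

	/* Fast path: deliberately racy read, matching the dirty read
	   in buf_flush_LRU_recommendation(). */
	if (free_len >= pages_needed) {
		return(0);
	}

	/* Slow path: take the mutex and recheck the exact value. */
	pthread_mutex_lock(&pool_mutex);
	n_free = free_len;
	pthread_mutex_unlock(&pool_mutex);

	return(n_free >= pages_needed ? 0 : pages_needed - n_free);
}

int
main(void)
{
	printf("still needed: %lu\n", pages_still_needed(1024));
	return(0);
}
```

The point of the pattern is that the common case (plenty of free pages) never touches the shared mutex at all, which is where the contention reduction comes from.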
[25 Jun 2010 17:39]
Mark Callaghan
And buf_flush_free_margin is:

```c
buf_flush_free_margin(
/*===================*/
	ulint	npages,		/*!< in: number of free pages needed */
	ibool	foreground)	/*!< in: done from foreground thread */
{
	ulint		n_to_flush;
	ulint		n_flushed;
	my_fast_timer_t	start_time;

	my_get_fast_timer(&start_time);

	n_to_flush = buf_flush_LRU_recommendation(npages);

	if (n_to_flush > 0) {
		n_flushed = buf_flush_batch(BUF_FLUSH_LRU, n_to_flush, 0);

		if (n_flushed == ULINT_UNDEFINED) {
			/* There was an LRU type flush batch already running;
			let us wait for it to end */
			buf_flush_wait_batch_end(BUF_FLUSH_LRU);
		} else {
			if (foreground) {
				srv_n_flushed_free_margin_fg += n_flushed;
			} else {
				srv_n_flushed_free_margin_bg += n_flushed;
			}
		}
	}

	if (foreground) {
		srv_free_margin_fg_secs += my_fast_timer_diff_now(&start_time, NULL);
	} else {
		srv_free_margin_bg_secs += my_fast_timer_diff_now(&start_time, NULL);
	}
}
```
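The my_fast_timer_* helpers and the srv_* counters come from the patched branch, not stock InnoDB. The timing bookkeeping amounts to accumulating elapsed wall time separately for foreground (user thread) and background callers; a rough portable sketch of the same idea, using clock_gettime() as a stand-in and hypothetical counter names:

```c
/* Illustration only: accumulate per-caller-type wall time around a
   flush call, analogous to srv_free_margin_fg_secs / _bg_secs above. */
#define _POSIX_C_SOURCE 199309L
#include <stdio.h>
#include <time.h>

static double	free_margin_fg_secs;	/* analogous to srv_free_margin_fg_secs */
static double	free_margin_bg_secs;	/* analogous to srv_free_margin_bg_secs */

static double
secs_since(const struct timespec *start)
{
	struct timespec	now;

	clock_gettime(CLOCK_MONOTONIC, &now);

	return((now.tv_sec - start->tv_sec)
	       + (now.tv_nsec - start->tv_nsec) / 1e9);
}

static void
timed_flush(int foreground)
{
	struct timespec	start;

	clock_gettime(CLOCK_MONOTONIC, &start);

	/* ... the flush work would happen here ... */

	if (foreground) {
		free_margin_fg_secs += secs_since(&start);
	} else {
		free_margin_bg_secs += secs_since(&start);
	}
}

int
main(void)
{
	timed_flush(1);
	timed_flush(0);
	printf("fg=%.6fs bg=%.6fs\n", free_margin_fg_secs, free_margin_bg_secs);
	return(0);
}
```

Splitting the accumulators this way makes it easy to see how much time user threads, as opposed to the background flusher, spend inside buf_flush_free_margin().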
[25 Jun 2010 17:42]
Mark Callaghan
Peak QPS at 64 threads increased from ~49,000 to ~74,000 with this change
[28 Jul 2010 17:58]
Inaam Rana
Mark, was the improvement that you noticed due to less contention on the buf_pool mutex, or was it because we are now doing less flushing?
[30 Jul 2010 15:50]
Mark Callaghan
The benefit was from less contention on the buf_pool mutex.
[14 Jun 2013 8:39]
Erlend Dahl
[13 Jun 2013 10:14] Inaam Rana: A new LRU mechanism was implemented in 5.6.