Description:
Let's understand the reason using a use case.
- The table is being loaded with BLOB entries.
- Insertion of n entries causes btr_cur_pessimistic_update() to be invoked.
- Before it actually inserts the entry, it reserves the extents.
  During extent reservation, mtr_x_lock() is taken on the space latch with latch->level = SYNC_FSP.
- The same mtr continues to hold this latch without an mtr_commit.
- Because a big_rec_vec is involved, the insertion logic tries to acquire the index->lock latch:
  if (big_rec_vec) {
    ut_ad(page_is_leaf(page));
    ut_ad(dict_index_is_clust(index));
    ut_ad(flags & BTR_KEEP_POS_FLAG);
    /* btr_page_split_and_insert() in
    btr_cur_pessimistic_insert() invokes
    mtr_memo_release(mtr, index->lock, MTR_MEMO_SX_LOCK).
    We must keep the index->lock when we created a
    big_rec, so that row_upd_clust_rec() can store the
    big_rec in the same mini-transaction. */
    mtr_sx_lock(dict_index_get_lock(index), mtr);
  }
- The index->lock latch has latch level SYNC_INDEX_TREE.
- SYNC_INDEX_TREE (54) > SYNC_FSP (42), so this is not a valid latching order
  (latch levels must be acquired in descending order).
  However, we do not hit the assert in a debug build, because between the two
  latching actions mentioned above a debug_latch is taken on the space header page
  at latch level SYNC_NO_ORDER_CHECK, which shields the order from the assert.
- The resulting sequence is SYNC_FSP -> SYNC_NO_ORDER_CHECK -> SYNC_INDEX_TREE.
  If this assert were enforced in an optimized build, we would always hit this case.
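The masking effect can be illustrated with a minimal sketch. This models only the behavior described above, not the actual sync0sync implementation; the latch_checker type and its members are hypothetical, and SYNC_NO_ORDER_CHECK's numeric value here is arbitrary since only its exempt status matters:

```cpp
#include <cassert>
#include <vector>

// Hypothetical latch levels, using the numeric values cited in this report.
enum latch_level {
  SYNC_FSP = 42,
  SYNC_INDEX_TREE = 54,
  SYNC_NO_ORDER_CHECK = 3000
};

// Minimal sketch of a latching-order checker: latch levels must be
// acquired in descending order, latches at SYNC_NO_ORDER_CHECK are
// exempt from the check, and an exempt latch acquired in between masks
// the earlier level from the check.
struct latch_checker {
  std::vector<int> held;  // levels of held latches, in acquisition order

  bool would_violate(int level) const {
    if (held.empty() || level == SYNC_NO_ORDER_CHECK) {
      return false;  // nothing held, or the new latch is exempt
    }
    int last = held.back();
    if (last == SYNC_NO_ORDER_CHECK) {
      return false;  // exempt latch on top masks the check
    }
    return level > last;  // ascending level => latching-order violation
  }

  void acquire(int level) { held.push_back(level); }
};
```

In this model, would_violate(SYNC_INDEX_TREE) is true while only SYNC_FSP is held, but becomes false once a SYNC_NO_ORDER_CHECK latch is acquired in between, mirroring the SYNC_FSP -> SYNC_NO_ORDER_CHECK -> SYNC_INDEX_TREE sequence above.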
How to repeat:
Will attach the test file.