Bug #95168 Make log_flush_notifier thread more accurate
Submitted: 27 Apr 2019 20:33 Modified: 3 Jun 2019 13:00
Reporter: chen zongzhi (OCA) Email Updates:
Status: Can't repeat Impact on me:
Category: MySQL Server: InnoDB storage engine Severity: S5 (Performance)
Version: 8.0.14 OS: Any
Assigned to: CPU Architecture: Any

[27 Apr 2019 20:33] chen zongzhi
Currently, in multi-threaded workloads with many small transactions, several user threads may end up waiting on the same log.flush_events[slot]. The slot is computed in log_flush_notifier():

const auto slot =
(lsn - 1) / OS_FILE_LOG_BLOCK_SIZE & (log.flush_events_size - 1);

So when several user threads write data within the same redo log block, some of them receive spurious wake-ups and have to re-check the condition and go back to waiting.

For example, suppose thread1 writes up to lsn 128 and thread2 writes up to lsn 300, both in the same redo log block. When the log_flusher thread advances flushed_to_disk_lsn to 128, both thread1 and thread2 are woken. Thread2, however, checks flushed_to_disk_lsn (128) >= lsn (300), finds it false, and goes back to waiting, so its wake-up was spurious.

I added some logging to log_flush_notifier(). The ratio of spurious to successful wake-ups was about 10:1 when running sysbench oltp_write_only with 1024 threads.

With 64 threads, the ratio of spurious to successful wake-ups dropped to about 5:1.

The logs also show that the higher the concurrency, the more spurious wake-ups occur.

So I think we could make the wake-up more accurate: with many small transactions, frequent spurious wake-ups may degrade performance.

If I have misunderstood something, please correct me.

How to repeat:
Read the code and testing.
[2 May 2019 14:16] Sinisa Milivojevic
Hi Chen,

Thank you for your contribution.

I find your idea very interesting. Can you send us the patch, so that I can verify this bug?

Many thanks in advance.
[3 Jun 2019 1:00] Bugs System
No feedback was provided for this bug for over a month, so it is
being suspended automatically. If you are able to provide the
information that was originally requested, please do so and change
the status of the bug back to "Open".