Description:
[Proposal]
For a local SSD, I think a much better approach would be to use direct IO for all heavily written files and never invoke fsync. But I searched the MySQL documentation and found no such option.
PS: without fsync, data may be lost on a power failure because of the SSD's internal write cache. However, that is not a problem for enterprise-grade SSDs (a capacitor ensures data persistence).
[Problem]
The performance of MySQL on an SSD is not as good as expected.
In my test, one "insert + commit" operation costs 3~4 ms. The main cost is the commit, which triggers a binary log sync.
The disk IO pattern of MySQL is:
InnoDB data files: one direct write per insert/commit, fsync once per second;
InnoDB log files: one buffered write per insert/commit, fsync once per second;
Binary log files: one buffered write per insert/commit, fsync on every commit.
Am I missing something, or is there a reason for this?
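The per-commit sync cost can be reproduced outside MySQL with a small micro-benchmark that mimics what sync_binlog = 1 adds to every commit: a small buffered write followed by fsync. This is an illustrative sketch only; the measured latency depends entirely on the device.

```python
# Micro-benchmark: latency of a small buffered write followed by fsync,
# mimicking the per-commit binary log flush under sync_binlog = 1.
import os
import tempfile
import time

fd, path = tempfile.mkstemp()
payload = b"x" * 1300        # roughly one 10-column row of 128-byte strings

iterations = 100
start = time.time()
for _ in range(iterations):
    os.write(fd, payload)    # buffered write into the page cache
    os.fsync(fd)             # forced flush, once per "commit"
elapsed_ms = (time.time() - start) * 1000.0 / iterations

os.close(fd)
os.unlink(path)
print("avg write+fsync latency: %.3f ms" % elapsed_ms)
```

On a device where this reports several milliseconds, a 3~4 ms insert+commit is dominated by the fsync, not by MySQL itself.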
How to repeat:
[Machine]
centos
A local SSD
[mysql configuration]
innodb_flush_method = O_DIRECT
innodb_flush_log_at_trx_commit = 2
log_bin=1
sync_binlog = 1
binlog_format = statement
default-storage-engine = innodb
innodb_data_file_path = ibdata1:10M:autoextend
[data]
10 columns; each column is a 128-byte random string.
[procedure]
A Python client connected over a unix socket;
Insert a row, then commit it;
After the last commit succeeds, insert a new row and commit it;
Repeat in that way.
The client and mysqld run on the same machine.
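The procedure above can be sketched as follows. The driver (PyMySQL), the table name `t`, and the column count are assumptions for illustration, not part of the original test setup.

```python
# Sketch of the test client: insert a row, commit, and only then insert
# the next row. Table name `t` and the PyMySQL driver are placeholders.
import random
import string

def random_row(ncols=10, width=128):
    """Build one row: ncols random strings of `width` bytes each."""
    alphabet = string.ascii_letters + string.digits
    return ["".join(random.choice(alphabet) for _ in range(width))
            for _ in range(ncols)]

def run(conn, iterations):
    """Insert + commit in a tight loop; the next insert starts only
    after the previous commit has returned."""
    cur = conn.cursor()
    sql = "INSERT INTO t VALUES (" + ",".join(["%s"] * 10) + ")"
    for _ in range(iterations):
        cur.execute(sql, random_row())
        conn.commit()    # triggers the per-commit binary log fsync

# Example usage (requires a running server; socket path is a placeholder):
# import pymysql
# conn = pymysql.connect(unix_socket="/var/lib/mysql/mysql.sock",
#                        user="root", db="test")
# run(conn, 1000)
```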
Suggested fix:
Add an option to use direct IO for all heavy files and not invoke fsync at all.
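The proposed pattern, an O_DIRECT write with no fsync, can be sketched in Python. This is Linux-specific: O_DIRECT requires block-aligned buffers and sizes, and some filesystems (e.g. tmpfs) reject direct IO, so the sketch falls back to a buffered write in that case. The file name and block size are placeholders.

```python
# Sketch: write via O_DIRECT (bypassing the page cache) without fsync.
import mmap
import os

PATH = "direct_test.dat"   # placeholder file name
BLOCK = 4096               # alignment unit; typical page size

def write_direct(path, data):
    """Write `data` with O_DIRECT, falling back to a buffered write if
    the filesystem rejects direct IO (e.g. tmpfs)."""
    flags = os.O_WRONLY | os.O_CREAT | getattr(os, "O_DIRECT", 0)
    fd = -1
    try:
        fd = os.open(path, flags, 0o644)
        os.write(fd, data)     # bypasses the page cache; no fsync issued
    except OSError:
        if fd >= 0:
            os.close(fd)
        fd = os.open(path, os.O_WRONLY | os.O_CREAT, 0o644)
        os.write(fd, data)     # buffered fallback
    os.close(fd)

buf = mmap.mmap(-1, BLOCK)     # anonymous mmap is page-aligned
buf.write(b"x" * BLOCK)
write_direct(PATH, buf)
```

With a power-loss-protected SSD, the data is durable once the direct write completes, which is exactly the property the proposal relies on.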