| Bug #56283 | Recommend innodb_change_buffering=none when tables are not on disk medium | | |
|---|---|---|---|
| Submitted: | 26 Aug 2010 8:41 | Modified: | 27 Aug 2015 4:26 |
| Reporter: | Marko Mäkelä | Email Updates: | |
| Status: | Won't fix | Impact on me: | |
| Category: | MySQL Server: Documentation | Severity: | S5 (Performance) |
| Version: | 5.1 plugin, 5.5+ | OS: | Any |
| Assigned to: | Daniel Price | CPU Architecture: | Any |
| Tags: | flash, ibuf, innodb_change_buffering, random I/O, SSD | | |
[26 Aug 2010 8:41]
Marko Mäkelä
[26 Aug 2010 8:49]
Marko Mäkelä
On second thought, with innodb_change_buffering=none and innodb_file_per_table=1, most writes to the ibdata* files should go to the doublewrite buffer and the undo logs. The doublewrite buffer is written sequentially and would benefit from being on HDD (the constant writes could cause significant wear on SSD). Whether the undo logs will be a performance issue, I do not know; I think it depends on the size and number of concurrently active transactions. Thus, for SSD users, we should recommend innodb_change_buffering=none, with the *.ibd files on SSD and the rest (ib_logfile*, ibdata*, the MySQL binlog) on HDD. The *.frm files can be on either HDD or SSD; they are small and are written only by DDL operations.
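A minimal my.cnf sketch of the layout described above. The paths are hypothetical; `innodb_data_home_dir` and `innodb_log_group_home_dir` are the standard options for relocating the shared tablespace and redo logs:

```ini
# Hypothetical layout: *.ibd files on SSD, sequentially written files on HDD.
[mysqld]
innodb_change_buffering   = none            # no change buffering on SSD
innodb_file_per_table     = 1               # each table in its own .ibd file
datadir                   = /ssd/mysql      # *.ibd (and *.frm) files on SSD
innodb_data_home_dir      = /hdd/mysql      # ibdata*: doublewrite buffer, undo logs
innodb_log_group_home_dir = /hdd/mysql      # ib_logfile* redo logs
log-bin                   = /hdd/mysql/binlog  # binary log on HDD as well
```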
[1 Sep 2010 4:12]
James Day
Let's review what we know:

1. SSDs keep a limited number of pre-erased blocks that can be used for rapid writes. Exceed this, or write faster than the rate at which blocks are erased, and the write speed drops greatly.
2. The insert buffer defers changes to blocks efficiently (in RAM terms, compared to holding dirty buffer pool pages), increasing the amount of write combining and decreasing the rate at which pages are made dirty. That in turn decreases the rate at which InnoDB writes to disk.

Hence, disabling change buffering should be expected to have significant adverse performance effects on SSDs, by increasing the chance of writing faster than the SSD can free blocks. This effect won't show up in short tests unless those tests happen to use up all of the freed blocks. For production servers using SSDs, I'd recommend full InnoDB change buffering, because that's the setting most likely to improve sustained server throughput. If you'd like to make some other recommendation, you really need benchmarks that run for a sustained period and support the belief that increasing the write rate doesn't slow down SSD-based workloads. Those benchmarks also need to monitor how close the SSD gets to running out of free blocks.

For file locations, the likely best combination, if there are enough drives, is: ib_logfile* on one spinning disk; ibdata* on another (to get the doublewrite buffer and undo logs off SSD); the binlog on a third drive, or sharing with ib_logfile*; and the *.ibd files on SSD.
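One way to watch how much work the change buffer is actually absorbing during such a benchmark is the insert-buffer section of the InnoDB status output (the section heading below is as printed by the 5.1/5.5 servers; the exact counters vary by version):

```sql
-- Look for the "INSERT BUFFER AND ADAPTIVE HASH INDEX" section, which
-- reports the ibuf size, free list length, segment size, and merge counts.
SHOW ENGINE INNODB STATUS\G
```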
[14 Sep 2010 17:40]
John Russell
Taking out of 'Documenting' state since this seems like the start of a longer discussion rather than a straightforward doc addition.
[3 Feb 2014 21:20]
James Day
A few years later, some more observations about production systems: it is becoming increasingly uncommon to see any significant pending work in the change buffer. We do still see it sometimes, and in those cases it can be a lot, but it is now a clear minority of cases. That appears to be due to a combination of increased merge rates (higher background thread activity than in, say, 4.0/4.1, when pending work was routine) and more RAM for the buffer pool. As a result, I've largely shifted from the view that change buffering on is always best to a considerably more uncertain one: try it each way and see which is best, with a bias towards on for large data sets and a historic bias towards on.
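Since innodb_change_buffering is a dynamic system variable in the 5.5+ servers this report covers, "try it each way" is straightforward on a running server. A sketch; measure sustained throughput under each setting before deciding:

```sql
-- Check the current setting.
SHOW GLOBAL VARIABLES LIKE 'innodb_change_buffering';

-- Disable buffering for new change operations; entries already
-- buffered are still merged in the background.
SET GLOBAL innodb_change_buffering = 'none';

-- Restore full change buffering.
SET GLOBAL innodb_change_buffering = 'all';
```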
[26 Aug 2015 17:59]
Daniel Price
Posted by developer: This bug requested that we recommend disabling innodb_change_buffering when tables are stored on SSD. The discussion in this bug, and the ensuing discussion that took place via email, were inconclusive on this particular recommendation. I am therefore rejecting the bug. If there is other innodb_change_buffering configuration advice that should be documented, please open a new documentation bug. Thank you.