Description:
We are running a MySQL replication setup with one master and several slaves. The most frequently accessed tables, with heavy read and write traffic, are InnoDB tables, which should not pose any problem for the engine.
Unfortunately, our largest table had not been cleaned up recently by the automatic script that removes obsolete records, so its size grew by 35% to Data_length=5,245,845,504 and Index_length=1,264,517,120. Since then, the load on our slaves has increased dramatically (usually between 0.5 and 2.1, now between 8 and 12!), and because of the slower response times the number of connections has risen continuously from 5-15 conn./sec to about 80-150 (hitting max_connections). This behaviour can be observed on all slaves.
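For reference, the kind of cleanup the script performs can be sketched as follows; the table and column names here are placeholders, not our actual schema:

```sql
-- Hypothetical sketch of the periodic cleanup (names are placeholders).
-- Deleting in bounded batches keeps any single transaction small, so the
-- slaves and the buffer pool are not flooded by one huge delete.
DELETE FROM big_table
 WHERE created_at < DATE_SUB(NOW(), INTERVAL 90 DAY)
 LIMIT 10000;
-- Repeat until the statement affects 0 rows.
```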
Our current configuration concerning this matter is:
version=4.1.11-standard
set-variable = innodb_buffer_pool_size=512M
set-variable = innodb_log_buffer_size=8M
set-variable = innodb_log_file_size=128M
set-variable = innodb_flush_log_at_trx_commit=0
Even increasing the buffer pool to 1950M (the maximum malloc allows under our OS/libraries) did not bring any relief. Does InnoDB have problems with indexes this large? The table contained about 8.7 million records at the time, and the indexed columns were chosen carefully.
As a workaround, we converted the table to MyISAM, deleted the obsolete records, removed InnoDB's tablespace, and finally converted the table back to InnoDB. Since then, the whole system has been running smoothly again. However, we will eventually run into this problem again (once the index has grown back to the size mentioned above under "normal" circumstances), and I would appreciate any advice that could help prevent it.
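The workaround steps above can be sketched as follows (the table name is a placeholder; back up the data first, and note that shrinking the shared tablespace in 4.1 requires stopping the server and removing the ibdata/ib_logfile files, since InnoDB never shrinks it in place):

```sql
-- 1. Move the table out of InnoDB so its tablespace pages are freed.
ALTER TABLE big_table ENGINE=MyISAM;
-- 2. Remove the obsolete records (actual condition depends on the schema).
DELETE FROM big_table WHERE /* obsolete-record condition */ 0;
-- 3. Stop mysqld, delete the ibdata* and ib_logfile* files to reclaim
--    the shared tablespace, then restart mysqld so it is recreated.
-- 4. Convert the cleaned-up table back.
ALTER TABLE big_table ENGINE=InnoDB;
```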
Thank you very much and kind regards,
Julien Walther
Hamburg
How to repeat:
Create an InnoDB table that holds 5.2G of data and 1.2G of indexes, configure the server cluster as described above, and generate load by querying that table using all of its indexes with a concurrency of approx. 10 conn./sec.
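A minimal schema for the reproduction table might look like the sketch below; the columns and secondary indexes are assumptions for illustration, since any schema that reaches roughly 5.2G of data and 1.2G of index and is queried through all its indexes should do:

```sql
-- Hypothetical reproduction table; fill with ~8.7 million rows of
-- sufficiently wide payload to reach the data/index sizes above.
CREATE TABLE repro_table (
  id         INT UNSIGNED NOT NULL AUTO_INCREMENT,
  user_id    INT UNSIGNED NOT NULL,
  created_at DATETIME     NOT NULL,
  payload    VARCHAR(255) NOT NULL,
  PRIMARY KEY (id),
  KEY idx_user    (user_id),
  KEY idx_created (created_at)
) ENGINE=InnoDB;
```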