| Bug #62486 | Replication Latency (freeing items and reading from relay log) | | |
|---|---|---|---|
| Submitted: | 21 Sep 2011 3:16 | Modified: | 24 Mar 2015 18:43 |
| Reporter: | Jamie Koc | Email Updates: | |
| Status: | No Feedback | Impact on me: | |
| Category: | MySQL Server: Replication | Severity: | S2 (Serious) |
| Version: | 5.1.42, 5.1.59 | OS: | Linux |
| Assigned to: | | CPU Architecture: | Any |
| Tags: | replication, slave | | |
[21 Sep 2011 3:16]
Jamie Koc
[21 Sep 2011 3:46]
Valeriy Kravchuk
Please send the output of `show global status like 'Qcache%';` from the slave that spends a lot of time in the "freeing items" state.
[22 Sep 2011 0:07]
Jamie Koc
```
mysql> show global status like 'Qcache%';
+-------------------------+-------+
| Variable_name           | Value |
+-------------------------+-------+
| Qcache_free_blocks      | 0     |
| Qcache_free_memory      | 0     |
| Qcache_hits             | 0     |
| Qcache_inserts          | 0     |
| Qcache_lowmem_prunes    | 0     |
| Qcache_not_cached       | 0     |
| Qcache_queries_in_cache | 0     |
| Qcache_total_blocks     | 0     |
+-------------------------+-------+
```
[27 Sep 2011 9:44]
Jamie Koc
I tried updating to version 5.1.59-community but see the same issue: "freeing items" is taking 3-5 seconds. What else can I do?
[3 Nov 2011 1:38]
Jamie Koc
I haven't heard back on this for over a month. It is still an issue for me.
[30 Dec 2011 18:55]
Valeriy Kravchuk
If you are still affected, please send the my.cnf from the server, along with the results of SHOW GLOBAL STATUS and SHOW FULL PROCESSLIST taken at the moment when you see the problem.
[5 Jan 2012 5:39]
Jamie Koc
Hi, yes, we are still affected by this problem.

```
mysql> show processlist;
+----+-------------+-----------+-------+---------+-------+----------------------------------+------------------+
| Id | User        | Host      | db    | Command | Time  | State                            | Info             |
+----+-------------+-----------+-------+---------+-------+----------------------------------+------------------+
|  1 | system user |           | NULL  | Connect |   843 | Waiting for master to send event | NULL             |
|  2 | system user |           | NULL  | Connect | 64811 | freeing items                    | NULL             |
|  4 | root        | localhost | NULL  | Sleep   |    22 |                                  | NULL             |
| 13 | root        | localhost | adsum | Query   |     0 | NULL                             | show processlist |
+----+-------------+-----------+-------+---------+-------+----------------------------------+------------------+

mysql> show slave status \G
*************************** 1. row ***************************
               Slave_IO_State: Waiting for master to send event
                  Master_Host: at02.imiclk.local
                  Master_User: repuser
                  Master_Port: 3306
                Connect_Retry: 60
              Master_Log_File: at02-bin.013247
          Read_Master_Log_Pos: 434693755
               Relay_Log_File: as11-relay-bin.000031
                Relay_Log_Pos: 495022201
        Relay_Master_Log_File: at02-bin.013213
             Slave_IO_Running: Yes
            Slave_SQL_Running: Yes
              Replicate_Do_DB:
          Replicate_Ignore_DB:
           Replicate_Do_Table:
       Replicate_Ignore_Table:
      Replicate_Wild_Do_Table:
  Replicate_Wild_Ignore_Table:
                   Last_Errno: 0
                   Last_Error:
                 Skip_Counter: 0
          Exec_Master_Log_Pos: 495022057
              Relay_Log_Space: 37033341456
              Until_Condition: None
               Until_Log_File:
                Until_Log_Pos: 0
           Master_SSL_Allowed: No
           Master_SSL_CA_File:
           Master_SSL_CA_Path:
              Master_SSL_Cert:
            Master_SSL_Cipher:
               Master_SSL_Key:
        Seconds_Behind_Master: 64825
Master_SSL_Verify_Server_Cert: No
                Last_IO_Errno: 0
                Last_IO_Error:
               Last_SQL_Errno: 0
               Last_SQL_Error:
1 row in set (0.00 sec)
```

Here is our my.cnf:

```
[mysqld]
datadir=/var/lib/mysql
socket=/var/lib/mysql/mysql.sock
skip-locking
key_buffer = 2048M
max_allowed_packet = 64M
table_cache = 512
sort_buffer_size = 2048M
read_buffer_size = 64M
myisam_sort_buffer_size = 1024M
thread_cache = 8
thread_concurrency = 4
max_connections = 16384
tmpdir = /data1/mysql/tmp/
max_heap_table_size = 16106127360
tmp_table_size = 1024M
query_cache_wlock_invalidate
query_cache_size = 0
query_cache_type = 0
delay_key_write = OFF
flush
max_write_lock_count = 1
skip-innodb
server-id=99011
report-host=as11
relay-log=as11-relay-bin

[mysql.server]
user=mysql

[safe_mysqld]
err-log=/var/log/mysqld.log
pid-file=/var/run/mysqld/mysqld.pid

[mysqldump]
quick
max_allowed_packet = 64M

[mysql]
no-auto-rehash
max_allowed_packet = 64M

[isamchk]
key_buffer = 256M
sort_buffer_size = 256M
read_buffer = 2M
write_buffer = 2M

[myisamchk]
key_buffer = 1536M
sort_buffer_size = 1024M
read_buffer = 4M
write_buffer = 2M

[mysqlhotcopy]
interactive-timeout
```
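The processlist shows the slave SQL thread (Id 2) sitting in one state for 64811 seconds. As an aside, detecting such stuck threads can be scripted; below is a minimal, hypothetical sketch (not part of the original report) that parses the tab-separated batch output of `mysql -Be "SHOW PROCESSLIST"` and flags threads that have stayed in an active state longer than a threshold:

```python
# Hypothetical helper: flag threads whose Time exceeds a threshold, skipping
# the idle "Waiting for master to send event" state of the slave IO thread.
# Input format assumed: tab-separated batch output of `mysql -Be "SHOW PROCESSLIST"`.

def stuck_threads(processlist_tsv, threshold_seconds=600):
    """Return (id, state, time) tuples for threads stuck beyond the threshold."""
    lines = processlist_tsv.strip().splitlines()
    header = lines[0].split("\t")
    id_i = header.index("Id")
    time_i = header.index("Time")
    state_i = header.index("State")
    flagged = []
    for line in lines[1:]:
        cols = line.split("\t")
        if not cols[time_i].isdigit():  # skip NULL / non-numeric Time values
            continue
        elapsed = int(cols[time_i])
        if elapsed > threshold_seconds and cols[state_i] not in ("", "Waiting for master to send event"):
            flagged.append((cols[id_i], cols[state_i], elapsed))
    return flagged

# Sample rows mirroring the processlist reported above:
sample = (
    "Id\tUser\tHost\tdb\tCommand\tTime\tState\tInfo\n"
    "1\tsystem user\t\tNULL\tConnect\t843\tWaiting for master to send event\tNULL\n"
    "2\tsystem user\t\tNULL\tConnect\t64811\tfreeing items\tNULL\n"
)
print(stuck_threads(sample))  # [('2', 'freeing items', 64811)]
```

Run from cron against live batch output, this kind of check would alert long before Seconds_Behind_Master reaches 64825.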
[14 Jan 2012 12:53]
Sveta Smirnova
Thank you for the feedback. "freeing items" is not an exact process state: it covers several operations, so it is hard to say what exactly causes the problem. I see you have the `flush` option in your configuration file. Try removing it temporarily and check whether the problem stops occurring.
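For concreteness, the suggestion amounts to a my.cnf change like the following sketch (the `flush` server option makes mysqld synchronize all changes to disk after every SQL statement, which can add per-statement latency on a busy slave):

```ini
# Sketch of the suggested change on the slave's my.cnf:
[mysqld]
# flush          <-- commented out temporarily to test whether the
#                    per-statement disk sync is what slows the SQL thread
```

After editing the file, the server must be restarted for the change to take effect.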
[15 Feb 2012 1:00]
Bugs System
No feedback was provided for this bug for over a month, so it is being suspended automatically. If you are able to provide the information that was originally requested, please do so and change the status of the bug back to "Open".
[9 Sep 2013 10:59]
Iaroslav Pleten
I have the same issue with an A->B->C chain; it appears on slave C. After restarting the database engine, everything is fine again.

```
... Reading event from the relay log | DROP TEMPORARY TABLE IF EXISTS NotReplTemp
... freeing items                    | DROP TEMPORARY TABLE IF EXISTS NotReplTemp
```

```
(none)> show global status like 'Qcache%';
+-------------------------+-------+
| Variable_name           | Value |
+-------------------------+-------+
| Qcache_free_blocks      | 0     |
| Qcache_free_memory      | 0     |
| Qcache_hits             | 0     |
| Qcache_inserts          | 0     |
| Qcache_lowmem_prunes    | 0     |
| Qcache_not_cached       | 0     |
| Qcache_queries_in_cache | 0     |
| Qcache_total_blocks     | 0     |
+-------------------------+-------+
8 rows in set (0.00 sec)
```

I also wonder why this query is replicated at all, because we have `replicate-wild-ignore-table=%.NotRepl%` in my.cnf. The mysqld version is 5.1.41-community-log.
[24 Feb 2015 18:43]
Sveta Smirnova
Iaroslav, thank you for the feedback. Please try one of the currently supported versions: 5.5.42 or 5.6.23. If the problem still exists, try to create a repeatable test case, or at least provide as much information as possible: configuration files from all servers, plus the output of SHOW GLOBAL STATUS and SHOW ENGINE INNODB STATUS taken twice, five minutes apart.
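Two snapshots are requested because the counter deltas over the interval, not the absolute values, reveal activity. A minimal, hypothetical sketch (not part of the thread) of diffing two saved `SHOW GLOBAL STATUS` snapshots, assuming each was stored as tab-separated `name<TAB>value` lines:

```python
# Hypothetical helper: diff two SHOW GLOBAL STATUS snapshots taken some
# minutes apart, keeping only the counters that changed in between.

def parse_status(text):
    """Parse tab-separated 'name<TAB>value' lines into a dict."""
    pairs = {}
    for line in text.strip().splitlines():
        name, value = line.split("\t")
        pairs[name] = value
    return pairs

def status_delta(before_text, after_text):
    """Return {counter: numeric_delta} for counters that changed."""
    before = parse_status(before_text)
    after = parse_status(after_text)
    delta = {}
    for name, after_val in after.items():
        before_val = before.get(name)
        if before_val is not None and before_val != after_val:
            try:
                delta[name] = int(after_val) - int(before_val)
            except ValueError:
                delta[name] = (before_val, after_val)  # non-numeric change
    return delta

# Illustrative sample data (made up, five minutes apart):
snap1 = "Com_drop_table\t100\nQuestions\t5000\nUptime\t1000\n"
snap2 = "Com_drop_table\t160\nQuestions\t5400\nUptime\t1300\n"
print(status_delta(snap1, snap2))
# {'Com_drop_table': 60, 'Questions': 400, 'Uptime': 300}
```

A high delta on statement counters such as `Com_drop_table` relative to `Uptime` would, for example, support the DROP TEMPORARY TABLE theory from the previous comment.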
[25 Mar 2015 1:00]
Bugs System
No feedback was provided for this bug for over a month, so it is being suspended automatically. If you are able to provide the information that was originally requested, please do so and change the status of the bug back to "Open".