Bug #68639 FLUSH TABLES .... FOR EXPORT takes ages (making it useless?) on a busy server
Submitted: 11 Mar 2013 17:40    Modified: 29 Jun 2013 12:57
Reporter: Simon Mudd (OCA)
Status: Closed
Category: MySQL Server: Locking    Severity: S3 (Non-critical)
Version: 5.6.10    OS: Any
Assigned to:    CPU Architecture: Any

[11 Mar 2013 17:40] Simon Mudd
Description:
I see the new documentation allows us to copy data from one server to another by "moving the .ibd files". This is a nice, neat new feature.

However, I'm trying to do this on a very busy server: it has a load average of 9 and is pretty much I/O bound. For testing I created a 16,000-row InnoDB table with just an int column and ran this:

root@myserver [mydb]> FLUSH TABLES SIMON_TEST FOR EXPORT;

and have been waiting for the last hour for this to complete.

show processlist shows this:

| 326192 | root | localhost | mydb | Query | 3770 | System lock | FLUSH TABLES SIMON_TEST FOR EXPORT |

and for reference the box has about 119GB of dirty pages. The statement took nearly 2 hours to complete:

root@myserver [mydb]> FLUSH TABLES SIMON_TEST FOR EXPORT;
Query OK, 0 rows affected (1 hour 39 min 14.45 sec)
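
For reference, the dirty-page figure mentioned here can be watched from another session while the statement runs; the standard counters (approximate dirty bytes = pages * innodb_page_size, 16KB by default) are:

SHOW GLOBAL STATUS LIKE 'Innodb_buffer_pool_pages_dirty';
SHOW GLOBAL VARIABLES LIKE 'innodb_page_size';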

but IMO, even on a busy server, flushing a table of this size shouldn't take that long:

root@myserver [mydb]> select * from INFORMATION_SCHEMA.TABLES WHERE TABLE_SCHEMA='mydb' AND TABLE_NAME ='SIMON_TEST'\G
*************************** 1. row ***************************
TABLE_CATALOG: def
TABLE_SCHEMA: mydb
TABLE_NAME: SIMON_TEST
TABLE_TYPE: BASE TABLE
ENGINE: InnoDB
VERSION: 10
ROW_FORMAT: Compact
TABLE_ROWS: 32246
AVG_ROW_LENGTH: 49
DATA_LENGTH: 1589248
MAX_DATA_LENGTH: 0
INDEX_LENGTH: 0
DATA_FREE: 4194304
AUTO_INCREMENT: 32769
CREATE_TIME: 2013-03-11 15:19:29
UPDATE_TIME: NULL
CHECK_TIME: NULL
TABLE_COLLATION: utf8_general_ci
CHECKSUM: NULL
CREATE_OPTIONS: 
TABLE_COMMENT: 
1 row in set (0.00 sec)
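
(The report doesn't include the test table's DDL; a table of roughly this shape might be built with a sketch like the following, where the doubling inserts are my assumption:)

CREATE TABLE SIMON_TEST (i INT NOT NULL AUTO_INCREMENT PRIMARY KEY) ENGINE=InnoDB;
INSERT INTO SIMON_TEST VALUES (NULL);
-- repeat to double the row count each time until ~16,000 rows exist:
INSERT INTO SIMON_TEST SELECT NULL FROM SIMON_TEST;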

See also: 
http://dev.mysql.com/doc/refman/5.6/en/innodb-migration.html (which doesn't mention FLUSH TABLES ... FOR EXPORT and IMO should)
https://dev.mysql.com/doc/refman/5.6/en/flush.html
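
For reference, the export sequence described on the flush.html page is roughly:

FLUSH TABLES SIMON_TEST FOR EXPORT;  -- quiesce the table; InnoDB writes a SIMON_TEST.cfg metadata file
-- copy SIMON_TEST.ibd and SIMON_TEST.cfg from the datadir to the destination server, then:
UNLOCK TABLES;                       -- release the lock; the .cfg file is removed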

How to repeat:
See above. The server was under heavy load (load average ~9) doing heavy writes in parallel.

As a reference from the datadir I see:

[root@myserver data]# find . -type d | wc -l
1635
[root@myserver data]# find . -type f | wc -l
32994

So roughly 33,000 tables across ~1,600 databases.
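
(A rough cross-check against the data dictionary, assuming most of those files are .ibd files:)

SELECT COUNT(DISTINCT TABLE_SCHEMA) AS dbs, COUNT(*) AS tbls
FROM INFORMATION_SCHEMA.TABLES
WHERE ENGINE = 'InnoDB';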

Suggested fix:
Given the command name it seems useful to:
1. flush the pages of this table only (and perhaps start doing so earlier, if possible)
2. not be so affected by other activity on the server that is not touching this table.

For this new feature to be useful, waiting 1h45m to free up a table so it can be copied somewhere else isn't very helpful.
[11 Mar 2013 17:53] MySQL Verification Team
Verified with a simple test: read a small table t1 in a tight loop while trying to flush another small table, t2, for export.
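
A sketch of that scenario (schema and table names illustrative):

-- session 1: keep t1 busy with reads in a tight loop (re-run continuously)
SELECT * FROM test.t1;
-- session 2: meanwhile try to quiesce the other, unrelated table
FLUSH TABLES test.t2 FOR EXPORT;
-- session 2 then sits in state "System lock" in SHOW PROCESSLIST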
[27 Jun 2013 18:14] Simon Mudd
Please close since this is fixed (and reported to me by Oracle).

http://dev.mysql.com/doc/relnotes/mysql/5.6/en/news-5-6-12.html
"InnoDB: FLUSH TABLES FOR EXPORT would sleep too often while flushing pages from buffer pools. (Bug #16471701)"
[5 Sep 2013 17:24] Charles Peterson
Seems to be an issue again on 5.6.13.

2629 seconds+++ 	System lock FLUSH TABLES tmp.tmp FOR EXPORT
[26 Sep 2013 12:50] Russ Huguley
I am seeing the same thing on 5.6.13.

It seemed to hang for at least 5 hours on a FLUSH TABLES .... FOR EXPORT. The table was around 5GB. Nothing else was accessing the table while this was going on, the only activity on the system was replication, and there was very little disk activity as well:
sdb is the mysql data partition:

Device:         rrqm/s   wrqm/s     r/s     w/s    rkB/s    wkB/s avgrq-sz avgqu-sz   await  svctm  %util
sda               0.00     0.30    0.00    0.20     0.00     2.00    20.00     0.00    3.50   2.50   0.05
sdb               0.13    32.93   12.00   13.77   191.20   186.80    29.34     0.17    6.45   1.83   4.72
[26 Sep 2013 21:44] Russ Huguley
Just in case anyone runs across this looking for an answer as to why it still locks up:
if you have replication running, do a STOP SLAVE; the FLUSH TABLES then completes in about a minute and a half. If other I/O is hitting the server, I am guessing it will still take forever.
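
In other words, something like this (table name is a placeholder):

STOP SLAVE;
FLUSH TABLES mydb.mytable FOR EXPORT;
-- copy the .ibd and .cfg files from the datadir, then:
UNLOCK TABLES;
START SLAVE;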
[7 Jul 2014 23:20] jason` smitt
If anyone ran into this and is curious what impact it will have on your server: I found that a 200GB table on a busy server took roughly 24 hours to complete. It tied up one of the cores but generated little or no I/O, so I felt it was safe to leave it running to completion.
[22 May 2015 16:40] Vincent Meng
This bug has been reported again in #77011.
[22 May 2015 16:43] Vincent Meng
To resolve the system lock, try setting the change buffer size to 0 temporarily:

`set global innodb_change_buffer_max_size = 0`.

At the same time you can monitor the change buffer size; when it clears, the system lock will be gone:

`show status like 'Innodb_ibuf_size'`
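
Put together, the workaround might look like this (25 is the documented default for innodb_change_buffer_max_size; the table name is a placeholder):

SET GLOBAL innodb_change_buffer_max_size = 0;   -- stop buffering new changes so the buffer can drain
SHOW STATUS LIKE 'Innodb_ibuf_size';            -- poll until this clears
FLUSH TABLES mydb.mytable FOR EXPORT;
-- copy the .ibd and .cfg files, then:
UNLOCK TABLES;
SET GLOBAL innodb_change_buffer_max_size = 25;  -- restore the default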