Bug #14737 After "FLUSH TABLES WITH READ LOCK", Copying InnoDB files is really no problem?
Submitted: 8 Nov 2005 8:35    Modified: 8 Nov 2005 9:53
Reporter: Yoshiaki Tajika (Basic Quality Contributor)
Status: Not a Bug    Impact on me: None
Category: MySQL Server    Severity: S3 (Non-critical)
Version: 5.0.15    OS: Linux
Assigned to:    CPU Architecture: Any

[8 Nov 2005 8:35] Yoshiaki Tajika
Description:
The Reference Manual says the following in section 6.4, How to Set Up Replication:

> For InnoDB tables, note the following: FLUSH TABLES WITH READ LOCK also 
> blocks COMMIT operations. When you have acquired a global read lock, 
> you can start a filesystem snapshot of your InnoDB tables. 
> Internally (inside the InnoDB storage engine) the snapshot won't be 
> consistent (because the InnoDB caches are not flushed), but this is not a 
> cause for concern, because InnoDB resolves this at startup and delivers a 
> consistent result. This means that InnoDB can perform crash recovery when 
> started on this snapshot, without corruption. However, there is no way to stop 
> the MySQL server while insuring a consistent snapshot of your InnoDB tables. 
>
> Leave running the client from which you issue the FLUSH TABLES statement so 
> that the read lock remains in effect. (If you exit the client, the lock is released.) 
> Then take a snapshot of the data on your master server. 
>
> The easiest way to create a snapshot is to use an archiving program to make a
> binary backup of the databases in your master's data directory. For example, use 
> tar on Unix, ...
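The quoted procedure can be sketched as a small shell helper. This is only an illustration of the manual's steps; the paths and the function name are assumptions, not commands from the manual:

```shell
# Minimal sketch of the manual's snapshot step. It assumes the global read
# lock is already held by a separate, still-open mysql client session:
#   mysql> FLUSH TABLES WITH READ LOCK;   -- keep this client running
# snapshot_datadir archives a data directory with tar, as the manual suggests.
snapshot_datadir() {
  datadir="$1"    # e.g. /usr/local/mysql/var (assumed path)
  archive="$2"    # output tar file, written outside the data directory
  tar -cf "$archive" -C "$datadir" .
}
```

Once the archive is written, issuing UNLOCK TABLES (or closing the client) releases the lock.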

But I am still suspicious of copying (archiving) InnoDB files to back them up, for two reasons.

1. The tar command archives InnoDB files sequentially, not simultaneously,
so each file is read at a different point in time, and there is no way to
tell which of ibdata1, ibdata2, ..., ibdataN, ib_logfile1, ..., ib_logfileN
was captured earlier or later. Can you really guarantee that InnoDB crash
recovery works correctly with such a set of files?

2. Even after you execute "FLUSH TABLES WITH READ LOCK",
I/O on InnoDB files does not stop. As far as I know, an internal thread
keeps running to free unused space, and there may be other I/O
I am not aware of. Under these conditions, can you copy (or archive) each
file without corruption, and guarantee that InnoDB starts up correctly
from the result?
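One way to make the concern in point 2 concrete is to checksum a file before and after copying it; a mismatch proves the file changed while being read, which is the same condition tar detects. This helper is hypothetical and is not part of the attached scripts:

```shell
# copy_checked copies a file and verifies it did not change while being read.
# A non-zero exit corresponds to tar's "file changed as we read it" warning.
copy_checked() {
  src="$1"; dst="$2"
  before=$(md5sum "$src" | cut -d' ' -f1)   # checksum before the copy
  cp "$src" "$dst"
  after=$(md5sum "$src" | cut -d' ' -f1)    # checksum after the copy
  [ "$before" = "$after" ]                  # fails if the file was written to
}
```

On a quiescent file the function succeeds; run against a live ibdata file while the background thread is purging, it would be expected to fail intermittently.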

Below is an example in which the tar command reports errors.

How to repeat:
The scripts and my.cnf are attached below.

1. Using mysql client tool,

mysql> source prepare.sql;  /* insert many records to innodb table */
mysql> source dellock.sql;  /* delete all records, and lock */

At this point all the tables are locked, and you can copy the InnoDB files
according to the Reference Manual. But you will notice that the HDD access
LED is still blinking, because an InnoDB internal thread is still working.
Proceed quickly to the next step before it stops.

2. Using Linux shell prompt, 

# sh backup.sh

3. Then you'll see errors like below.

ibdata1
tar: ibdata1: file changed as we read it
ibdata2
tar: ibdata2: file changed as we read it
ibdata3
ibdata4
tar: ibdata4: file changed as we read it
ib_logfile0
ib_logfile1
tar: ib_logfile1: file changed as we read it
tar: Error exit delayed from previous errors

4. I restored the InnoDB files from the archive and started mysqld_safe.
InnoDB crash recovery worked, but I am not sure it will always succeed.
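The restore test in step 4 can be sketched as follows. The helper name and paths are assumptions based on this report, not an official procedure:

```shell
# restore_snapshot unpacks the tar archive back into a data directory.
# Afterwards, starting the server (e.g. mysqld_safe &) makes InnoDB replay
# ib_logfile* against ibdata* -- the crash recovery described in step 4.
restore_snapshot() {
  archive="$1"; datadir="$2"
  tar -xf "$archive" -C "$datadir"
}
# Example (server must be stopped first; paths are placeholders):
#   restore_snapshot /tmp/jb.tar /usr/local/mysql/var
#   mysqld_safe &
```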

---- prepare.sql ----------------------
/*
You need enough records in the InnoDB table that freeing
unused space in the InnoDB tablespace takes a long time
even after the DELETE statement completes. On my PC,
262144 records are enough.
To make it easy to repeat this test many times, the initial
262144 records are also kept in a MyISAM table; after you
delete all the InnoDB rows, you can restore them from the
MyISAM table.
*/
use test;
drop table if exists t2;
create table t2(
  c0 int primary key auto_increment,
  c1 char(255) not null,
  c2 char(255) not null,
  c3 char(255) not null,
  c4 char(255) not null
) engine=myisam;
insert into t2(c1,c2,c3,c4) values(1,1,1,1);
/* each of the following 18 statements doubles the row count: 2^18 = 262144 */
insert into t2(c1,c2,c3,c4) select 1,1,1,1 from t2;
insert into t2(c1,c2,c3,c4) select 1,1,1,1 from t2;
insert into t2(c1,c2,c3,c4) select 1,1,1,1 from t2;
insert into t2(c1,c2,c3,c4) select 1,1,1,1 from t2;
insert into t2(c1,c2,c3,c4) select 1,1,1,1 from t2;
insert into t2(c1,c2,c3,c4) select 1,1,1,1 from t2;
insert into t2(c1,c2,c3,c4) select 1,1,1,1 from t2;
insert into t2(c1,c2,c3,c4) select 1,1,1,1 from t2;
insert into t2(c1,c2,c3,c4) select 1,1,1,1 from t2;
insert into t2(c1,c2,c3,c4) select 1,1,1,1 from t2;
insert into t2(c1,c2,c3,c4) select 1,1,1,1 from t2;
insert into t2(c1,c2,c3,c4) select 1,1,1,1 from t2;
insert into t2(c1,c2,c3,c4) select 1,1,1,1 from t2;
insert into t2(c1,c2,c3,c4) select 1,1,1,1 from t2;
insert into t2(c1,c2,c3,c4) select 1,1,1,1 from t2;
insert into t2(c1,c2,c3,c4) select 1,1,1,1 from t2;
insert into t2(c1,c2,c3,c4) select 1,1,1,1 from t2;
insert into t2(c1,c2,c3,c4) select 1,1,1,1 from t2;
drop table if exists t1;
create table t1 like t2;
alter table t1 engine=innodb;
insert into t1 select * from t2;

---- dellock.sql ----------------------
use test;
delete from t1;
flush tables with read lock;

---- reset.sql ----------------------
use test;
insert into t1 select * from t2;

---- backup.sh ----------------------
cd /usr/local/mysql/var || exit 1
tar -cvf jb.tar ibdata? ib_logfile?

----- my.cnf ------------------------
[mysqld]
datadir = /usr/local/mysql/var
log-bin
innodb_data_file_path = ibdata1:50M;ibdata2:50M;ibdata3:50M;ibdata4:50M:autoextend
innodb_log_file_size = 200M
user=mysql
innodb_data_home_dir = /usr/local/mysql/var
innodb_log_group_home_dir = /usr/local/mysql/var

----- end ----------------------------

Suggested fix:
If the answer is "Yes, the above operation is a safe way to back up InnoDB files",
then I have nothing to suggest.
If the answer is "The above operation might cause problems", please revise the Reference Manual.
[8 Nov 2005 9:53] Hartmut Holzgraefe
We're sorry, but the bug system is not the appropriate forum for 
asking help on using MySQL products. Your problem is not the result 
of a bug.

Support on using our products is available both free in our forums
at http://forums.mysql.com and for a reasonable fee direct from our
skilled support engineers at http://www.mysql.com/support/

Thank you for your interest in MySQL.