Bug #26026 mysqldump can't dump databases with more than 563 tables
Submitted: 1 Feb 2007 18:57    Modified: 3 Feb 2007 22:45
Reporter: Arkady Volodin
Status: Not a Bug
Impact on me: None
Category: MySQL Server: Backup
Severity: S4 (Feature request)
Version: 4.0.22, and maybe all others
OS: Any (Slamd64-11 Linux-2.6.19.2 EM64T)
Assigned to:
CPU Architecture: Any

[1 Feb 2007 18:57] Arkady Volodin
Description:
I made a test database with 1000 tables.
But when I try to mysqldump it, I get:

mysqldump: Got error: 1105: File 'test000/t564.MYD' not found (Errcode: 24) when using LOCK TABLES

The console mysql program opens table t564 without errors.
Before starting this task I had only 1261 (of max. 343581) open files in my system, so I think this is a bug in mysqldump or in MySQL.

How to repeat:
A bash script called "test" that creates the test000 database:
#!/bin/bash
echo "create database test000 ;" | /usr/local/bin/mysql -pTesT
for a in 0 1 2 3 4 5 6 7 8 9 ; do
  for b in 0 1 2 3 4 5 6 7 8 9 ; do
    for c in 0 1 2 3 4 5 6 7 8 9 ; do
      echo "use test000 ; create table t$a$b$c (i int not null auto_increment, data int, primary key (i)) engine=myisam ;" | /usr/local/bin/mysql -pTesT
    done
  done
done

root@r00t:~/tests/mysql4# ./test
root@r00t:~/tests/mysql4# /usr/local/bin/mysqldump -B test000 --add-drop-database --add-drop-table -pTesT | bzip2 -zc9 > ./dump.bz2
mysqldump: Got error: 1105: File 'test000/t564.MYD' not found (Errcode: 24) when using LOCK TABLES
root@r00t:~/tests/mysql4# echo "use test000 ; select * from t564 ;" | /usr/local/bin/mysql -pTesT
root@r00t:~/tests/mysql4#
[3 Feb 2007 21:00] Sergei Golubchik
This is not really a bug.

mysqldump by default has --lock-tables enabled, which means it tries to lock all tables to be dumped before starting the dump. Doing LOCK TABLES t1, t2, ... for a really big number of tables will inevitably exhaust all available file descriptors, as LOCK TABLES needs all the tables to be open.
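For context: Errcode 24 is the OS error EMFILE, "Too many open files", and the limit that is exhausted is the mysqld process's per-process descriptor limit (capped by the open_files_limit server variable), not the system-wide open-file count the reporter checked. A quick sanity check of the numbers involved (a sketch; the arithmetic assumes MyISAM's two data files, .MYD and .MYI, per locked table):

```shell
# Per-process file-descriptor limit for the current shell;
# mysqld has its own, see the open_files_limit server variable
ulimit -n

# Each locked MyISAM table needs its .MYD and .MYI open, so locking
# 1000 tables needs on the order of 2000 descriptors
echo $(( 1000 * 2 ))
```

With a default per-process limit of 1024 descriptors, the dump failing around table 564 is consistent with this accounting.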

Workarounds: --skip-lock-tables disables such locking completely. Alternatively, --lock-all-tables makes mysqldump use FLUSH TABLES WITH READ LOCK, which locks all tables in all databases (without opening them). In this case mysqldump automatically disables --lock-tables, because it makes no sense when --lock-all-tables is used.
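Spelled out against the test setup from this report (database test000, password TesT), the two workarounds might look like this (these need a running server, so they are only a sketch):

```shell
# Workaround 1: dump without any table locking at all.
# No extra file descriptors, but the dump is not a consistent snapshot.
mysqldump --skip-lock-tables -B test000 -pTesT > dump.sql

# Workaround 2: FLUSH TABLES WITH READ LOCK via --lock-all-tables.
# Consistent and needs no per-table descriptors, but blocks writes
# to every database on the server for the duration of the dump.
mysqldump --lock-all-tables -B test000 -pTesT > dump.sql
```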
[3 Feb 2007 22:45] Arkady Volodin
Ok. Let me understand.
If my database has a big number of tables, I can't back it up without writing my own backup tool?
The situation where I let mysqldump create the dump without table locking is very bad, because it is impossible for me to know a "time border": I can't know which data were backed up as of which moment without inserting a transaction timestamp into every row.
[4 Feb 2007 8:44] Sergei Golubchik
You can use --lock-all-tables, which will lock all tables and won't need to open them. But it will lock all tables in all databases, which may not be desirable in some cases.
[6 Feb 2007 13:54] Kristian Koehntopp
Arkady: The backup command should contain --master-data, which has the side effect of --lock-all-tables anyway. Otherwise the backup will not be consistent and will not be associated with a binlog position, so you'll be unable to do point-in-time recovery.
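Applied to the original command from this report, Kristian's advice might look like the following (a sketch; it assumes binary logging is enabled on the server):

```shell
# --master-data records the binlog coordinates in the dump and implies
# --lock-all-tables, so no per-table LOCK TABLES and no descriptor blow-up;
# the recorded position is what makes point-in-time recovery possible
mysqldump --master-data -B test000 --add-drop-database --add-drop-table -pTesT | bzip2 -zc9 > dump.bz2
```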

Sergei, the behaviour of mysqldump is still not acceptable, as it affects replication and other things in the server. See support case 14316 and bug #26114, which reproduces this behaviour on 5.0.26. Using "mysqldump -d <schema>" on a large schema stopped replication on my server.

This must not happen. I think it is a bug in mysqldump to use that many file handles.
[11 Apr 2007 18:29] MySQL Verification Team
Bug #27763 (http://bugs.mysql.com/bug.php?id=27763) was marked as a duplicate of this one.
[11 Apr 2007 18:56] Alessandro C.
Please fix this problem in the default behaviour of the command.
Thanks
[12 Apr 2007 16:26] Valeriy Kravchuk
Bug #26114 was marked as a duplicate of this one.