Bug #17089 mysqldump will not dump a db with more than 247 tables on Mac OS X
Submitted: 3 Feb 2006 3:58 Modified: 24 Aug 2006 15:16
Reporter: S D
Status: No Feedback
Category: MySQL Server: Backup    Severity: S1 (Critical)
Version: 4.1.15    OS: Mac OS X 10.4.4
Assigned to: Magnus Blåudd    CPU Architecture: Any

[3 Feb 2006 3:58] S D
Description:
mysqldump will not dump a db with more than 247 tables on Mac OS X 10.4.4

When I try to dump the db, mysqldump gives an error and does not produce the dump. The error doesn't seem to be consistent. Sometimes it complains about a missing file while it tries to lock a table. Other times it gives "/usr/local/mysql/bin/mysqldump: /usr/local/mysql/bin/mysqldump: Couldn't execute 'show table status like 't91'': Can't read dir of './test/' (Errcode: 24) (12)".

I've reproduced this on 3 different machines.

Also tried it on 4.1.12 and 5.0.18; both gave errors, though different ones. 4.0.11 does work, but seems to have an index corruption problem for us.

How to repeat:
Create a fresh "test" db.

Run the SQL to create 248 tables (attached to this bug) in the "test" db.

Then use mysqldump to dump the "test" db (a rough equivalent of these steps is sketched below).
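
Roughly, those steps amount to the following shell sketch (hypothetical: the single-column table definition is a placeholder, and the actual attached createTestTables.sql may differ):

mysql -u root -e 'DROP DATABASE IF EXISTS test; CREATE DATABASE test;'
i=1
while [ $i -le 248 ]; do
  mysql -u root test -e "CREATE TABLE t$i (id INT PRIMARY KEY) ENGINE=MyISAM;"
  i=$((i+1))
done
mysqldump -u root test > test.sql   # fails with Errcode 24 on Mac OS X 10.4.4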
[3 Feb 2006 3:59] S D
sql to create 248 tables

Attachment: createTestTables.sql (application/octet-stream, text), 29.93 KiB.

[3 Feb 2006 8:24] Kai Voigt
kai-voigts-powerbook-g4-15:~/hartmut k$ uname -a
Darwin kai-voigts-powerbook-g4-15.local 8.4.0 Darwin Kernel Version 8.4.0: Tue Jan  3 18:22:10 PST 2006; root:xnu-792.6.56.obj~1/RELEASE_PPC Power Macintosh powerpc                                                                            

kai-voigts-powerbook-g4-15:~/hartmut k$ mysqld --version
mysqld  Ver 5.0.18-max for apple-darwin8.2.0 on powerpc (MySQL Community Edition - Experimental (GPL))                                                          

Here, the same problem arises when going from 246 to 247 tables, so it is evidently a matter of the number of available file descriptors.

Script to generate the test case and trigger the bug:

kai-voigts-powerbook-g4-15:~/hartmut k$ cat tables.rb 
print "drop database if exists hartmut;\n"
print "create database hartmut;\n"
print "use hartmut;\n"
(1..247).each do |n|
  print "create table table_#{n} select * from world.country;\n"
end
kai-voigts-powerbook-g4-15:~/hartmut k$ ruby tables.rb|mysql
Enter password: 
kai-voigts-powerbook-g4-15:~/hartmut k$ ktrace mysqldump hartmut > hartmut.dump
Enter password: 
mysqldump: mysqldump: Couldn't execute 'show fields from `table_1`': Can't create/write to file '/var/tmp/#sql_9f_0.MYD' (Errcode: 24) (1)
[3 Feb 2006 8:28] Hartmut Holzgraefe
Error code 24 is "too many open files".

I tried this on Linux, but it seems to be a Mac OS-only problem; even with "ulimit -n 100" for both client and server, the dump worked fine.
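
For reference, the Linux check amounts to something like this (a sketch; the "hartmut" database comes from Kai's script above, and the server was started under the same limit):

ulimit -n 100                      # artificially lower the client's fd limit
mysqldump hartmut > hartmut.dump   # on Linux this still completed without Errcode 24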
[24 Jul 2006 15:16] Magnus Blåudd
Please examine the server variable "open_files_limit" and note that mysqld requires two open files for each MyISAM table (the .MYD data file and the .MYI index file). Locking 248 MyISAM tables at once therefore needs roughly 496 descriptors, before counting temporary files and other handles.

On my machine it looks like this:
mysql> SHOW VARIABLES LIKE 'Open_files_limit';
Variable_name       Value
open_files_limit    1024

From the manual:
 The number of files that the operating system allows mysqld to open. This is the real value allowed by the system and might be different from the value you gave using the --open-files-limit option to mysqld or mysqld_safe. The value is 0 on systems where MySQL can't change the number of open files.

When mysqldump dumps the tables, it will first take a read lock on all the tables in the database. That requires it to open all of them at the same time, so if the number of available open files is low, this kind of error can occur.

To make mysqldump avoid taking the read lock, use the --skip-lock-tables option. I successfully used it to dump more tables than my system had file descriptors for.
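
For example (the database name "test" is just illustrative):

mysqldump --skip-lock-tables test > test.sql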

It should also be possible to keep a smaller number of tables in each database, or to dump only a selected set of tables at a time.
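
For instance, a subset of tables can be named explicitly on the command line (the table names here are hypothetical):

mysqldump test t1 t2 t3 > part1.sql
mysqldump test t4 t5 t6 > part2.sql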

But those are workarounds; the best fix is to increase the number of open files available on the system.
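
A sketch of doing that (the value 4096 is arbitrary, and the exact mechanism on Mac OS X may differ):

ulimit -n 4096                         # raise the per-process open-file limit (may require root)
mysqld_safe --open-files-limit=4096 &  # ask mysqld to allow that many open files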
[24 Jul 2006 15:30] Bugs System
A patch for this bug has been committed. After review, it may
be pushed to the relevant source trees for release in the next
version. You can access the patch from:

  http://lists.mysql.com/commits/9503
[3 Aug 2006 13:57] Magnus Blåudd
Waiting to see whether this problem is reproduced by the added test case.
[24 Aug 2006 23:00] Bugs System
No feedback was provided for this bug for over a month, so it is
being suspended automatically. If you are able to provide the
information that was originally requested, please do so and change
the status of the bug back to "Open".
[20 Feb 2007 9:09] Guy Baconnière
Same issue with MySQL 4.1.20-max-log on Debian Sarge Linux (kernel 2.6.x), on a database containing 7609 tables:

mysqldump --quote-names --add-drop-table --all --quick --lock-tables \
  --disable-keys --force database

database$ find -name \*\.MYI | wc -l
7609

database$ find | wc -l
22829
[20 Feb 2007 9:23] Magnus Blåudd
Did you try any of the hints provided above? 

That is, passing --skip-lock-tables to mysqldump, and checking the "open_files_limit" both in the OS and in mysqld.

If the problem persists, please provide the exact error message.