Bug #56550 Slow performance on Mac OSX Snow Leopard With Table Drop and Creation
Submitted: 3 Sep 2010 18:51
Modified: 24 Jul 2011 16:43
Reporter: Andrew Werhane
Status: Not a Bug
Impact on me: None
Category: MySQL Server: General
Severity: S5 (Performance)
Version: 5.1.37
OS: MacOS (Snow Leopard 10.6)
Assigned to:
CPU Architecture: Any
Tags: CREATE TABLE, drop table, mac osx, mysqldump, performance, snow leopard

[3 Sep 2010 18:51] Andrew Werhane
Description:
http://forums.mysql.com/read.php?117,383828,383901

Replacing a schema on OS X is dramatically slower than the same operation on a CentOS Linux system. Performance for selects/inserts seems on par with the Linux system, but table changes run much slower. Since we use OS X as a development system and reload a sample schema for unit tests, this process runs repeatedly throughout the day.

Example -- 3 seconds vs 58 seconds.

time cat mysqlPerformanceTest2.sql | mysql -uroot -hLinuxSystem

real	0m3.130s
user	0m0.015s
sys	0m0.031s

time cat mysqlPerformanceTest2.sql | mysql -uroot  -hMacOSXSystem

real	0m58.004s
user	0m0.022s
sys	0m0.040s

How to repeat:
Create an SQL file with several DROP TABLE and CREATE TABLE statements as well as foreign key creation. Run it against a Mac OS X 10.6 system, then compare the time against a Linux system.
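A minimal sketch of what such a test file might contain (the table names, the count, and the target database are made up for illustration; the real file is the attached mysqlPerformanceTest2.sql):

```shell
#!/bin/sh
# Generate a hypothetical schema-churn test file: repeated DROP/CREATE
# pairs plus a foreign key, which is the pattern this report exercises.
OUT=perftest.sql
: > "$OUT"
i=1
while [ "$i" -le 50 ]; do
  cat >> "$OUT" <<EOF
DROP TABLE IF EXISTS child_$i;
DROP TABLE IF EXISTS parent_$i;
CREATE TABLE parent_$i (id INT PRIMARY KEY) ENGINE=InnoDB;
CREATE TABLE child_$i (
  id INT PRIMARY KEY,
  parent_id INT,
  FOREIGN KEY (parent_id) REFERENCES parent_$i (id)
) ENGINE=InnoDB;
EOF
  i=$((i + 1))
done
echo "wrote $(grep -c 'CREATE TABLE' "$OUT") CREATE TABLE statements"
# Then time it against each host, e.g.:
#   time mysql -uroot -hSomeHost test < perftest.sql
```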
[3 Sep 2010 18:52] Andrew Werhane
Test File to Run against Mac and Linux boxes

Attachment: mysqlPerformanceTest2.sql (application/octet-stream, text), 65.50 KiB.

[3 Sep 2010 18:56] Andrew Werhane
Adding additional tags
[3 Sep 2010 19:18] Valeriy Kravchuk
First of all, please check whether the same problem happens with a recent version, 5.1.50. What exact filesystems do you use on Mac OS X and on Linux to store MySQL's data directory?
[3 Sep 2010 21:39] Andrew Werhane
Mac Operating System: 10.6.4
Mac OSX Partition Format: Mac OS Extended (Journaled)
Linux Partition: ext3

I witnessed the same behavior with the latest 5.1.50 release.

Steps to reproduce:

Removed entire /usr/local/mysql* directory
Removed /etc/my.cnf

Installed mysql-5.1.50-osx10.6-x86_64

time cat mysqlPerformanceTest2.sql | mysql -uroot -hLINUX
real	0m3.996s
user	0m0.020s
sys	0m0.044s

time cat mysqlPerformanceTest2.sql | mysql -uroot -hMACINTOSH
real	0m54.872s
user	0m0.031s
sys	0m0.057s
[4 Sep 2010 8:14] Valeriy Kravchuk
Please send the my.cnf file content from both systems.

Also, you create and drop a lot of tables, which means creating many small .frm files. Please test how fast each filesystem is at creating a lot of small files. Use a shell script or a utility like sysbench for this. Mac's HFS+ filesystem is known to be not that fast...
[6 Sep 2010 6:45] Andrew Werhane
I wrote a shell script to simulate creating and deleting files in rapid succession:

----------------

#!/bin/bash
# note: the {0..30000} brace expansion needs bash, not plain POSIX sh
for i in {0..30000}
do
  echo "Hello" > file.txt
  echo "World" > file2.txt
  rm file.txt
  rm file2.txt
done
--------------

Mac Times:
real	2m9.819s
user	0m19.746s
sys	1m43.297s

Linux Times:
real	1m29.362s
user	0m13.946s
sys	1m10.373s

The Linux machine was faster at the filesystem test, which might account for some of the time discrepancy, but there still seems to be a much more significant gap in the schema test (again, meant to simulate dropping and creating tables for a development test environment).

Linux my.cnf
-----------------
[mysqld]
default_time_zone=UTC
default-character-set=utf8
transaction-isolation=READ-COMMITTED
sql-mode=NO_UNSIGNED_SUBTRACTION

server-id = 1234

# Replication
log-bin=/var/lib/mysql/mysql-bin
binlog-format=ROW

# Uncomment the following if you are using InnoDB tables
innodb_data_home_dir = /var/lib/mysql
#innodb_data_file_path = ibdata1:10M:autoextend
innodb_log_group_home_dir = /var/lib/mysql
# You can set .._buffer_pool_size up to 50 - 80 %
# of RAM but beware of setting memory usage too high
innodb_buffer_pool_size = 256M
innodb_additional_mem_pool_size = 20M
# Set .._log_file_size to 25 % of buffer pool size
innodb_log_file_size = 64M
innodb_log_buffer_size = 8M
innodb_flush_log_at_trx_commit = 1
#innodb_lock_wait_timeout = 50
innodb_lock_wait_timeout = 50000

[client]
default-character-set=utf8

==========================
Mac my.cnf
==========================
[mysqld]
#default_time_zone=UTC
default-character-set=utf8
transaction-isolation=READ-COMMITTED
sql-mode=NO_UNSIGNED_SUBTRACTION

server-id = 123

# Replication
#log-bin=/var/lib/mysql/mysql-bin
#binlog-format=ROW

# Uncomment the following if you are using InnoDB tables
innodb_data_home_dir = /usr/local/mysql/data
#innodb_data_file_path = ibdata1:10M:autoextend
innodb_log_group_home_dir = /usr/local/mysql/data
# You can set .._buffer_pool_size up to 50 - 80 %
# of RAM but beware of setting memory usage too high
innodb_buffer_pool_size = 256M
innodb_additional_mem_pool_size = 20M
# Set .._log_file_size to 25 % of buffer pool size
innodb_log_file_size = 64M
innodb_log_buffer_size = 8M
innodb_flush_log_at_trx_commit = 1
#innodb_lock_wait_timeout = 50
innodb_lock_wait_timeout = 50000

[client]
default-character-set=utf8
[7 Sep 2010 9:10] Valeriy Kravchuk
As you do not use innodb_file_per_table, a more realistic test would be to write 16K blocks at random locations in a big file, or to make a big file grow constantly in 16K blocks (the default InnoDB page size). You can try the sysbench (http://sysbench.sourceforge.net/) fileio benchmark for this.
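A command-line sketch of the suggested benchmark, using sysbench 0.4-style fileio options (the file size and option values here are illustrative, not recommended settings):

```shell
# prepare the test files, then do random 16K writes, then clean up
sysbench --test=fileio --file-total-size=1G prepare
sysbench --test=fileio --file-total-size=1G \
         --file-test-mode=rndwr --file-block-size=16384 run
sysbench --test=fileio --file-total-size=1G cleanup
```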
[7 Sep 2010 17:34] Andrew Werhane
I have previously toggled the "innodb_file_per_table" option back and forth while trying to investigate this issue and have not noticed a significant change either way.

Has anyone else been able to replicate the slow performance with this test or is it just me?  I've seen the same thing on multiple OSX installs, so I'm curious if it's just something specific to our my.cnf configuration.

Thanks.
[15 Sep 2010 20:38] Josh Butts
I can confirm this behavior on 5.1.50 and Mac OS 10.6.4.
[22 Sep 2010 17:22] Richard Fogle
I can confirm this as well.
[22 Sep 2010 17:23] Richard Fogle
Sorry, I'm on 5.1.50 and 10.6.4 too.
[2 Oct 2010 19:22] Vito Botta
Glad I found this... I thought it was just me.
Same issues on 10.6 with the latest versions of MySQL, installed either via ports or with the package for Mac OS X. I've also tried MAMP and all sorts of configuration settings, but it's still slow.
An example: restoring a dump of a database with a few tables of about 2.5+ million rows each takes around 300 minutes on my MBP (2010 edition with a 7200 rpm drive), while it takes only a fraction of that time (I can't remember exactly... perhaps 20-30 minutes) on an Ubuntu Linux virtual machine.

I did the dumps with the --opt option and the restores with

time (
echo "SET AUTOCOMMIT=0;"
echo "SET UNIQUE_CHECKS=0;"
echo "SET FOREIGN_KEY_CHECKS=0;"
cat dump.sql
echo "SET FOREIGN_KEY_CHECKS=1;"
echo "SET UNIQUE_CHECKS=1;"
echo "SET AUTOCOMMIT=1;"
echo "COMMIT;"
) | mysql database_name

Any progress or suggestions on this? I need to test some Rails migrations on that data (I can't test on smaller datasets) and it's a pain like this.
I am writing these migrations so that I:

- create an empty table first with the same schema as the table I need to upgrade (using the LIKE syntax)
- make schema changes on this new table
- set the auto_increment to start from where it should
- rename the tables to use the new empty one
- import the rows from the old table into the new one

But however I do this, it's really slow and not easy :(
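The steps above correspond roughly to this SQL (a sketch only; the `orders` table, its columns, and the AUTO_INCREMENT value are hypothetical):

```sql
CREATE TABLE orders_new LIKE orders;
ALTER TABLE orders_new ADD COLUMN note VARCHAR(255);
-- continue numbering from where the old table left off
ALTER TABLE orders_new AUTO_INCREMENT = 2500001;
RENAME TABLE orders TO orders_old, orders_new TO orders;
INSERT INTO orders (id, total) SELECT id, total FROM orders_old;
```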
[26 Oct 2010 14:57] Damien Trog
I experience the same problem. To work around the bad performance when running unit tests that rebuild the schema every time, I run MySQL on a RAM disk. The performance increase is about five-fold on the Mac. Running Linux on the same machine, I don't need a RAM disk, since it's almost as fast.

So there is definitely something wrong with disk access in MySQL on Mac OS X.
[9 Nov 2010 7:24] Daniel Fischer
This is caused by sync calls after creating an .frm file. If you want to play roulette, you can disable this with the --skip-sync-frm option.

The reason it's slower on Mac OS X than on Linux is that fcntl(F_FULLFSYNC) is available on Mac OS X, and mysqld prefers this call to fsync(). The difference is that fsync() only flushes data to the disk (on both Linux and Mac OS X), while fcntl(F_FULLFSYNC) additionally asks the disk to flush its own buffers and blocks until the data is physically written to the disk.

In a nutshell, it's slower because it's safer.
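For reference, the option Daniel mentions can also go in the option file (a sketch; development machines only, since an OS crash can then leave table definition files unsynced):

```ini
[mysqld]
# development only: do not fsync .frm files after DDL statements
skip-sync-frm
```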
[9 Nov 2010 19:33] Andrew Werhane
@Daniel Fischer -- Thanks for the tip!!!  I'm constantly replacing this database locally for development purposes only and this is exactly what I needed.

I added skip-sync-frm=OFF to my my.cnf file and now the process that was taking upwards of 5 minutes per cycle is down to 30 seconds.

I will NOT be using this for production environments, but I'm very psyched to be able to speed up my local test system.
[24 Jul 2011 16:43] Valeriy Kravchuk
So we can conclude that this is not a bug, but a consequence of how filesystem syncing works on Mac OS X. There is also a workaround.