Description:
While trying to generate a benchmark of the server, we found that it used too many resources (the machine was a quad PPC with 4 GB of RAM and max file descriptors set to 1,000,000). Despite the message about running out of memory, the server never used more than ~300 MB. We also tried increasing file descriptors to 10,000,000; it still ran out.
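As a rough sanity check, the legitimate descriptor demand of the partitioned table can be estimated. The files-per-partition count below assumes a MyISAM table (one .MYD plus one .MYI per partition); that assumption and the worst-case cache behavior are illustrative, not taken from the report:

```shell
# Back-of-the-envelope descriptor estimate (assumptions, not measurements):
PARTITIONS=5                 # from PARTITIONS 5 in the schema
FILES_PER_PARTITION=2        # assumed MyISAM: one .MYD + one .MYI per partition
CONCURRENCY=300              # highest mysqlslap concurrency level
# Worst case: each concurrent client holds its own open table instance.
echo $(( PARTITIONS * FILES_PER_PARTITION * CONCURRENCY ))   # prints 3000
```

Even this worst case is three orders of magnitude below the configured limit, which suggests descriptors are being leaked rather than legitimately held.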
How to repeat:
Schema:
CREATE TABLE accesslog (
  id int(10) unsigned NOT NULL,
  method varchar(255) default NULL,
  bytes_sent varchar(255) default NULL,
  protocol varchar(255) default NULL,
  uri varchar(255) default NULL,
  args varchar(255) default NULL,
  hostname varchar(255) default NULL,
  client varchar(255) default NULL,
  Referer varchar(255) default NULL,
  slash_user varchar(255) default NULL,
  e2_node varchar(255) default NULL,
  time datetime NOT NULL default '0000-00-00 00:00:00'
) PARTITION BY HASH(id) PARTITIONS 5;
Slap Run:
~/mysqlds/5.1/bin/mysqlslap --create=create-with-partition.sql \
  --query="INSERT INTO accesslog VALUES (1,'GET','4428','HTTP/1.1','/index.pl','lastnode_id=305&node_id=2802','krow.net','66.249.72.162',NULL,NULL,'Whitaker','2006-08-28 08:32:28')" \
  --concurrency="2,25,50,100,200,300" --csv=partition.csv \
  --number-of-queries=1000 --iterations=10
Suggested fix:
Look at file descriptor usage. The problem appeared between concurrency levels of 200 and 300.
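A minimal sketch for watching descriptor usage while the slap run ramps up; it assumes a Linux-style /proc filesystem and that the server process is named mysqld:

```shell
#!/bin/sh
# Count open file descriptors of a process via /proc (Linux-specific).
count_fds() {
    ls "/proc/$1/fd" 2>/dev/null | wc -l
}

# Sample once per second during the mysqlslap run (interrupt to stop):
# while sleep 1; do echo "$(date +%T) fds=$(count_fds "$(pidof mysqld)")"; done
```

If the count keeps climbing across the 200 and 300 concurrency levels instead of plateauing, that would point at a descriptor leak rather than legitimate use.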