Bug #39565 Falcon read I/O system is not very efficient
Submitted: 21 Sep 2008 1:58  Modified: 20 Dec 2013 8:18
Reporter: Xuekun Hu
Status: Won't fix
Category: MySQL Server: Falcon storage engine
Severity: S5 (Performance)
Version: 6.0-falcon
OS: Linux (SLES10SP1 (2.6.16.46-0.12-smp))
CPU Architecture: Any
Tags: falcon

[21 Sep 2008 1:58] Xuekun Hu
Description:
When reading a Falcon table, read throughput is only ~1.1 MB/s, versus ~5 MB/s for an InnoDB table.

Increasing falcon_page_size from the default 4KB to 32KB improves read throughput to ~6.6 MB/s.

Using a larger page size reduces disk I/O because each page is likely to hold more records of interest. But consecutive disk sectors do not necessarily contain related data, due to fragmentation and to other tables' data being interspersed.
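A rough back-of-the-envelope model shows why a full scan issues far fewer I/O requests with larger pages. This is illustrative arithmetic, not Falcon code; the 5GB table size matches the DBT3 benchmark below, but the 8 ms per-read seek cost is an assumption:

```python
# Illustrative arithmetic (not Falcon internals): how page size affects
# the number of I/O requests needed to scan a table, assuming each page
# is fetched with a separate seek-dominated read.

TABLE_BYTES = 5 * 1024**3   # 5 GB table, as in the DBT3 benchmark
SEEK_MS = 8.0               # assumed average seek + rotational delay

def scan_reads(page_size):
    """Number of page reads needed for a full table scan."""
    return TABLE_BYTES // page_size

def scan_seconds(page_size):
    """Estimated scan time if every page read costs one seek."""
    return scan_reads(page_size) * SEEK_MS / 1000.0

for page in (4 * 1024, 32 * 1024):
    print(f"{page // 1024:>2} KB pages: {scan_reads(page):>9,} reads, "
          f"~{scan_seconds(page) / 60:.0f} min of seek time")
```

Under this seek-dominated model, 32KB pages need one eighth as many reads as 4KB pages, which is consistent with the direction (if not the exact magnitude) of the throughput difference reported above.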

How to repeat:
Create a big Falcon table and do a full table scan, then check the disk throughput.

I used the DBT3 5GB benchmark. Q9 average execution time (cold run) on Falcon vs. InnoDB with the same query plan: 3698s vs. 1259s.
[11 Nov 2009 1:02] Kevin Lewis
Jim Starkey wrote:
You are aware, I hope, that there are tradeoffs for page size, and an exhaustive scan of a large database is not a typical operation.  Yes, a large page size is the most efficient way to read a large table, but most people use indexes (particularly if a human is waiting for the result), and a large page size reduces the number of pages that fit in the page cache for a given amount of memory, reducing the probability that a particular page will be in cache.
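The cache-side tradeoff can be sketched with equally rough numbers. The 256 MB cache and the 100,000-record hot set are assumptions for illustration, not measurements: if hot records are scattered one per page, a larger page drags mostly cold bytes into the cache alongside each hot record, so less of the hot set fits in the same memory:

```python
# Illustrative sketch (assumed numbers, not Falcon internals): with a
# fixed cache budget, larger pages mean fewer pages fit in the cache.
# If hot records sit one per distinct page, each cached large page is
# mostly cold bytes, so a smaller fraction of the hot set stays cached.

CACHE_BYTES = 256 * 1024**2   # assumed 256 MB page cache
HOT_RECORDS = 100_000         # assumed hot set, one record per page

def cached_fraction(page_size):
    """Fraction of hot records whose page fits in the cache."""
    cached_pages = CACHE_BYTES // page_size
    return min(1.0, cached_pages / HOT_RECORDS)

for page in (4 * 1024, 32 * 1024):
    print(f"{page // 1024:>2} KB pages: "
          f"{cached_fraction(page):.1%} of hot set cached")
```

Under these assumptions the cached fraction of the hot set drops by the same 8x factor that the page size grows, which is the tradeoff the comment describes for index-driven point lookups.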

I suggest that you consider (and weight) various access patterns before you make such bold statements.
[20 Dec 2013 8:18] Erlend Dahl
This project has been abandoned.