Bug #36442 Falcon: crash during optimize table
Submitted: 30 Apr 2008 21:51 Modified: 15 May 2009 13:01
Reporter: Peter Gulutzan
Status: Closed
Category: MySQL Server: Falcon storage engine
Severity: S3 (Non-critical)
Version: 6.0.6-alpha-debug
OS: Linux (SUSE 10 | 32-bit)
Assigned to: Christopher Powers
Triage: Triaged: D1 (Critical)

[30 Apr 2008 21:51] Peter Gulutzan
I create a Falcon table.
I insert and update 400,000 rows.
I run OPTIMIZE TABLE and the server crashes.

How to repeat:
/* This will take a while, there are 400,000 rows. */

delimiter //
set @@autocommit=0//
drop table if exists t//
drop procedure if exists p//
create procedure p ()
begin
  declare v int default 0;
  declare r int default 0;
  declare c varchar(100) character set utf32;
  declare c2 varchar(1) character set utf32;
  declare ch char(8);
  declare continue handler for sqlexception begin end;
  create table t (s1 int, s2 varchar(100) character set utf32, s3 text character set utf8) engine=falcon;
  /* create index i1 on t (s2, s1); */
  create index i2 on t (s1);
  create index i3 on t (s2);
  while v < 400000 do
    insert into t values (v,null,null);
    set ch = hex(v);
    while length(ch) < 8 do set ch = concat('0',ch); end while;
    set @x = concat('update t set s2 = repeat(0x',ch,',100) where s1 = ',v-1);
    if v mod 1000 = 0 then select v,@x; end if;
    prepare stmt1 from @x;
    execute stmt1;
    update t set s3 = s2 where s1 = v - 2;
    set v = v + 1;
    if v mod 100 = 0 then commit; end if;
  end while;
end//
delimiter ;
call p();
alter table t modify column s3 varchar(100) CHARACTER SET utf32;
optimize table t;
[1 May 2008 1:58] Miguel Solorzano
Thank you for the bug report.

080430 22:47:48 [Note] 6.0\bin\mysqld: ready for connections.
Version: '6.0.6-alpha-nt'  socket: ''  port: 3306  Source distribution
080430 22:55:50 - mysqld got exception 0xe06d7363 ;
This could be because you hit a bug. It is also possible that this binary
or one of the libraries it was linked against is corrupt, improperly built,
or misconfigured. This error can also be caused by malfunctioning hardware.
We will try our best to scrape up some info that will hopefully help diagnose
the problem, but since we have already crashed, something is definitely wrong
and this may fail.

It is possible that mysqld could use up to
key_buffer_size + (read_buffer_size + sort_buffer_size)*max_threads = 337727 K
bytes of memory
Hope that's ok; if not, decrease some variables in the equation.

thd: 0x1b6d930
Attempting backtrace. You can use the following information to find out
where mysqld died. If you see no messages after this, something went
terribly wrong...
7C812A5B    kernel32.dll!RaiseException()
0077A472    mysqld.exe!_CxxThrowException()
006E5FFF    mysqld.exe!MemMgr::allocRaw()[memmgr.cpp:811]
006E6E13    mysqld.exe!MemMgr::alloc()[memmgr.cpp:464]
006E6F6B    mysqld.exe!MemMgr::allocate()[memmgr.cpp:506]
006E72EE    mysqld.exe!MemMgrPoolAllocate()[memmgr.cpp:161]
006EEF43    mysqld.exe!Record::allocRecordData()[record.cpp:926]
006EF707    mysqld.exe!Record::setEncodedRecord()[record.cpp:702]
006EF92F    mysqld.exe!Record::Record()[record.cpp:97]
006F1E96    mysqld.exe!Table::allocRecord()[table.cpp:3649]
006F2847    mysqld.exe!Table::databaseFetch()[table.cpp:663]
006F4ECF    mysqld.exe!Table::fetchNext()[table.cpp:506]
006F6066    mysqld.exe!Table::optimize()[table.cpp:3578]
006DF3A2    mysqld.exe!StorageTable::optimize()[storagetable.cpp:577]
006DDA01    mysqld.exe!StorageInterface::optimize()[ha_falcon.cpp:686]
00422B3C    mysqld.exe!handler::ha_optimize()[handler.cc:3071]
0054FB84    mysqld.exe!mysql_admin_table()[sql_table.cc:4287]
00550299    mysqld.exe!mysql_optimize_table()[sql_table.cc:4525]
00502C4A    mysqld.exe!mysql_execute_command()[sql_parse.cc:2794]
00506441    mysqld.exe!mysql_parse()[sql_parse.cc:5752]
00506D63    mysqld.exe!dispatch_command()[sql_parse.cc:1047]
005079C7    mysqld.exe!do_command()[sql_parse.cc:726]
00579379    mysqld.exe!handle_one_connection()[sql_connect.cc:1134]
005DED3B    mysqld.exe!pthread_start()[my_winthread.c:86]
0077619C    mysqld.exe!_threadstart()[thread.c:196]
7C80B683    kernel32.dll!GetModuleFileNameA()
Trying to get some variables.
Some pointers may be invalid and cause the dump to abort...
thd->query at 059F1990=optimize table t
The manual page at http://dev.mysql.com/doc/mysql/en/crashing.html contains
information that should help you find out what is causing the crash.

[1 Feb 2009 9:08] Christopher Powers
A fix for this bug was included in a patch for a related bug:


2992 Christopher Powers	2009-02-01
      Bug #42424, "Serious performance degradation after new scavenger"
      Increased scavenger wait increment from 10 to 20ms.
      Use progressive scavenger wait timeout for record allocation methods.
[1 Feb 2009 9:40] Christopher Powers
Several factors contributed to the failure of the testcase in this bug.

1. Record cache overflow.

The primary issue with this bug is that OPTIMIZE TABLE overwhelms the Falcon record cache faster than the record scavenger can reclaim memory. This is true even for very large (>1GB) cache sizes.

The Falcon scavenger has been reworked and is now more responsive; however, the fixes in Bug#42424 further improve the scavenger response and are necessary for the testcase in this bug to complete successfully.

2. Table::optimize() is very inefficient.

Table::optimize() refreshes the record and index cardinality for a table, but does so very inefficiently.

The record cardinality is determined by fetching every record in the table. Each fetch loads a record into the record cache, where it remains unscavengeable (useCount > 1) until the transaction completes.

This is easily corrected by calling release() for each record fetched.

The index cardinality is computed by Index::optimize(), which executes a SELECT for each index. The SELECT fetches each record in the index; if the table is large enough, this will overflow the record cache very quickly.

The fix for this is more complex. Index::optimize() must be rewritten to use the IndexWalker class, which obviates the need to fetch any records. However, each segment of the index must be enumerated separately; simply instantiating an IndexWalker and calling ::getNextNode() is not sufficient.

Index::optimize() has been disabled until it is rewritten using the IndexWalker.

3. The default Falcon memory parameters must be changed.

The default Falcon record cache size is 250MB, but the default page cache size is only 4MB, which is much too small for memory-intensive operations such as the testcase in this bug. Increasing the page cache size from 4MB to 250MB or even 500MB reduces the test run time from hours to minutes.

This is addressed in Bug #42510, "Falcon: Default memory parameters must be changed".
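Until the new defaults landed, the practical workaround was to raise the page cache at server startup. A my.cnf sketch reflecting the sizes discussed above (values illustrative, matching the figures in this report, not a mandated configuration):

```
[mysqld]
# Falcon page cache: the old default of 4MB is far too small for
# OPTIMIZE TABLE on large tables; 250M-500M makes the testcase in
# this bug finish in minutes instead of hours.
falcon_page_cache_size = 250M

# Record cache (shown for context; 250MB was already the default).
falcon_record_memory_max = 250M
```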

4. The record backlog facility is never enabled by the Scavenger.

The record backlog is a fallback mechanism triggered by a low-memory condition. The new scavenger did not properly enable record backlogging. This has been corrected in Bug #42505, "Falcon: Record backlogging not enabled by the Scavenger".
[1 Feb 2009 9:53] Bugs System
A patch for this bug has been committed. After review, it may
be pushed to the relevant source trees for release in the next
version. You can access the patch from:


2993 Christopher Powers	2009-02-01
      Bug #42510, "Falcon: Default memory parameters must be changed"
      Bug #36442, "Falcon: crash during optimize table"
      Increased default page cache size from 4MB to 250MB
      Increased record scavenge threshold from 67 to 90
      Increased record scavenge floor from 50 to 80
[2 Feb 2009 7:00] Kevin Lewis
I agree that a 250MB page cache is the most appropriate setting when someone is using Falcon. But Vlad has pointed out that it increases the amount of memory that MySQL 6.0 initializes with by 246MB. This will be a major inconvenience to any MySQL customer that does not plan to use Falcon. Is there any way to make Falcon allocate this page cache in smaller segments as needed? The original default of 4MB never really got in anyone's way, but 250MB will be noticed.
[2 Feb 2009 14:36] Kevin Lewis
After reconsideration, I think we can go ahead and use this setting. If a customer is not interested in using Falcon, they can use --loose-skip-falcon at startup, so this memory use is still optional. If customers are using Falcon, the 'memory hog', then I say 'this is what we recommend; if you don't like it, change it'.

That said, it would still be less intrusive if the page cache was allocated as needed.
[3 Feb 2009 18:13] Bugs System
A patch for this bug has been committed. After review, it may
be pushed to the relevant source trees for release in the next
version. You can access the patch from:


2999 Christopher Powers	2009-02-03
      Bug #36442, "Falcon: crash during optimize table"
      Refined Table::optimize() to release each base record.
      Records having newer record versions were not being released.
[10 Feb 2009 22:11] Bugs System
A patch for this bug has been committed. After review, it may
be pushed to the relevant source trees for release in the next
version. You can access the patch from:


3012 Christopher Powers	2009-02-10
      Bug #36442, "Falcon: crash during optimize table"
      Refined Table::optimize() to release each base record.
      Records having newer record versions were not being released.
[13 Feb 2009 7:25] Bugs System
Pushed into 6.0.10-alpha (revid:alik@sun.com-20090211182317-uagkyj01fk30p1f8) (version source revid:hky@sun.com-20090203204730-3k0pc0uoxp6wqr1i) (merge vers: 6.0.10-alpha) (pib:6)
[2 Mar 2009 14:13] Bugs System
Pushed into 6.0.11-alpha (revid:alik@sun.com-20090302140208-lfdejjbcyezlhhjt) (version source revid:vvaintroub@mysql.com-20090211225555-gcuo9fv97xlhydd8) (merge vers: 6.0.10-alpha) (pib:6)
[15 May 2009 13:01] MC Brown
An entry has been added to the 6.0.10 changelog: 

The default Falcon memory parameters have been updated. The new settings for all the memory parameters are as follows:

falcon_record_memory_max is now 250 MB

falcon_page_cache_size is now 250 MB

falcon_record_scavenge_threshold is now 90% (of record memory max)

falcon_record_scavenge_floor is now 80% (of scavenge threshold)

falcon_record_chill_threshold is now 5 MB

falcon_index_chill_threshold is now 4 MB