Bug #42651 Regression: falcon_bug_22169-big started to fail with error 305
Submitted: 6 Feb 2009 14:52 Modified: 15 May 2009 16:13
Reporter: Hakan Küçükyılmaz Email Updates:
Status: Closed Impact on me:
Category:MySQL Server: Falcon storage engine Severity:S1 (Critical)
Version:mysql-6.0-falcon-team OS:Linux (or any)
Assigned to: Christopher Powers CPU Architecture:Any
Triage: Triaged: D2 (Serious)

[6 Feb 2009 14:52] Hakan Küçükyılmaz
falcon.falcon_bug_22169-big regressed with error 305. Verified by WFTO on my laptop.

How to repeat:
./mysql-test-run.pl --big-test --force --skip-ndb --suite=falcon falcon_bug_22169-big

worker[1] Using MTR_BUILD_THREAD 250, with reserved ports 12500..12509
falcon.falcon_bug_22169-big              [ fail ]
        Test ended at 2009-02-06 15:49:58

CURRENT_TEST: falcon.falcon_bug_22169-big
mysqltest: At line 49: query 'INSERT INTO t2 (id, grp, id_rev) SELECT id, grp, id_rev FROM t1' failed: 1296: Got error 305 'record memory is exhausted' from Falcon
[3 Mar 2009 7:33] Bugs System
A patch for this bug has been committed. After review, it may
be pushed to the relevant source trees for release in the next
version. You can access the patch from:


3046 Christopher Powers	2009-03-03
      Bug #42651 "Regression: falcon_bug_22169-big started to fail with error 305"
      Bug #33177 "Table creation fails after error 305 and tablespace change"
      Bug #32838 "Falcon; error 1296 : Got error 305 'record memory is exhausted'"
      The fix for these bugs is the first of several improvements
      to Falcon's memory management (Worklog TBD).
      Falcon out-of-memory errors are caused by a combination of factors.
      Recent improvements to the Scavenger and to the Backlogging subsystem
      (Bug#42592) have contributed to the resolution of these bugs; however,
      certain operations can still fill the record cache to the point where
      scavenging is ineffective.
      Scavenging efficiency will be greatly improved by allocating record
      data and metadata separately. The record cache now stores only
      actual record data, and Record and RecordVersion objects (metadata)
      are allocated from separate memory pools.
      The metadata memory pools are completely homogeneous, with no memory
      fragmentation. The record cache will also be far less fragmented,
      because large blocks of record data will no longer be interspersed
      with very small blocks of object data.
      Decoupling the data and metadata will also greatly reduce the number of
      out-of-memory conditions--typically seen during large inserts and
      updates--because the memory pools are allowed to grow independently.
      These memory pools may fluctuate considerably during massive transactions,
      depending upon the record makeup and type of operation. This fluctuation,
      however, serves only to emphasize the value of managing these memory pools.
      One side-effect of this change is that, while the record cache max size
      remains fixed, the record metadata caches can grow unbounded. Although
      this is not unprecedented (Falcon's general purpose memory pool has
      always been unbounded), one remaining challenge is to ensure that
      the Falcon memory manager releases resources back to the system as
      soon as possible.
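The data/metadata split described in the commit message can be sketched roughly as follows. This is a hypothetical illustration, not Falcon's actual code: the class names (RecordDataCache, MetadataPool, RecordMeta) and members are invented for the example. It shows a bounded record-data cache, whose exhaustion corresponds to the error 305 condition, alongside a homogeneous metadata pool that grows independently and cannot fragment.

```cpp
#include <cassert>
#include <cstddef>
#include <deque>

// Bounded record-data cache: refuses allocations past a fixed maximum,
// analogous to the fixed record cache max size. A failed allocation
// corresponds to the "record memory is exhausted" (error 305) condition.
class RecordDataCache {
    std::size_t used_ = 0;
    std::size_t max_;
public:
    explicit RecordDataCache(std::size_t maxBytes) : max_(maxBytes) {}
    bool allocate(std::size_t bytes) {
        if (used_ + bytes > max_)
            return false;          // record memory is exhausted
        used_ += bytes;
        return true;
    }
    std::size_t used() const { return used_; }
};

// Homogeneous metadata pool: every element has the same size, so the pool
// never fragments, and it grows independently of the record-data cache
// (unbounded, like Falcon's general purpose memory pool).
struct RecordMeta { long transactionId = 0; int state = 0; };

class MetadataPool {
    std::deque<RecordMeta> slots_;  // deque keeps element addresses stable
public:
    RecordMeta* allocate() { slots_.emplace_back(); return &slots_.back(); }
    std::size_t count() const { return slots_.size(); }
};
```

Because the two pools are decoupled, a large insert that allocates many metadata objects no longer competes with record data for the same fixed-size cache, which is the mechanism by which the fix reduces out-of-memory conditions.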
[2 Apr 2009 17:39] Bugs System
Pushed into 6.0.11-alpha (revid:hky@sun.com-20090402144811-yc5kp8g0rjnhz7vy) (version source revid:christopher.powers@sun.com-20090303070929-ig36zlo3luoxrm2t) (merge vers: 6.0.11-alpha) (pib:6)
[15 May 2009 16:13] MC Brown
Internal/test fix. No changelog entry required.