Bug #32838 Falcon; error 1296 : Got error 305 'record memory is exhausted'
Submitted: 29 Nov 2007 5:42    Modified: 17 Nov 2016 7:44
Reporter: Jeffrey Pugh
Status: Won't fix
Category: MySQL Server: Falcon storage engine    Severity: S2 (Serious)
Version: 6.0.4, mysql-6.0-falcon-team    OS: Any
Assigned to:    CPU Architecture: Any
Tags: F_MEMORY

[29 Nov 2007 5:42] Jeffrey Pugh
Description:
Build up 1M rows in a table in doubling portions (1, 2, 4, etc.); then try to insert 1M rows all at once. Falcon fails with ERROR 1296 (HY000): Got error 305 'record memory is exhausted' from Falcon.

I was instructed to increase falcon_record_memory_max, but this doesn't help. 

I also tried doing the inserts in two different clients, in two different transactions, expecting that to fail earlier, but it still fails on the insert of the 1M rows.

There is a workaround: split the insert into multiple smaller inserts (sketched below), but the point at which this becomes necessary seems unpredictable.
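
For illustration, the split-insert workaround might look like this (a sketch only; the use of LIMIT and the half-size batches are illustrative assumptions, not taken from the report):

insert into t1 select * from t1 limit 524288;   # first half of the doubling
insert into t1 select * from t1 limit 524288;   # second half; t1 goes from 1,048,576 to 2,097,152 rows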

My system is a Lenovo ThinkPad, WinXP, 1 GB RAM.

mysql> describe t1;
+-------+---------+------+-----+---------+-------+
| Field | Type    | Null | Key | Default | Extra |
+-------+---------+------+-----+---------+-------+
| i1    | int(11) | YES  |     | NULL    |       |
+-------+---------+------+-----+---------+-------+
1 row in set (0.02 sec)

mysql> show variables like 'falcon%';
+----------------------------------+-----------------+
| Variable_name                    | Value           |
+----------------------------------+-----------------+
| falcon_checkpoint_schedule       | 7 * * * * *     |
| falcon_consistent_read           | ON              |
| falcon_debug_mask                | 0               |
| falcon_debug_server              | OFF             |
| falcon_debug_trace               | 0               |
| falcon_direct_io                 | 1               |
| falcon_gopher_threads            | 5               |
| falcon_index_chill_threshold     | 4               |
| falcon_initial_allocation        | 0               |
| falcon_io_threads                | 2               |
| falcon_large_blob_threshold      | 160000          |
| falcon_lock_timeout              | 0               |
| falcon_max_transaction_backlog   | 150             |
| falcon_page_cache_size           | 4194304         |
| falcon_page_size                 | 4096            |
| falcon_record_chill_threshold    | 5               |
| falcon_record_memory_max         | 262144000       |
| falcon_record_scavenge_floor     | 50              |
| falcon_record_scavenge_threshold | 67              |
| falcon_scavenge_schedule         | 15,45 * * * * * |
| falcon_serial_log_block_size     | 0               |
| falcon_serial_log_buffers        | 20              |
| falcon_serial_log_dir            |                 |
| falcon_serial_log_priority       | 1               |
| falcon_use_deferred_index_hash   | OFF             |
+----------------------------------+-----------------+
25 rows in set (0.00 sec)

mysql> insert into t1 values(1);
Query OK, 1 row affected (0.03 sec)

mysql> insert into t1 select * from t1;
Query OK, 1 row affected (0.02 sec)
Records: 1  Duplicates: 0  Warnings: 0

mysql> insert into t1 select * from t1;
Query OK, 2 rows affected (0.02 sec)
Records: 2  Duplicates: 0  Warnings: 0

mysql> insert into t1 select * from t1;
Query OK, 4 rows affected (0.01 sec)
Records: 4  Duplicates: 0  Warnings: 0

mysql> insert into t1 select * from t1;
Query OK, 8 rows affected (0.02 sec)
Records: 8  Duplicates: 0  Warnings: 0

mysql> insert into t1 select * from t1;
Query OK, 16 rows affected (0.01 sec)
Records: 16  Duplicates: 0  Warnings: 0

mysql> insert into t1 select * from t1;
Query OK, 32 rows affected (0.01 sec)
Records: 32  Duplicates: 0  Warnings: 0

mysql> insert into t1 select * from t1;
Query OK, 64 rows affected (0.01 sec)
Records: 64  Duplicates: 0  Warnings: 0

mysql> insert into t1 select * from t1;
Query OK, 128 rows affected (0.01 sec)
Records: 128  Duplicates: 0  Warnings: 0

mysql> insert into t1 select * from t1;
Query OK, 256 rows affected (0.00 sec)
Records: 256  Duplicates: 0  Warnings: 0

mysql> insert into t1 select * from t1;
Query OK, 512 rows affected (0.02 sec)
Records: 512  Duplicates: 0  Warnings: 0

mysql> insert into t1 select * from t1;
Query OK, 1024 rows affected (0.02 sec)
Records: 1024  Duplicates: 0  Warnings: 0

mysql> insert into t1 select * from t1;
Query OK, 2048 rows affected (0.02 sec)
Records: 2048  Duplicates: 0  Warnings: 0

mysql> insert into t1 select * from t1;
Query OK, 4096 rows affected (0.03 sec)
Records: 4096  Duplicates: 0  Warnings: 0

mysql> insert into t1 select * from t1;
Query OK, 8192 rows affected (0.05 sec)
Records: 8192  Duplicates: 0  Warnings: 0

mysql> insert into t1 select * from t1;
Query OK, 16384 rows affected (0.08 sec)
Records: 16384  Duplicates: 0  Warnings: 0

mysql> insert into t1 select * from t1;
Query OK, 32768 rows affected (0.13 sec)
Records: 32768  Duplicates: 0  Warnings: 0

mysql> insert into t1 select * from t1;
Query OK, 65536 rows affected (0.25 sec)
Records: 65536  Duplicates: 0  Warnings: 0

mysql> insert into t1 select * from t1;
Query OK, 131072 rows affected (0.47 sec)
Records: 131072  Duplicates: 0  Warnings: 0

mysql> insert into t1 select * from t1;
Query OK, 262144 rows affected (0.95 sec)
Records: 262144  Duplicates: 0  Warnings: 0

mysql> insert into t1 select * from t1;
Query OK, 524288 rows affected (1.88 sec)
Records: 524288  Duplicates: 0  Warnings: 0

mysql> insert into t1 select * from t1;
ERROR 1296 (HY000): Got error 305 'record memory is exhausted' from Falcon
mysql> select count(*) from t1;
+----------+
| count(*) |
+----------+
|  1048576 |
+----------+
1 row in set (0.55 sec)

Now increase falcon_record_memory_max to 1024000000; you still can't do the final insert.

How to repeat:
create table t1 (i1 int) engine=falcon;
insert into t1 values(1);
insert into t1 select * from t1;
...
(repeat the insert until it has run 20 times in total, reaching 1,048,576 rows)
...
insert into t1 select * from t1;
<get error 305 as above>
set GLOBAL falcon_record_memory_max=1024000000;
insert into t1 select * from t1;
<get error 305 as above>
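
For convenience, the repeated insert can be scripted with a stored procedure (a sketch only; the procedure name p_fill is an illustrative assumption, not part of the original report):

delimiter //
create procedure p_fill(n int)
begin
  while n > 0 do
    insert into t1 select * from t1;
    set n = n - 1;
  end while;
end//
delimiter ;
call p_fill(20);   # 1 row doubled 20 times gives 1,048,576 rows; the next insert then fails with error 305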
[29 Nov 2007 16:29] Giuseppe Maxia
The same error happens in a few more circumstances.
After building a large table (17 million records, 2.4 GB) without errors, thanks to a larger falcon_record_memory_max, a subsequent TRUNCATE tablename or ALTER TABLE tablename DROP KEY ... results in error 305.
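
For instance (statement forms only; the table and key names here are placeholders, not the ones from the actual test):

truncate table t_large;
alter table t_large drop key idx_i1;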
[29 Nov 2007 16:39] Kevin Lewis
Falcon needs to be able to treat the record cache as a window onto all types of records that are needed. It should never be exhausted. WL#4155 should take care of that. Since this is related to WL#4155, it is assigned to cpowers.
[29 Nov 2007 18:30] Jeffrey Pugh
I separately tested this on xeno (an 8 GB Linux system), with falcon_record_memory_max at its default of 262144000, and got similar behavior except that the failure occurs at the 2M-row insert.

Then I increased falcon_record_memory_max to 1024000000 (roughly 4x the default) and the problem then occurs at the 8M-record insert.

As long as this limitation, and the appropriate memory settings, are documented for test servers, this is a reasonable limitation at beta. I am downgrading this to P3.
[31 Mar 2008 7:47] Hakan Küçükyılmaz
Even after WL#4155 has been implemented, I still see
   Got error 305 'record memory is exhausted'

[09:35] root@test>create table t1 (a int, b int, c int) engine falcon;
Query OK, 0 rows affected (0.07 sec)

[09:36] root@test>insert into t1 select * from t1;
Query OK, 1 row affected (0.01 sec)
Records: 1  Duplicates: 0  Warnings: 0

...

[09:37] root@test>insert into t1 select * from t1;
Query OK, 1048576 rows affected (10.63 sec)
Records: 1048576  Duplicates: 0  Warnings: 0

[09:38] root@test>insert into t1 select * from t1;
ERROR 1296 (HY000): Got error 305 'record memory is exhausted' from Falcon

Relevant settings are:
falcon_record_memory_max = 300MB
falcon_page_cache_size = 300MB
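
In option-file form these settings correspond to something like the following (the byte values are a conversion of the rounded figures above, 314572800 = 300 * 1024 * 1024; the exact values used in the test are not given):

[mysqld]
falcon_record_memory_max = 314572800
falcon_page_cache_size   = 314572800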
[3 Apr 2008 17:51] Kevin Lewis
WL#4155 (Falcon; Durable Backlog of Old Record Versions) is now implemented and works, at least initially. I have attached two pictures of the memory and CPU usage on my laptop while doing "insert into t1 select * from t1;" when it requires more memory than the record cache limit.

The first chart hits the limit about halfway through the INSERT. Memory use flatlines when the record cache limit is hit, and the INSERT succeeds. This INSERT used to fail with "error 305 'record memory is exhausted' from Falcon".

The second chart shows the following INSERT, which used twice as much data as the first. This insert touches about 1 GB of memory, while my record cache is only 250 MB. Memory usage flatlines but fluctuates a little while backlogging is going on. About 60% of the way through the test, Falcon reported an out-of-record-cache message on the console, but the insert continued. At the end of the test the client got

ERROR 1296 (HY000): Got error 305 'record memory is exhausted' from Falcon

So this tells me that backlogging is working, but not completely.

Jim, please try this test:

CREATE TABLE t1 (
  t1_autoinc INTEGER NOT NULL AUTO_INCREMENT,
  t1_uuid CHAR(36),
  PRIMARY KEY (t1_autoinc)
) ENGINE = Falcon;

DROP TABLE t2;

CREATE TABLE t2 (
  t1_autoinc INTEGER NOT NULL AUTO_INCREMENT,
  t1_uuid CHAR(36),
  PRIMARY KEY (t1_autoinc)
) ENGINE = Falcon;

delimiter //
CREATE PROCEDURE p1 ()
BEGIN
  DECLARE my_count INT DEFAULT 0;
  WHILE my_count < 1000 do
    INSERT INTO t1 (t1_uuid) VALUES (uuid());
    SET my_count = my_count + 1;
  END WHILE;
END//
delimiter ;

CALL p1();
INSERT INTO t1 (t1_uuid) SELECT t1_uuid FROM t1;
# Repeat this insert until the record cache is full, then do it a couple more times to test/stress the backlogging.
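
One way to script the repeated doubling insert above (a sketch only; the procedure name p2 and the repeat count are illustrative assumptions, not part of the script):

delimiter //
CREATE PROCEDURE p2 (n INT)
BEGIN
  WHILE n > 0 DO
    INSERT INTO t1 (t1_uuid) SELECT t1_uuid FROM t1;
    SET n = n - 1;
  END WHILE;
END//
delimiter ;
CALL p2(12);  # 1000 rows doubled 12 times is about 4M rows; raise n until the record cache fills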
[3 Apr 2008 17:52] Kevin Lewis
Successful Backlog test

Attachment: backlog.flatline.1.jpg (image/jpeg, text), 122.87 KiB.

[3 Apr 2008 17:53] Kevin Lewis
Unsuccessful backlog test

Attachment: backlog.flatline.2.jpg (image/jpeg, text), 179.91 KiB.

[30 Apr 2008 22:11] Philip Stoev
Here is a simpler sql script to demonstrate this problem

create table t1 (dt datetime not null, s2 varchar(100) character set utf32, s3 text character set utf8, primary key (dt)) engine = falcon;
create view v3 as select null union all select null union all select null;
create view v10 as select null from v3 a, v3 b union all select null;
create view v1000 as select null from v10 a, v10 b, v10;
set @n = 0;   # initialize the counter so the NOT NULL primary key dt gets distinct non-NULL values
insert into t1 select now() - interval @n:=@n+1 second, REPEAT('x',10), REPEAT('y', 20) from v1000 a, v1000 b;
[19 Jun 2008 17:58] Kevin Lewis
Chris, can you retest this 'out of memory' situation once Online Add Index is available? There is still a lot of talk about not being able to load large tables. Do we still have crashes, etc.?
[11 Sep 2008 14:52] Vemund Østgaard
I see this same problem when running the large_tests suite with 6.0.7.

On a 2 GB memory Linux box with the default falcon_record_memory_max, it failed on a 1 million record insert.
Running on an 8 GB Linux box with falcon_record_memory_max set to 4 GB, the test got further and failed in the same way on a 16 million record insert.
[3 Feb 2009 7:26] Bugs System
A patch for this bug has been committed. After review, it may
be pushed to the relevant source trees for release in the next
version. You can access the patch from:

  http://lists.mysql.com/commits/64967

2998 Christopher Powers	2009-02-03
      Bug #32838 "Falcon; error 1296 : Got error 305 'record memory is exhausted'"
      
      Several factors contribute to the out of memory error. Many have been resolved
      in other bugs (see Bug#42424, Bug#42505, Bug#42510 for details).
      
      Record scavenger efficiency can be further improved by reducing record
      cache fragmentation.
      
      The size of Record and RecordVersion objects is always the same, but the
      size of record data can vary considerably. The record cache becomes more
      fragmented over time because it is interspersed with fixed-size Record and
      RecordVersion objects.
      
      Record and RecordVersion objects are now allocated from a memory pool
      separate from the record data. Both memory pools will be more homogeneous,
      less fragmented and easier to scavenge.
[3 Feb 2009 7:46] Christopher Powers
With this patch I was able to run the first testcase up to 2M rows before reaching out-of-memory. Increasing the record cache to 1GB allowed it to continue to 16M rows.

Note that the record data in this case is exceptionally small, and the memory allocated to record objects outpaces allocated record data by 2:1.
[3 Mar 2009 7:33] Bugs System
A patch for this bug has been committed. After review, it may
be pushed to the relevant source trees for release in the next
version. You can access the patch from:

  http://lists.mysql.com/commits/68093

3046 Christopher Powers	2009-03-03
      Bug #42651 "Regression: falcon_bug_22169-big started to fail with error 305"
      Bug #33177 "Table creation fails after error 305 and tablespace change"
      Bug #32838 "Falcon; error 1296 : Got error 305 'record memory is exhausted'"
      
      The fix for these bugs is the first of several improvements
      to Falcon's memory management (Worklog TBD).
      
      Falcon out-of-memory errors are caused by a combination of things.
      Recent improvements to the Scavenger and to the Backlogging subsystem
      (Bug#42592) have contributed to the resolution of these bugs, however,
      certain operations can still fill the record cache to the point where
      scavenging is ineffective.
      
      Scavenging efficiency will be greatly improved by allocating record
      data and metadata separately. The record cache now stores only
      actual record data, and Record and RecordVersion objects (metadata)
      are allocated from separate memory pools.
      
      The metadata memory pools are completely homogeneous, with no memory
      fragmentation. The record cache will also be far less fragmented,
      because large blocks of record data will no longer be interspersed
      with very small blocks of object data.
      
      Decoupling the data and metadata will also greatly reduce the number of
      out-of-memory conditions--typically seen during large inserts and
      updates--because the memory pools are allowed to grow independently.
      
      These memory pools may fluctuate considerably during massive transactions,
      depending upon the record makeup and type of operation. This fluctuation,
      however, only emphasizes the value of managing these memory pools
      separately.
      
      One side-effect of this change is that, while the record cache max size
      remains fixed, the record metadata caches can grow unbounded. Although
      this is not unprecedented (Falcon's general purpose memory pool has
      always been unbounded), one remaining challenge is to ensure that
      the Falcon memory manager releases resources back to the system as
      soon as possible.
[30 Mar 2009 15:46] Philip Stoev
The LOAD DATA problem described in the latest comments has been split off into new Bug #43963, so this bug remains about INSERT ... SELECT.
[2 Apr 2009 17:38] Bugs System
Pushed into 6.0.11-alpha (revid:hky@sun.com-20090402144811-yc5kp8g0rjnhz7vy) (version source revid:christopher.powers@sun.com-20090303070929-ig36zlo3luoxrm2t) (merge vers: 6.0.11-alpha) (pib:6)
[7 May 2009 13:16] Elena Stepanova
For the initially described scenario, there is no improvement in 6.0.11 compared to 6.0.10, and performance has declined considerably.

Scenario:

create table t1 (i1 int) engine=falcon;
insert into t1 values(1);
insert into t1 select * from t1;
# .... (keep repeating)

Result summary:

1) With falcon_record_memory_max=1024000000, in 6.0.10 I can insert 2M records before hitting 'record memory is exhausted', and in 6.0.11 I can insert 8M records; but in fact, 6.0.11 consumes about 3 x falcon_record_memory_max of memory: after inserting 8M records it is using about 3 GB. If I want to allow the server to use that much memory, I can set falcon_record_memory_max=3072000000 in 6.0.10 and get the same 8M records.

2) In the same scenario and with the same settings, 6.0.11 latency rises significantly starting from 256K records (15-20 times higher compared to 6.0.10):

256K records: 
  6.0.10 - 0.74 sec
  6.0.11 - 22.11 sec
...
2M records:
  6.0.10 - 8.52 sec
  6.0.11 - 2 min 14.88 sec

3) A side-effect of the excessive memory consumption: if I set falcon_record_memory_max = 4 GB on an 8 GB machine, the server uses up all memory and dies during a query.

More results:

6.0.10 / falcon_record_memory_max=1024000000:

...

mysql> insert into t1 select * from t1;
Query OK, 131072 rows affected (0.38 sec)

mysql> insert into t1 select * from t1;
Query OK, 262144 rows affected (0.74 sec)

...

mysql> insert into t1 select * from t1;
Query OK, 2097152 rows affected (8.52 sec)

  PID USER      PR  NI  VIRT  RES  SHR S %CPU %MEM    TIME+  COMMAND
25606 qauser    16   0 1404m 1.0g 6548 S    0 13.4   0:31.23 mysqld

mysql> insert into t1 select * from t1;
ERROR 1296 (HY000): Got error 305 'record memory is exhausted' from Falcon

6.0.11 / falcon_record_memory_max=1024000000:

...

mysql> insert into t1 select * from t1;
Query OK, 131072 rows affected (0.39 sec)

mysql> insert into t1 select * from t1;
Query OK, 262144 rows affected (22.11 sec)

...

mysql> insert into t1 select * from t1;
Query OK, 2097152 rows affected (2 min 14.88 sec)

mysql> insert into t1 select * from t1;
Query OK, 4194304 rows affected (4 min 29.79 sec)

  PID USER      PR  NI  VIRT  RES  SHR S %CPU %MEM    TIME+  COMMAND
22939 qauser    16   0 1686m 1.5g 6668 S   25 19.1   1:06.05 mysqld

mysql> insert into t1 select * from t1;
Query OK, 8388608 rows affected (10 min 59.82 sec)

  PID USER      PR  NI  VIRT  RES  SHR S %CPU %MEM    TIME+  COMMAND
22939 qauser    16   0 3101m 2.9g 6668 S    1 36.9   3:00.06 mysqld

mysql> insert into t1 select * from t1;
ERROR 1296 (HY000): Got error 305 'record memory is exhausted' from Falcon

6.0.10 / falcon_record_memory_max=3072000000:

...

mysql> insert into t1 select * from t1;
Query OK, 131072 rows affected (0.37 sec)

mysql> insert into t1 select * from t1;
Query OK, 262144 rows affected (0.73 sec)

...

mysql> insert into t1 select * from t1;
Query OK, 4194304 rows affected (17.00 sec)

  PID USER      PR  NI  VIRT  RES  SHR S %CPU %MEM    TIME+  COMMAND
24942 qauser    16   0 2214m 1.9g 6572 S    0 24.4   1:05.04 mysqld

mysql> insert into t1 select * from t1;
Query OK, 8388608 rows affected (56.05 sec)

  PID USER      PR  NI  VIRT  RES  SHR S %CPU %MEM    TIME+  COMMAND
24942 qauser    16   0 3094m 2.9g 6576 S    2 36.9   2:33.19 mysqld

mysql> insert into t1 select * from t1;
ERROR 1296 (HY000): Got error 305 'record memory is exhausted' from Falcon

======================
Linux 2.6.16.60-0.21-smp #1 SMP Tue May 6 12:41:02 UTC 2008 x86_64 GNU/Linux

LSB_VERSION="core-2.0-noarch:core-3.0-noarch:core-2.0-x86_64:core-3.0-x86_64"
SUSE Linux Enterprise Server 10 (x86_64)
VERSION = 10
PATCHLEVEL = 2
[15 May 2009 16:09] MC Brown
Another user reports a problem with this in a recent version. Putting back to NDI for confirmation.
[17 Nov 2016 7:44] Erlend Dahl
The Falcon project was discontinued a long time ago.