Bug #32413 Memory usage not constrained by falcon_record_memory_max, assertion failure
Submitted: 15 Nov 2007 16:06  Modified: 3 Dec 2007 14:22
Reporter: Dean Ellis
Status: Closed
Category: MySQL Server: Falcon storage engine  Severity: S1 (Critical)
Version: 6.0.4  OS: Any
Assigned to: Christopher Powers  CPU Architecture: Any

[15 Nov 2007 16:06] Dean Ellis
Description:
Falcon memory usage does not seem to be constrained by falcon_record_memory_max.

That aspect was originally reported and fixed as Bug#30084, so part of this report may be a duplicate of that bug; it also has some similarities to Bug#31286, but a new report is being opened by request.

The test case below consistently produces the same assertion failure.

How to repeat:
The test case is obviously arbitrary and could be cleaned up:

CREATE TABLE t1 ( a INT UNSIGNED NOT NULL AUTO_INCREMENT PRIMARY KEY, b INT, c VARCHAR(200), d VARCHAR(1000), INDEX (b,c), INDEX (d) ) ENGINE=FALCON;

SET GLOBAL falcon_record_memory_max=32*1024*1024;

INSERT INTO t1 SELECT NULL, 12345, SHA('12345'), CONCAT(SHA('12345'),SHA('ABCDEF'));

INSERT INTO t1 SELECT NULL, 12345, SHA('12345'), CONCAT(SHA('12345'),SHA('ABCDEF')) FROM t1;

INSERT INTO t1 SELECT NULL, 12345, SHA('12345'), CONCAT(SHA('12345'),SHA('ABCDEF')) FROM t1;

INSERT INTO t1 SELECT NULL, 12345, SHA('12345'), CONCAT(SHA('12345'),SHA('ABCDEF')) FROM t1;

INSERT INTO t1 SELECT NULL, 12345, SHA('12345'), CONCAT(SHA('12345'),SHA('ABCDEF')) FROM t1;

INSERT INTO t1 SELECT NULL, 12345, SHA('12345'), CONCAT(SHA('12345'),SHA('ABCDEF')) FROM t1;

INSERT INTO t1 SELECT NULL, 12345, SHA('12345'), CONCAT(SHA('12345'),SHA('ABCDEF')) FROM t1;

INSERT INTO t1 SELECT NULL, 12345, SHA('12345'), CONCAT(SHA('12345'),SHA('ABCDEF')) FROM t1;

INSERT INTO t1 SELECT NULL, 12345, SHA('12345'), CONCAT(SHA('12345'),SHA('ABCDEF')) FROM t1;

INSERT INTO t1 SELECT NULL, 12345, SHA('12345'), CONCAT(SHA('12345'),SHA('ABCDEF')) FROM t1;

INSERT INTO t1 SELECT NULL, 12345, SHA('12345'), CONCAT(SHA('12345'),SHA('ABCDEF')) FROM t1;

INSERT INTO t1 SELECT NULL, 12345, SHA('12345'), CONCAT(SHA('12345'),SHA('ABCDEF')) FROM t1;

INSERT INTO t1 SELECT NULL, 12345, SHA('12345'), CONCAT(SHA('12345'),SHA('ABCDEF')) FROM t1;

INSERT INTO t1 SELECT NULL, 12345, SHA('12345'), CONCAT(SHA('12345'),SHA('ABCDEF')) FROM t1;

INSERT INTO t1 SELECT NULL, 12345, SHA('12345'), CONCAT(SHA('12345'),SHA('ABCDEF')) FROM t1;

INSERT INTO t1 SELECT NULL, 12345, SHA('12345'), CONCAT(SHA('12345'),SHA('ABCDEF')) FROM t1;

INSERT INTO t1 SELECT NULL, 12345, SHA('12345'), CONCAT(SHA('12345'),SHA('ABCDEF')) FROM t1;

-- Should have exceeded the configured RAM limit at this point (64-128K rows)
-- Repeat a few more times and assertion fails
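The doubling arithmetic behind the comment above can be sketched in a few lines. The ~133-byte record payload is taken from the backtrace in this report (dataLength=133 in frame #4); the per-record cache overhead is an assumed illustrative figure, not a measured Falcon number.

```python
# Each INSERT ... SELECT ... FROM t1 doubles the table, so row counts grow
# as powers of two.  We estimate when the record cache crosses the
# 32 MiB limit set by falcon_record_memory_max in the test case.

RECORD_PAYLOAD = 133        # bytes, from the backtrace (dataLength=133)
CACHE_OVERHEAD = 300        # bytes/record, assumed object/index overhead
LIMIT = 32 * 1024 * 1024    # falcon_record_memory_max from the test case

rows = 1                    # the first INSERT adds a single literal row
for n in range(1, 25):
    rows *= 2               # each subsequent INSERT doubles t1
    approx = rows * (RECORD_PAYLOAD + CACHE_OVERHEAD)
    if approx > LIMIT:
        print(f"limit first exceeded after doubling #{n}: "
              f"{rows} rows, ~{approx / 2**20:.1f} MiB")
        break
```

With these assumed figures the limit is crossed at 131072 rows, one doubling past the 65536 rows produced by the statements listed above, which is consistent with "repeat a few more times and assertion fails".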
[15 Nov 2007 16:06] Dean Ellis
#0  0x00002b1e7176dfcb in raise () from /lib/libpthread.so.0
#1  0x000000000087dab8 in Error::debugBreak () at Error.cpp:92
#2  0x000000000087dbde in Error::error (
    string=0xb956c8 "assertion failed at line %d in file %s\n") at Error.cpp:69
#3  0x000000000087dc75 in Error::assertionFailed (
    fileName=0xba05ad "SRLUpdateRecords.cpp", line=51) at Error.cpp:76
#4  0x00000000008f5c8e in SRLUpdateRecords::chill (this=0x2aaaaab534d0, 
    transaction=0x2aaaaad25740, record=0xf594fb8, dataLength=133)
    at SRLUpdateRecords.cpp:51
#5  0x00000000008f5f49 in SRLUpdateRecords::append (this=0x2aaaaab534d0, 
    transaction=0x2aaaaad25740, records=0x89a0fb8, chillRecords=true)
    at SRLUpdateRecords.cpp:168
#6  0x000000000087253c in Dbb::logUpdatedRecords (this=0x2aaaaace2898, 
    transaction=0x2aaaaad25740, records=0x89a0fb8, chill=true) at Dbb.cpp:1206
#7  0x0000000000846932 in Transaction::chillRecords (this=0x2aaaaad25740)
    at Transaction.cpp:509
#8  0x0000000000846a62 in Transaction::addRecord (this=0x2aaaaad25740, 
    record=0x13c092f0) at Transaction.cpp:580
#9  0x00000000008403df in Table::insert (this=0x2aaaaad2a2f8, 
    transaction=0x2aaaaad25740, stream=0x2aaaaab5cb58) at Table.cpp:2643
#10 0x0000000000829c6d in StorageDatabase::insert (this=0x2aaaaace21b0, 
    connection=0x2aaaaad29b88, table=0x2aaaaad2a2f8, stream=0x2aaaaab5cb58)
    at StorageDatabase.cpp:225
#11 0x000000000082f989 in StorageTable::insert (this=0x2aaaaab5bfe0)
    at StorageTable.cpp:88
#12 0x00000000008239ab in StorageInterface::write_row (this=0x2f64120, 
    buff=0x2f64510 "#####") at ha_falcon.cpp:894
#13 0x0000000000767914 in handler::ha_write_row (this=0x2f64120, 
    buf=0x2f64510 "#####") at handler.cc:4597
#14 0x00000000006ef659 in write_record (thd=0x1f042b0, table=0x2f63890, 
    info=0x1f5d290) at sql_insert.cc:1549
#15 0x00000000006ef968 in select_insert::send_data (this=0x1f5d258, 
    values=@0x3187ce0) at sql_insert.cc:3038
#16 0x00000000006bd3ef in end_send (join=0x31862c0, join_tab=0x31880b8, 
    end_of_records=false) at sql_select.cc:14257
#17 0x00000000006c7e8b in evaluate_join_record (join=0x31862c0, 
    join_tab=0x3187e20, error=0) at sql_select.cc:13423
#18 0x00000000006c811a in sub_select (join=0x31862c0, join_tab=0x3187e20, 
    end_of_records=false) at sql_select.cc:13209
#19 0x00000000006cfc72 in do_select (join=0x31862c0, fields=0x3187ce0, 
    table=0x0, procedure=0x0) at sql_select.cc:12956
#20 0x00000000006e9d50 in JOIN::exec (this=0x31862c0) at sql_select.cc:2696
#21 0x00000000006e5685 in mysql_select (thd=0x1f042b0, 
    rref_pointer_array=0x1f05eb0, tables=0x1f5cef8, wild_num=0, 
    fields=@0x1f05dd0, conds=0x0, og_num=0, order=0x0, group=0x0, having=0x0, 
    proc_param=0x0, select_options=3489942016, result=0x1f5d258, 
    unit=0x1f05898, select_lex=0x1f05cc8) at sql_select.cc:2884
#22 0x00000000006e9fe1 in handle_select (thd=0x1f042b0, lex=0x1f057f8,
    result=0x1f5d258, setup_tables_done_option=1073741824) at sql_select.cc:282
#23 0x000000000066885b in mysql_execute_command (thd=0x1f042b0)
    at sql_parse.cc:2655
#24 0x000000000066ddfc in mysql_parse (thd=0x1f042b0, 
    inBuf=0x1f5c270 "INSERT INTO t1 SELECT NULL, 12345, SHA('12345'), CONCAT(SHA('12345'),SHA('ABCDEF')) FROM t1", length=91, found_semicolon=0x44088bf8)
    at sql_parse.cc:5366
#25 0x000000000066e88f in dispatch_command (command=COM_QUERY, thd=0x1f042b0, 
    packet=0x1f54241 "INSERT INTO t1 SELECT NULL, 12345, SHA('12345'), CONCAT(SHA('12345'),SHA('ABCDEF')) FROM t1", packet_length=91) at sql_parse.cc:886
#26 0x000000000066f9fe in do_command (thd=0x1f042b0) at sql_parse.cc:657
#27 0x00000000006601d9 in handle_one_connection (arg=0x1f042b0)
    at sql_connect.cc:1145
[15 Nov 2007 19:31] Bugs System
A patch for this bug has been committed. After review, it may
be pushed to the relevant source trees for release in the next
version. You can access the patch from:

  http://lists.mysql.com/commits/37884

ChangeSet@1.2679, 2007-11-15 13:30:13-06:00, chris@xeno.mysql.com +4 -0
  Bug#32413, "Memory usage not constrained by falcon_record_memory_max, assertion failure"
  
  - Changed MemRecordMgrSetMaxRecordMember() to allow dynamic changes to falcon_record_memory_max.
  - Correct thawed byte accounting.
[15 Nov 2007 22:42] Bugs System
A patch for this bug has been committed. After review, it may
be pushed to the relevant source trees for release in the next
version. You can access the patch from:

  http://lists.mysql.com/commits/37902

ChangeSet@1.2680, 2007-11-15 16:42:01-06:00, chris@xeno.mysql.com +4 -0
  Bug#32413, "Memory usage not constrained by falcon_record_memory_max, assertion failure"
  
  - Added testcase to exercise falcon_record_memory_max and out-of-memory condition
  - Added out-of-memory exception handling to Table::databaseFetch()
[15 Nov 2007 23:05] Christopher Powers
Resolved the following issues:

1. This assertion in SRLUpdateRecords::chill() verifies that the chill/thaw byte counts are correct:

       ASSERT(transaction->totalRecordData >= dataLength);

Re-chilling a thawed record could result in an incorrect value for totalRecordData. Added a check to ensure that thawed records are not counted twice.

2. Changes to falcon_record_memory_max after initialization were ignored. Modified MemMgrSetMaxRecordMember() to accept record memory cache size changes after initialization.

3. Table::databaseFetch() lacked out-of-memory exception handling.

Also created testcase falcon_bug_32413 to verify that changes to falcon_record_memory_max are recognized and that an out-of-memory condition in the record cache is handled gracefully.
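A toy model of the byte accounting described in item 1 may make the invariant clearer. The class and method names are illustrative, not Falcon's real code; the point is that thawed bytes must be counted back in exactly once, so that a later re-chill cannot drive totalRecordData below the amount being chilled and trip the assertion.

```python
# Simplified model of the transaction byte accounting behind
# ASSERT(transaction->totalRecordData >= dataLength).

class Txn:
    def __init__(self):
        self.total_record_data = 0   # bytes the transaction believes it holds
        self.chilled = set()         # ids of records currently chilled

    def add_record(self, rid, size):
        self.total_record_data += size

    def chill(self, rid, size):
        # mirrors the assertion in SRLUpdateRecords::chill()
        assert self.total_record_data >= size
        self.total_record_data -= size
        self.chilled.add(rid)

    def thaw(self, rid, size):
        # the fix: count the thawed bytes back in only once
        if rid in self.chilled:
            self.chilled.discard(rid)
            self.total_record_data += size

txn = Txn()
txn.add_record(1, 133)
txn.chill(1, 133)   # totalRecordData drops to 0
txn.thaw(1, 133)    # bytes restored exactly once
txn.thaw(1, 133)    # a duplicate thaw is a no-op, so no double counting
txn.chill(1, 133)   # re-chilling stays consistent with the assertion
print(txn.total_record_data)
```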
[15 Nov 2007 23:08] Christopher Powers
Above references to Bug#30084 should probably be Bug#30083, "Falcon memory usage grows without bound".
[16 Nov 2007 1:00] Bugs System
A patch for this bug has been committed. After review, it may
be pushed to the relevant source trees for release in the next
version. You can access the patch from:

  http://lists.mysql.com/commits/37909

ChangeSet@1.2682, 2007-11-15 19:00:05-06:00, chris@xeno.mysql.com +1 -0
  Bug#32413: Disabled falcon_bug_32413 for now. Conflicts with other pushbuild tests.
[30 Nov 2007 17:37] Hakan Küçükyılmaz
The assertion no longer occurs; record memory exhaustion is now reported correctly:

[18:36] root@test>INSERT INTO t1 SELECT NULL, 12345, SHA('12345'), CONCAT(SHA('12345'),SHA('ABCDEF')) FROM t1;
Query OK, 131072 rows affected (11.54 sec)
Records: 131072  Duplicates: 0  Warnings: 0

[18:36] root@test>INSERT INTO t1 SELECT NULL, 12345, SHA('12345'), CONCAT(SHA('12345'),SHA('ABCDEF')) FROM t1;
ERROR 1296 (HY000): Got error 305 'record memory is exhausted' from Falcon
6.0.4-alpha-debug
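The fixed behaviour shown above, a runtime-adjustable limit plus a graceful out-of-memory error instead of an assertion, can be sketched as a small model. The class and method names are hypothetical; only the error text and code 305 come from the output above.

```python
# Sketch of a record cache whose limit can be changed after initialization
# and which reports exhaustion as an error rather than asserting.

class RecordCache:
    def __init__(self, max_bytes):
        self.max_bytes = max_bytes
        self.used = 0

    def set_max(self, max_bytes):
        # before the fix, changes after startup were ignored;
        # the fix applies the new limit immediately
        self.max_bytes = max_bytes

    def allocate(self, size):
        if self.used + size > self.max_bytes:
            # corresponds to Falcon error 305
            raise MemoryError("record memory is exhausted")
        self.used += size

cache = RecordCache(32 * 1024 * 1024)
cache.set_max(8 * 1024 * 1024)      # e.g. SET GLOBAL falcon_record_memory_max
try:
    cache.allocate(9 * 1024 * 1024)
except MemoryError as e:
    print(e)
```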
[30 Nov 2007 20:42] Bugs System
Pushed into 6.0.4-alpha
[3 Dec 2007 14:22] MC Brown
A note has been added to the 6.0.4 changelog: 

Falcon options to set memory usage limits were not honoured. This could lead to crashes and assertions during normal usage instead of generating a suitable warning.