Bug #43925: error 305 'record memory is exhausted' from Falcon
Submitted: 27 Mar 2009 22:00    Modified: 30 Mar 2009 21:10
Reporter: Vincent Carbone
Status: Closed
Category: MySQL Server: Falcon storage engine    Severity: S1 (Critical)
Version: 6.0.10-alpha    OS: Solaris (5.11 snv_109 sun4v sparc SUNW,T5240)
Assigned to:    CPU Architecture: Any
Tags: 6.0.10-alpha, CMT, error 305, falcon, solaris, SPARC, T5240

[27 Mar 2009 22:00] Vincent Carbone
Description:
When loading a 100-warehouse DBT-2 database, the load of the order_line table fails with error 305:

Loading table order_line
Executed command: /export/vcarbone/myopt/mysql-6.0.10-alpha/bin/mysql  -h localhost -u root --socket=/tmp/mysql.sock test -e "LOAD DATA  INFILE \"/poola/dbt2-data/100warehouse/order_line.data\"                 INTO TABLE order_line FIELDS TERMINATED BY '\t'"
ERROR 1296 (HY000) at line 1: Got error 305 'record memory is exhausted' from Falcon
ERROR: rc=1
SCRIPT INTERRUPTED

After this error occurs and mysqld is restarted, any activity against the database causes mysqld to crash.

This error occurs with both the downloaded binary (mysql-6.0.10-alpha-solaris10-sparc-64bit) and a build from source on the system.

After first encountering this error with the downloaded binary, Kelly Long suggested that it might be due to known problems with backlogging and sent the following workaround:

In Database.cpp
1825          recordScavenge.retiredActiveMemory = recordDataPool->activeMemory;
1826          recordScavenge.retireStop = deltaTime;
1827
1828  #ifdef FALCON_USE_BROKEN_BACKLOGGING
1829          // Enable backlogging if memory is low
1830
1831          if (recordScavenge.retiredActiveMemory > recordScavengeFloor)
1832                  setLowMemory();
1833          else
1834                  clearLowMemory();
1835  #endif // FALCON_USE_BROKEN_BACKLOGGING
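
As a quick sanity check (a sketch, not part of the workaround above), the record-memory limits that recordScavengeFloor is derived from can be read back from the running server; the variable names are the ones set in the configuration below:

# Read back the Falcon record-memory and scavenger settings (sketch):
mysql -h localhost -u root --socket=/tmp/mysql.sock \
      -e "SHOW VARIABLES LIKE 'falcon_record%'"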

I reran the load with falcon_debug_mask=512 to capture Scavenge information. The log verifies that the LowMemory condition does not occur, and I confirmed that no backlog was created.

Here are the Falcon configuration parameters:
default_storage_engine=Falcon
falcon_io_threads=10
falcon_record_memory_max=4096M
falcon_page_cache_size=1650M
falcon_page_size=8k
falcon_serial_log_dir=/data/mysql-6.0/logs
falcon_serial_log_buffers=100
falcon_consistent_read=off
falcon_record_scavenge_floor=50
falcon_record_scavenge_threshold=50
falcon_debug_mask=512
falcon_debug_server=1
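
For reference, and assuming the documented semantics that both scavenge settings are percentages of falcon_record_memory_max, the values above work out to:

# Sketch of the scavenger's working range under the settings above:
#   falcon_record_scavenge_threshold: 50% of 4096M = 2048M (scavenging starts here)
#   falcon_record_scavenge_floor:     50% of 4096M = 2048M (scavenging frees down to here)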

I will attach the .err file after the bug is opened.

How to repeat:
- Download this version of dbt-2: https://intranet.mysql.com/~hakank/dbt2-fixed-20042007.tar.gz

- Configure dbt-2

./configure --with-mysql --enable-nonsp --with-mysql-libs=$MYSQL_HOME/lib/mysql --with-mysql-includes=$MYSQL_HOME/include/mysql --prefix=$HOME/binaries/dbt2-fixed-20042007 CFLAGS='-m64 -O3' CXXFLAGS='-m64 -O3'  LDFLAGS='-lmtmalloc'
gmake

- Generate the 100-warehouse load data:

dbt2-fixed-20042007/src/datagen -w 100 --mysql -d $LOAD_DATA_DIR

- my.cnf:
# The following options will be passed to all MySQL clients
[client]
user=mysql
port            = 3306
socket          = /tmp/mysql.sock

# The MySQL server
[mysqld]

datadir=/data/mysql-6.0/var
port            = 3306
socket          = /tmp/mysql.sock

query_cache_size = 0
max_connections=2049
thread_cache=1024
query_cache_type=0
max_allowed_packet=512M
max_connections=1601

default_storage_engine=Falcon
falcon_io_threads=10
falcon_record_memory_max=4096M
falcon_page_cache_size=1650M
falcon_page_size=8k
falcon_serial_log_dir=/data/mysql-6.0/logs
falcon_serial_log_buffers=100
falcon_consistent_read=off
falcon_record_scavenge_floor=50
falcon_record_scavenge_threshold=50
falcon_debug_mask=512
falcon_debug_server=1
table_open_cache = 2048

- Execute dbt2-fixed-20042007/scripts/mysql/mysql_load_db.sh:

mysql_load_db.sh -dtest -c $MYSQL_HOME/bin/mysql -f$LOAD_DATA_DIR -s /tmp/mysql.sock -hlocalhost -uroot -e FALCON -v
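
If the full load script is not needed, the failing step alone can be reproduced with the single LOAD DATA statement from the description (a sketch reusing the same socket and data path quoted above):

# Load only the order_line table, the step that fails with error 305:
mysql -h localhost -u root --socket=/tmp/mysql.sock test \
      -e "LOAD DATA INFILE '/poola/dbt2-data/100warehouse/order_line.data' INTO TABLE order_line FIELDS TERMINATED BY '\t'"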
[27 Mar 2009 22:06] Vincent Carbone
error log file containing LogScavenge trace and crash information

Attachment: saemrmb2.err (application/octet-stream, text), 15.46 KiB.

[27 Mar 2009 22:06] Vincent Carbone
my.cnf used for testing

Attachment: my.cnf (application/octet-stream, text), 711 bytes.

[27 Mar 2009 22:27] Hakan Küçükyılmaz
I think this is fixed already. Can you please try the latest mysql-6.0-falcon tree from our bzr repository?

Thanks,

Hakan
[30 Mar 2009 20:11] Vincent Carbone
I downloaded and installed 432627.mysql-6.0.11-alpha-solaris10-sparc.tar.gz. I am now able to successfully load the dbt-2 tables and execute a test run. Also, as part of this issue, I installed mysql-6.0.10-alpha on a 16-core Tigerton (tucani) system that dual-boots RHEL 5 and Solaris Nevada. In both cases mysql-6.0.10 successfully loaded the database, but after restarting mysqld any attempt to query the tables caused mysqld to crash with no error messages. Since the T5240 configuration is working, I am assuming that the RHEL 5 and Solaris Nevada x86 configurations will work as well.

One last point: early in the process I set falcon_record_scavenge_floor and falcon_record_scavenge_threshold to 50%. On the Linux configuration I tried lowering the record cache size (falcon_record_memory_max) from 4096M to 1024M and falcon_page_cache_size from 1650M to 512M, and let the record_scavenge parameters default. In this case the load failed with error 305. When I reset the record_scavenge parameters to 50% and kept the cache sizes the same, it completed successfully. Perhaps more investigation should be done on the ideal default values for the record_scavenge parameters.
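
For concreteness, the succeeding Linux configuration described above corresponds to the following my.cnf fragment (a sketch; with the two record_scavenge lines removed, i.e. left at their defaults, the same load failed with error 305):

falcon_record_memory_max=1024M
falcon_page_cache_size=512M
falcon_record_scavenge_floor=50
falcon_record_scavenge_threshold=50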
[30 Mar 2009 20:16] Vladislav Vaintroub
6.0.10 has known problems with recovery, and your inability to access tables after a restart is possibly caused by them. In fact, the README for 6.0.10 discourages users from trying anything with Falcon and encourages them to wait for the next alpha release.
[30 Mar 2009 21:10] Vincent Carbone
This problem is fixed in 6.0.11-alpha.