Bug #2034 Heap tables improper limit enforced.
Submitted: 7 Dec 2003 7:04    Modified: 9 Dec 2003 21:20
Reporter: Peter Zaitsev (Basic Quality Contributor)
Status: Closed
Category: MySQL Server    Severity: S3 (Non-critical)
Version: 4.0.16    OS: Linux (linux)
Assigned to: Paul DuBois    CPU Architecture: Any

[7 Dec 2003 7:04] Peter Zaitsev
Description:
I've run into rather strange behavior of HEAP tables today:

mysql> set max_heap_table_size=2000000000;
Query OK, 0 rows affected (0.00 sec)

mysql> insert into sbtest1 select * from sbtest limit 5000000;
ERROR 1114: The table 'sbtest1' is full
mysql> select count(*) from sbtest1;
+----------+
| count(*) |
+----------+
|    90112 |
+----------+
1 row in set (0.00 sec)

mysql> truncate table sbtest1;
Query OK, 0 rows affected (0.00 sec)

mysql> insert into sbtest1 select * from sbtest limit 5000000;
Query OK, 5000000 rows affected (49.00 sec)
Records: 5000000  Duplicates: 0  Warnings: 0

As you can see, until you truncate the table, the larger limit you set previously does not seem to take effect.

Note that the problem happens only with an existing table (I started MySQL with this table already present). It does not happen if you create the table after setting the limit.
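
A minimal sketch of the workaround implied above, assuming (as the session suggests) that the new limit is picked up only when the HEAP table is created or emptied:

set max_heap_table_size=2000000000;
-- TRUNCATE recreates the HEAP table, so the new limit applies from here on
truncate table sbtest1;
insert into sbtest1 select * from sbtest limit 5000000;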

How to repeat:
 CREATE TABLE `sbtest` (
  `id` int(10) unsigned NOT NULL auto_increment,
  `k` int(10) unsigned NOT NULL default '0',
  `c` char(100) NOT NULL default '',
  `pad` char(60) NOT NULL default '',
  PRIMARY KEY  (`id`),
  KEY `k` (`k`)
) TYPE=InnoDB;

CREATE TABLE `sbtest1` (
  `id` int(10) unsigned NOT NULL default '0',
  `k` int(10) unsigned NOT NULL default '0',
  `c` char(100) NOT NULL default '',
  `pad` char(60) NOT NULL default '',
  PRIMARY KEY  (`id`),
  KEY `k` (`k`)
) TYPE=HEAP;

Table sbtest is filled with 200,000,000 random records, so I do not attach the dump (a generator sketch follows the sample rows below):

mysql> select * from sbtest limit 5;
+----+---+-----------------------------------------------------------------------------------------------------+----------------------------------------------------+
| id | k | c                                                                                                   | pad                                                |
+----+---+-----------------------------------------------------------------------------------------------------+----------------------------------------------------+
|  1 | 1 |                                                                                                     | aaaaaaaaaaffffffffffrrrrrrrrrreeeeeeeeeeyyyyyyyyyy |
|  2 | 2 | 756595865-503195482-789958360-469748395-584853264-765379612-950566206-746637184-852373297-918996466 | aaaaaaaaaaffffffffffrrrrrrrrrreeeeeeeeeeyyyyyyyyyy |
|  3 | 1 | 160133130-540661298-238328643-697850318-29334526-831556288-983609003-409046626-546002114-1038160378 | aaaaaaaaaaffffffffffrrrrrrrrrreeeeeeeeeeyyyyyyyyyy |
|  4 | 4 | 883700700-716853438-312857495-168583100-591164591-691415761-525948290-443247628-795195428-784406661 | aaaaaaaaaaffffffffffrrrrrrrrrreeeeeeeeeeyyyyyyyyyy |
|  5 | 1 |                                                                                                     | aaaaaaaaaaffffffffffrrrrrrrrrreeeeeeeeeeyyyyyyyyyy |
+----+---+-----------------------------------------------------------------------------------------------------+----------------------------------------------------+
5 rows in set (0.00 sec)
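
Since the dump is not attached, here is a hedged sketch (not the reporter's original generator) of one way to fill sbtest with similar random rows; MySQL 4.0 has no stored procedures, so the doubling INSERT ... SELECT is simply re-run until the table is large enough:

-- seed one row; id is auto_increment
INSERT INTO sbtest (k, c, pad)
VALUES (1, '', 'aaaaaaaaaaffffffffffrrrrrrrrrreeeeeeeeeeyyyyyyyyyy');

-- doubles the table each time it is run; repeat until sbtest is big enough
INSERT INTO sbtest (k, c, pad)
SELECT FLOOR(1 + RAND() * 4),
       CONCAT(FLOOR(RAND() * 1000000000), '-',
              FLOOR(RAND() * 1000000000), '-',
              FLOOR(RAND() * 1000000000)),
       pad
FROM sbtest;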
[8 Dec 2003 9:55] Dean Ellis
This may just need to be clarified in the documentation, although it does say:

"max_heap_table_size Don't allow creation of heap tables bigger than this."

with "creation" being the key word.

Otherwise something would need to trigger recalculation of max_rows.
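
A hedged sketch of what forcing that recalculation could look like on an existing table (an illustration only, not part of the original exchange; it relies on ALTER TABLE rebuilding the table in 4.0 and thereby recreating it under the current variable value):

SET max_heap_table_size=2000000000;
-- the rebuild should make sbtest1 adopt the new limit without emptying it
ALTER TABLE sbtest1 TYPE=HEAP;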
[9 Dec 2003 21:20] Paul DuBois
Thank you for your bug report. This issue has been addressed in the
documentation. The updated documentation will appear on our website
shortly, and will be included in the next release of the relevant
product(s).

Additional info:

I am closing this bug report because the manual now
more precisely specifies the meaning of max_heap_table_size.
If the actual server behavior is deemed not to be what should
happen, then this report should be reopened. But in that case,
it should not be assigned to me. :-)