Bug #15667: The total number of locks exceeds the lock table size
Submitted: 11 Dec 2005 13:51    Modified: 19 Oct 2008 7:30
Reporter: Kevin xu
Status: Not a Bug
Category: MySQL Server: InnoDB storage engine    Severity: S1 (Critical)
Version: 4.1, 6.0.4    OS: Windows (Win XP)
CPU Architecture: Any

[11 Dec 2005 13:51] Kevin xu
Description:
When loading a large (2 GB) data file into the database via MySQL Query Browser (version 1.1.13), it fails with this error message: "The total number of locks exceeds the lock table size"

How to repeat:
Load a large (2 GB) data file into the database via MySQL Query Browser (version 1.1.13); it fails with the error message: "The total number of locks exceeds the lock table size"
[11 Dec 2005 14:36] Valeriy Kravchuk
Thank you for the problem report. Please tell us the exact version of the MySQL server used (4.1.x). Your server variable values would be useful too, so please send the results of the SHOW VARIABLES statement (or your my.ini file).
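
For reference, a minimal sketch of the requested statement (the LIKE pattern is just an illustrative way to narrow the output to the InnoDB settings):

SHOW VARIABLES;
-- or, narrowed to the InnoDB settings only:
SHOW VARIABLES LIKE 'innodb%';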
[12 Jan 2006 0:00] Bugs System
No feedback was provided for this bug for over a month, so it is
being suspended automatically. If you are able to provide the
information that was originally requested, please do so and change
the status of the bug back to "Open".
[25 May 2006 20:22] Zack Exley
The exact same thing just happened to me, but on Mac OS X, on a MacBook Pro, using DBVis. I'm wondering, though: is it really a bug, or are we just doing something wrong?
[20 May 2007 16:14] noformsplease@mysql.com
Same thing on Linux, MySQL 5.

insert into archive select * from largetable where conditions...;

The archive table is MyISAM with a fixed row format and no keys.
largetable is InnoDB and has 6 million short rows; the conditions match about 20% of them.
[29 Jun 2007 14:08] Nils Juenemann
Same with MySQL 5.1.19-beta

root@xyz[abc]> alter table messages_user_inbox partition by hash(message_id) partitions 50;
ERROR 1206 (HY000): The total number of locks exceeds the lock table size

The error occurs after 3 hours...

*************************** 1. row ***************************
           Name: messages_user_inbox
         Engine: InnoDB
        Version: 10
     Row_format: Compact
           Rows: 6272490
 Avg_row_length: 428
    Data_length: 2688548864
Max_data_length: 0
   Index_length: 405438464
      Data_free: 0
 Auto_increment: 50374572
    Create_time: 2007-06-28 22:18:33
    Update_time: NULL
     Check_time: NULL
      Collation: utf8_general_ci
       Checksum: NULL
 Create_options:
        Comment: InnoDB free: 17646592 kB
[4 Jan 2008 0:10] Jason Eacott
This generates the same "total number of locks exceeds the lock table size" error for a table of a few hundred thousand rows, just trying to merge two columns:

update mytable set myidcol=concat(myidcol,colb);
[8 Jan 2008 15:42] Valeriy Kravchuk
This looks like an InnoDB-related bug. Any repeatable test case for version 5.x.y would be greatly appreciated.
[9 Feb 2008 0:00] Bugs System
No feedback was provided for this bug for over a month, so it is
being suspended automatically. If you are able to provide the
information that was originally requested, please do so and change
the status of the bug back to "Open".
[21 Feb 2008 18:13] Dan McDonald
I get this error when trying to delete more than 10,000 records from a 6.5 million row dataset. There are multiple `on delete cascade` constraints associated with the records being deleted.

Running 5.0.24 on Mandriva Corporate Server 4.0

I'd love to be able to delete all of my stale data, but it's incredibly slow to delete in its current state.
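
A common workaround, though not one suggested in this thread, is to delete in batches so that no single transaction has to hold locks on millions of rows at once (the cascading deletes still fire for each batch). A sketch, with hypothetical table and column names:

-- Repeat until zero rows are affected; under autocommit each run is its
-- own transaction, so the number of locks held at once stays bounded.
DELETE FROM stale_records
 WHERE created_at < '2007-01-01'
 LIMIT 10000;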
[8 Aug 2008 12:54] vijay babu
I have 10 million rows in one table. When I try to delete 400,000 (4 lakh) of those rows, I get "The total number of locks exceeds the lock table size".

Please advise.
[10 Aug 2008 6:24] Valeriy Kravchuk
All reporters:

Please increase your innodb_buffer_pool_size at least tenfold if you have the default size:

| innodb_buffer_pool_size         | 8388608

Check whether it makes any difference.
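
A minimal sketch of that change, assuming the option lives in my.cnf / my.ini (in servers of this era the variable is not dynamic, so a restart is required; the 256M figure is only an illustrative value, not a recommendation from this thread):

[mysqld]
# Default is 8388608 bytes (8 MB); raise it well above that.
innodb_buffer_pool_size = 256M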
[10 Sep 2008 23:00] Bugs System
No feedback was provided for this bug for over a month, so it is
being suspended automatically. If you are able to provide the
information that was originally requested, please do so and change
the status of the bug back to "Open".
[10 Sep 2008 23:14] MySQL Verification Team
Reporters: if you are able to provide the feedback requested by Valeriy, please comment here and reopen the bug report. Thanks in advance.
[11 Oct 2008 23:00] Bugs System
No feedback was provided for this bug for over a month, so it is
being suspended automatically. If you are able to provide the
information that was originally requested, please do so and change
the status of the bug back to "Open".
[18 Oct 2008 16:07] yair wein
I encountered the same problem, and after applying Valeriy's solution it was fixed.

Server version: 5.0.51a-3ubuntu5.1
OS: Ubuntu Server 8.04, 64-bit
[19 Oct 2008 7:30] Valeriy Kravchuk
So, this is not a bug. You should just set server variables properly.
[29 Apr 2009 6:50] kasun Ekanayake
I am getting the same error. I have an InnoDB table:

           Name: user
         Engine: InnoDB
        Version: 10
     Row_format: Compact
           Rows: 4289383
 Avg_row_length: 517
    Data_length: 2220900352
Max_data_length: 0
   Index_length: 2933047296
      Data_free: 2291138560
 Auto_increment: 63300260
    Create_time: 2009-04-28 04:32:29
    Update_time: NULL
     Check_time: NULL
      Collation: latin1_swedish_ci
       Checksum: NULL
 Create_options:
        Comment:

When I try to change the engine to ndbcluster, it gives this error. I execute the ALTER TABLE on one of the SQL nodes in the cluster; that SQL node has this InnoDB table.
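
Presumably the statement in question looked like this (reconstructed from the description above; the table name is taken from the status output):

ALTER TABLE user ENGINE=ndbcluster;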

please help.
[1 Sep 2009 21:43] Dana Holland
I tried increasing innodb_buffer_pool_size to 2 GB and it's still timing out on me. I've even tried using phpMyAdmin to go in and delete a single record, and it times out. In our case this is running on a Red Hat server, not Windows. This is 60 GB worth of data that I can't get rid of.
[18 Aug 2010 18:29] Pankaj Jain
Not a permanent solution, but DBAs might want to check whether the underlying table has gone through lots of truncates/deletes; rebuilding the table will reduce Data_length and Avg_row_length, which means fewer locks are required. I had a similar problem, and I just did ALTER TABLE tablename ENGINE=InnoDB;, which rebuilt the table, and now inserts work without the "The total number of locks exceeds the lock table size" error.
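
A sketch of that rebuild, with a hypothetical table name; comparing SHOW TABLE STATUS before and after confirms whether Data_length actually shrank:

-- Check the size before the rebuild.
SHOW TABLE STATUS LIKE 'mytable';
-- Rebuild the table by copying its rows into a fresh tablespace.
ALTER TABLE mytable ENGINE=InnoDB;
-- Verify that Data_length and Avg_row_length went down.
SHOW TABLE STATUS LIKE 'mytable';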
[3 Jan 2011 21:43] damon wood
I had the same problem selecting a few million records out of one table into an archive table in an InnoDB database. I bumped the default value of innodb_buffer_pool_size from 8388608 to 83886080 and that solved my problem; however, going from 8 MB to 16 MB didn't work, and neither did 32 MB or 64 MB. I had to bump it up 10x for it to work.

There has to be a way of calculating an adequate value for this based on the size of your dataset. I would love to see that formula.
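
There is no official formula that I know of, but a hedged rule of thumb: InnoDB allocates its lock table inside the buffer pool, and record locks are stored roughly as one lock struct per locked page with a per-row bitmap, so the memory needed grows with the number of pages a statement touches rather than with the raw row count. Assuming the default 16 KB page size, a back-of-the-envelope upper bound on the pages a full-table statement could lock can be read from information_schema (the table name is hypothetical):

-- Rough estimate only, not an official formula: approximate number of
-- 16 KB data pages in a table.
SELECT table_name,
       CEIL(data_length / 16384) AS approx_pages
  FROM information_schema.tables
 WHERE table_name = 'mytable';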