Bug #23200 Cluster INSERT INTO .. ON DUPLICATE KEY UPDATE locks and memory leaks
Submitted: 12 Oct 2006 8:18 Modified: 2 Nov 2006 10:00
Reporter: Bernd Ocklin Email Updates:
Status: Closed Impact on me:
Category:MySQL Cluster: Cluster (NDB) storage engine Severity:S2 (Serious)
Version:4.1, 5.0, 5.1 OS:Linux (FC 5 / Suse 10.1)
Assigned to: Jonas Oreland CPU Architecture:Any

[12 Oct 2006 8:18] Bernd Ocklin
Using the INSERT INTO ... ON DUPLICATE KEY UPDATE SQL statement on MySQL Cluster can lead to a deadlock. In certain situations it will also lead to a memory leak (or rather an explosion in memory use).

How to repeat:
4 ndbds, 1 mysqld, all on Opterons.

CREATE TABLE `xf_sessions` (
  `session_id` varchar(255) character set utf8 collate utf8_bin NOT NULL default '',
  `session_expires` int(10) unsigned NOT NULL default '0',
  `session_data` text collate utf8_unicode_ci,
  PRIMARY KEY  (`session_id`)
) ENGINE=ndbcluster DEFAULT CHARSET=utf8 COLLATE=utf8_unicode_ci;

Issue the exact command 

INSERT INTO xf_sessions (session_id, session_expires, session_data)
VALUES ('2fqdllua7qmhjqq8jrsgetlqb4', '1160584451', '')
ON DUPLICATE KEY UPDATE session_data = '', session_expires = '1160584451';

in parallel with the very same primary key value. Single statements issued sequentially work fine.

To create the situation, jeremy.zawodny.com/mysql/mybench/ helps a lot. I used a concurrency of 10 parallel clients and 100 runs per client.

One statement will succeed. All others fail with a deadlock detection timeout. Eventually the mysqld's memory will explode.

Suggested fix:
We need a way to INSERT into a table and update only particular columns if we hit a duplicate key.

In situations where all table columns can be updated, REPLACE INTO works around the problem.
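
As a sketch of that workaround against the xf_sessions table above (note this is only equivalent when every column is supplied, since REPLACE deletes any existing row with the same primary key and inserts the new one):

```sql
-- Workaround sketch: REPLACE INTO instead of INSERT ... ON DUPLICATE KEY UPDATE.
-- Only usable when all columns can be (re)written, as the report notes.
REPLACE INTO xf_sessions (session_id, session_expires, session_data)
VALUES ('2fqdllua7qmhjqq8jrsgetlqb4', '1160584451', '');
```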
[12 Oct 2006 12:56] Bernd Ocklin
perl script to reproduce the crash; mybench.pm needed

Attachment: crash.pl (application/octet-stream, text), 1.98 KiB.

[12 Oct 2006 13:24] Geert Vanderkelen
Thanks for the report!

Verified using 5.1bk on Linux, using the provided mybench script to run processes in parallel.
Take care when reproducing; the machine might run out of memory easily.
[12 Oct 2006 13:30] Geert Vanderkelen
Verified using the latest 5.0bk too. It looks like I got more lock wait timeouts using 5.0 than using 5.1, but that could have been luck.
[18 Oct 2006 14:49] Bugs System
A patch for this bug has been committed. After review, it may
be pushed to the relevant source trees for release in the next
version. You can access the patch from:


ChangeSet@1.2552, 2006-10-18 16:48:44+02:00, jonas@perch.ndb.mysql.com +1 -0
  ndb - bug#23200
    Make sure postExecute is not run for blobs if AO_IgnoreError
[18 Oct 2006 16:27] Jonas Oreland
Patch fixes mem-problem.

But a possible transaction abort in the case of "insert on dup key update" is not
supported natively by ndb, so this will not be fixed shortly.
[19 Oct 2006 7:28] Bugs System
A patch for this bug has been committed. After review, it may
be pushed to the relevant source trees for release in the next
version. You can access the patch from:


ChangeSet@1.2261, 2006-10-19 09:27:58+02:00, jonas@perch.ndb.mysql.com +1 -0
  ndb - bug#23200
    this changes the lock taken during peek, to decrease the likelihood of transaction abort
[25 Oct 2006 6:42] Jonas Oreland
pushed to 5.0-ndb
[25 Oct 2006 6:46] Jonas Oreland
pushed into 4.1-ndb
[25 Oct 2006 6:57] Jonas Oreland
pushed into 5.1-ndb
[1 Nov 2006 14:28] Jonas Oreland
pushed into 4.1.22
[1 Nov 2006 14:49] Jonas Oreland
pushed into 5.0.29
[1 Nov 2006 14:57] Jonas Oreland
pushed into 5.1.13
[2 Nov 2006 10:00] Jon Stephens
Thank you for your bug report. This issue has been committed to our source repository of that product and will be incorporated into the next release.

If necessary, you can access the source repository and build the latest available version, including the bug fix. More information about accessing the source trees is available at


Documented bug fix for 4.1.22/5.0.29/5.1.13.
[4 Nov 2006 3:32] Jon Stephens
*Fix for 5.0 documented in 5.0.30 Release Notes.*