Bug #14647 strange deadlock on parallel insert
Submitted: 4 Nov 2005 16:39    Modified: 10 Nov 2005 16:10
Reporter: Dmitry Panov
Status: Not a Bug    Impact on me: None
Category: MySQL Server: InnoDB storage engine    Severity: S3 (Non-critical)
Version: 5.0.15/BK 5.0    OS: Linux
Assigned to:    CPU Architecture: Any

[4 Nov 2005 16:39] Dmitry Panov
Description:
A deadlock occurs when I try deleting and then inserting duplicate records in 2 transactions running in parallel. This is not a case where a deadlock should normally happen.

How to repeat:
create table test (a int) engine=InnoDB;
create unique index test_unique on test(a);

then start 2 parallel transactions:

TRANSACTION  1:
begin;
delete from test where a = 0;

TRANSACTION 2:
begin;
delete from test where a = 0;
insert into test values (0); <hangs here>

TRANSACTION 1:
insert into test values (0);

after this I get deadlock error message in TRANSACTION 1 which is aborted:
ERROR 1213 (40001): Deadlock found when trying to get lock; try restarting transaction

Without the preceding "DELETE" query it works fine.
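For anyone reproducing this on a 5.0 server, InnoDB records the transactions and locks involved in the most recent deadlock, and the monitor output can be used to inspect them. A sketch (on 5.1 and later the statement is spelled SHOW ENGINE INNODB STATUS):

```sql
-- Run from any connection after the deadlock fires; the output
-- contains a "LATEST DETECTED DEADLOCK" section listing which
-- locks each transaction held and was waiting for.
SHOW INNODB STATUS;
```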
[4 Nov 2005 16:48] MySQL Verification Team
trans1 >insert into test values (0);
ERROR 1213 (40001): Deadlock found when trying to get lock; try restarting transaction
[4 Nov 2005 17:13] Heikki Tuuri
Dmitry,

after the DELETEs both transactions have a 'gap' lock on the (empty) index interval. The deadlock is then the expected behavior.

You can use the my.cnf option innodb_locks_unsafe_for_binlog to avoid the deadlock, but read the caveats in the manual. You lose the serializability of transactions.
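As a sketch of where the option mentioned above goes, it is a boolean server option set in the [mysqld] section of my.cnf and requires a server restart; the caveats are described in the 5.0 manual:

```ini
# my.cnf fragment -- relaxes next-key (gap) locking for many
# operations; unsafe if the binary log is used for replication
# or point-in-time recovery.
[mysqld]
innodb_locks_unsafe_for_binlog
```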

Regards,

Heikki
[4 Nov 2005 17:41] Dmitry Panov
But still, the deadlock should not happen at any isolation level: there is simply no reason for it. I ran a similar test on PostgreSQL and Oracle and got the expected result: the second insert hung until I either committed the other transaction (and then got a duplicate key error) or rolled it back (and the insert succeeded).

I would still consider this an InnoDB bug.
[5 Nov 2005 2:03] Heikki Tuuri
Dmitry,

I think you did not test Oracle or PostgreSQL in the SERIALIZABLE transaction isolation level. For PostgreSQL, such isolation level probably does not exist at all. InnoDB by default executes all data-modifying SQL statements using the SERIALIZABLE isolation level. Only consistent read SELECTs happen on the REPEATABLE READ level.

Oracle and PostgreSQL in their default mode do the locking like InnoDB does when innodb_locks_unsafe_for_binlog is defined.

Regards,

Heikki
[5 Nov 2005 11:37] Dmitry Panov
I did test with the SERIALIZABLE isolation level. It does exist in PostgreSQL (see http://www.postgresql.org/docs/current/static/transaction-iso.html).

The current MySQL documentation states that the default isolation level for InnoDB is REPEATABLE READ.

OK, I tried "set transaction isolation level read committed" in MySQL and still got the deadlock. I tested the isolation behaviour and it matched the "read committed" pattern, so the setting definitely took effect.
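For reference, a sketch of the standard syntax for changing the level per connection before rerunning the test (SET SESSION affects only subsequent transactions in the current connection):

```sql
-- Change the isolation level for this connection, then rerun
-- the repro; the deadlock is still observed at READ COMMITTED.
SET SESSION TRANSACTION ISOLATION LEVEL READ COMMITTED;
BEGIN;
DELETE FROM test WHERE a = 0;
INSERT INTO test VALUES (0);
```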

I must note again that there was no deadlock, even in SERIALIZABLE mode, on either Oracle or PostgreSQL.
[10 Nov 2005 15:41] MySQL Verification Team
This is truly not a bug; it is the current behaviour.

Some future 5.0 releases will see improvements so that locks are set not on all examined rows, but only on affected rows.

This will be effective only on installations that do not use binary logging, or that use row-level logging.

Your comments on the different behaviour of InnoDB mostly relate to gap locks, which are necessitated by binary logging.
[10 Nov 2005 16:10] Dmitry Panov
Well, it would have been OK if the second "DELETE" had hung waiting for the lock. The problem is that this case results in a deadlock, which is difficult for me to understand.