Bug #89897 Incorrect get_lock behaviour
Submitted: 3 Mar 2018 14:58    Modified: 4 Mar 2018 10:00
Reporter: Damjan Vujnovic      Email Updates:
Status: Closed                 Impact on me: None
Category: MySQL Server         Severity: S2 (Serious)
Version: 5.7.21                OS: Any (MacOS & Linux)
Assigned to:                   CPU Architecture: x86
Tags: GET_LOCK

[3 Mar 2018 14:58] Damjan Vujnovic
Description:
A small part of our logic cannot be executed concurrently, so we are using the GET_LOCK() function to serialize the execution of that critical part (rather than using the SERIALIZABLE transaction isolation level throughout the app). Also, not all transactions have to be serialized against each other - there are 'clusters' (groups) of transactions that cannot run concurrently, and the grouping is data driven.
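
To illustrate the pattern (the lock names below are made up for this sketch; each 'cluster' maps to its own lock name):

-- session 1: enter the critical section for cluster 42
select get_lock('bug_repro_cluster_42', 10);   -- returns 1
-- ... work that must not run concurrently within this cluster ...
do release_lock('bug_repro_cluster_42');

-- session 2, running at the same time: blocks for up to 10 seconds until
-- session 1 releases the lock (returns 1), or times out (returns 0)
select get_lock('bug_repro_cluster_42', 10);

-- a transaction belonging to a different cluster uses a different lock
-- name and is therefore not serialized against the two sessions above
select get_lock('bug_repro_cluster_17', 10);   -- returns 1 immediately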

The behaviour we're seeing is that, despite using GET_LOCK()/RELEASE_LOCK(), the transactions are still executed concurrently. The repro in the "How to repeat" section is the simplest one I could come up with (our real case is more complicated).

I believe that, given how GET_LOCK()/RELEASE_LOCK() are used, the error "ERROR 1062 (23000): Duplicate entry '921083' for key 'PRIMARY'" should never occur.

I reproduced this with v5.7.21 on both MacOS and Linux, as well as with v5.7.13 on Linux.

Many thanks,
damjan

How to repeat:
Create the following schema:

drop database if exists bug_repro;

create database bug_repro;

create table bug_repro.my_table(
  counter int not null primary key
);

insert into bug_repro.my_table values (0);

drop procedure if exists bug_repro.my_proc;

delimiter $$
create procedure bug_repro.my_proc()
proc_my_proc:begin
  declare l_counter int;
  declare l_lock_name varchar(255);

  declare exit handler for sqlexception
  begin
    do release_lock(l_lock_name);
    resignal;
  end;

  set l_lock_name = 'bug_repro';

  while true do
    start transaction;

    if get_lock(l_lock_name, 10) != 1 then
      leave proc_my_proc;
    end if;

    select max(counter)
    into l_counter
    from bug_repro.my_table;

    insert into bug_repro.my_table values (l_counter + 1);

    do release_lock(l_lock_name);

    commit;

    do sleep(0.1);
  end while;
end$$
delimiter ;

Then, from 4 different sessions, run:

mysql -e "call bug_repro.my_proc()"

After a while, one of the sessions will fail with the following error:

ERROR 1062 (23000): Duplicate entry '921083' for key 'PRIMARY'
[4 Mar 2018 10:00] Damjan Vujnovic
Actually, this is not a bug; it is caused by the order of the START TRANSACTION and GET_LOCK statements. Because the lock is acquired inside the transaction and released before COMMIT, another session can grab the lock, read the old maximum (the newly inserted row is not yet committed and therefore not visible to it), and try to insert the same value.
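
For completeness, a sketch of the corrected ordering (the procedure name my_proc_fixed is made up for this sketch): take the lock before START TRANSACTION and release it only after COMMIT, so the new row is already committed and visible before any other session can enter the critical section.

drop procedure if exists bug_repro.my_proc_fixed;

delimiter $$
create procedure bug_repro.my_proc_fixed()
proc_my_proc:begin
  declare l_counter int;
  declare l_lock_name varchar(255);

  declare exit handler for sqlexception
  begin
    do release_lock(l_lock_name);
    resignal;
  end;

  set l_lock_name = 'bug_repro';

  while true do
    -- take the lock before opening the transaction ...
    if get_lock(l_lock_name, 10) != 1 then
      leave proc_my_proc;
    end if;

    start transaction;

    select max(counter)
    into l_counter
    from bug_repro.my_table;

    insert into bug_repro.my_table values (l_counter + 1);

    commit;

    -- ... and release it only after COMMIT, once the new row is visible
    do release_lock(l_lock_name);

    do sleep(0.1);
  end while;
end$$
delimiter ;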