Bug #43472 insert from trigger fires erroneous thread stack overrun error
Submitted: 7 Mar 2009 14:31    Modified: 25 Oct 2009 11:19
Reporter: Eric Marsh
Status: No Feedback
Category: MySQL Server: Errors    Severity: S2 (Serious)
Version: 5.0.45, 5.0.77    OS: Linux
Assigned to: (none)    CPU Architecture: Any
Tags: trigger, thread stack overrun, innodb

[7 Mar 2009 14:31] Eric Marsh
Description:
I have a fairly simple database (five simple tables, four views) using the innodb engine. I want to create a record in a "sendmail" table when records are inserted into other tables. I created a simple trigger to insert into the sendmail table but when I add a row to the triggering table I get the following:

Thread stack overrun:  6424 bytes used of a 131072 byte stack, and 131072 bytes needed.  Use 'mysqld -O thread_stack=#' to specify a bigger stack.

This seems odd on several levels. Not only is there the issue that the trigger is causing the error, but if you look at the message you can see that only a fraction of the available stack is being used so I'd think that there is plenty of stack still available.
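The wording of the message suggests what is going on: the server does not wait for the stack to actually fill up, it refuses to enter a code path (here, stored-routine/trigger execution) when the bytes already used plus a fixed safety margin would exceed the configured thread_stack. A minimal sketch of that check (the names are illustrative, not MySQL's actual identifiers; the values are the ones from the error message above):

```python
# Illustrative sketch of a pre-emptive stack check: the error fires when
# bytes already in use plus the safety margin the caller requests exceed
# the configured thread_stack -- even if very little is actually in use.

THREAD_STACK = 131072  # 128 KB, the value from the error message

def stack_would_overrun(bytes_used, margin_needed, thread_stack=THREAD_STACK):
    # Overrun is reported before the margin is consumed, not after.
    return bytes_used + margin_needed > thread_stack

# The reported case: only 6424 bytes in use, but the trigger code path
# asks for a 131072-byte margin, so the check fails.
print(stack_would_overrun(6424, 131072))   # True
# With a 256 KB stack the same request would pass.
print(stack_would_overrun(6424, 131072, 262144))   # False
```

Under this reading, "plenty of stack still available" is true, but the margin demanded for trigger execution alone already equals the whole configured stack.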

How to repeat:
I've made a copy of the test database I'm working with at http://files.me.com/emarsh/hpu4yu. Download it, run the SQL file, then try to insert a row into the peopletolocations join table.
[7 Mar 2009 16:28] Valeriy Kravchuk
Thank you for the problem report. Please try to repeat this with a newer version, 5.0.77, and inform us about the results.
[7 Mar 2009 16:53] Eric Marsh
This defect does not occur on version 5.1.31 running on Windows XP. It may take me a while to test 5.0.77  because I need to set a machine up to test on.
[7 Mar 2009 20:08] Valeriy Kravchuk
Please, inform about any results with 5.0.77.
[7 Apr 2009 23:00] Bugs System
No feedback was provided for this bug for over a month, so it is
being suspended automatically. If you are able to provide the
information that was originally requested, please do so and change
the status of the bug back to "Open".
[25 Sep 2009 10:57] Jean-Bernard Valentaten
Just upgraded a MySQL 5.0.45 to 5.0.77 because of this bug. Alas, upgrading didn't change anything; triggers still bring up "General error: 1436 Thread stack overrun:".
If you really need triggers, the only way seems to be a downgrade to 5.0.32, where triggers work like a charm.
[25 Sep 2009 11:19] Valeriy Kravchuk
Jean-Bernard,

If you have a repeatable test case for 5.0.77, please upload it.
[25 Sep 2009 12:39] Jean-Bernard Valentaten
Here's an example that will raise the thread stack overrun on 5.0.45 and 5.0.77:

create table trigger_test (
  id int(11) unsigned auto_increment not null primary key,
  value_1 int(10) not null,
  value_2 int(10) null
);

delimiter //
create trigger trg_bef_ins_trigger_test
  before insert on trigger_test
  for each row
begin
   if NEW.value_2 IS NULL then
     set NEW.value_2 = NEW.value_1 * 2;
   end if;
end//

delimiter ;
insert into trigger_test (value_1) values (1);

HTH,
Jean
[25 Sep 2009 13:35] Valeriy Kravchuk
This is what I've got on Linux:

openxs@suse:/home2/openxs/dbs/5.0> bin/mysql -uroot test
Reading table information for completion of table and column names
You can turn off this feature to get a quicker startup with -A

Welcome to the MySQL monitor.  Commands end with ; or \g.
Your MySQL connection id is 1
Server version: 5.0.86-debug Source distribution

Type 'help;' or '\h' for help. Type '\c' to clear the current input statement.

mysql> create table trigger_test (
    ->   id int(11) unsigned auto_increment not null primary key,
    ->   value_1 int(10) not null,
    ->   value_2 int(10) null
    -> );
Query OK, 0 rows affected (0.01 sec)

mysql> delimiter //
mysql> create trigger trg_bef_ins_trigger_test
    ->   before insert on trigger_test
    ->   for each row
    -> begin
    ->    if NEW.value_2 IS NULL then
    ->      set NEW.value_2 = NEW.value_1 * 2;
    ->    end if;
    -> end//
Query OK, 0 rows affected (0.03 sec)

mysql> insert into trigger_test (value_1) values (1)//
Query OK, 1 row affected (0.04 sec)

mysql> exit
Bye
openxs@suse:/home2/openxs/dbs/5.0> ulimit -a
core file size        (blocks, -c) 0
data seg size         (kbytes, -d) unlimited
file size             (blocks, -f) unlimited
max locked memory     (kbytes, -l) 32
max memory size       (kbytes, -m) unlimited
open files                    (-n) 1024
pipe size          (512 bytes, -p) 8
stack size            (kbytes, -s) unlimited
cpu time             (seconds, -t) unlimited
max user processes            (-u) 4096
virtual memory        (kbytes, -v) unlimited

openxs@suse:/home2/openxs/dbs/5.0> getconf GNU_LIBPTHREAD_VERSION
NPTL 2.3.4
openxs@suse:/home2/openxs/dbs/5.0> uname -a
Linux suse 2.6.11.4-20a-default #1 Wed Mar 23 21:52:37 UTC 2005 i686 i686 i386 GNU/Linux
openxs@suse:/home2/openxs/dbs/5.0>

What am I missing?
[25 Sep 2009 13:43] Jean-Bernard Valentaten
Hi Valeriy,

it seems to me that you're using MySQL 5.0.86. If so, it could be that the bug doesn't occur in that version. The only versions I could test on were:
5.0.32 -> works perfectly
5.0.45 -> raises thread stack error
5.0.77 -> raises thread stack error

What I forgot to mention is that I didn't test as root; I tested using a db-schema user. The user has all privileges granted by "GRANT ALL PRIVILEGES", without any further options. I'm not sure whether that makes a difference (imho it shouldn't), but you never know.

Thx for reacting so fast :)
[25 Sep 2009 15:47] Jean-Bernard Valentaten
So, it seems we found the problem, or at least the cause of this behaviour under 5.0.77. In our my.cnf the thread_stack parameter was set to 128K, while the default is 256K. As soon as we deleted this entry, the triggers started working perfectly.
Still, I'd say there is some kind of bug, as the error message told me that only 6K of a 128K stack was in use, yet another 128K was supposedly needed.
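For anyone hitting the same symptom, a quick way to confirm this is to check the configured stack size on the affected server (a sketch; thread_stack is a real MySQL system variable, but the 256K value below is just the default this comment mentions):

```sql
-- Inspect the per-thread stack size currently in effect (value in bytes).
-- 131072 here means a 128K limit is configured, as in the my.cnf above.
SHOW VARIABLES LIKE 'thread_stack';

-- thread_stack is not dynamic: to change it, edit my.cnf and restart mysqld.
-- [mysqld]
-- thread_stack = 256K
```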
[26 Oct 2009 0:00] Bugs System
No feedback was provided for this bug for over a month, so it is
being suspended automatically. If you are able to provide the
information that was originally requested, please do so and change
the status of the bug back to "Open".