Bug #19610 Some client processes hang in kill status forever
Submitted: 8 May 2006 14:06    Modified: 21 Apr 2010 8:57
Reporter: Oli Sennhauser
Status: Duplicate
Category: MySQL Server: General    Severity: S3 (Non-critical)
Version: 4.1.14, 5.1.30    OS: Any (OpenVMS)
CPU Architecture: Any

[8 May 2006 14:06] Oli Sennhauser
Description:
Some processes have been hanging for hours without doing anything (longer than wait_timeout and interactive_timeout). When they are killed manually (KILL <id>) they remain in the Killed state forever, and the database has to be shut down hard.

How to repeat:
Reproducible at the customer site (they can trigger it, but not for every session at will).
[11 May 2006 12:23] Valeriy Kravchuk
Please send at least the SHOW PROCESSLIST output from when you see that hang. Is there anything unusual in the error log for that time period? Do you use InnoDB?
[11 May 2006 12:47] Oli Sennhauser
Yes, the customer uses InnoDB.
[11 May 2006 13:04] Valeriy Kravchuk
Please send the my.cnf settings and describe the hardware used.

Could it be a long-running transaction that is rolling back? SHOW PROCESSLIST output from the right moment will help to figure that out.
[12 May 2006 9:08] Kerry Gibbings
Hardware:

HP RX2600, 1 x Itanium2 1.3 GHz (3 MB cache)
2 GB RAM
HP 36.4 GB Ultra-SCSI disk
[29 May 2006 6:22] Oli Sennhauser
Please find below evidence of hung "killed" processes. You will also see that processes 588 and 590 are not being timed out. There is nothing relevant in the error log; the last entry is:

/KURT$DKB200/SYS0/SYSCOMMON/MYSQL/VMS/BIN/mysqld.exe: ready for connections.
Version: '4.1.14-log'  socket: ''  port: 3306  Source distribution
 
mysql> show variables like '%timeout%';
+--------------------------+-------+
| Variable_name            | Value |
+--------------------------+-------+
| connect_timeout          | 5     |
| delayed_insert_timeout   | 300   |
| innodb_lock_wait_timeout | 50    |
| interactive_timeout      | 28800 |
| net_read_timeout         | 30    |
| net_write_timeout        | 60    |
| slave_net_timeout        | 3600  |
| sync_replication_timeout | 0     |
| wait_timeout             | 28800 |
+--------------------------+-------+

+------+------+------------------+------------+---------+--------+-------+------------------+
| Id   | User | Host             | db         | Command | Time   | State | Info             |
+------+------+------------------+------------+---------+--------+-------+------------------+
| 588  | root | 10.123.1.50:1174 | millmaster | Sleep   | 240914 |       |                  |
| 590  | root | 10.123.1.50:1176 | millmaster | Sleep   | 240920 |       |                  |
| 1017 | root | localhost:51047  | millmaster | Sleep   | 224771 |       |                  |
| 4827 | root | localhost:56385  |            | Sleep   | 22     |       |                  |
| 4828 | root | localhost:56386  |            | Query   | 0      |       | show processlist |
+------+------+------------------+------------+---------+--------+-------+------------------+

mysql> kill 588;
Query OK, 0 rows affected (0.00 sec)

mysql> kill 590;
Query OK, 0 rows affected (0.00 sec)

+------+------+------------------+------------+---------+--------+-------+------------------+
| Id   | User | Host             | db         | Command | Time   | State | Info             |
+------+------+------------------+------------+---------+--------+-------+------------------+
| 588  | root | 10.123.1.50:1174 | millmaster | Killed  | 241034 |       |                  |
| 590  | root | 10.123.1.50:1176 | millmaster | Killed  | 241040 |       |                  |
| 1017 | root | localhost:51047  | millmaster | Sleep   | 224891 |       |                  |
| 4827 | root | localhost:56385  |            | Sleep   | 9      |       |                  |
| 4830 | root | localhost:56390  |            | Query   | 0      |       | show processlist |
+------+------+------------------+------------+---------+--------+-------+------------------+

+------+------+------------------+------------+---------+--------+-------+------------------+
| Id   | User | Host             | db         | Command | Time   | State | Info             |
+------+------+------------------+------------+---------+--------+-------+------------------+
| 588  | root | 10.123.1.50:1174 | millmaster | Killed  | 241155 |       |                  |
| 590  | root | 10.123.1.50:1176 | millmaster | Killed  | 241161 |       |                  |
| 1017 | root | localhost:51047  | millmaster | Sleep   | 225012 |       |                  |
| 4832 | root | localhost:56392  |            | Query   | 0      |       | show processlist |
+------+------+------------------+------------+---------+--------+-------+------------------+
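While a bug like this is open, output such as the above can at least be monitored automatically. The following is a minimal sketch (not part of the original report) that parses the tab-separated output of `mysql -Be "SHOW PROCESSLIST"` and flags threads stuck in the Killed state, or sleeping longer than the wait_timeout value shown above; the column layout is assumed to match the PROCESSLIST output in this report.

```python
# Sketch: flag threads stuck in the "Killed" state or sleeping past wait_timeout.
# Assumed input: tab-separated output of `mysql -Be "SHOW PROCESSLIST"`,
# with columns Id, User, Host, db, Command, Time, State, Info.

WAIT_TIMEOUT = 28800  # matches the wait_timeout setting shown above

def stuck_threads(processlist_tsv, wait_timeout=WAIT_TIMEOUT):
    """Return (id, command, seconds) tuples for suspicious threads."""
    suspicious = []
    lines = processlist_tsv.strip().splitlines()
    for line in lines[1:]:  # skip the header row
        cols = line.split("\t")
        thread_id, command, seconds = cols[0], cols[4], int(cols[5])
        if command == "Killed" or (command == "Sleep" and seconds > wait_timeout):
            suspicious.append((thread_id, command, seconds))
    return suspicious

# Sample rows taken from the PROCESSLIST output in this report:
sample = """Id\tUser\tHost\tdb\tCommand\tTime\tState\tInfo
588\troot\t10.123.1.50:1174\tmillmaster\tKilled\t241034\t\t
1017\troot\tlocalhost:51047\tmillmaster\tSleep\t224891\t\t
4828\troot\tlocalhost:56386\t\tQuery\t0\t\tshow processlist"""

print(stuck_threads(sample))
# -> [('588', 'Killed', 241034), ('1017', 'Sleep', 224891)]
```

Such a check only detects the zombie threads; as the report shows, it cannot clear them, since even KILL leaves them in the Killed state.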
[11 Jun 2006 15:08] Valeriy Kravchuk
Can you try to repeat this with a newer version of the MySQL server, 4.1.20? What version of OpenVMS is used?
[14 Jun 2006 6:35] Oli Sennhauser
They will probably use 5.0 or 5.1 anyway... So let's forget about it.
[14 Jun 2006 9:19] Valeriy Kravchuk
Please, reopen this bug report in case of similar problems with versions 5.x.y.
[14 Jun 2006 9:40] Oli Sennhauser
OK. I will do so.
[15 Jul 2006 23:00] Bugs System
No feedback was provided for this bug for over a month, so it is
being suspended automatically. If you are able to provide the
information that was originally requested, please do so and change
the status of the bug back to "Open".
[30 Dec 2008 14:10] Pavel Baranov
We are experiencing the same problem with the newest 5.1.30.

Unfortunately we cannot reproduce the behavior at will, but this is how mytop looks:

MySQL on 10.1.0.105 (5.1.30-log)                                                                                                                up 4+09:06:07 [09:06:29]
 Queries: 179.2M  qps:  497 Slow:     0.0         Se/In/Up/De(%):    40/00/00/00
             qps now:  679 Slow qps: 0.0  Threads:   37 (   3/  25) 44/00/00/00
 Key Efficiency: 100.0%  Bps in/out:   0.0/  5.9   Now in/out:   8.3/ 2.2k

        Id      User         Host/IP         DB      Time    Cmd Query or State
        --      ----         -------         --      ----    --- ----------
 1956956 system us                               133863 Connec Waiting for master to send event
 3209287 system us                    peekyou     27641 Connec InSert into region_global_tags_222 set profile_id = 165628419, global_tag_id = 934, global_tag_type = '
 3874390 peekyou_a           web02    peekyou     27364 Killed Analyze table region_global_tags_222
 4456322   peekyou           web03    peekyou       120  Query SELECT DISTINCT r.id AS id,r.name AS name FROM region_global_tags_222 AS pgt jOIN search_city_3844 AS s
 4441022 peekyou_a           web01       test         0  Query show full processlist
 4458881   peekyou           web02    peekyou         0  Query select p.*, fm.name as first_name, ln.name as last_name, mn.name as middle_name from profiles as p inne

After 20 minutes or so the query might disappear, unless there are 500 other queries running, in which case a complete server shutdown is needed!
[8 Jan 2009 12:38] Roman Krewer
We've got the same problem...

Sometimes an UPDATE query seems to hang. Killing its thread results in such a zombie thread.

This happens about every few days or weeks. I will post the processlist the next time it happens.

Server Version: 5.1.29
openSUSE Linux 10.1
[18 Feb 2009 14:32] Miguel Solorzano
Bug http://bugs.mysql.com/bug.php?id=42907 presents the same behavior, though with MyISAM tables.
[14 Aug 2009 6:20] Sveta Smirnova
Pavel, Roman,

thank you for the feedback.

Do you use the InnoDB storage engine? Do you use transactions with more than one statement (i.e. with autocommit=0 or with BEGIN)?

Pavel, please send the output of SHOW CREATE TABLE region_global_tags_222.

Roman, in addition to SHOW PROCESSLIST, please send the output of SHOW CREATE TABLE for all involved tables.
[14 Sep 2009 23:00] Bugs System
No feedback was provided for this bug for over a month, so it is
being suspended automatically. If you are able to provide the
information that was originally requested, please do so and change
the status of the bug back to "Open".
[21 Apr 2010 8:57] Sveta Smirnova
There is a verified bug, #52528, about the very same problem, so I am closing this one as a duplicate.