Bug #39091 | A process gets stuck in the END state and never dies, locking a table forever. | ||
---|---|---|---|
Submitted: | 28 Aug 2008 12:57 | Modified: | 14 Jan 2013 18:04 |
Reporter: | francesco pinta | Email Updates: | |
Status: | Can't repeat | Impact on me: | |
Category: | MySQL Server: Query Cache | Severity: | S1 (Critical) |
Version: | 5.1.26, 5.1.31, 5.1.45, 5.1.49 | OS: | Any (w2k,sles10) |
Assigned to: | Matthew Lord | CPU Architecture: | Any |
Tags: | end state, KILL, lock, PROCESSLIST |
[28 Aug 2008 12:57]
francesco pinta
[28 Aug 2008 12:59]
francesco pinta
Output of the "SHOW PROCESSLIST"
Attachment: showprocesslist.txt (text/plain), 8.99 KiB.
[28 Aug 2008 16:20]
Sveta Smirnova
Thank you for the report. Please provide the output of SHOW CREATE TABLE inventario.
[29 Aug 2008 10:36]
francesco pinta
Here is the 'SHOW CREATE TABLE':

mysql> show create table inventario \G
*************************** 1. row ***************************
       Table: inventario
Create Table: CREATE TABLE `inventario` (
  `entita` varchar(255) NOT NULL DEFAULT '',
  `sede_old` varchar(255) DEFAULT NULL,
  `rete` varchar(255) NOT NULL DEFAULT '',
  `piattaforma` varchar(255) NOT NULL DEFAULT '',
  `servizio` varchar(255) NOT NULL DEFAULT '',
  `apparato` varchar(255) NOT NULL DEFAULT '',
  `misura` varchar(255) NOT NULL DEFAULT '',
  `workdir` varchar(255) NOT NULL DEFAULT '',
  `mib` text NOT NULL,
  `mib_in` text NOT NULL,
  `mib_out` text NOT NULL,
  `text_url` varchar(255) DEFAULT NULL,
  `php_in` text,
  `php_out` text,
  `id_aggregato` text NOT NULL,
  `descrizione` text NOT NULL,
  `opzioni` varchar(255) NOT NULL DEFAULT '',
  `periodoc` int(11) NOT NULL DEFAULT '0',
  `chkContatore` tinyint(3) unsigned NOT NULL DEFAULT '0',
  `chkToBits` tinyint(3) unsigned NOT NULL DEFAULT '0',
  `chkPerMinute` tinyint(3) unsigned NOT NULL DEFAULT '0',
  `ylegend` varchar(255) NOT NULL DEFAULT '',
  `yscale` double NOT NULL DEFAULT '0',
  `id` bigint(20) unsigned NOT NULL AUTO_INCREMENT,
  `localsnmp` tinyint(4) NOT NULL DEFAULT '0',
  `id_modulo` bigint(20) NOT NULL DEFAULT '0',
  `disabilitato` tinyint(4) NOT NULL DEFAULT '0',
  `lastpoll` bigint(20) NOT NULL DEFAULT '0',
  `lastpoll_in` double NOT NULL DEFAULT '0',
  `lastpoll_out` double NOT NULL DEFAULT '0',
  `last_time` bigint(20) NOT NULL DEFAULT '0',
  `last_inval` double NOT NULL DEFAULT '0',
  `last_outval` double NOT NULL DEFAULT '0',
  `last_t30m` bigint(20) NOT NULL DEFAULT '0',
  `avg_30m_in` double NOT NULL DEFAULT '0',
  `avg_30m_out` double NOT NULL DEFAULT '0',
  `last_t2h` bigint(20) NOT NULL DEFAULT '0',
  `avg_2h_in` double NOT NULL DEFAULT '0',
  `avg_2h_out` double NOT NULL DEFAULT '0',
  `last_t24h` bigint(20) NOT NULL DEFAULT '0',
  `avg_24h_in` double NOT NULL DEFAULT '0',
  `avg_24h_out` double NOT NULL DEFAULT '0',
  `polling_retry` int(11) NOT NULL DEFAULT '1',
  `web_url` text,
  `web_header` text,
  `web_group` varchar(50) DEFAULT NULL,
  `web_timemin` double NOT NULL DEFAULT '0',
  `web_timemax` double NOT NULL DEFAULT '0',
  `web_timeavg` double NOT NULL DEFAULT '0',
  `web_pollnum` bigint(20) NOT NULL DEFAULT '0',
  `web_proxyip` varchar(25) NOT NULL DEFAULT '',
  `web_proxyport` int(11) NOT NULL DEFAULT '0',
  `web_pok` int(11) NOT NULL DEFAULT '0',
  `web_pwarn` int(11) NOT NULL DEFAULT '0',
  `web_pko` int(11) NOT NULL DEFAULT '0',
  `script` varchar(255) NOT NULL DEFAULT '',
  `script_param` varchar(255) NOT NULL DEFAULT '',
  PRIMARY KEY (`id`),
  KEY `sede` (`sede_old`),
  KEY `rete` (`rete`),
  KEY `piattaforma` (`piattaforma`),
  KEY `servizio` (`servizio`),
  KEY `apparato` (`apparato`),
  KEY `misura` (`misura`),
  KEY `lastuse` (`lastpoll`)
) ENGINE=MyISAM AUTO_INCREMENT=25138 DEFAULT CHARSET=latin1
1 row in set (0.05 sec)
[15 Oct 2008 4:16]
Valeriy Kravchuk
Please try to repeat this with a newer version, 5.1.28, and let us know the results.
[16 Nov 2008 0:00]
Bugs System
No feedback was provided for this bug for over a month, so it is being suspended automatically. If you are able to provide the information that was originally requested, please do so and change the status of the bug back to "Open".
[14 May 2009 13:43]
Alexander Gorniy
Here's the processlist
Attachment: processlist.txt (text/plain), 5.02 KiB.
[14 May 2009 13:44]
Alexander Gorniy
We can repeat it on 5.1.34. We have MySQL-server-community-5.1.34-0.rhel5.i386.rpm installed.
[14 May 2009 13:46]
Alexander Gorniy
I have the same problem on different tables with MySQL 5.1.30 and 5.1.31.
[14 May 2009 13:49]
Valeriy Kravchuk
Alexander, please send your my.cnf/my.ini file content. I'd like to check whether you have a big query cache.
[14 May 2009 14:01]
Alexander Gorniy
my.cnf
Attachment: my.cnf.txt (text/plain), 1.71 KiB.
[14 May 2009 14:27]
Valeriy Kravchuk
You have the query cache enabled, but it is small. This query hangs in the "end" status:

UPDATE chat_online SET room=1,time=1242300146 WHERE name=50765 LIMIT 1

Please send the results of SHOW CREATE TABLE and SHOW TABLE STATUS for this chat_online table. Also send the results of:

explain select * from chat_online WHERE name=50765 LIMIT 1\G
[15 May 2009 9:28]
Alexander Gorniy
mysql> show create table chat_online\G
*************************** 1. row ***************************
       Table: chat_online
Create Table: CREATE TABLE `chat_online` (
  `name` int(11) NOT NULL DEFAULT '0',
  `login` tinytext NOT NULL,
  `time` int(11) NOT NULL DEFAULT '0',
  `room` int(8) NOT NULL DEFAULT '0',
  `clan` int(11) NOT NULL DEFAULT '0',
  `win` int(11) NOT NULL DEFAULT '0',
  `pass` text NOT NULL,
  `ban` int(10) unsigned NOT NULL DEFAULT '0',
  `clan_name` varchar(40) NOT NULL DEFAULT '',
  `clanz` text NOT NULL,
  `a_name` int(1) unsigned NOT NULL DEFAULT '0',
  `punish` tinyint(2) unsigned NOT NULL DEFAULT '0',
  `alliance` int(10) unsigned NOT NULL DEFAULT '0',
  `alliancen` varchar(120) NOT NULL,
  PRIMARY KEY (`name`),
  KEY `room` (`room`,`time`)
) ENGINE=MyISAM DEFAULT CHARSET=cp1251

mysql> show table status where Name='chat_online'\G
*************************** 1. row ***************************
           Name: chat_online
         Engine: MyISAM
        Version: 10
     Row_format: Dynamic
           Rows: 614
 Avg_row_length: 89
    Data_length: 55124
Max_data_length: 281474976710655
   Index_length: 19456
      Data_free: 0
 Auto_increment: NULL
    Create_time: 2009-05-15 13:22:34
    Update_time: 2009-05-15 13:22:34
     Check_time: 2009-05-15 13:22:34
      Collation: cp1251_general_ci
       Checksum: NULL
 Create_options:

mysql> explain select * from chat_online WHERE name=50765 LIMIT 1\G
*************************** 1. row ***************************
           id: 1
  select_type: SIMPLE
        table: chat_online
         type: const
possible_keys: PRIMARY
          key: PRIMARY
      key_len: 4
          ref: const
         rows: 1
        Extra:
1 row in set (0.00 sec)
[28 Sep 2009 15:53]
MySQL Verification Team
Is it still repeatable with released version 5.1.39? Thanks in advance.
[29 Oct 2009 0:00]
Bugs System
No feedback was provided for this bug for over a month, so it is being suspended automatically. If you are able to provide the information that was originally requested, please do so and change the status of the bug back to "Open".
[19 Nov 2009 5:37]
Chatchai Komrangded
I have the same problem; can anyone help me, please? In my opinion, this problem may occur when someone issues a LOCK TABLE command before the UPDATE statement, which would hold a lock on the whole table.
[17 Jan 2010 11:40]
Jochen Mueller
I have the same problem using 5.1.35 under Linux. Any new ideas on this? Regards, Jochen
[26 Jan 2010 7:25]
Sveta Smirnova
Chatchai, if one issues a LOCK TABLE command before an UPDATE, it is expected that other threads using the same table are blocked. This can only be considered a bug if you cannot kill the thread. Please let us know if that is the case in your environment. Jochen, please try to repeat the problem with the current version, 5.1.42, and if it is still repeatable, provide the output of SHOW PROCESSLIST.
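Sketched in SQL, the expected (non-bug) behavior described above looks like this; the table name is borrowed from this report, and the session split is illustrative:

```sql
-- Session 1: take an explicit write lock on the table
LOCK TABLES chat_online WRITE;

-- Session 2: this UPDATE now blocks (shown as "Locked" in SHOW PROCESSLIST)
-- until session 1 releases its lock
UPDATE chat_online SET room = 1 WHERE name = 50765;

-- Session 1: release the lock; session 2 then proceeds
UNLOCK TABLES;

-- A blocked thread like session 2 should normally be killable:
-- KILL <thread id from SHOW PROCESSLIST>;
```

Only if such a blocked thread survives KILL (as reported here) is it a server bug rather than expected locking.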
[2 Feb 2010 9:10]
Jochen Mueller
After updating to version 5.1.42, the problem did not occur again. Thanks, Jochen
[2 Feb 2010 9:14]
Sveta Smirnova
Thank you for the feedback. Setting to "Can't repeat" based on the last comment.
[19 Apr 2010 16:23]
Christopher Lörken
We have a very similar, possibly the same, problem with the most recent MySQL 5.1. INSERT queries on InnoDB tables tend to take a couple of seconds to finish, and the queries on one particular table lock the complete database. The symptoms are very similar, but in our case, when we stop/kill the Apache server, MySQL sometimes revives after ~5-10 minutes or so. Sometimes it takes too long for us to bear and we have to kill mysqld as well. The specific queries, which are not shown as locked, are in the "end" state and cannot be killed.

This behavior is "somewhat" reproducible for us. After the database locks up, subsequent update queries normally have no problem. When the system runs for some days with just reads from that table, the next write nearly always results in a DB lock with queries stuck for some minutes in the "end" state.

Our setup: mysql Ver 14.14 Distrib 5.1.44, for debian-linux-gnu (x86_64) using readline 5.2, on Ubuntu 8.04 LTS, current kernel 2.6.24-25-generic SMP, dual quad-core with 16 GB RAM.

I will attach our SHOW PROCESSLIST and our mysqladmin debug output. Note the top query in the processlist, which is shown as "Killed" since I sent a KILL command to it about 1 or 2 minutes before producing the output (table "pages"). This table has 151 rows and its own .ibd file of 320 KB, so it is _really_ small. (The locks were the same with all InnoDB tables in one file.)
[19 Apr 2010 16:26]
Christopher Lörken
show processlist in locked/end/killed state
Attachment: processes.txt (text/plain), 28.31 KiB.
[19 Apr 2010 16:26]
Christopher Lörken
mysqladmin debug output of same situation
Attachment: syslog.txt (text/plain), 11.16 KiB.
[20 Apr 2010 13:05]
Sveta Smirnova
Christopher, thank you for the feedback. I assume you are using Debian binaries? If so, please upgrade to the current version, 5.1.46, available from our web site, and try with it, so we can be sure this is not a packaging problem. Please also send us the output of SHOW ENGINE INNODB STATUS at the time the problem occurs, if it is repeatable with the new version.
[26 Apr 2010 17:13]
Christopher Lörken
Dear Sveta, thanks for your quick reply. I have just encountered the same problem with (still) version 5.1.44 and will attach the SHOW ENGINE INNODB STATUS output which I took this time.

Regarding your version concerns: I totally understand why you would like to make sure it is not the build, but I would actually rule out a packaging problem, since the posters above have been using other Linux distributions and systems than we have. Furthermore, I am sorry, but I have just updated to 5.1.44 (after the exact same problem existed in a recent 5.0.x) and therefore do not see us in a position to do another update unless someone can tell me that there has actually been a bugfix that could have had any effect on this. If upgrading a major revision does not change anything, I doubt that the last two minor releases will change a lot. I hope that the attached file helps anyway and gives us more clues about what's going on. Thanks for your help.

One final remark: we use the built-in InnoDB engine, not the InnoDB plugin.
[26 Apr 2010 17:14]
Christopher Lörken
Output of SHOW ENGINE INNODB STATUS
Attachment: innodb_engine_status.txt (text/plain), 62.58 KiB.
[6 Jun 2010 7:29]
Noam Ambar
I'm seeing the same problem with MyISAM. It happens after several weeks of normal operation. Last time it happened 3 days in a row. Restarting the server fixed it, but on the 3rd time the table got corrupted. After repairing the table, things seem to be OK.
[6 Jun 2010 7:34]
Noam Ambar
Process list output, MyISAM, version 5.1.31-community
Attachment: nambar_processlist.txt (text/plain), 70.09 KiB.
[6 Jun 2010 9:45]
Christopher Lörken
I can confirm that this happens with MyISAM as well. I changed the type of the table to MyISAM a couple of weeks ago, but it didn't change the problem at all. We have by now moved the database to a stand-alone database server with an SSD RAID. The load on the server is something like 300 queries per second (>95% reads), so performance cannot be the problem. The queries that fail to end are really very atomic, like updating a unix_timestamp() value in a particular row of a 300-row table... Unfortunately, this bug _is_ reproducible for us.
[6 Jun 2010 9:54]
Noam Ambar
Same case for me - the stuck query is always an UPDATE of a single record by primary key.
[6 Jun 2010 10:08]
MySQL Verification Team
It would be interesting to know whether this hangup still happens with my.cnf settings like this:

thread_cache_size=0
concurrent_insert=0
query_cache_size=0
query_cache_type=0

Has anybody been able to get stack traces for each thread in mysqld during the hangup?

gdb -p `pidof mysqld`
thread apply all bt full
thread apply all bt
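For reference, the same settings can be sketched as runtime changes, assuming these variables are dynamic in the affected 5.1 builds (they revert on restart unless also written to my.cnf):

```sql
-- Runtime equivalents of the suggested my.cnf settings (sketch);
-- setting query_cache_size to 0 also frees the cache memory
SET GLOBAL thread_cache_size = 0;
SET GLOBAL concurrent_insert = 0;
SET GLOBAL query_cache_size  = 0;
SET GLOBAL query_cache_type  = OFF;
```

This makes it possible to test whether the query cache is implicated without a server restart.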
[14 Jun 2010 13:57]
Christopher Lörken
output of gdb: thread apply all bt full
Attachment: gdb_all_bt_full.txt (text/plain), 461.55 KiB.
[14 Jun 2010 13:58]
Christopher Lörken
output of gdb: thread apply all bt
Attachment: gdb_all_bt.txt (text/plain), 310.71 KiB.
[14 Jun 2010 14:01]
Christopher Lörken
I have just uploaded the output of those gdb commands. Please note that I have removed the output for threads 300 to 400 since the max upload size of this bugtracker is 500KB per file. If you need that data please contact me and I will provide it. It would also be great if you could add version 5.1.45 to the list of affected versions since that is the one we are currently running. Best regards, Christopher
[23 Jun 2010 9:22]
MySQL Verification Team
Christopher, disabling the query cache may solve the problem for you too. However, it's hard to say, since your mysqld is stripped of debug symbols; this is probably a consequence of RHEL RPMs storing that data in separate debuginfo packages. Also, try to avoid any use of INSERT DELAYED. It has had many locking-related bugs in the past, and it might introduce further problems here.

in pthread_cond_wait@@GLIBC_2.3.2 () from /lib/libpthread.so.0
in Query_cache::invalidate_table () from /usr/sbin/mysqld
in Query_cache::invalidate_table () from /usr/sbin/mysqld
in Query_cache::invalidate () from /usr/sbin/mysqld
in mysql_insert () from /usr/sbin/mysqld
in mysql_execute_command () from /usr/sbin/mysqld
in mysql_parse () from /usr/sbin/mysqld
[23 Jun 2010 11:23]
MySQL Verification Team
Another potential cause: clients that connect over a slow network and use mysql_use_result() to iterate through a result set slowly can cause query cache hangups, as can a client that does a lot of other processing in between the mysql_use_result() fetch calls. Workaround: use mysql_store_result().
[24 Jun 2010 9:57]
MySQL Verification Team
We need to know whether 5.1.48 fixes this hangup even when the query cache is enabled.
[30 Jun 2010 19:00]
Christopher Lörken
Thanks for your input, Shane. We are not using mysql_use_result. Also, the DB is currently connected over a dedicated network, so it is not slow, and we experienced the problem even when the database and webserver were still on the same physical machine.

Regarding INSERT DELAYED: we do not have any problems using these queries in normal operation, only when writing to those seldom-changed tables. I tried an update on the table after changing the INSERT DELAYED statements into immediate inserts. The database locked again, and I had to kill the server after 6-7 minutes; just killing the webserver and waiting a bit did not suffice in this case.

Regarding 5.1.48: have there been changes which might affect this? I did not see anything related in the change history. Since this problem severely degrades our availability, I am, however, willing to install a new version. I assume I need the MySQL-server-5.1.48-1.glibc23.x86_64.rpm package. Does that contain the debug info you've been missing?
[1 Jul 2010 6:03]
Sveta Smirnova
Thank you for the feedback.

> I assume I need the MySQL-server-5.1.48-1.glibc23.x86_64.rpm package. Does that contain the debug info you've been missing?

Yes, it contains the mysqld-debug binary, which is the same as mysqld but with additional debug information.
[1 Aug 2010 23:00]
Bugs System
No feedback was provided for this bug for over a month, so it is being suspended automatically. If you are able to provide the information that was originally requested, please do so and change the status of the bug back to "Open".
[17 Aug 2010 9:14]
Christopher Lörken
Sorry for the delay; updating live servers is nothing we like to do on a daily basis. Sadly, I was able to reproduce the bug using this MySQL version: mysql Ver 14.14 Distrib 5.1.49, for debian-linux-gnu (x86_64) using readline 5.2 (package from dotdeb.org).
[18 Aug 2010 5:34]
Noam Ambar
I have several instances of 5.1.48 and didn't see it happen there for a while (~1-2 months).
[18 Aug 2010 14:48]
Sveta Smirnova
Christopher, thank you for the feedback. Could you please start the debug server and get the output Shane was asking for:

gdb -p `pidof mysqld`
thread apply all bt full
thread apply all bt
[24 Aug 2010 13:43]
Christopher Lörken
Output of gdb running mysqld-debug and having threads stuck in the "end" state
Attachment: gdb_mysqld-debug_output.zip (application/zip, text), 397.89 KiB.
[24 Aug 2010 13:52]
Christopher Lörken
I have attached a zip file containing the output of "thread apply all bt full" and "thread apply all bt" from the mysqld-debug server of the 5.1.49 tarball: mysqld-debug Ver 5.1.49-debug-log for unknown-linux-gnu on x86_64 (MySQL Community Server - Debug (GPL)).

The archive contains three text files:

1) gdb_locked_pre_kill.txt: the state of the database with some processes showing the "end" state but not ending. The thread that caused the database deadlock is thread #13 ("UPDATE pages SET SYS_LASTCHANGED='1282655885' WHERE uid=25").

2) gdb_locked_post_kill.txt: I issued a kill command to that thread using MySQL Administrator (the GUI for Windows). The thread did not get killed.

3) gdb_only_end_locked.txt: I issued a kill command to all threads. Most of them were killed, but a handful (< 20 threads) did not react. With the exception of one single non-reacting thread, all of the others were shown in status "end".

I hope these files contain the necessary debug info. If not, please tell me which options to change, but at least the stack traces look good to me.
[30 Aug 2010 11:47]
Reinoud van Santen
We are experiencing similar problems under very similar conditions. What happens is the following: a query (thread) goes into the "end" state and stays in that state for about ten minutes. It is impossible to kill it. All other queries sent to the server lock up, regardless of which table or database the query stuck in the "end" state touches.

Our configuration:
MySQL version: 5.0.77-log
Processor: Quad-Core AMD Opteron Processor 2378
Disks: SSD Corsair X32 32GB
OS: Linux CentOS 5.4 (Final)
RPM: mysql-server x86_64 5.0.77 4.el5_4.2
[30 Aug 2010 11:53]
Reinoud van Santen
As an addition to my previous post: the threads that go into the "end" state and stay there are always performing an UPDATE or a DELETE query. I've never seen this problem with other query types. Also, when it happens, it is always on a table that is being frequently updated at the very moment the thread gets stuck.
[3 Sep 2010 16:43]
Bobby Buten
I suggest anyone having a problem that sounds anything like this check out this bug report: http://bugs.mysql.com/bug.php?id=28382. We were having a similar issue, and as soon as I reduced our query cache from 4G to 512M, the problem almost completely cleared up.
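The workaround described above can be sketched at runtime as follows; the 256M target is an illustrative value, and note that changing query_cache_size flushes the existing cache contents:

```sql
-- Check the current size and usage first
SHOW VARIABLES LIKE 'query_cache_size';
SHOW STATUS LIKE 'Qcache%';

-- Shrink the cache well below 512M (example value; persist the
-- change in my.cnf as well, or it is lost on restart)
SET GLOBAL query_cache_size = 256 * 1024 * 1024;
```

Keeping the cache small bounds the time spent invalidating entries under the global query cache lock, which is what the stack traces in this report point at.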
[3 Sep 2010 18:54]
Christopher Lörken
Thank you very much for this link, Bobby! This looks exactly like our problem. I can indeed confirm that a RESET QUERY CACHE statement resulted in the same lock-up of the database. We have decreased our query cache from 3GB to 256MB and will see if it has any effect.

The bugs linked from the one Bobby posted focus on RESET QUERY CACHE performance (e.g. http://bugs.mysql.com/bug.php?id=21051), on the invalidation of query cache entries (http://bugs.mysql.com/bug.php?id=39253), or on runtime problems with a full cache (note that ours was not full) (http://bugs.mysql.com/bug.php?id=21074). All these bugs are closed, and patches were released at the latest for version 5.1.42. Since we are currently running 5.1.49, I can say that this problem is not yet solved. (Bug http://bugs.mysql.com/bug.php?id=21074 has some confusing statements about the 5.1 tree; I am not sure that patch has really been pushed to 5.1.)

Here are our query cache statistics right before I executed the killing RESET QUERY CACHE statement:

mysql> SHOW STATUS LIKE 'Qcache%';
+-------------------------+-----------+
| Variable_name           | Value     |
+-------------------------+-----------+
| Qcache_free_blocks      | 183238    |
| Qcache_free_memory      | 629555736 |
| Qcache_hits             | 43853622  |
| Qcache_inserts          | 12397841  |
| Qcache_lowmem_prunes    | 802       |
| Qcache_not_cached       | 191961    |
| Qcache_queries_in_cache | 567751    |
| Qcache_total_blocks     | 1331004   |
+-------------------------+-----------+

Regards, Christopher
[22 Sep 2010 16:31]
Christopher Lörken
I can now confirm that limiting the query cache to the significantly smaller size of 256MB works around this bug in our setup. This should help with reproducing the initial problem.
[9 Nov 2010 8:59]
John Doe
I have the same problem. MySQL just gets stuck on some queries (usually INSERT) executed against one table. I had a big query cache (about 3GB), and limiting it to 1GB has solved the problem for now.
[23 Apr 2011 14:31]
Valeriy Kravchuk
I wonder if this problem is still repeatable for anybody on the recent version, 5.1.56, with a reasonably small query cache (<=128M).
[23 May 2011 23:00]
Bugs System
No feedback was provided for this bug for over a month, so it is being suspended automatically. If you are able to provide the information that was originally requested, please do so and change the status of the bug back to "Open".
[24 May 2011 15:51]
Christopher Lörken
Hi Valeriy, I do not quite understand your question. Reducing the query cache to such a small size has already been reported, for older versions, to be a workaround for the problem (in our case 3GB showed the problem and 256MB did not, see above). Is your question whether that workaround has been broken by the new version? Otherwise I do not get the question, since as far as I know no one has reported the problem with a small query cache size.
[28 Oct 2011 15:18]
Valeriy Kravchuk
Indeed, my last question seems wrong. So let's put it this way: has anybody seen the server stuck forever with a big (>512M) query cache on a reasonably new version, 5.1.56+?
[29 Nov 2011 7:00]
Bugs System
No feedback was provided for this bug for over a month, so it is being suspended automatically. If you are able to provide the information that was originally requested, please do so and change the status of the bug back to "Open".
[14 Jan 2013 7:25]
Thomas Mack
I observed this problem yesterday on FreeBSD 8.3 with MySQL 5.1.67. query_cache_size was set to 80000000 bytes (about 76 MB), which is still much less than 128 MB.
[14 Jan 2013 18:04]
Matthew Lord
I am setting this to "Can't repeat" as the core problem appears to have been related to this bug report: http://bugs.mysql.com/bug.php?id=21074

If anyone feels that they are encountering this in a more recent version, then I need far more info than "I'm encountering this". The "END" connection state is used for writing binary log records, query cache operations, and other things. I need to see something that clearly demonstrates that a query is stuck in the END state "forever", and forever does not mean a number of seconds, as large query cache purges still take a long time (even around 100MB can take ~10 seconds) and the purge holds global locks while it runs. Over the long term, the query cache needs to be largely rewritten or removed/replaced.

In any event, I need a test case that I can at least attempt to repeat. In lieu of that, I would need to see clear output showing it hanging for a very long period of time, which at a minimum would be:

1) mysql> show full processlist;
2) mysql> show engine innodb status;
3) gdb -ex "set pagination 0" -ex "thread apply all bt" --batch -p $(pidof mysqld)

If you have a support agreement, please open a support ticket so that we can work with you to resolve your specific problem.
[17 Jan 2013 18:50]
Thomas Mack
Ah, ok. Today a simple update was stuck in the end state again, and I killed it about 9 minutes later. So it doesn't seem to be bug 21074. I wasn't aware that our MySQL version was very old. FreeBSD ports offer MySQL 5.5, so I will update this week or next week. SHOW FULL PROCESSLIST showed about 45 processes in the LOCKED state and the one simple UPDATE query in the END state. I don't think the problem is easily repeatable, as it doesn't occur on every update, but rather rarely. For now I have set the query cache size to 0. If the problem persists on version 5.5, I will try to get the desired information; I could also install some kind of debug build if it might help. We don't have a support agreement.
[20 Jun 2013 11:04]
Thomas Mack
It does not seem to happen anymore since we increased swap space dramatically. So maybe it was due to a lack of memory, but I cannot verify this in hindsight.