Bug #12788 | Server Crash on Update with join | |
---|---|---|---
Submitted: | 24 Aug 2005 15:34 | Modified: | 13 Sep 2005 21:56 |
Reporter: | Chris DiMartino | Email Updates: | |
Status: | Can't repeat | Impact on me: | |
Category: | MySQL Server | Severity: | S2 (Serious) |
Version: | 5.0.11 Beta | OS: | Linux (Linux 2.6) |
Assigned to: | philip antoniades | CPU Architecture: | Any |
[24 Aug 2005 15:34]
Chris DiMartino
[24 Aug 2005 16:54]
MySQL Verification Team
Not enough information was provided for us to be able to handle this bug. Please re-read the instructions at http://bugs.mysql.com/how-to-report.php. If you can provide more information, feel free to add it to this bug and change the status back to 'Open'. Thank you for your interest in MySQL.

Additional info: If you are able to provide a dump file so that we can reproduce the crash on our side, please upload it to ftp://ftp.mysql.com/pub/mysql/upload/, naming the zip file so that it identifies this bug report. Thanks.
[24 Aug 2005 19:50]
Chris DiMartino
There is some difference between the "CREATE TABLE" statement that a dumpfile produces and what "CREATE TABLE LIKE <table>" produces, and this difference seems to be the key to this bug. If there is a way to dump what "CREATE TABLE LIKE <table>" is doing into a file, I will upload it. As it stands, the bug disappears when the tables are reloaded from a dump and only appears after the tables are created from another table.

I've uploaded a file (mysql_crash.zip) to the server mentioned above, containing a dump of the databases (which will not show the bug when reloaded) and a tee of the steps I take to recreate the bug. If there is some way of dumping this information so that it reproduces what I am doing in the tee, please let me know, and I'll be more than happy to do so.
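One way to capture what "CREATE TABLE LIKE <table>" actually produces, so it can be compared against the dumpfile's definition, is to run SHOW CREATE TABLE on both tables from the mysql client with tee enabled. A minimal sketch, assuming the `scan` table from the queries later in this report and a hypothetical copy named `scan_copy`:

```sql
-- Capture this output with the mysql client's tee command
-- (e.g. \T compare.log) so it can be attached to the report.

-- Definition of the table as restored from the dump:
SHOW CREATE TABLE scan\G

-- Hypothetical copy made the same way the reporter creates tables:
CREATE TABLE scan_copy LIKE scan;
SHOW CREATE TABLE scan_copy\G
```

Any difference between the two definitions (indexes, character set, row format, and so on) would narrow down what "CREATE TABLE LIKE" is doing differently from the dump.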
[24 Aug 2005 19:51]
Chris DiMartino
Tee and Dump of tables causing errors
Attachment: mysql_crash.zip (application/zip, text), 5.13 KiB.
[12 Sep 2005 20:39]
Chris DiMartino
Still exists in 5.0.12:

050912 16:37:18 [Note] Slave I/O thread: connected to master 'repl@kobe.asiscan.com:3306', replication started in log 'kobe-bin.000119' at position 128403887
mysqld got signal 11;
This could be because you hit a bug. It is also possible that this binary
or one of the libraries it was linked against is corrupt, improperly built,
or misconfigured. This error can also be caused by malfunctioning hardware.
We will try our best to scrape up some info that will hopefully help diagnose
the problem, but since we have already crashed, something is definitely wrong
and this may fail.

key_buffer_size=134217728
read_buffer_size=16773120
max_used_connections=1
max_connections=100
threads_connected=1
It is possible that mysqld could use up to
key_buffer_size + (read_buffer_size + sort_buffer_size)*max_connections = 3407471 K
bytes of memory
Hope that's ok; if not, decrease some variables in the equation.

thd=0x95cbad8
Attempting backtrace. You can use the following information to find out
where mysqld died. If you see no messages after this, something went
terribly wrong...
Cannot determine thread, fp=0xbfe1ede8, backtrace may not be correct.
Stack range sanity check OK, backtrace follows:
0x809cd52 0x82d2be8 0x8296b22 0x82951da 0x8294d1d 0x812df3b 0x80fe473 0x80ec066
0x80eae59 0x80eacac 0x80eae59 0x80eacac 0x80ea990 0x80dc4d0 0x80dd051 0x80fd7fd
0x80af84c 0x80b436d 0x8108256 0x8107eef 0x816134a 0x816213b 0x82d039c 0x82f9daa
New value of fp=(nil) failed sanity check, terminating stack trace!
Please read http://dev.mysql.com/doc/mysql/en/Using_stack_trace.html and follow
instructions on how to resolve the stack trace. Resolved stack trace is much
more helpful in diagnosing the problem, so please do resolve it
Trying to get some variables.
Some pointers may be invalid and cause the dump to abort...
thd->query at 0x957ad86 =
UPDATE scan s, package p
SET destination = 'GIG', flight = 8865, old_destination = 'ZZZ', old_flight = '8865'
WHERE s.barcode = p.barcode AND s.carrier = 'RG' AND location = 'JFK'
  AND itsDate = '2005-08-23'
  AND itsDate between assignment_date and assignment_date + interval 6 day
  AND type in ('P', 'C') AND flight = 8865 AND bincode = 'ake04615'
  AND NOT (load1_city <=> 'JFK' or load2_city <=> 'JFK' or load3_city <=> 'JFK'
           or load4_city <=> 'JFK' or load5_city <=> 'JFK' or load6_city <=> 'JFK')
thd->thread_id=3
The manual page at http://www.mysql.com/doc/en/Crashing.html contains
information that should help you find out what is causing the crash.

Number of processes running now: 0
050912 16:37:22 mysqld restarted
050912 16:37:25 InnoDB: Started; log sequence number 21 1648510915
050912 16:37:25 [Note] Recovering after a crash using beefcake-bin
050912 16:37:25 [Note] Starting crash recovery...
050912 16:37:25 [Note] Crash recovery finished.
050912 16:37:25 [Note] /usr/local/mysql/bin/mysqld: ready for connections.
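A multi-table UPDATE cannot be passed to EXPLAIN in 5.0, but the join it performs can be inspected by rewriting it as an equivalent SELECT. A hedged sketch, using only the table and column names from the query in the crash log above:

```sql
-- Inspect the join plan of the crashing statement by EXPLAINing an
-- equivalent SELECT (5.0 cannot EXPLAIN a multi-table UPDATE directly).
EXPLAIN
SELECT s.barcode
FROM scan s, package p
WHERE s.barcode = p.barcode
  AND s.carrier = 'RG'
  AND location = 'JFK'
  AND itsDate = '2005-08-23'
  AND itsDate BETWEEN assignment_date AND assignment_date + INTERVAL 6 DAY
  AND type IN ('P', 'C')
  AND flight = 8865
  AND bincode = 'ake04615'
  AND NOT (load1_city <=> 'JFK' OR load2_city <=> 'JFK' OR load3_city <=> 'JFK'
           OR load4_city <=> 'JFK' OR load5_city <=> 'JFK' OR load6_city <=> 'JFK');
```

Comparing this plan between the dump-restored tables and the tables created with "CREATE TABLE LIKE" could help pin down which access path triggers the crash.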
[13 Sep 2005 21:56]
philip antoniades
Thank you for your bug report. This issue has been committed to our source repository of that product and will be incorporated into the next release. If necessary, you can access the source repository and build the latest available version, including the bugfix, yourself. More information about accessing the source trees is available at http://www.mysql.com/doc/en/Installing_source_tree.html

Additional info: Hi, I'm unable to duplicate with the UPDATE statement and dump provided (using 5.0.13):

mysql> select count(*) from package;
+----------+
| count(*) |
+----------+
|       20 |
+----------+
1 row in set (0.00 sec)

mysql> UPDATE scan s, package p SET destination = 'GIG', flight = 8865 WHERE s.barcode = p.barcode AND location = 'JFK';
Query OK, 20 rows affected (0.08 sec)
Rows matched: 20  Changed: 20  Warnings: 0
[15 Sep 2005 17:26]
Chris DiMartino
Just to let you know, I downloaded and built 5.0.13 from source, and the bug still existed there as well. I did manage to find a way to repair the problem on the affected tables, though. The two tables involved in the join were both already InnoDB, and I had already run checks against them to make sure they were not corrupted, which did not help; what fixed it was running "ALTER TABLE <table> engine=innodb". Since I did this, replication has moved on past this point and has not crashed again.
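For reference, the workaround is a no-op engine change, which makes the server rebuild the table and all of its indexes by copying the rows into a fresh table. A minimal sketch, assuming the two tables from the failing UPDATE (`scan` and `package`) are the affected ones:

```sql
-- ALTER TABLE ... ENGINE=InnoDB on a table that is already InnoDB
-- rebuilds the table and its indexes (in 5.0 this copies all rows
-- into a new table, so it needs free disk space and blocks writes).
ALTER TABLE scan ENGINE=InnoDB;
ALTER TABLE package ENGINE=InnoDB;

-- Optional sanity check after the rebuild:
CHECK TABLE scan, package;
```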