Bug #71511 | Server Hang Dropping Table | |
---|---|---|---
Submitted: | 29 Jan 2014 15:32 | Modified: | 4 Mar 2014 18:31
Reporter: | David Fetterman | Email Updates: |
Status: | No Feedback | Impact on me: |
Category: | MySQL Server: DDL | Severity: | S1 (Critical)
Version: | 5.6.15 | OS: | Linux
Assigned to: | | CPU Architecture: | Any
[29 Jan 2014 15:32]
David Fetterman
[29 Jan 2014 15:35]
David Fetterman
error log excerpt
Attachment: mysql.log (application/octet-stream, text), 12.49 KiB.
[30 Jan 2014 21:27]
Sveta Smirnova
Thank you for the report. I cannot reproduce the described behavior with a generic test. Please describe what your application is doing in more detail. Do you use foreign keys? Do you run DML/SELECT queries right before dropping the table?
[30 Jan 2014 22:10]
David Fetterman
> I can not repeat described behavior with generic test. Please describe what your application is doing in more details. Do you use foreign keys? Do you run DML/select queries right before dropping the table?

The application is moving log files from our sharded application db servers to a central data warehouse server using the following pseudo code:

    start loop
        drop work_table if exists;
        create work_table;
        load data infile 'file' into work_table;
        insert into dest_table select * from work_table;
    loop if more batch data
    drop tmp_table if exists;

There are no foreign keys on these tables. The workflow is such that there is DML (inserts, load data infile, and selects) just prior to the drop DDL.

One thing I have since noticed is a steady increase in the 'open_tables' status, which rises over time until it hits the 'table_open_cache' value (2000 on this server), and it never decreases on its own. A 'flush tables' will clear the open_tables. This open_tables at max value *always* precedes a crash. I have set up a scheduled 'flush tables' to see if this stops the problem from occurring. It does appear, however, that the server is not clearing these open tables on its own. The srv_master_evict_from_table_cache calls at srv0srv.cc:2129 & 2198 are jumping out at me as maybe where part of the problem is coming from...
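The batch-load loop above can be sketched as a short runnable script. This is a minimal illustration only, using Python's sqlite3 module as a stand-in for the MySQL data warehouse server; table names and batch data are hypothetical, and MySQL's LOAD DATA INFILE step is approximated with executemany over already-parsed rows:

```python
import sqlite3

# Minimal sketch of the reporter's workflow: repeatedly drop/create a
# work table, bulk-load one batch into it, then copy into the destination.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE dest_table (id INTEGER, msg TEXT)")

batches = [
    [(1, "first"), (2, "second")],  # stand-in for one log file's rows
    [(3, "third")],                 # stand-in for the next log file's rows
]

for rows in batches:
    conn.execute("DROP TABLE IF EXISTS work_table")
    conn.execute("CREATE TABLE work_table (id INTEGER, msg TEXT)")
    # In MySQL this step would be: LOAD DATA INFILE 'file' INTO TABLE work_table
    conn.executemany("INSERT INTO work_table VALUES (?, ?)", rows)
    conn.execute("INSERT INTO dest_table SELECT * FROM work_table")

conn.execute("DROP TABLE IF EXISTS work_table")
total = conn.execute("SELECT COUNT(*) FROM dest_table").fetchone()[0]
print(total)  # 3
```

Note that each iteration issues drop DDL immediately after DML on the same table, which is the pattern the reporter suspects is interacting badly with the table cache.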
[4 Feb 2014 18:31]
Sveta Smirnova
Thank you for the feedback. The pseudo-code suggests you are writing to and dropping a single table. However, the following:

> One thing I have since noticed is a steady increase in the 'open_tables' status, which increases over time until it hits the 'table_open_cache' value (2000 on this server), and it never decreases on its own. A 'flush tables' will clear the open_tables.

shows that my guess was wrong. Do you run the pseudo-code on thousands of log tables? Please also specify the exact server version you use (the file name you downloaded) and send us the output of SHOW PROCESSLIST and SHOW ENGINE INNODB STATUS, taken while the issue is occurring.
[5 Mar 2014 1:00]
Bugs System
No feedback was provided for this bug for over a month, so it is being suspended automatically. If you are able to provide the information that was originally requested, please do so and change the status of the bug back to "Open".