Bug #34555: QueryTimeout does not work for batch statements waiting on a locked table.

| Submitted: | 14 Feb 2008 16:40 | Modified: | 16 Sep 2009 16:04 |
|---|---|---|---|
| Reporter: | dominique lasalle | Email Updates: | |
| Status: | Closed | Impact on me: | |
| Category: | Connector / J | Severity: | S3 (Non-critical) |
| Version: | 5.1.8 | OS: | Any |
| Assigned to: | Mark Matthews | CPU Architecture: | Any |
| Tags: | setquerytimeout batch statement | | |
[14 Feb 2008 16:40]
dominique lasalle
[20 Jul 2009 12:20]
Tonci Grgin
Hi Dominique and thanks for your report. I missed it somehow, sorry. Your code snippet does not work, as there is no setQueryTimeout property on the connection object... If I set it on the pstmt object, all works fine, i.e. the entire batch is killed as soon as the lock is detected. Please attach a fully functional test case if this is still a problem for you.
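For reference, setQueryTimeout() is defined on java.sql.Statement rather than on Connection, so the timeout has to be set per statement. A minimal sketch, assuming a local MySQL server and a table `foo` (connection details are placeholders, not from the report):

```java
import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.PreparedStatement;
import java.sql.SQLException;

public class QueryTimeoutSketch {
    public static void main(String[] args) throws SQLException {
        // Placeholder URL and credentials for illustration only.
        try (Connection con = DriverManager.getConnection(
                "jdbc:mysql://localhost/test", "user", "password");
             PreparedStatement ps = con.prepareStatement("insert into foo values(?)")) {
            // setQueryTimeout() belongs to java.sql.Statement (and therefore
            // PreparedStatement); there is no equivalent setting on Connection.
            ps.setQueryTimeout(5); // seconds
            ps.setInt(1, 1);
            ps.executeUpdate();
        }
    }
}
```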
[21 Jul 2009 5:12]
dominique lasalle
Sorry about the faulty reproduction statement. I just tested it and I'm still getting the same problem. Here is the exact code I am using:

```java
import java.sql.*;

public class mysql_test {
    public static void main(String[] args) {
        Connection con = null;
        try {
            String driverName = "org.gjt.mm.mysql.Driver";
            Class.forName(driverName);
            String serverName = "localhost";
            String mydatabase = "foobar";
            String url = "jdbc:mysql://" + serverName + "/" + mydatabase;
            String username = "root";
            String password = "mysql";
            con = DriverManager.getConnection(url, username, password);
            PreparedStatement ps = con.prepareStatement("insert into foo values(?)");
            ps.setQueryTimeout(5);
            for (int i = 0; i < 100; i++) {
                ps.setInt(1, i);
                ps.addBatch();
            }
            ps.executeBatch();
        } catch (ClassNotFoundException e) {
            e.printStackTrace();
        } catch (SQLException e) {
            e.printStackTrace();
        }
    }
}
```

After locking my table foo with "mysql> lock table foo read;", executing the above Java class, and waiting longer than 5 seconds, "mysql> show processlist;" gives me this:

```
+----+------+-----------+--------+---------+------+--------+---------------------------+
| Id | User | Host      | db     | Command | Time | State  | Info                      |
+----+------+-----------+--------+---------+------+--------+---------------------------+
|  1 | root | localhost | foobar | Query   |    0 | NULL   | show processlist          |
|  5 | root | localhost | foobar | Query   |  273 | Locked | insert into foo values(1) |
+----+------+-----------+--------+---------+------+--------+---------------------------+
2 rows in set (0.00 sec)
```

As you can see, the first query in the batch ("insert into foo values(0)") was canceled, and the second one now waits indefinitely.

Looking at the connector's source, I think I can see the problem in PreparedStatement.java:

```java
protected int[] executeBatchSerially(int batchTimeout) throws SQLException {
    ...
    for (batchCommandIndex = 0; batchCommandIndex < nbrCommands; batchCommandIndex++) {
```

The for loop doesn't check whether the statement has been cancelled. The "KILL QUERY" statement executed by the cancel task only kills the currently executing statement (the "insert into foo values(0)"). I believe changing the for loop condition to this should solve the problem:

```java
for (batchCommandIndex = 0; batchCommandIndex < nbrCommands && !wasCancelled; batchCommandIndex++) {
```

Hope this helps. Thank you, Dominique
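For context, a hedged sketch of what the caller side should observe once the timeout applies to the whole batch: executeBatch() fails with a BatchUpdateException instead of hanging, and the partial update counts show how far the batch got. Table name, connection details, and the exact exception surfaced by the driver are assumptions here, not taken from the report:

```java
import java.sql.BatchUpdateException;
import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.PreparedStatement;
import java.sql.SQLException;
import java.util.Arrays;

public class BatchTimeoutCaller {
    public static void main(String[] args) throws SQLException {
        // Placeholder connection details for illustration only.
        try (Connection con = DriverManager.getConnection(
                "jdbc:mysql://localhost/foobar", "root", "password");
             PreparedStatement ps = con.prepareStatement("insert into foo values(?)")) {
            ps.setQueryTimeout(5); // seconds
            for (int i = 0; i < 100; i++) {
                ps.setInt(1, i);
                ps.addBatch();
            }
            try {
                ps.executeBatch();
            } catch (BatchUpdateException e) {
                // Expected behavior with a working query timeout: the batch
                // stops and reports how many commands completed before the kill.
                System.out.println("Batch aborted: " + e.getMessage());
                System.out.println("Update counts so far: " + Arrays.toString(e.getUpdateCounts()));
            }
        }
    }
}
```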
[22 Jul 2009 8:32]
Tonci Grgin
Thanks Dominique, looking into this.
[22 Jul 2009 9:26]
Tonci Grgin
Verified as described with test case attached.

Now, if one uses a Statement like this one:

```java
assertEquals(1, this.stmt.executeUpdate(
    "INSERT INTO bug40439_values (id,data, ordr) values (0,'a',0)"));
```

a proper exception is thrown ("Table xxx is locked and can not be updated...").

If a prepared statement is used (no batching):

```java
this.pstmt.setInt(1, 22);
this.pstmt.setString(2, "toto22");
this.pstmt.setInt(3, 22);
this.pstmt.executeUpdate();
```

an exception is thrown, but I think it's not the correct one (there is no hang, of course):

```
java.sql.SQLException: Query execution was interrupted
```

executeBatch hangs the executing thread (Id 13):

```
mysql> show processlist;
+----+------+--------------------------------------+------+---------+------+--------+-----------------------------------------------------------------+
| Id | User | Host                                 | db   | Command | Time | State  | Info                                                            |
+----+------+--------------------------------------+------+---------+------+--------+-----------------------------------------------------------------+
| 11 | root | QCW2K8.dummy.porta.siemens.net:50729 | test | Query   |    0 | NULL   | show processlist                                                |
| 13 | root | QCW2K8.dummy.porta.siemens.net:50761 | test | Execute |   42 | Locked | INSERT INTO bug40439_values (id,data, ordr) values (2,'toto',2) |
+----+------+--------------------------------------+------+---------+------+--------+-----------------------------------------------------------------+
```

After killing thread Id 13, with this.conn.setAutoCommit(true):

```
E
Time: 448,794
There was 1 error:
1) testBug34555(testsuite.simple.TestBug34555)com.mysql.jdbc.exceptions.MySQLNonTransientConnectionException: Communications link failure during commit(). Transaction resolution unknown.
```

If this.conn.setAutoCommit(false), the error changes to:

```
There was 1 error:
1) testBug34555(testsuite.simple.TestBug34555)java.sql.SQLException: Can't call commit when autocommit=true
```

So I see several problems here (some of which could actually be in the server):

o executeBatch against a locked table hangs indefinitely.
o Killing the hung thread that was executing batch updates leads to misleading error messages depending on the conn.setAutoCommit value.
o pStmt.executeUpdate against a locked table throws the wrong exception message.

Environment:

```
Connected to 5.1.31-log - remote, x64, OpenSolaris host
java.vm.version         : 1.5.0_17-b04
java.vm.vendor          : Sun Microsystems Inc.
java.runtime.version    : 1.5.0_17-b04
os.name                 : Windows Server 2008
os.version              : null
sun.management.compiler : HotSpot Client Compiler
```
[23 Jul 2009 13:49]
Mark Matthews
Setting "continueBatchOnError=false" as a URL parameter today would give you the behavior you're looking for.
[23 Jul 2009 13:55]
Mark Matthews
Note to anyone who fixes this (more than likely me): a better place to handle this is in handleExceptionForBatch(), since that will fix it for all variants of statements and batch execution strategies. We already look for transactional deadlock timeouts there (because a deadlock rolls back the batch executed so far), so we can look for query timeouts there too.
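A rough illustration of that idea, not the actual Connector/J source: the class, method signature, and field names below are invented for the sketch. The point is that a shared batch error handler can recognize the "query interrupted" error produced when the timeout task kills a statement and abort the remaining commands, so every batch execution path gets the same behavior.

```java
import java.sql.BatchUpdateException;
import java.sql.SQLException;

// Illustrative only: a central handler that each batch loop calls when a
// command fails. If the failure came from the query-timeout task killing the
// statement, the whole batch is aborted instead of continuing.
class BatchErrorPolicy {
    // MySQL error 1317 (ER_QUERY_INTERRUPTED): "Query execution was interrupted".
    private static final int ER_QUERY_INTERRUPTED = 1317;

    static void handleExceptionForBatch(SQLException ex, int[] updateCountsSoFar,
                                        boolean continueBatchOnError) throws BatchUpdateException {
        boolean timedOut = ex.getErrorCode() == ER_QUERY_INTERRUPTED;
        if (timedOut || !continueBatchOnError) {
            // Stop here: report the counts collected so far and surface the cause.
            BatchUpdateException bue = new BatchUpdateException(
                    ex.getMessage(), ex.getSQLState(), ex.getErrorCode(), updateCountsSoFar);
            bue.initCause(ex);
            throw bue;
        }
        // Otherwise record the failure and let the caller continue with the next command.
    }
}
```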
[23 Jul 2009 13:55]
Tonci Grgin
Thanks Mark, changing workaround triage.
[13 Aug 2009 23:48]
Mark Matthews
Fixed for 5.1.9.
[16 Sep 2009 16:04]
Tony Bedford
An entry was added to the 5.1.9 changelog: QueryTimeout did not work for batch statements waiting on a locked table. When a batch statement was issued to the server and was forced to wait because of a locked table, Connector/J only terminated the first statement in the batch when the timeout was exceeded, leaving the rest hanging.