Bug #39156: active streaming result set error
Submitted: 1 Sep 2008 14:01    Modified: 3 Sep 2008 10:32
Reporter: Umberto Cappellini
Status: Not a Bug
Category: Connector / J        Severity: S2 (Serious)
Version: 6.0.5-alpha-log       OS: Linux (Linux fataxp 2.6.14.3-vs2.0.1cisco)
Assigned to:                   CPU Architecture: Any
Tags: streaming result set

[1 Sep 2008 14:01] Umberto Cappellini
Description:
I'm running a Java Enterprise application (JBoss 4.2) on a Linux-based Cisco box with MySQL as the backend DB.

DB connections are created by JBoss's Connection Pool, and accessed through the JDBC Driver.

The application scans all the rows of a (potentially big) table, and for each result opens two extra connections to perform row-related queries.

The results of the first "scan-all" query are retrieved using a cursor, in streaming mode. Therefore, when the subsequent sub-queries are performed, the first connection is still open. All the queries are performed on separate connections.

At the start, when the "scan-all" table is almost empty, I see no problem. Later, when it starts growing (say, 5,000 rows), I get the following exception:

java.sql.SQLException: Streaming result set com.mysql.jdbc.RowDataDynamic@1950740 is still active. No statements may be issued when any streaming result sets are open and in use on a given connection. Ensure that you have called .close() on any active streaming result sets before attempting more queries.
	at com.mysql.jdbc.SQLError.createSQLException(SQLError.java:914)
	at com.mysql.jdbc.MysqlIO.checkForOutstandingStreamingData(MysqlIO.java:2145)
	at com.mysql.jdbc.MysqlIO.sendCommand(MysqlIO.java:1534)
	at com.mysql.jdbc.MysqlIO.sqlQueryDirect(MysqlIO.java:1715)
	at com.mysql.jdbc.Connection.execSQL(Connection.java:3243)
	at com.mysql.jdbc.Connection.execSQL(Connection.java:3172)
	at com.mysql.jdbc.Statement.executeQuery(Statement.java:1197)
	at org.jboss.resource.adapter.jdbc.WrappedStatement.executeQuery(WrappedStatement.java:171)

I obtain streamed result sets by setting Integer.MIN_VALUE on the Statement, as follows: stmt.getStmt().setFetchSize(Integer.MIN_VALUE);
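
For reference, a minimal sketch of the documented Connector/J streaming setup (the query and table name are placeholders, not taken from the application): the statement must be forward-only and read-only, with the fetch size set to Integer.MIN_VALUE.

import java.sql.Connection;
import java.sql.ResultSet;
import java.sql.SQLException;
import java.sql.Statement;

class StreamingScan {
    // Opens a streaming ("row-by-row") result set as described in the
    // Connector/J implementation notes. Placeholder table name.
    static ResultSet openStreamingScan(Connection conn) throws SQLException {
        Statement stmt = conn.createStatement(ResultSet.TYPE_FORWARD_ONLY,
                                              ResultSet.CONCUR_READ_ONLY);
        stmt.setFetchSize(Integer.MIN_VALUE); // stream rows instead of buffering them
        return stmt.executeQuery("SELECT * FROM scan_all_table");
    }
}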

If I run the first "scan-all" query in non-streaming mode (all the results are kept in memory), I have no such problem (but then potentially other, memory-related, problems).

How to repeat:
Hard to reproduce, as with small amounts of data the error doesn't show up.

One way could be to open a streaming result set on one connection and perform other queries (on other connections) while the first one remains open.
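
A hedged repro sketch along those lines (JDBC URL, credentials, and table names are placeholders; it assumes the second query ends up on the same physical connection, which is what effectively happens when the error occurs):

import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.ResultSet;
import java.sql.Statement;

public class StreamingRepro {
    public static void main(String[] args) throws Exception {
        Connection conn = DriverManager.getConnection(
                "jdbc:mysql://localhost:3306/test", "user", "password");

        // Open a streaming result set on this connection.
        Statement scan = conn.createStatement(ResultSet.TYPE_FORWARD_ONLY,
                                              ResultSet.CONCUR_READ_ONLY);
        scan.setFetchSize(Integer.MIN_VALUE);
        ResultSet rs = scan.executeQuery("SELECT id FROM scan_all_table");

        while (rs.next()) {
            // Any other statement issued on the SAME connection while rs is
            // still streaming raises "Streaming result set ... is still active".
            Statement other = conn.createStatement();
            ResultSet sub = other.executeQuery("SELECT COUNT(*) FROM other_table");
            sub.close();
            other.close();
        }

        rs.close();
        scan.close();
        conn.close();
    }
}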
[1 Sep 2008 15:08] Valeriy Kravchuk
Thank you for the problem report. What exact version of Connector/J do you use?
[3 Sep 2008 9:06] Umberto Cappellini
Hello,

I'm using mysql-connector-java-5.0.5-bin.jar
Java version "1.5.0_15" (Sun)
[3 Sep 2008 9:51] Sveta Smirnova
Thank you for the feedback.

According to http://dev.mysql.com/doc/refman/5.1/en/connector-j-reference-implementation-notes.html:

 If the statement is within scope of a transaction, then locks are released when the transaction completes (which implies that the statement needs to complete first). As with most other databases, statements are not complete until all the results pending on the statement are read or the active result set for the statement is closed.

Therefore, if using streaming results, you should process them as quickly as possible if you want to maintain concurrent access to the tables referenced by the statement producing the result set. 

With "almost empty" initial table Reader finishes quickly and you just miss error which you see when the table contain more data. So I close the report as "Not a Bug".
[3 Sep 2008 10:32] Umberto Cappellini
Thanks for the answer.

Indeed, the table being read in streaming mode may be concurrently accessed in write mode, within transactions.

But I find this a fairly common use case for a DB: the database should just hold the lock until the previous (streaming) query is completed and then release it. I don't see the reason to throw an exception here and to require the client to "process them as quickly as possible".

We deploy the same application (= same code) on Oracle, and in streaming mode we're able to process the same table with hundreds of thousands of rows, which may keep the table locked for hours, without any exceptions being thrown.

Lastly, choosing streamed or non-streamed mode should affect only the performance and memory usage of the application, not the expected behavior. If in non-streamed mode I can grow the table as much as I want, I would expect the same when using streamed queries.
[3 Sep 2008 11:11] Sveta Smirnova
Thank you for the feedback.

I can verify this report as a feature request if you want, although I cannot guarantee it will be implemented: we would need to check whether what you requested is compatible with the standard and with the MySQL server.
[17 Jul 2018 22:02] Girish Iyer
Any update on this fix / enhancement?
[31 Aug 2018 6:35] dong chunguo
Has anything been done about this?