Bug #410 Huge query result.
Submitted: 9 May 2003 5:01 Modified: 9 May 2003 7:27
Reporter: Henryk Szal Email Updates:
Status: Won't fix Impact on me:
Category: Connector / J Severity: S4 (Feature request)
Version: 3.0.7 OS: Any (All)
Assigned to: CPU Architecture: Any

[9 May 2003 5:01] Henryk Szal
Problem with huge query results on the client side.
I read in the README that I should call stmt.setFetchSize(Integer.MIN_VALUE).
This solution works fine, but I think it's incompatible with other JDBC drivers
and the JDBC reference.

In the JDK 1.4 API documentation I read that the parameter value for setFetchSize must be between 0 and getMaxRows() (0 <= rows <= getMaxRows()).

Currently I have to have many IF statements specific to the MySQL JDBC driver just to fetch 1,000,000 rows from a query.
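The driver-specific branching described above could look something like the following minimal sketch. The class name `FetchSizeExample` and the helper `fetchSizeFor` are illustrative, not part of any driver API; the MySQL sentinel value Integer.MIN_VALUE is the one the README documents.

```java
public class FetchSizeExample {

    // Hypothetical helper illustrating the driver-specific branching the
    // report complains about: MySQL Connector/J needs a sentinel value,
    // while other drivers take an ordinary positive fetch-size hint.
    static int fetchSizeFor(String driverName, int batchSize) {
        if (driverName.toLowerCase().contains("mysql")) {
            // Connector/J streaming mode is enabled by this sentinel.
            return Integer.MIN_VALUE;
        }
        return batchSize; // other drivers: normal row-count hint
    }

    public static void main(String[] args) {
        System.out.println(fetchSizeFor("MySQL Connector/J", 1000));
        System.out.println(fetchSizeFor("Some Other JDBC Driver", 1000));
    }
}
```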

How to repeat:

Suggested fix:
I think that setFetchSize(0) should select "store result on client side" mode, and
setFetchSize(n), where n > 0, should select "send rows to the client in packs of n" mode.
[9 May 2003 7:27] Mark Matthews
This can't be fixed in any way until MySQL has server-side cursors.

In fact, many other JDBC drivers have this limitation (not being able to process huge result sets without reading all rows into memory) unless you tell them to use cursors for _every_ query, which is actually very slow.

The way it is now (using a special fetch size _and_ a forward-only result set type) is the closest to the JDBC spec that can be achieved without adding MySQL-specific methods to the implementations of the interfaces, which _don't_ work when you're using an application server.
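The configuration described above (forward-only, read-only result set plus the special fetch size) can be sketched as follows. This is a sketch under stated assumptions: `conn` must be an open Connector/J Connection supplied by the caller, and `big_table` is an illustrative table name, not from the original report.

```java
import java.sql.Connection;
import java.sql.ResultSet;
import java.sql.SQLException;
import java.sql.Statement;

public class StreamingQuery {

    // Streams rows one at a time instead of buffering the whole result
    // set in memory. Requires a live Connector/J Connection; "big_table"
    // is a hypothetical table name used for illustration.
    static long countRows(Connection conn) throws SQLException {
        Statement stmt = conn.createStatement(ResultSet.TYPE_FORWARD_ONLY,
                                              ResultSet.CONCUR_READ_ONLY);
        stmt.setFetchSize(Integer.MIN_VALUE); // Connector/J streaming sentinel
        ResultSet rs = stmt.executeQuery("SELECT * FROM big_table");
        long rows = 0;
        while (rs.next()) {
            rows++; // process each row here without holding all rows at once
        }
        rs.close();
        stmt.close();
        return rows;
    }
}
```

Note that in this mode the result set must be read to completion (or closed) before the connection can issue another query, which is part of the trade-off the comment describes.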