Bug #91858 | Closing not fully retrieved resultset takes long time | |
---|---|---|---
Submitted: | 1 Aug 2018 18:37 | Modified: | 3 Aug 2018 15:59 |
Reporter: | Dmitriy Shirokov | Email Updates: | |
Status: | Verified | Impact on me: | |
Category: | Connector / J | Severity: | S4 (Feature request) |
Version: | master | OS: | Any |
Assigned to: | Assigned Account | CPU Architecture: | Any |
[1 Aug 2018 18:37]
Dmitriy Shirokov
[2 Aug 2018 12:43]
Chiranjeevi Battula
Hello Dmitriy Shirokov,

Thank you for the bug report. Could you please provide a repeatable test case (exact steps, full stack trace, sample code, etc.; mark the report as private if you prefer) so we can confirm this issue at our end?

Thanks, Chiranjeevi.
[2 Aug 2018 15:11]
Dmitriy Shirokov
Hello, Chiranjeevi,

This is the top of the call stack:

close:151, RowDataDynamic (com.mysql.jdbc)
realClose:6676, ResultSetImpl (com.mysql.jdbc)
close:851, ResultSetImpl (com.mysql.jdbc)
close:-1, HikariProxyResultSet (com.zaxxer.hikari.pool)
closeResultSet:265, DefaultResultSetHandler (org.apache.ibatis.executor.resultset)
...

If you look at https://github.com/mysql/mysql-connector-j/blob/release/5.1/src/com/mysql/jdbc/RowDataDyna..., at the while loop at line 155 with the comment // drain the rest of the records, it is obvious that it will take time to drain millions of records if that is what the SQL returns. If I have time, I will build a pure mysql.jdbc sample.

I have already implemented a workaround: page through the records in 100K batches. Not ideal, since it requires more code, but it works. Draining ~100K records is reasonably fast.

Thank you,
Dmitriy
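The paging workaround mentioned above can be sketched as follows. This is a minimal illustration, not code from the report: the table and column names (`events`, `id`, `payload`) and the `pageQuery` helper are hypothetical. The idea is that each statement fetches at most one page, so closing a result set early never leaves more than one page for the driver to drain.

```java
// Sketch of the paging workaround: instead of one huge result set that is
// expensive to abandon mid-way, fetch rows in fixed-size pages. Keyset
// pagination on an indexed id column avoids the growing cost of large
// OFFSETs. Table/column names are illustrative.
public class Paging {
    static final int PAGE_SIZE = 100_000;

    // Build the query for the next page after the last id already seen.
    // (In real code this would be a PreparedStatement parameter rather
    // than string concatenation; a numeric id keeps this sketch safe.)
    static String pageQuery(long lastSeenId) {
        return "SELECT id, payload FROM events"
             + " WHERE id > " + lastSeenId
             + " ORDER BY id LIMIT " + PAGE_SIZE;
    }
}
```

The caller would execute `pageQuery(lastId)` in a loop, remembering the largest `id` of each page, and stop whenever enough rows have been consumed.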
[3 Aug 2018 12:19]
Alexander Soklakov
Hi Dmitriy,

After a ResultSet is closed, the Connection may still be used. So if we do not drain all of the result set's data, it could wrongly be consumed when other statements retrieve their results. I see only one way to quickly discard this unneeded data: call ((ConnectionImpl)conn).abortInternal(). It forces the socket to close, so it is fast, but you will then need to create a new connection to proceed.
[3 Aug 2018 13:38]
Dmitriy Shirokov
Hi, Alexander,

Thank you for the explanation. Maybe my paging approach is the best then :) I guess you could add a config parameter for controlling the record drain, say maxRecordsToDrain, after which the driver closes the socket. Or a more sophisticated approach: measure the time to reopen a connection (say 100 ms), measure the drain rate (say 10K records per 100 ms), and calculate maxRecordsToDrain dynamically (~10K in this case). Of course there will be unfortunate cases where the number of records left to drain is just above the maxRecordsToDrain threshold :(

Thank you,
Dmitriy
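The dynamic calculation suggested above amounts to one line of arithmetic: aborting the connection only pays off once draining would take longer than reconnecting. The sketch below is illustrative only; `maxRecordsToDrain` is the parameter name proposed in the discussion, not an existing Connector/J option.

```java
// Break-even point for draining vs. aborting: with a reconnect cost of
// 100 ms and a drain rate of 10K records per 100 ms (i.e. 100 records/ms),
// draining is cheaper up to reconnectMs * rowsPerMs = 10_000 records.
public class DrainBudget {
    static long maxRecordsToDrain(long reconnectMs, long rowsPerMs) {
        return reconnectMs * rowsPerMs;
    }
}
```

With the sample numbers from the comment (100 ms reconnect, 100 records/ms drain rate), the threshold works out to 10,000 records, matching the ~10K estimate above.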
[3 Aug 2018 15:59]
Alexander Soklakov
Ok, I'll keep it as a feature request in the hope that we can find an appropriate solution in the future.