Bug #27652 | Connector/J - Memory leak when using &rewriteBatchedStatements | |
---|---|---|---
Submitted: | 4 Apr 2007 14:15 | Modified: | 4 May 2007 21:55
Reporter: | Joseph Dunleavy | Email Updates: |
Status: | Not a Bug | Impact on me: |
Category: | Connector / J | Severity: | S2 (Serious)
Version: | 5.0.5 | OS: | Any
Assigned to: | | CPU Architecture: | Any
[4 Apr 2007 14:15]
Joseph Dunleavy
[4 Apr 2007 14:47]
Mark Matthews
Are you sure you're not picking up 5.0.4 _somewhere_ in your classpath? This sounds like a duplicate of BUG#25073, which was fixed for 5.0.5.
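A quick way to check which jar a class is actually loaded from — an illustrative snippet, not part of the original exchange:

```java
public class DriverJarCheck {
    public static void main(String[] args) throws ClassNotFoundException {
        // Prints the location of the jar com.mysql.jdbc.Driver was loaded
        // from, which makes a stray 5.0.4 copy on the classpath easy to spot.
        Class<?> driverClass = Class.forName("com.mysql.jdbc.Driver");
        System.out.println(
                driverClass.getProtectionDomain().getCodeSource().getLocation());
    }
}
```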
[4 Apr 2007 15:22]
Joseph Dunleavy
Yep. I don't have 5.0.4 loaded, only 5.0.5. I checked the md5sum on the 5.0.5 zip file and it's correct. Here's the metadata for the driver version from my Java program: mysql-connector-java-5.0.5 ( $Date: 2007-03-01 00:01:06 +0100 (Thu, 01 Mar 2007) $, $Revision: 6329 $ ). Are you sure the bug fix made it into the distribution?
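That version string is the kind of value the standard JDBC metadata API reports. A minimal sketch of reading it at runtime; the URL and credentials are placeholders, not from the report:

```java
import java.sql.Connection;
import java.sql.DatabaseMetaData;
import java.sql.DriverManager;

public class DriverVersionCheck {
    public static void main(String[] args) throws Exception {
        // Placeholder connection details; substitute your own.
        Connection conn = DriverManager.getConnection(
                "jdbc:mysql://localhost:3306/test", "user", "password");
        try {
            // For Connector/J, getDriverVersion() includes the embedded
            // $Date$/$Revision$ metadata, so it reflects the jar actually in use.
            DatabaseMetaData meta = conn.getMetaData();
            System.out.println(meta.getDriverName() + " " + meta.getDriverVersion());
        } finally {
            conn.close();
        }
    }
}
```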
[4 Apr 2007 18:09]
Joseph Dunleavy
HPROF output with &rewriteBatchedStatements=true. It looks like the driver is not releasing the memory.

               percent          live           alloc'ed       stack  class
     rank    self  accum      bytes    objs      bytes    objs  trace  name
        1  30.97% 30.97% 131523792 6431956 154628992 7460048 330688  byte[]
        2  16.37% 47.34%  69544416  859589  76181752 1002240 337650  byte[][]

Associated traces:

Responsible for 130MB of live objects after the run is complete:

    TRACE 330688:
        com.mysql.jdbc.Buffer.getBytes(Buffer.java:198)
        com.mysql.jdbc.Buffer.readLenByteArray(Buffer.java:318)
        com.mysql.jdbc.MysqlIO.nextRow(MysqlIO.java:1366)
        com.mysql.jdbc.MysqlIO.readSingleRowSet(MysqlIO.java:2333)
        com.mysql.jdbc.MysqlIO.getResultSet(MysqlIO.java:435)
        com.mysql.jdbc.MysqlIO.readResultsForQueryOrUpdate(MysqlIO.java:2040)
        com.mysql.jdbc.MysqlIO.readAllResults(MysqlIO.java:1443)
        com.mysql.jdbc.MysqlIO.sqlQueryDirect(MysqlIO.java:1777)

Responsible for 70MB of live objects after the run is complete:

    TRACE 337650:
        com.mysql.jdbc.MysqlIO.nextRow(MysqlIO.java:1361)
        com.mysql.jdbc.MysqlIO.readSingleRowSet(MysqlIO.java:2333)
        com.mysql.jdbc.MysqlIO.getResultSet(MysqlIO.java:435)
        com.mysql.jdbc.MysqlIO.readResultsForQueryOrUpdate(MysqlIO.java:2040)
        com.mysql.jdbc.MysqlIO.readAllResults(MysqlIO.java:1443)
        com.mysql.jdbc.MysqlIO.sqlQueryDirect(MysqlIO.java:1777)
        com.mysql.jdbc.Connection.execSQL(Connection.java:3249)
        com.mysql.jdbc.PreparedStatement.executeInternal(PreparedStatement.java:1268)
[4 Apr 2007 18:51]
Mark Matthews
Those traces appear to be from something _reading_ results. As far as I can tell, none of the batching code does _that_. Is there something in your application near the same functionality that is doing SELECTs?
[4 Apr 2007 19:12]
Joseph Dunleavy
If I don't set the parameter &rewriteBatchedStatements, I don't get the memory leak. I will run the same test and post the HPROF output to the bug.
[5 Apr 2007 17:02]
Joseph Dunleavy
I'm running the two tests now, collecting hprof data: one with batching on and one with batching off in the driver URL. The one with driver batching consumes much more memory that is not "cleaned up". My test is for a certain data size, and my heap is now 1600M. If I run the test at a larger size with batching on, the heap blows up. To me it looks like there is a memory leak in the driver when using batching, and that is what the hprof output will show. I'll send the hprof output up after the jobs complete. I'm running the tests through a Java application - I'm not a Java programmer, I'm a DBA - so I'll pass the test case on to a developer to see whether it simulates what we are doing. The test I run selects, deletes, and inserts data across tables in the schema named in the JDBC URL.
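For readers following along, a minimal sketch of the kind of batched workload being described; the table, columns, and connection details are hypothetical, not the reporter's actual test:

```java
import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.PreparedStatement;

// With rewriteBatchedStatements=true, Connector/J rewrites the batch into
// multi-row INSERTs instead of sending one INSERT per addBatch() call.
public class BatchInsertSketch {
    public static void main(String[] args) throws Exception {
        String url = "jdbc:mysql://localhost:3306/test"
                + "?rewriteBatchedStatements=true";   // parameter under discussion
        Connection conn = DriverManager.getConnection(url, "user", "password");
        try {
            // Hypothetical table: CREATE TABLE t (id INT, payload VARCHAR(255))
            PreparedStatement ps = conn.prepareStatement(
                    "INSERT INTO t (id, payload) VALUES (?, ?)");
            try {
                for (int i = 0; i < 100000; i++) {
                    ps.setInt(1, i);
                    ps.setString(2, "row-" + i);
                    ps.addBatch();
                    if ((i + 1) % 1000 == 0) {
                        ps.executeBatch();  // flush periodically to bound batch size
                    }
                }
                ps.executeBatch();          // flush the remainder
            } finally {
                ps.close();
            }
        } finally {
            conn.close();
        }
    }
}
```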
[5 Apr 2007 18:54]
Mark Matthews
Can you post your test (or some form of it)?
[5 Apr 2007 19:06]
Joseph Dunleavy
My hprof tests were inconclusive. I am bumping up the volume of data to try to duplicate the problem again.
[7 Apr 2007 1:34]
Joseph Dunleavy
Debug files - MySQL crashing the JVM.
Virus scan engine found a threat. This file might be infected. Attachment: mysql.zip (application/zip, text), 414.28 KiB.
[11 Apr 2007 14:08]
Joseph Dunleavy
Any progress with the zip files I sent up?
[20 Apr 2007 17:15]
Todd Farmer
Test Case
Attachment: Bug27652.java (application/octet-stream, text), 1.66 KiB.
[20 Apr 2007 17:18]
Todd Farmer
Thanks for your bug report! Unfortunately, I am unable to recreate the problem using the attached test case (which implements the conditions defined in How To Repeat). The most recent trace files don't appear to include any com.mysql.* classes, so they didn't prove useful in identifying the root cause of the problem you report. If you are able to replicate the problem using the attached test case, or modify it so that the problem is recreated, please let us know.
[1 May 2007 15:59]
Todd Farmer
Joseph, thanks for the revised test case. Summarizing our findings for the purposes of this bug report: the out-of-memory errors occur only when using client-side prepared statements. As of Connector/J 5.0.5, the default configuration changed from server-side prepared statements to client-side prepared statements. The same exceptions are seen in earlier versions of C/J when useServerPrepStmts=false is used; setting useServerPrepStmts=true in 5.0.5 and 5.1.0 eliminated them.
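In connection-URL terms, the difference comes down to a single documented property; the hosts and credentials below are placeholders:

```java
// Connector/J 5.0.5 emulates prepared statements client-side by default.
// Re-enabling server-side prepared statements restores the pre-5.0.5 behavior,
// under which the reported exceptions did not occur.
String clientSidePrepared = "jdbc:mysql://localhost:3306/test?useServerPrepStmts=false";
String serverSidePrepared = "jdbc:mysql://localhost:3306/test?useServerPrepStmts=true";
```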
[1 May 2007 18:22]
Mark Matthews
Must be a bug with client-side prepared statements and batching, so I'll investigate. Turning off client-side prepared statements with older versions of the driver takes "batching" out of the equation.
[4 May 2007 21:55]
Mark Matthews
This looks like it has nothing to do with batching, just the issue of getting bitten by the fact that non-server-side prepared statements don't support cursors, and thus don't honor setFetchSize(), and that we disable server-side prepared statements by default in 5.0.5 because of various issues.

If you _do_ want to use cursor-based fetch for _large_ result sets, you'll need to add "useCursorFetch=true" and "useServerPrepStmts=true" to your JDBC URL.

If you don't want to use MySQL-5.0's "cursor-based" fetch, and can deal with processing a row at a time in your application, then set the fetch size to Integer.MIN_VALUE. However, note that you can't issue any further queries on the given connection until all the rows have been consumed or the result set created by the statement has been closed.
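To make both remedies concrete, a sketch of the two configurations. The connection details and table are placeholders; the URL properties and the Integer.MIN_VALUE streaming convention are the documented Connector/J ones:

```java
import java.sql.*;

public class LargeResultSetSketch {

    // Option 1: MySQL 5.0 cursor-based fetch. Requires server-side prepared
    // statements, so both properties go in the URL; setFetchSize() is then honored.
    static void cursorFetch() throws Exception {
        Connection conn = DriverManager.getConnection(
                "jdbc:mysql://localhost:3306/test"
                + "?useCursorFetch=true&useServerPrepStmts=true",
                "user", "password");
        try {
            PreparedStatement ps = conn.prepareStatement("SELECT id, payload FROM t");
            ps.setFetchSize(1000);                 // rows fetched per round trip
            ResultSet rs = ps.executeQuery();
            while (rs.next()) {
                // process one row at a time
            }
            rs.close();
            ps.close();
        } finally {
            conn.close();
        }
    }

    // Option 2: row-by-row streaming. No extra URL properties, but the
    // connection cannot run other statements until this result set is closed.
    static void streaming() throws Exception {
        Connection conn = DriverManager.getConnection(
                "jdbc:mysql://localhost:3306/test", "user", "password");
        try {
            Statement stmt = conn.createStatement(
                    ResultSet.TYPE_FORWARD_ONLY, ResultSet.CONCUR_READ_ONLY);
            stmt.setFetchSize(Integer.MIN_VALUE);  // signals streaming mode
            ResultSet rs = stmt.executeQuery("SELECT id, payload FROM t");
            while (rs.next()) {
                // process one row at a time
            }
            rs.close();
            stmt.close();
        } finally {
            conn.close();
        }
    }
}
```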