Bug #10847: Persistent OutOfMemory Exceptions
Submitted: 25 May 2005 0:02    Modified: 9 Apr 2008 8:51
Reporter: Lane Holcombe        Status: Closed
Category: Connector / J        Severity: S3 (Non-critical)
Version: 3.1.8a/3.1.8          OS: FreeBSD (FreeBSD/Windows)
Assigned to:                   CPU Architecture: Any

[25 May 2005 0:02] Lane Holcombe
Description:
I'm trying to port an existing VB6/iSeries application to Tomcat/MySQL.

I am using a MySQL 5.0.3 server on FreeBSD 5.4-STABLE with 1 GB of memory, both compiled from source.

I have an existing set of tables on the AS/400 created using native DDS. I have recreated these tables in MySQL as follows:

Create Table OFCTABLE (ID BIGINT UNSIGNED NOT NULL AUTO_INCREMENT UNIQUE PRIMARY KEY, JHOOFF varchar(5), SECURED decimal(12,2), UNSECURED decimal(12,2), JHONAME varchar(30));

Create Index idx_OFCTABLE_JHOOFF on OFCTABLE (JHOOFF);

Create Table DOCDESC (DOCNO BIGINT UNSIGNED NOT NULL AUTO_INCREMENT UNIQUE PRIMARY KEY,
	CATEGORY varchar(80), CATDESC varchar(100), DOCDESC varchar(50), CALLCODE varchar(5),
	RISKRATING varchar(5), GRACEDAYS int default 0, CATTYPE varchar(4) default 'P',
	RESP varchar(1) default 'A', RETAIN int default 0);

Create Table INFO (RECORDNO BIGINT UNSIGNED NOT NULL AUTO_INCREMENT UNIQUE PRIMARY KEY, 
	CIF varchar(12), ACCTNO varchar(20), BORROWER varchar(30), 
	LOANTYPE varchar(4), OFFICER varchar(5), REMARKS BLOB, ENTERDATE datetime null, CLEARDATE datetime null, ORIGDATE datetime null, 
	MATDATE datetime null, MAINTDATE datetime null, LOANAMT decimal(12,2), COLLCODE varchar(5), ENTITYTYPE varchar(2), 
	REJECTED varchar(3), CREDITLIFE varchar(3), LNSYSTYPE varchar(5), BRANCH varchar(4), DODD varchar(4), 
	REMARKSYN varchar(1), ENTEREDBY varchar(11), CLEAREDYN varchar(1), COSIGNOR varchar(40), GRADE varchar(3), 
	LOANSTATUS varchar(3), LOARIT varchar(50), POLICYVIO varchar(3), VIOLATION varchar(3), REPORTDATE datetime null, 
	REP varchar(100), PHONE varchar(15), EXTENSION varchar(8), FAX varchar(15), RISKRATING varchar(5), 
	CLEAREDBY varchar(10), DOCNO BIGINT,
	Constraint fk_DOCNO FOREIGN KEY (DOCNO) references DOCDESC (DOCNO));

Create Index idx_INFO_ACCTNO on INFO (ACCTNO);
Create Index idx_INFO_BORROWER on INFO (BORROWER);
Create Index idx_INFO_OFFICER on INFO (OFFICER);
Create Index idx_INFO_DOCNO on INFO (DOCNO);
Create Index idx_INFO_RISKRATING on INFO (RISKRATING);

If I use the mysql client from the command line, the following query yields the expected results:

SELECT ACCTNO, BORROWER, OFFICER, ENTERDATE,
       CATDESC, CLEAREDYN, REMARKS, DOCDESC, GRACEDAYS,
       ORIGDATE, JHONAME, RETAIN,
       CLEARDATE, ETMDESC.RISKRATING
FROM INFO
INNER JOIN OFCTABLE ON INFO.OFFICER = OFCTABLE.JHOOFF
INNER JOIN DOCDESC ON INFO.DOCNO = DOCDESC.DOCNO
ORDER BY INFO.OFFICER, DOCDESC.RISKRATING;

For testing purposes I created a simple Java class that only executes the SQL statement (but does not process the result), first using the DB2 driver and then the MySQL (Connector/J) driver.

This same query yields identical results if I run it using the DB2 driver, either from the FreeBSD-hosted Tomcat server or from the test Java class run on Windows 2000 or FreeBSD.

However, if I run the query from Tomcat using the Connector/J driver, or if I run it from the simple class I created on either Windows or FreeBSD, the query fails with an "OutOfMemory Exception." This occurs even if all X servers are shut down and all users are logged off of the server.

I am able to get around the error for Tomcat development by setting the environment variable JAVA_OPTS="-Xms256m -Xmx512m".

However, this does not seem to fix it for the command-line query from Windows or from FreeBSD. With each of these I still get an OutOfMemory exception.
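
One way to sanity-check this (a minimal sketch only; the class name is made up) is to print the maximum heap the JVM was actually granted, which would show whether the larger heap settings ever reach the standalone command-line runs:

public class HeapCheck {
    public static void main(String[] args) {
        // If this still prints the default (roughly 64 MB), the -Xms/-Xmx options
        // from JAVA_OPTS are not being picked up by the standalone "java" launcher.
        long maxBytes = Runtime.getRuntime().maxMemory();
        System.out.println("Max heap: " + (maxBytes / (1024 * 1024)) + " MB");
    }
}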

How to repeat:
Oh Lord!  I really hope I've given you enough ... but here goes....
Build the tables detailed above. Then run the query listed above from the mysql command-line client. You should get some sort of output (even if the tables are empty).

Next build a simple Java class like the following:

import java.sql.*;

public class simple {
    private static final String SQL =
        "SELECT ACCTNO, BORROWER, OFFICER, ENTERDATE, CATDESC, CLEAREDYN, REMARKS, " +
        "DOCDESC, GRACEDAYS, ORIGDATE, JHONAME, RETAIN, CLEARDATE, ETMDESC.RISKRATING " +
        "FROM INFO INNER JOIN OFCTABLE ON INFO.OFFICER = OFCTABLE.JHOOFF " +
        "INNER JOIN DOCDESC ON INFO.DOCNO = DOCDESC.DOCNO " +
        "ORDER BY INFO.OFFICER, DOCDESC.RISKRATING";

    public static void main(String[] args) {
        try {
            // Load the Connector/J driver; the URL, user, and password are placeholders.
            Class.forName("com.mysql.jdbc.Driver");
            Connection conn = DriverManager.getConnection(
                    "jdbc:mysql://localhost/test", "user", "password");

            // Execute the query but do not process the result set.
            Statement stmt = conn.createStatement();
            stmt.execute(SQL);
        } catch (Throwable t) {
            System.out.print(t.getMessage());
        }
    }
}
[25 May 2005 0:31] Lane Holcombe
I put a syntax error in the Bug:  ETMDESC.RISKRATING  should be DOCDESC.RISKRATING.  This was a simple typo when I entered the bug.  Correcting the typo does not get around the "OutOfMemory Exception" error.
[25 May 2005 0:46] Mark Matthews
You haven't made available the actual data that's in the tables, so we're not able to repeat this.

My guess is that you have a fairly large result set, either in number of rows or in size of data. Because of the "clash" between JDBC specification requirements and limitations in the MySQL protocol prior to MySQL-5.0.x, the driver has to read the entire result set into memory before it can return control to your program.

If this is the case, there are workarounds (they're listed in the documentation that comes with the driver, or on the website), or you can upgrade to Connector/J 3.2.x and MySQL-5.0.x when they become available.
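
For illustration, a minimal sketch of the streaming result set approach described in the driver documentation (the connection URL, credentials, and the abbreviated query here are placeholders):

import java.sql.*;

public class StreamingQuery {
    public static void main(String[] args) throws Exception {
        Class.forName("com.mysql.jdbc.Driver");
        Connection conn = DriverManager.getConnection(
                "jdbc:mysql://localhost/test", "user", "password");

        // Connector/J streams rows one at a time (instead of buffering the whole
        // result set in memory) only when the statement is forward-only, read-only,
        // and the fetch size is set to Integer.MIN_VALUE.
        Statement stmt = conn.createStatement(
                ResultSet.TYPE_FORWARD_ONLY, ResultSet.CONCUR_READ_ONLY);
        stmt.setFetchSize(Integer.MIN_VALUE);

        // Abbreviated query for illustration only.
        ResultSet rs = stmt.executeQuery("SELECT ACCTNO, BORROWER FROM INFO");
        while (rs.next()) {
            // Process each row as it arrives from the server.
        }
        rs.close();
        stmt.close();
        conn.close();
    }
}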

You should be aware that the "stock" setting for most VMs of 64M of maximum heap space isn't a whole lot of headroom, especially when you're running an application server or any framework on top of it (which is why your code works when you specify a larger maximum heap).
[9 Apr 2008 8:51] Susanne Ebrecht
Many thanks for writing a bug report.

I am closing this because the current versions are FreeBSD 7.0 or 6.3, MySQL 5.0.51a, and MySQL Connector/J 5.1.6.
This issue shouldn't occur when you use these newer versions.

If you still have these problems with the newer versions, please feel free to reopen this bug.