Bug #18496 Gross inefficiency in the Decimal -> BigDecimal conversion
Submitted: 24 Mar 2006 19:14 Modified: 5 Apr 2006 20:14
Reporter: wynne crisman Email Updates:
Status: Closed Impact on me:
Category:Connector / J Severity:S5 (Performance)
Version:3.1.12 OS:Any (any)
Assigned to: Mark Matthews CPU Architecture:Any

[24 Mar 2006 19:14] wynne crisman
The process of converting a DECIMAL value to BigDecimal is very wasteful and should be easy to fix.  In fact, converting most types from the driver's byte arrays appears to be wasteful as well.

There are two problems.  First, the conversion process turns the database's raw bytes into a String, which most conversions then immediately discard; it would be better not to create a String unless one is actually required.  Second, in the BigDecimal case, the conversion to a String itself creates a BigDecimal, which is discarded after being converted back to a String; that String is then parsed again into the BigDecimal that is finally returned.  So every load of a BigDecimal value from the database generates an extra String and an extra BigDecimal that are never used and must be GC'd.  I am sure a simple fix can avoid this unnecessary GC time.
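To make the redundancy concrete, here is a minimal standalone sketch of the two paths (the class and method names are illustrative, not Connector/J internals):

```java
import java.math.BigDecimal;
import java.nio.charset.StandardCharsets;

public class DecimalPathDemo {
    // Sketch of the wasteful path described above:
    // bytes -> String -> BigDecimal -> String -> BigDecimal.
    static BigDecimal viaRoundTrip(byte[] raw) {
        String first = new String(raw, StandardCharsets.US_ASCII);
        BigDecimal discarded = new BigDecimal(first); // created only to be thrown away
        String second = discarded.toString();         // extra String, also garbage
        return new BigDecimal(second);
    }

    // Direct path: one String, one BigDecimal.
    static BigDecimal directPath(byte[] raw) {
        return new BigDecimal(new String(raw, StandardCharsets.US_ASCII));
    }

    public static void main(String[] args) {
        byte[] raw = "123.45".getBytes(StandardCharsets.US_ASCII);
        System.out.println(viaRoundTrip(raw)); // 123.45
        System.out.println(directPath(raw));   // 123.45, with two fewer allocations
    }
}
```

Both paths return an equal BigDecimal; the round-trip version simply allocates an extra String and an extra BigDecimal per call.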

How to repeat:
Create a database with a DECIMAL column.  Add a row.  Use JDBC to query for the row and ask the ResultSet for a BigDecimal (resultSet.getBigDecimal("columnName")).  Inspect the process of retrieving the BigDecimal object.

Suggested fix:
Short-circuit the calls to getNativeString(int) and getNativeConvertToString(int, Field) in com.mysql.jdbc.ResultSet.  Instead, for each desired type, write more efficient code that converts directly from the byte array returned by the database to the desired output.

// First check for null; otherwise:
byte[] raw = (byte[]) this.thisRow[columnIndex - 1];
BigDecimal result = new BigDecimal(new String(raw));
[26 Mar 2006 1:25] Mark Matthews
Or alternatively, use the appropriate getter for the given DBMS type.

Doing this for every type that can be straightforwardly converted quickly becomes an "n^2" problem.

Modern JVMs usually don't have a problem with short-lived objects (in fact, they handle those much better than medium- or long-lived ones).

I'll put this on the TODO, but we haven't seen it show up as a performance problem in our tests. If you have a particular benchmark with a profiler trace that shows otherwise, we'd love to see it.
[26 Mar 2006 3:12] wynne crisman
What is the appropriate getter for DECIMAL?  I couldn't find one that didn't create extra objects, but I may have overlooked one.  Also, what is the appropriate database type for a Java BigDecimal?  Are these questions documented somewhere that I should be referencing?

I don't have benchmarks that would show the performance difference between the current design and the proposed one.  My experience is that the Sun JVMs don't handle GC very well, particularly on multi-processor systems when using the stop-the-world GC design, but 'well' is a relative measure.

I don't think what I proposed would have to result in an n^2 amount of code.  I am suggesting that the data shouldn't be converted from a byte array to a String, to a BigDecimal, back to a String, and then to a BigDecimal again.  The BigDecimal problem could be solved by taking the raw bytes, converting them to a String once, and returning that String from the getNativeString(int) and getNativeConvertToString(int, Field) methods.  Because BigDecimal does not offer a byte-array constructor, a single String object is required.
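As a side note, and only if the driver could assume J2SE 5.0 or later: BigDecimal gained a char[] constructor in 5.0, so even the single String could in principle be skipped for plain-ASCII DECIMAL text. A hypothetical sketch (the parse helper is illustrative, not Connector/J code):

```java
import java.math.BigDecimal;

public class DecimalBytes {
    // Hypothetical helper: converts an ASCII decimal byte[] to BigDecimal
    // via the BigDecimal(char[]) constructor (added in J2SE 5.0),
    // avoiding the intermediate String entirely.
    static BigDecimal parse(byte[] raw) {
        char[] chars = new char[raw.length];
        for (int i = 0; i < raw.length; i++) {
            chars[i] = (char) raw[i]; // DECIMAL text is plain ASCII
        }
        return new BigDecimal(chars);
    }

    public static void main(String[] args) {
        byte[] raw = "-123.45".getBytes();
        System.out.println(parse(raw)); // -123.45
    }
}
```

This trades one String allocation for a char[] that BigDecimal must parse anyway, so the win is small, but it shows the conversion can be done with no throwaway objects at all.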

Hope this helps and thank you for the reply.
[28 Mar 2006 19:13] Mark Matthews
The JDBC specification states that the SQL type "DECIMAL" maps to java.math.BigDecimal. We support that mapping.

I think I see where there might be an extra type conversion now; I'll start fixing that right away.
[3 Apr 2006 19:43] Bugs System
A patch for this bug has been committed. After review, it may
be pushed to the relevant source trees for release in the next
version. You can access the patch from: