Bug #40440 Performance bottleneck using batched statements
Submitted: 30 Oct 2008 17:34 Modified: 22 May 2009 15:56
Reporter: Stephane Varoqui Email Updates:
Status: Closed Impact on me:
Category: Connector/J Severity: S5 (Performance)
Version: 5.1.7 OS: Any
Assigned to: CPU Architecture:Any

[30 Oct 2008 17:34] Stephane Varoqui
When executing a huge insert batch with rewriteBatchedStatements set to true and a large max-allowed-packet, the rewritten prepared statement can be very long, which causes a performance bottleneck while the driver parses it looking for "ON DUPLICATE KEY UPDATE".

This happens in com.mysql.jdbc.PreparedStatement.ParseInfo(...): the constructor calls containsOnDuplicateKeyInString(String), which is painfully slow on big queries. This may also affect a standard client-side PreparedStatement, but it is most apparent with batches and rewriteBatchedStatements.
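To illustrate the cost, here is a small self-contained sketch (not Connector/J's actual code) of a naive case-insensitive whole-string scan, applied to the kind of multi-VALUES statement the driver produces when it rewrites a batch. The table name and payload are made up; the point is that the scan must touch the entire rewritten string, whose length grows with batch size:

```java
import java.util.Locale;

public class NaiveScanDemo {
    // Hypothetical stand-in for containsOnDuplicateKeyInString: a naive
    // case-insensitive search that must traverse the entire SQL string.
    static boolean containsOnDuplicateNaive(String sql) {
        return sql.toUpperCase(Locale.ROOT).contains("ON DUPLICATE KEY UPDATE");
    }

    public static void main(String[] args) {
        // Simulate the statement produced when a batch is rewritten into a
        // single INSERT with thousands of VALUES groups.
        StringBuilder sb = new StringBuilder("INSERT INTO t (id, data) VALUES ");
        for (int i = 0; i < 10_000; i++) {
            if (i > 0) sb.append(',');
            sb.append('(').append(i).append(",'aaaaaaaaaaaaaaaaaaaa')");
        }
        String rewritten = sb.toString();

        long start = System.nanoTime();
        boolean found = containsOnDuplicateNaive(rewritten);
        long micros = (System.nanoTime() - start) / 1_000;
        System.out.println(rewritten.length() + " chars scanned in " + micros
                + " us, clause found: " + found);
    }
}
```

With BLOB parameters the rewritten statement can approach max-allowed-packet in size, so repeating such a scan per batch becomes the dominant cost seen in the attached profile.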

See attached profiling info.

How to repeat:
Execute a batched INSERT containing BLOB columns with rewriteBatchedStatements=true.
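A minimal repro might look like the following sketch. The URL, credentials, table name, row count, and BLOB size are all placeholders; a live MySQL server with a matching `blobs` table is assumed:

```java
import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.PreparedStatement;
import java.util.Random;

public class BatchBlobRepro {
    // Hypothetical connection settings; adjust host/schema/credentials.
    static final String URL =
        "jdbc:mysql://localhost:3306/test?rewriteBatchedStatements=true";

    // Builds one deterministic BLOB payload of the given size.
    static byte[] blob(int size) {
        byte[] b = new byte[size];
        new Random(42).nextBytes(b);
        return b;
    }

    public static void main(String[] args) throws Exception {
        try (Connection c = DriverManager.getConnection(URL, "user", "pass");
             PreparedStatement ps =
                 c.prepareStatement("INSERT INTO blobs (id, data) VALUES (?, ?)")) {
            for (int i = 0; i < 5_000; i++) {
                ps.setInt(1, i);
                ps.setBytes(2, blob(8 * 1024)); // 8 KB per row
                ps.addBatch();
            }
            // With rewriteBatchedStatements=true the driver folds these rows
            // into one multi-VALUES INSERT, so the string it scans for
            // "ON DUPLICATE KEY UPDATE" is tens of megabytes long.
            ps.executeBatch();
        }
    }
}
```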
[30 Oct 2008 20:06] Tonci Grgin
Hello Stephane.

One can't argue against a profiler, right? So, verified as described by looking into the code and the attachment. I have also lowered the severity to S5 (Performance), as that looks more appropriate.

If I might add, I do not see a way to deal with such problems short of putting a full parser inside C/J... I don't think that option is realistic, but let's see what Mark has to say.
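Short of a full parser, one can imagine a lighter-weight scan that at least skips quoted string literals, so parameter values containing the words "ON DUPLICATE KEY UPDATE" cannot trigger a false positive. This is only a sketch of that idea, not the actual Connector/J code or the eventual fix (and it ignores escaped quotes for brevity):

```java
public class OnDuplicateCheck {
    // Case-insensitive search for "ON DUPLICATE KEY UPDATE" that skips
    // single- and double-quoted literals. Simplified sketch: escaped quotes
    // ('' or \') and comments are not handled.
    static boolean containsOnDuplicate(String sql) {
        final String clause = "ON DUPLICATE KEY UPDATE";
        char quote = 0; // current quote character, 0 = not inside a literal
        for (int i = 0; i < sql.length(); i++) {
            char ch = sql.charAt(i);
            if (quote != 0) {
                if (ch == quote) quote = 0;   // literal ends
            } else if (ch == '\'' || ch == '"') {
                quote = ch;                   // literal starts
            } else if (sql.regionMatches(true, i, clause, 0, clause.length())) {
                return true;
            }
        }
        return false;
    }
}
```

Even with such a check, the cost remains linear in the statement length, which is why doing it once per prepared statement rather than per rewritten batch matters.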
[22 May 2009 15:56] Mark Matthews
This is fixed by the fix for Bug#41532.