Bug #38006: some batched statements don't get executed when rewriteBatchedStatements is true
Submitted: 10 Jul 2008 5:33    Modified: 12 Aug 2008 5:40
Reporter: Chris Davaz
Status: Duplicate
Category: MySQL Server    Severity: S3 (Non-critical)
Version: 5.1    OS: Linux
Assigned to:    CPU Architecture: Any
Tags: batch batched statements

[10 Jul 2008 5:33] Chris Davaz
Description:
Setting rewriteBatchedStatements to true on the connection and submitting a large batch apparently causes the connection to close. I'm not sure if this has to do with:

"Notice that for prepared statements, server-side prepared statements can not currently take advantage of this rewrite option, and that if you don't specify stream lengths when using PreparedStatement.set*Stream(), the driver won't be able to determine the optimum number of parameters per batch and you might receive an error from the driver that the resultant packet is too large"

from the manual.
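
For context, what the rewrite option amounts to here is concatenating the batched calls into a single multi-statement string sent in one round trip (the server-side verification later in this report confirms the server receives "call foo(); call foo(); ... call foo();"). A rough sketch of the payload shape, using the table and values from the repro below — the class name and exact formatting are illustrative, not Connector/J internals:

package com.chrisdavaz.test;

public class RewrittenPayloadSketch {
	public static void main (String[] args) {
		// Illustration only: approximate the multi-statement string that
		// a rewritten batch of 30,000 CALLs amounts to on the wire.
		StringBuilder payload = new StringBuilder();
		for (int i = 0; i < 30000; i++) {
			payload.append("CALL insert_t1(")
			       .append(i * 2).append(", ").append(i * 3).append("); ");
		}
		// Tens of thousands of statements in a single packet.
		System.out.println("payload size: " + payload.length() + " chars");
	}
}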

How to repeat:
Create a table and stored procedure in test like so:

CREATE TABLE t1 ( id INT UNSIGNED NOT NULL AUTO_INCREMENT, f1 INT, f2 INT, PRIMARY KEY (id) );

DELIMITER //
CREATE PROCEDURE insert_t1 (fone INT, ftwo INT) BEGIN INSERT INTO t1 SET f1 = fone, f2 = ftwo; END//
DELIMITER ;

Then compile and run this class to generate the exception:
package com.chrisdavaz.test;

import java.sql.*;

public class BigBatch {
	private static final String JDBC_URI = "jdbc:mysql://localhost/test?user=root&password=lemonlime&rewriteBatchedStatements=true";
	public static void main (String[] args) {
		try {
			Connection con = DriverManager.getConnection(JDBC_URI);
			CallableStatement stm = con.prepareCall("{ CALL insert_t1(?, ?) }");
			// Queue 30,000 calls in a single batch...
			for (int i = 0; i < 30000; i++) {
				stm.setInt(1, i * 2);
				stm.setInt(2, i * 3);
				stm.addBatch();
			}
			// ...then submit them all in one executeBatch() call.
			stm.executeBatch();
		} catch (SQLException e) {
			System.err.println("SQLException: " + e.getMessage());
			System.err.println("SQLState: " + e.getSQLState());
			System.err.println("VendorError: " + e.getErrorCode());
			System.exit(1);
		}
		System.exit(0);
	}
}

You should see the following output:
SQLException: Operation not allowed on closed connection
SQLState: 08003
VendorError: 0

Running "SELECT COUNT(*) FROM t1" after the program runs shows that 3105 rows were created before the program bailed.
[17 Jul 2008 11:35] Tonci Grgin
Hi Chris, and thanks for your report. I too do not think it's a c/J fault... Can you please attach the my.cnf file used to start the MySQL server, and check the <your_server_name>.err log to see if there is any information relevant to the problem you're experiencing?
[17 Jul 2008 11:46] Chris Davaz
Error file generated after running the Java program in the original bug report

Attachment: ontolog.err (application/octet-stream, text), 2.12 KiB.

[17 Jul 2008 11:47] Chris Davaz
The my.cnf used for this server

Attachment: my.cnf (application/octet-stream, text), 4.82 KiB.

[17 Jul 2008 11:48] Chris Davaz
Interesting, it looks like mysqld actually crashes (and restarts) according to the error log.
[18 Jul 2008 12:07] Tonci Grgin
Verily so; the error log shows:

080717 19:43:50 [ERROR] Out of memory; check if mysqld or some other process uses all available memory; if not, you may have to use 'ulimit' to allow mysqld to use more memory or you can add more swap space

Can you please tune your server settings and retest? From what I see so far, this is neither a c/J nor a MySQL server bug...
[21 Jul 2008 10:32] Chris Davaz
Could you give me some pointers on exactly which memory settings to tune to allow larger batched statements to be executed? So far my experiments have had the same result.
[21 Jul 2008 10:54] Tonci Grgin
Chris, please run top while your test is running. If mysqld eats all the memory, I'll change the bug category to the server. But for now, I believe it is your test that is eating the whole system's memory.
[21 Jul 2008 12:08] Chris Davaz
I have been monitoring the memory usage of both mysqld and the Java process, and neither takes up all available memory. After adding about 3000 rows to t1, the server produces the out-of-memory error message posted above. I have tried increasing the "key_buffer" value in my.cnf, but to no effect; I'm not even sure that is the right memory setting. If there are other settings I should tweak and test again with, let me know and I'll give it a shot.
[22 Jul 2008 8:56] Tonci Grgin
Chris, from what I see this does not appear to be a c/J problem, so I have changed the category accordingly. I have also assigned a colleague of mine who knows better how to check/tune things on Linux.
[24 Jul 2008 19:22] Sveta Smirnova
Thank you for the report.

Verified as described.

The real problem is that the server eats an enormous amount of memory when executing multiple statements like "call foo(); call foo(); call foo(); ... call foo();". To repeat this, use either the script provided in the initial description or any C API program which sends multiple statements, like the PHP example below (using the attached file):

<?php
$mysqli = mysqli_init();
$mysqli->real_connect('127.0.0.1', 'root', '', 'bug38006', 3351);
// multi_query() sends the file's contents as one multi-statement string,
// the same shape the rewritten batch takes on the wire.
$res = $mysqli->multi_query(file_get_contents(dirname(__FILE__) . DIRECTORY_SEPARATOR . 'bug38006_insert.query'));
var_dump(mysqli_errno($mysqli), mysqli_error($mysqli));
?>

Workaround: split the statements into smaller chunks.
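
A minimal sketch of that workaround applied to the original test class — the class name ChunkedBatch and the chunk size of 500 are illustrative choices, not tested thresholds:

package com.chrisdavaz.test;

import java.sql.*;

public class ChunkedBatch {
	private static final String JDBC_URI = "jdbc:mysql://localhost/test?user=root&password=lemonlime&rewriteBatchedStatements=true";
	private static final int CHUNK_SIZE = 500; // arbitrary; tune for your server

	public static void main (String[] args) throws SQLException {
		Connection con = DriverManager.getConnection(JDBC_URI);
		CallableStatement stm = con.prepareCall("{ CALL insert_t1(?, ?) }");
		for (int i = 0; i < 30000; i++) {
			stm.setInt(1, i * 2);
			stm.setInt(2, i * 3);
			stm.addBatch();
			// Flush periodically so no single rewritten packet has to
			// carry all 30,000 calls.
			if ((i + 1) % CHUNK_SIZE == 0) {
				stm.executeBatch();
				stm.clearBatch();
			}
		}
		stm.executeBatch(); // flush the remainder
		stm.close();
		con.close();
	}
}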
[24 Jul 2008 19:23] Sveta Smirnova
test data

Attachment: bug38006_insert.query.gz (application/x-gzip, text), 149.79 KiB.

[28 Jul 2008 19:55] Sveta Smirnova
SHOW PROCESSLIST shows "query; query; query; ... query" when a connector is used, so this is a server problem. I don't think checking the general log is needed in this case. Also, it is the server that eats the memory, not the application.
[11 Aug 2008 16:24] MySQL Verification Team
Folks, this looks like a duplicate of bug #27863...
[12 Aug 2008 5:40] Sveta Smirnova
Shane, you are right: this is a duplicate of bug #27863.