Bug #26824 Libmysql, possibly C/XYZ, does not handle error of COM_STMT_SEND_LONG_DATA
Submitted: 3 Mar 2007 14:21 Modified: 17 Jan 2014 15:11
Reporter: Andrey Hristov Email Updates:
Status: Verified Impact on me: None
Category:MySQL Server: Prepared statements Severity:S2 (Serious)
Version:5.1.17-bk, probably 4.1 and 5.0 too OS:Any (All)
Assigned to: Assigned Account CPU Architecture:Any

[3 Mar 2007 14:21] Andrey Hristov
Description:
 Hi,
According to the client/server protocol specification, the server does not send a response packet for the COM_STMT_SEND_LONG_DATA command, which is used for sending BLOBs with the prepared statement API; efficiency is the probable reason. A comment in libmysql states that the only error which can happen is out of memory on the server side, and that it will be caught during execute. However, this does not hold true: it fails to take into account that the server has an input buffer whose size is controlled by the max_allowed_packet server variable. If the incoming long data is larger than this buffer, the server sends an error packet back without even reaching the code that handles COM_STMT_SEND_LONG_DATA. libmysql does not expect this, so problems occur on the next COM_EXECUTE or any other command sent over the wire, as in the following scenarios. net_clear() in net.c checks for such stale data, but it may not catch the case where the packet sequence number happens to be correct and report an error, and other connectors that lack this check entirely can run into the following problem.

SEND_LONG_DATA <-- error packet generated
COM_QUERY "SELECT 1 FROM DUAL" <-- fetches error "packet too large"!?!

or
SEND_LONG_DATA <-- error packet generated
COM_EXECUTE <-- fetches error "packet too large"!?!

The following will be caught by net_clear()
SEND_LONG_DATA <-- error packet generated
SEND_LONG_DATA <-- error packet generated
COM_EXECUTE <-- fetches error "packet too large"!?!
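The desynchronization in the scenarios above can be modeled abstractly. The following is a hypothetical toy model in Python, not the actual wire protocol: because the client never reads a reply after SEND_LONG_DATA, the server's error packet stays queued in the connection buffer and is later consumed as if it were the reply to the next command.

```python
from collections import deque

class FakeServer:
    """Toy model of the server side: unread replies queue up on the connection."""
    def __init__(self, max_allowed_packet=1024):
        self.max_allowed_packet = max_allowed_packet
        self.replies = deque()  # packets the client has not read yet

    def send_long_data(self, payload):
        # Per the protocol there is no OK reply for SEND_LONG_DATA, but an
        # oversized packet still produces an error packet before the command
        # handler is ever reached.
        if len(payload) > self.max_allowed_packet:
            self.replies.append("ERROR: packet too large")

    def query(self, sql):
        self.replies.append("OK: " + sql)

class FakeClient:
    def __init__(self, server):
        self.server = server

    def send_long_data(self, payload):
        # The client does not read a reply here -- that is the bug's premise.
        self.server.send_long_data(payload)

    def query(self, sql):
        self.server.query(sql)
        # The client reads the *oldest* unread packet, which may be the
        # stale error left over from SEND_LONG_DATA.
        return self.server.replies.popleft()

srv = FakeServer(max_allowed_packet=1024)
cli = FakeClient(srv)
cli.send_long_data("x" * 6_000_000)     # error packet queued, never read
print(cli.query("SELECT 1 FROM DUAL"))  # -> ERROR: packet too large
```

The innocent SELECT appears to fail with "packet too large", matching the observed behavior.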

andrey@whirlpool:~/dev/php6_libmysql> ./php -r 'echo "\n";
$str=str_repeat("andrey", 1000000);
$c=mysqli_connect("127.0.0.1","foo","bar","test");$s=mysqli_stmt_init($c);
mysqli_stmt_prepare($s,"insert into mysqlnd_blob (a) values (?)");
var_dump($s);
mysqli_stmt_bind_param($s, "b", $null);
var_dump(mysqli_stmt_send_long_data($s, 0, $str));
var_dump(mysqli_stmt_send_long_data($s, 0, $str));
var_dump(mysqli_stmt_error($s), mysqli_error($c));
var_dump(mysqli_stmt_execute($s), mysqli_stmt_error($s));'

object(mysqli_stmt)#2 (0) {
}
bool(true)
bool(true)
string(0) ""
string(0) ""
Error: net_clear() skipped 64 bytes from file: TCP/IP (3)
bool(false)
string(51) "Got a packet bigger than 'max_allowed_packet' bytes"

How to repeat:
This will let libmysql emit an error

php -r '$str=str_repeat("andrey", 1000000);
$c=mysqli_connect("127.0.0.1","foo","bar","test");$s=mysqli_stmt_init($c);
mysqli_stmt_prepare($s,"insert into mysqlnd_blob (a) values (?)");
var_dump($s);
mysqli_stmt_bind_param($s, "b", $null);
var_dump(mysqli_stmt_send_long_data($s, 0, $str));
var_dump(mysqli_stmt_send_long_data($s, 0, $str));
var_dump(mysqli_stmt_error($s), mysqli_error($c));
var_dump(mysqli_stmt_execute($s), mysqli_stmt_error($s));'

This one, however, sends only one long data packet, so net_clear() does not catch the stale error and the failure surfaces only at execute time, with a WTF factor:

php -r 'echo"\n";$str=str_repeat("andrey", 1000000);
$c=mysqli_connect("127.0.0.1","foo","bar","test");$s=mysqli_stmt_init($c);
mysqli_stmt_prepare($s,"insert into mysqlnd_blob (a) values (?)");
var_dump($s);
mysqli_stmt_bind_param($s, "b", $null);
var_dump(mysqli_stmt_send_long_data($s, 0, $str));
var_dump(mysqli_stmt_error($s), mysqli_error($c));
var_dump(mysqli_stmt_execute($s), mysqli_stmt_error($s));'

Suggested fix:
libmysql, and any other connectors that do not use libmysql, should attempt a non-blocking read after COM_STMT_SEND_LONG_DATA to see whether the server sent an error. Automatic recovery would be to split the data into chunks small enough not to trigger max_allowed_packet errors and retransmit, or simply to return the error to the user.
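The chunk-splitting recovery suggested above could be sketched as follows. This is a language-neutral illustration in Python, not libmysql code; the real fix would loop over mysql_stmt_send_long_data in C, and the header overhead value is a hypothetical safety margin, not a protocol constant.

```python
def split_long_data(data, max_allowed_packet, overhead=64):
    """Split a BLOB into chunks that each fit into the server's input
    buffer, leaving headroom for packet headers (overhead is an assumed
    safety margin, not taken from the protocol specification)."""
    chunk_size = max_allowed_packet - overhead
    if chunk_size <= 0:
        raise ValueError("max_allowed_packet too small to send any data")
    return [data[i:i + chunk_size] for i in range(0, len(data), chunk_size)]

# Each chunk would then be sent with its own COM_STMT_SEND_LONG_DATA;
# successive calls append to the same statement parameter, so the server
# reassembles the full value at execute time.
chunks = split_long_data(b"a" * 10_000, max_allowed_packet=4096)
assert all(len(c) <= 4096 - 64 for c in chunks)
assert b"".join(chunks) == b"a" * 10_000
```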
[3 Mar 2007 14:25] Andrey Hristov
The problem was found by a test created by Ulf Wendel for Connector/PHP.
[6 Mar 2007 8:45] Sveta Smirnova
Thank you for the report.

Verified as described.
[8 Mar 2007 19:54] Konstantin Osipov
This whole no-reply idea was a grand mistake.
It doesn't speed up a thing; it actually slows down _every_single_client_ unless the server turns off the Nagle algorithm, which slows down the general case.
It's funny it broke down on a lower layer.
The fix, I guess, would be to suppress the lower layer error.
[7 Oct 2008 10:58] Konstantin Osipov
A minor problem, and to fix it cleanly we need a significant or incompatible change. Setting to "To be fixed later".