Bug #23533 CREATE SELECT max_binlog_cache_size test case needed
Submitted: 22 Oct 2006 7:25 Modified: 2 Mar 2008 19:33
Reporter: Lars Thalmann Email Updates:
Status: Closed Impact on me:
None 
Category:Tests: Replication Severity:S4 (Feature request)
Version:5.1 source OS:Any
Assigned to: Serge Kozlov CPU Architecture:Any

[22 Oct 2006 7:25] Lars Thalmann
Description:
"CREATE TABLE ... SELECT" is a transaction: if it fails in the middle
(e.g. on a duplicate key) it is rolled back (the created table is
dropped). Under statement-based replication (SBR) this has no real
consequences, but under row-based replication (RBR) it does: the row
events for the statement are written into the binlog transaction cache.

This is notable because it means that the max_binlog_cache_size limit
applies: any CREATE ... SELECT that inserts more than
max_binlog_cache_size bytes (~0L, i.e. the platform maximum, by default)
into a table will fail under RBR, even if that table is not
transactional.

The row events will fail to enter the binlog transaction cache once they
exceed this limit. Since ~0L is about 4G on a 32-bit machine, creating a
table from a SELECT that returns more than 4G of data will fail there,
or will at least fail to be binlogged (thus endangering recovery).
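The ~4G figure follows from the default value ~0L, an unsigned long with all bits set, whose magnitude depends on the platform word size. A minimal sketch of that arithmetic (the helper name is illustrative, not from the server source):

```python
def all_bits_set(word_bits):
    """Value of ~0L for an unsigned long of the given width in bits."""
    return (1 << word_bits) - 1

# Default max_binlog_cache_size cap on a 32-bit build: just under 4 GiB.
cap_32 = all_bits_set(32)
print(cap_32)            # 4294967295

# On a 64-bit build the same expression yields an effectively unlimited cap.
cap_64 = all_bits_set(64)
print(cap_64)            # 18446744073709551615
```

This is why the failure mode is visible in practice only on 32-bit machines unless max_binlog_cache_size has been lowered explicitly.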

How to repeat:
Code review

Suggested fix:
1. Create a test case with a small max_binlog_cache_size and check 
   that CREATE TABLE ... SELECT is properly rolled back when the 
   inserted data is bigger than max_binlog_cache_size

2. Document that the statement behaves this way when using ROW binlog 
   format
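A test along the lines of step 1 could be sketched as a mysqltest script roughly like the following (the table names, the cache-size value, and the population step are illustrative assumptions, not the test that was actually committed):

```
--source include/have_binlog_format_row.inc

# Deliberately tiny cache so the failure is cheap to trigger.
SET GLOBAL max_binlog_cache_size = 4096;

CREATE TABLE src (a INT, b LONGBLOB);
# ... populate src with more than 4096 bytes of row data ...

# The CREATE ... SELECT should fail once its row events overflow the
# cache, and the half-created table must be rolled back (dropped).
--error ER_TRANS_CACHE_FULL
CREATE TABLE dst SELECT * FROM src;

--error ER_NO_SUCH_TABLE
SELECT * FROM dst;
```

The final SELECT verifies the rollback: if dst still exists after the failed statement, the cleanup described in the bug report did not happen.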
[22 Oct 2006 12:26] Valeriy Kravchuk
Thank you for a reasonable feature and documentation request.
[2 Mar 2008 19:33] Serge Kozlov
Fixed; test case added to the bugs suite.
[25 Mar 2008 11:23] Bugs System
Pushed into 5.1.24-rc
[26 Mar 2008 19:00] Bugs System
Pushed into 6.0.5-alpha