Bug #13579 Possible binary log DoS vulnerability
Submitted: 28 Sep 2005 17:34
Modified: 29 Sep 2005 8:13
Reporter: Michael Dopheide
Status: Closed
Category: MySQL Server
Severity: S2 (Serious)
Version: 4.1.13
OS: Linux (kernel 2.4.31)
Assigned to:
CPU Architecture: Any

[28 Sep 2005 17:34] Michael Dopheide
Description:
It appears that large values in INSERT statements are written to the binary log in full even when the target columns are much shorter.  For example, a script bug of ours was repeatedly attempting to insert 800K of data into a varchar(100) field.  MySQL truncates the data when inserting it into the table, but the full 800K is written to the binary log every time.  We ended up with 250MB in the tables and over 30GB of binary logs.

So even if the user is "limited" by a table size of 4GB, they can still write much more data to the system in a very short period of time.  This could cause serious data integrity issues for replication, or possibly even a server crash when the partition holding the binary logs fills up (we can't test those limits in our production environment).

How to repeat:
Create a table with a varchar(100) field and turn on binary logging.  Then try inserting very large values for the varchar(100) field.
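A minimal sketch of that (the table and column names here are placeholders, not taken from our environment):

CREATE TABLE t1 (c1 VARCHAR(100));
-- with binary logging enabled (--log-bin) and the default non-strict
-- sql_mode, the value below is silently truncated to 100 characters
INSERT INTO t1 VALUES (REPEAT('x', 800 * 1024));
-- the table gains only ~100 bytes per row, but the full ~800K statement
-- text lands in the binary log; repeating the INSERT inflates the log
SHOW MASTER LOGS;  -- compare binary log growth against the table size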

Suggested fix:
Unknown.  Possibly an additional option so that values written to the binary log are first checked against the data types of the columns in question.  However, that would increase logging overhead and slightly reduce the ability to debug problems.

Or perhaps a running total of the number of bytes a user is allowed to write to the binary log, similar to a table size limit.  That would be a better solution.
[29 Sep 2005 8:13] Hartmut Holzgraefe
There are two ways to avoid this already: in 4.x you can limit max_allowed_packet,
and in 5.0 you can set sql_mode=TRADITIONAL so that attempts to insert data
that exceeds field lengths lead to SQL errors.
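For reference, a sketch of applying those two settings (the values are illustrative, not recommendations):

SET GLOBAL max_allowed_packet = 1048576;
-- 4.x: caps any single statement at ~1MB (also settable in my.cnf);
-- oversized INSERT statements are rejected before reaching the log

SET GLOBAL sql_mode = 'TRADITIONAL';
-- 5.0: values longer than the column now raise
-- ERROR 1406: Data too long for column, instead of being truncated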
[3 Oct 2005 20:30] Michael Dopheide
So in order to protect ourselves from a large binary log DoS, our options are to lose functionality in one of two ways?  That sounds like a workaround, not a solution.