Bug #2691 CONCAT deletes data after exceeding size tolerance
Submitted: 9 Feb 2004 10:00    Modified: 19 Feb 2004 7:30
Reporter: Peter Souza
Status: Not a Bug    Impact on me: None
Category: MySQL Server: MyISAM storage engine    Severity: S1 (Critical)
Version: 4.0.17-max-nt    OS: Windows (Windows 2000 Server)
Assigned to: Dean Ellis    CPU Architecture: Any

[9 Feb 2004 10:00] Peter Souza
Description:
Using CONCAT to append data to a LONGBLOB / LONGTEXT fails after approximately 1.25 MB.  For example, if I need to insert 5 MB of data into a LONGTEXT field and use several 64kb queries to update the field [per field=CONCAT(field, new_data)], then after the 18th-19th query the field is suddenly set to NULL (without error).  I've tried this with real data and with dummy data (all ASCII 65/'A' characters).

I haven't discovered any other way to append data to an existing record in a database, and our project requires us to take advantage of MySQL's support for the large capacity that LONGTEXT offers (we require upwards of 50+ MB in a single record).  If there is one, wonderful, and I would be glad to hear of it; however, under no circumstances should CONCAT set a LONGTEXT field to NULL if there's some kind of capacity threshold I'm crossing.

Also, I've noticed that CONCAT is very slow, even on a very fast system (dual 2.40 GHz P4s w/1GB RAM), often taking nearly a full second to finish concatenating one 64kb chunk of data onto the existing field data.

How to repeat:
I have run into this problem with 100% reproducibility using queries in both MySQL-Front and via the MySQL C API (mysql_query()).  The example below resembles a simple message queue for a mail server using MySQL as a means to store MIME-encoded messages of variable size.  Any data exceeding 1.25MB causes the error, so assume that XXXXX is any data 64kb in size.

My table struct:

CREATE TABLE `MessagesToDeliver`
(
	`MessageID` INT (12) UNSIGNED,
	`ReceivedTime` INT (10) UNSIGNED,
	`DomainTo` CHAR (65),
	`ToMailBoxID` INT (10) UNSIGNED,
	`FromMailBoxID` INT (10) UNSIGNED,
	`FromHostName` TEXT,
	`FromIP` TEXT,
	`FileSize` INT (10) UNSIGNED,
	`MailFrom` TEXT,
	`ReceiptTo` TEXT,
	`MessageText` LONGTEXT,

	INDEX(`MessageID`, `ReceivedTime`, `DomainTo`, `ToMailBoxID`, `FromMailBoxID`)
);

My insert query:
INSERT INTO MailServer.MessagesToDeliver VALUES (1, 1076347325, "domain.com", 0, 0, "domain.com", "12.34.56.78", 4452080, "email@domain.com", "email@domain.com", "")

My update query (run it 15-20 times until the field data becomes NULL):

UPDATE MailServer.MessagesToDeliver SET MessageText = CONCAT(MessageText, "XXXXX") WHERE (MessageID = 1) AND (ReceivedTime = 1076347325) AND (ReceiptTo = "email@domain.com")

Suggested fix:
(1) Some kind of method to append data to a LONGTEXT without requiring existing text.  I.e.:  UPDATE db.table APPEND field WITH data WHERE where_clause

(2) CONCAT may have a bug that's causing a buffer overrun.  While the exact problem isn't obvious, it's pretty clear that CONCAT should not delete data under any circumstances (even if we were to concat a NULL string to the end of a good string of data)
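
As an aside on point (2): since CONCAT() returns NULL as soon as any of its arguments is NULL, one defensive variant of the append (a sketch only, using the table and values from this report, and no fix for whatever size threshold is being hit) is to wrap the column in IFNULL():

UPDATE MailServer.MessagesToDeliver
SET MessageText = CONCAT(IFNULL(MessageText, ''), "XXXXX")
WHERE (MessageID = 1) AND (ReceivedTime = 1076347325) AND (ReceiptTo = "email@domain.com");

This keeps a single NULL from turning every later append into a no-op, but it does not explain why the column becomes NULL in the first place.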
[9 Feb 2004 10:15] Dean Ellis
Verified against 4.0.18/Linux.

If the update exceeds the size of max_allowed_packet the column value is set to NULL.  Workaround is, of course, to increase max_allowed_packet, but this is counterintuitive when the query itself is short:

CREATE TABLE test ( a LONGTEXT );
SET MAX_ALLOWED_PACKET=65536;
INSERT INTO test VALUES ( SPACE(65536) );
SELECT LENGTH(a) FROM test;
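-- the value built by the next UPDATE exceeds max_allowed_packet,
-- so the column is set to NULL (the second SELECT LENGTH returns NULL)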
UPDATE test SET a = SPACE(65537);
SELECT LENGTH(a) FROM test;
DROP TABLE test;
[9 Feb 2004 10:25] Peter Souza
I have not exceeded the packet size (I double-checked to make sure).  The data does not change between updates, either.  I can send this in ~275-300 4kb chunks instead of ~16-19 64kb chunks and reproduce the problem every time.  Wish it were that simple, but it's not.
[9 Feb 2004 10:29] Peter Souza
By "the data does not change", I mean that I can insert the same data between packets.  I.e: ... = CONCAT(field, "AAAAAA") ...

Over and over without changing what I'm concatening.
[9 Feb 2004 11:29] Dean Ellis
I cannot reproduce a problem using 4.0.18 unless I exceed max_allowed_packet.  I have successfully used CONCAT() in 64k increments to well over 100MB.

I would need a complete test case, including the SQL to create, populate/update the table, which demonstrates the issue.  This should also include a setting for max_allowed_packet to guarantee that that is unrelated to the issue.
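
A stripped-down test case of the kind requested might look roughly like this (the chunk size, repeat count and packet limit are placeholder values, with REPEAT('A', 65536) standing in for a real 64kb chunk):

SET max_allowed_packet = 1048576;
CREATE TABLE t ( a LONGTEXT );
INSERT INTO t VALUES ( '' );
-- repeat the UPDATE 15-20 times, checking the length after each run
UPDATE t SET a = CONCAT(a, REPEAT('A', 65536));
SELECT LENGTH(a) FROM t;
DROP TABLE t;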
[9 Feb 2004 12:46] Peter Souza
While 4.0.18 is not yet a released production build (and thus unavailable), I did upgrade our server to 4.0.17 and am still having the same problem.  I have established a user account with access to my live data and will attach 3 queries: (1) CREATE TABLE, (2) INSERT INTO, and (3) UPDATE.

I have notes included with the UPDATE query that I'll copy here for your (and others') convenience:

---
Run this UPDATE query until the field sets itself to NULL.  It should take exactly 15 runs.  Subsequent CONCATs will affect 0 rows (concatenating something to NULL).
 
The UPDATE query below has 704 lines of 102 bytes (a 71,808-byte update each time) and my max_allowed_packet is set to 1,048,576 bytes (1024KB or ~1MB), so my guess is that CONCAT is actually taking the existing field data, loading it into memory, concatenating the new chunk, and then dumping the whole thing back into the table as one UPDATE, which is exactly what I'm trying to avoid.  I base this on the calculation that 15 UPDATEs @ 71,808 bytes each = 1,077,120 total bytes, which is when it dies and sets the field to NULL (see the quick check after this note).
---
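
As a quick check of that calculation (assuming each chunk is exactly 71,808 bytes):

SELECT 14 * 71808 AS after_14th_update,   -- 1,005,312 bytes, still under 1,048,576
       15 * 71808 AS after_15th_update;   -- 1,077,120 bytes, past max_allowed_packet

which is consistent with the column surviving 14 appends and going NULL on the 15th.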

Would you like me to submit login information to you through e-mail?  If so, to what address?  (I have attached the query files to this bug report.)
[9 Feb 2004 12:47] Peter Souza
CREATE TABLE struct and query

Attachment: create_table.sql (application/octet-stream, text), 524 bytes.

[9 Feb 2004 12:47] Peter Souza
INSERT INTO query

Attachment: insert_record.sql (application/octet-stream, text), 329 bytes.

[9 Feb 2004 12:48] Peter Souza
UPDATE query (with comments) to reproduce bug

Attachment: update_record.sql (application/octet-stream, text), 71.05 KiB.

[9 Feb 2004 12:48] Peter Souza
Bug updated from version 4.0.16-max-nt to 4.0.17-max-nt.
[9 Feb 2004 13:09] Dean Ellis
I think you misunderstood my comments when I verified the bug.  When the size of what you are updating exceeds max_allowed_packet, it is indeed setting the value of the column to NULL.  ie: following your exact steps above, you exceed the 1MB max_allowed_packet and the column value goes to NULL, even though the actual query is far shorter than max_allowed_packet.
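
For reference, a rough sketch of the usual workaround (the exact variable-setting syntax available depends on the server version, so treat this as illustrative):

SHOW VARIABLES LIKE 'max_allowed_packet';
SET GLOBAL max_allowed_packet = 104857600;   -- 100 MB; applies to new connections

-- or persistently, in my.ini / my.cnf under [mysqld]:
-- max_allowed_packet = 100M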
[9 Feb 2004 13:24] Peter Souza
I did misunderstand you then.  That being the case, I have four questions left:

(1) is it safe to extend the max_allowed_packet to 100MB (assuming that's what you have yours set to when you ran your tests)?  Are there any potential security or instability hazards of doing so?

(2) should CONCAT be taking so long to evaluate?

(3) is there any other way to append data to an existing BLOB field?

(4) if CONCAT is the only way to go, it seems it may be just as well to malloc() (or GlobalAlloc(), etc.) much larger blocks and issue fewer UPDATE queries, rather than waste time on smaller 64kb blocks (assuming the data is available).  Is there any advantage to smaller CONCAT blocks versus larger ones?

Some of these questions may be more appropriate for a discussion thread with MySQL developers like yourself, and I apologize in advance; however, I've perused the MySQL documentation and various search-engine results and have found little on appending data to an existing BLOB, so I feel the best answer can only come from the developers.

Please advise and thank you.
[9 Feb 2004 14:04] Dean Ellis
Yes, other than the bug report itself this would be better discussed on one of our mailing lists.  I'll comment on your questions privately.
[19 Feb 2004 7:30] Sergei Golubchik
As discussed here, it's not a bug.
The discussion was moved elsewhere.
[18 Apr 2006 20:21] Devin Butts
discussion moved WHERE?????