Bug #16356 Single packet limitation on transmissions a problem
Submitted: 10 Jan 2006 23:00 Modified: 20 Jun 2009 9:27
Reporter: Kevin Benton (Candidate Quality Contributor) Email Updates:
Status: Verified
Impact on me: None
Category:MySQL Server Severity:S4 (Feature request)
Version:All OS:Any (All)
Assigned to: CPU Architecture:Any

[10 Jan 2006 23:00] Kevin Benton
Description:
The MySQL server documentation currently states (paraphrasing) that no communication to the server may be larger than max_packet_size. The same limit is stated for 3.x, 4.x, and 5.x.

http://dev.mysql.com/doc/refman/4.1/en/packet-too-large.html

This is a problem for applications that use fields that are by themselves larger than max_packet_size bytes. It seems to me that MySQL ought to be able to handle this transparently given a valid communication. One possible implementation would send packets carrying a total size, the size of this packet, a checksum of some sort, a continuation flag, a packet ID, and a sequence number, giving the sender the ability to follow up with another packet to be appended onto the first. This would aid performance by keeping packet sizes down to a manageable level, especially over "dirty" connections, and it would (more importantly) allow applications to transfer larger requests transparently.
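The framing scheme suggested above could be sketched roughly as follows. This is a hypothetical illustration only, not MySQL's actual wire format; the header layout, field widths, and use of CRC32 are all assumptions:

```python
import struct
import zlib

# Hypothetical frame header: total payload size, size of this chunk,
# CRC32 of the chunk, continuation flag, packet ID, sequence number.
# All field choices here are assumptions for illustration.
HEADER = struct.Struct(">QIIBII")

def frame(payload: bytes, packet_id: int, chunk_size: int = 4):
    """Split a payload into self-describing frames that each stay small."""
    chunks = [payload[i:i + chunk_size]
              for i in range(0, len(payload), chunk_size)]
    frames = []
    for seq, chunk in enumerate(chunks):
        cont = 1 if seq < len(chunks) - 1 else 0  # more frames follow?
        header = HEADER.pack(len(payload), len(chunk), zlib.crc32(chunk),
                             cont, packet_id, seq)
        frames.append(header + chunk)
    return frames

def reassemble(frames):
    """Verify each frame's checksum and concatenate chunks in sequence order."""
    payload = b""
    total = 0
    for raw in sorted(frames,
                      key=lambda f: HEADER.unpack(f[:HEADER.size])[5]):
        total, size, crc, cont, pid, seq = HEADER.unpack(raw[:HEADER.size])
        chunk = raw[HEADER.size:HEADER.size + size]
        assert zlib.crc32(chunk) == crc, "corrupt chunk"
        payload += chunk
    assert len(payload) == total, "missing frames"
    return payload
```

A receiver could keep appending frames for a given packet ID until it sees the continuation flag drop to zero, which is what lets the total transfer exceed any single-packet limit.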

How to repeat:
See above

Suggested fix:
See above
[11 Jan 2006 7:34] Valeriy Kravchuk
Yes, there are many cases where this limitation (or misconfiguration, in most cases) leads to serious problems. But I do not think this (protocol-level) issue is a bug - it is a design limitation, and any solution will have to take into account compatibility with previous versions of client libraries, etc. So your report sounds like a (valuable and useful) feature request. Do you agree with me?
[11 Jan 2006 16:24] Kevin Benton
How does Oracle handle updates to BLOBs?  They also have a 4GB capacity.

Again, as I see this, it's an S2 because it's a limitation at the interface layer.  The database can store 4GB per LONGBLOB, but only a small portion of that is usable to the application programmer without having to utilize advanced MySQL features.  From my perspective, this prevents MySQL from being ANSI compliant for LONGBLOB fields.

If it's handled at the protocol level, great.  If at the interface level, OK.  Regardless, IMNSHO, there needs to be a way (with data set sizes increasing) for applications to transparently send updates that are larger than max_packet_size bytes.  In my mind, it's not an if but a when question on implementing this capability.
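One of the "advanced MySQL features" workarounds available today is to append a large value piecewise with CONCAT, so that no single statement exceeds the packet limit. A minimal sketch, generating parameterized statements for an assumed table and column (names are hypothetical, and each statement must itself still fit within max_allowed_packet):

```python
def append_update_statements(table, column, key_value, data, chunk_size=4):
    """Build an INSERT followed by CONCAT-based UPDATEs so that no single
    statement carries more than chunk_size bytes of the value.

    Returns (sql, params) pairs suitable for a DB-API cursor.execute();
    the table/column layout is an assumption for illustration.
    """
    first, rest = data[:chunk_size], data[chunk_size:]
    stmts = [(f"INSERT INTO {table} (id, {column}) VALUES (%s, %s)",
              (key_value, first))]
    for i in range(0, len(rest), chunk_size):
        stmts.append((
            f"UPDATE {table} SET {column} = CONCAT({column}, %s) WHERE id = %s",
            (rest[i:i + chunk_size], key_value)))
    return stmts
```

In production a chunk size of a few hundred kilobytes would be more realistic; the tiny default here is only to make the mechanics visible.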

As a network engineer, I think that keeping the packet sizes reasonably small (taking < 2s to transmit) would help expedite communications.  With large packet sizes, any burp in the network will likely cause a retransmit, forcing the application to wait even longer and making it more likely to time out, especially over congested or noisy network lines.  Troubleshooting that kind of problem is never easy.
[19 Dec 2006 22:13] Kevin Benton
Having seen the above, how does MySQL import data larger than max_packet_size?
[29 Jul 2008 16:46] Kevin Benton
Any update on this?
[1 May 2009 1:30] James Day
Kevin, accepted as a feature request, seldom encountered as a limitation. I'm not aware of any plans to change this at present and would be surprised if it happened within the next few years, given the number of things that the server could usefully add instead. It is being encountered more often, though, so it may become more pressing at some point.

If I write to a customer having trouble with this I'd normally recommend that they change their application to store smaller chunks. That's how people using MySQL for video serving on cable systems do it.
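The smaller-chunks approach described above is usually modeled as one row per chunk. A sketch of how an application might generate the inserts, assuming a chunk table of the form `CREATE TABLE blob_chunks (blob_id INT, seq INT, chunk LONGBLOB, PRIMARY KEY (blob_id, seq))` (the table name and schema are assumptions, not from this report):

```python
CHUNK_SIZE = 512 * 1024  # keep each statement well under max_allowed_packet

def chunk_insert_statements(blob_id, data, chunk_size=CHUNK_SIZE):
    """Yield (sql, params) pairs that store one large value as many small rows.

    Reading the value back is a SELECT ordered by seq, concatenated
    client-side; no single packet ever needs to carry the whole blob.
    """
    n_chunks = (len(data) + chunk_size - 1) // chunk_size
    for seq in range(n_chunks):
        piece = data[seq * chunk_size:(seq + 1) * chunk_size]
        yield ("INSERT INTO blob_chunks (blob_id, seq, chunk) "
               "VALUES (%s, %s, %s)",
               (blob_id, seq, piece))
```

This trades a transparent single-column interface for predictable packet sizes, which is the property the video-serving deployments mentioned above care about.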
[20 Jun 2009 9:27] Sveta Smirnova
Bug #45625 was marked as duplicate of this one.