Bug #1647 No way to determine what size blob/clob is being returned into bound buffer
Submitted: 24 Oct 2003 9:10    Modified: 30 Apr 2004 13:24
Reporter: David Ritter
Status: Closed
Category: MySQL Server    Severity: S1 (Critical)
Version: 4.1.1    OS: Linux (Linux)
Assigned to: Michael Widenius    CPU Architecture: Any

[24 Oct 2003 9:10] David Ritter
Description:
The client library documentation suggests that it is possible to obtain the
maximum blob size in a result set when using bound parameters: call
mysql_stmt_store_result(), then mysql_get_metadata(), and then check the
max_length member of each field (sections 11.1.3.15 and 11.1.7.3 of the manual).

All checks of max_length return 0; the field does not appear to be getting
set. Without this field there is no way for a developer to dynamically
allocate a buffer large enough to store a blob/clob.
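
For reference, a minimal sketch of the documented approach (the metadata call
is named mysql_get_metadata() in 4.1.1; it was later renamed
mysql_stmt_result_metadata()). The wrapper function and the assumption of an
already-executed statement handle are illustrative:

  #include <stdio.h>
  #include <mysql.h>

  /* Print max_length for the first result column after storing the result. */
  static void print_max_blob_size(MYSQL_STMT *stmt)
  {
    if (mysql_stmt_store_result(stmt) == 0)
    {
      MYSQL_RES   *meta  = mysql_get_metadata(stmt);
      MYSQL_FIELD *field = mysql_fetch_field(meta);

      /* Expected: the longest blob in the result set.  Observed: always 0. */
      printf("max blob size in SELECT statement: %lu\n",
             (unsigned long) field->max_length);

      mysql_free_result(meta);
    }
  }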

How to repeat:
I will upload a C API test case
[24 Oct 2003 9:10] David Ritter
C API Test Case

Attachment: blobTest.cpp (text/plain), 6.95 KiB.

[24 Nov 2003 5:38] MySQL Verification Team
--------------------------------------------------
       Bound Select
--------------------------------------------------

 prepare, SELECT successful
 total parameters in SELECT: 0

 max blob size in SELECT statement: 0

 max blob size in SELECT statement: 0

 max blob size in SELECT statement: 0
[30 Apr 2004 13:24] Michael Widenius
Thank you for your bug report. This issue has been committed to our
source repository of that product and will be incorporated into the
next release.

If necessary, you can access the source repository and build the latest
available version, including the bugfix, yourself. More information 
about accessing the source trees is available at
    http://www.mysql.com/doc/en/Installing_source_tree.html

Additional info:

I have now checked this in detail.

The problem is that when the MySQL client reads the data from the server, we don't do any parsing of the data that was read, and this makes it very hard to know the maximum possible blob size.

If we always did this extra parsing pass, reading the result set would become significantly slower, which is not desirable in the cases where max_length is not needed (which I believe is the common case).

The normal procedure in cases like this is:
- For blobs, allocate a reasonable buffer size, for example 255 bytes.
- When you have done the fetch, check whether you got the full column in the supplied buffer by checking if length > 255.  If it is, the column was truncated and you can call mysql_stmt_fetch_column() to get the rest of the data (see the sketch after this list).
- Another option would be to fetch the blob data directly from
  bind->intern_buffer (the data is of size *bind->length)
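
A minimal sketch of this procedure; the 255-byte initial buffer and the helper
name fetch_blob_row are illustrative, and the blob is assumed to be the first
column of the result set:

  #include <stdlib.h>
  #include <string.h>
  #include <mysql.h>

  /* Fetch one row whose first column is a blob, using a small fixed buffer
     first and mysql_stmt_fetch_column() for the remainder if it was too small. */
  static void fetch_blob_row(MYSQL_STMT *stmt)
  {
    char          small_buf[255];
    unsigned long real_length = 0;
    MYSQL_BIND    bind;
    int           rc;

    memset(&bind, 0, sizeof(bind));
    bind.buffer_type   = MYSQL_TYPE_BLOB;
    bind.buffer        = small_buf;
    bind.buffer_length = sizeof(small_buf);
    bind.length        = &real_length;   /* set to the full column length on fetch */

    mysql_stmt_bind_result(stmt, &bind);

    rc = mysql_stmt_fetch(stmt);
    if (rc != 1 && rc != MYSQL_NO_DATA && real_length > sizeof(small_buf))
    {
      /* The column was truncated: allocate the exact size and fetch the rest
         of column 0 starting at the offset already read. */
      char      *full_buf = malloc(real_length);
      MYSQL_BIND rest;

      memcpy(full_buf, small_buf, sizeof(small_buf));
      memset(&rest, 0, sizeof(rest));
      rest.buffer_type   = MYSQL_TYPE_BLOB;
      rest.buffer        = full_buf + sizeof(small_buf);
      rest.buffer_length = real_length - sizeof(small_buf);

      mysql_stmt_fetch_column(stmt, &rest, 0, sizeof(small_buf));

      /* ... use full_buf / real_length ... */
      free(full_buf);
    }
  }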

After thinking a bit about this, we decided to solve this problem in the following way:
- Introduce a new function:
  mysql_stmt_attr_set(stmt, STMT_ATTRIBUTE_UPDATE_MAX_LENGTH, &true)

If you execute the above function, then mysql_stmt_store_result() will do an extra parsing pass on the data and update MYSQL_FIELD->max_length with the longest string found in the result set for all BLOB and string result types.
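
A minimal sketch of how this would be used; the attribute constant follows the
name given above (in the released 4.1 headers it appears as
STMT_ATTR_UPDATE_MAX_LENGTH), and the wrapper function is illustrative:

  #include <stdio.h>
  #include <mysql.h>

  /* Ask mysql_stmt_store_result() to do the extra parsing pass, then read
     max_length from the result metadata. */
  static void print_max_blob_size_with_attr(MYSQL_STMT *stmt)
  {
    my_bool update_max_length = 1;

    mysql_stmt_attr_set(stmt, STMT_ATTR_UPDATE_MAX_LENGTH, &update_max_length);

    if (mysql_stmt_execute(stmt) == 0 &&
        mysql_stmt_store_result(stmt) == 0)
    {
      MYSQL_RES   *meta  = mysql_get_metadata(stmt);
      MYSQL_FIELD *field = mysql_fetch_field(meta);

      printf("longest blob in result set: %lu bytes\n",
             (unsigned long) field->max_length);

      mysql_free_result(meta);
    }
  }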

We don't do this update by default because of the extra overhead involved in the parsing.

Hope this solution is ok for you. Thanks for pointing this limitation out to us!

The fix will be in 4.1.2