Bug #16712 group_concat returns odd string instead of intended result
Submitted: 22 Jan 2006 19:09 Modified: 7 Aug 2006 6:46
Reporter: [ name withheld ] Email Updates:
Status: Closed Impact on me:
None 
Category: MySQL Server: Optimizer Severity: S3 (Non-critical)
Version: 5.0.18 OS: Linux (Slackware, kernel 2.4.31)
Assigned to: Georgi Kodinov

[22 Jan 2006 19:09] [ name withheld ]
Description:
When executing a rather involved query, group_concat(concat(...)) returns a value that looks like a memory address (e.g., [B@...) instead of the concatenated string.
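[Editor's note] A minimal sketch of why the result can look like a memory address (this is illustrative Java, not the reporter's code): Java arrays do not override `Object.toString()`, so printing a `byte[]` directly yields `[B@` followed by an identity hash. If the driver hands the column back as `byte[]` (which Connector/J does when the server reports the column as a BLOB type), naive printing produces exactly this symptom.

```java
// Illustrative only: shows the "[B@..." symptom when a byte[] is printed
// directly instead of being decoded into a String first.
public class ByteArrayToString {
    public static void main(String[] args) {
        byte[] raw = "1|Valeriy,Kravchuk".getBytes();
        // Arrays inherit Object.toString(): "[B@" + identity hash code.
        System.out.println(raw.toString());        // prints something like [B@15db9742
        // Decoding the bytes recovers the intended value.
        System.out.println(new String(raw));       // prints 1|Valeriy,Kravchuk
    }
}
```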

When running the query in the mysql command line window the query runs fine and returns the intended result. When running it through Connector/J 3.1.12 the above mentioned error occurs.

I have attempted a much simpler test case from a Windows machine, but the simple test case on Windows does not reproduce the same problem.

I have not seen a place on this form for uploading files. I would like to upload the objects that contain and run the query, as well as a dump of the database (approximately half a meg).

Thanks,

Reg

How to repeat:
Simply create the database from the dump I will supply and run the Java objects (under JDK 1.5) that I will supply.
[23 Jan 2006 3:57] Mark Matthews
Changing this to a server bug, as the server is returning incorrect metadata.
[23 Jan 2006 10:56] Valerii Kravchuk
As this is a server bug now, please let us know the exact MySQL server version used.
[23 Jan 2006 16:39] [ name withheld ]
The version of the server is 5.0.18-max as displayed when starting the MySQL client.
[2 Feb 2006 16:07] Valerii Kravchuk
Thank you for the additional information. Can you please try to execute that huge SELECT in the mysql command-line client and check the results? Are they incorrect, as in your Java application?
[2 Feb 2006 16:09] [ name withheld ]
Hi.

The query works perfectly in the mysql command line client.

Thanks.
[3 Feb 2006 11:29] Valerii Kravchuk
Thank you for the additional check. Please try the same query using the PREPARE and EXECUTE commands (see http://dev.mysql.com/doc/refman/5.0/en/sqlps.html for the details). I am trying to verify that it is not a server bug but a Connector/J one.
[5 Feb 2006 15:37] [ name withheld ]
I have used the prepare and execute syntax described in the previous post in the mysql client, and it worked perfectly. In other words, when running the query using prepare and execute, I got a result set with all of the appropriate rows, all of the appropriate data, in the appropriate format.

Hope this helps.
[7 Feb 2006 15:30] Valerii Kravchuk
Looks like I have created a simple test case:

Welcome to the MySQL monitor.  Commands end with ; or \g.
Your MySQL connection id is 3 to server version: 5.0.19

Type 'help;' or '\h' for help. Type '\c' to clear the buffer.

mysql> CREATE TABLE `item_author` (
    ->   `ITEM_AUTHOR_ID` bigint(20) unsigned NOT NULL auto_increment,
    ->   `ITEM_ID` bigint(20) unsigned NOT NULL default '0',
    ->   `AUTHOR_ID` bigint(20) unsigned NOT NULL default '0',
    ->   `PRECEDENCE` tinyint(1) unsigned default '1',
    ->   `CREATE_DATE` datetime NOT NULL default '0000-00-00 00:00:00',
    ->   `CREATE_PARTY_ID` bigint(20) unsigned NOT NULL default '0',
    ->   `CREATE_IP_ADDRESS` varchar(32) NOT NULL default '',
    ->   `UPDATE_DATE` datetime default NULL,
    ->   `UPDATE_PARTY_ID` bigint(20) unsigned default NULL,
    ->   `UPDATE_IP_ADDRESS` varchar(32) default NULL,
    ->   `TIME_STAMP` timestamp NOT NULL default CURRENT_TIMESTAMP on update CURRENT_TIMESTAMP,
    ->   PRIMARY KEY  (`ITEM_AUTHOR_ID`),
    ->   UNIQUE KEY `ITEM_AUTHOR_IDX_01` (`ITEM_ID`,`AUTHOR_ID`),
    ->   KEY `ITEM_AUTHOR_IDX_02` (`AUTHOR_ID`)
    -> ) ENGINE=InnoDB DEFAULT CHARSET=latin1;
Query OK, 0 rows affected (0.01 sec)

mysql> insert into item_author (item_id, author_id) values (1, 1), (1, 2), (2,1), (2,2);
Query OK, 4 rows affected (0.01 sec)
Records: 4  Duplicates: 0  Warnings: 0

mysql> select item_id, group_concat(concat(precedence, '|', 'Valeriy', ',', 'Kravchuk'))
    -> from item_author
    -> group by item_id;
+---------+-------------------------------------------------------------------+
| item_id | group_concat(concat(precedence, '|', 'Valeriy', ',', 'Kravchuk')) |
+---------+-------------------------------------------------------------------+
|       1 | 1|Valeriy,Kravchuk,1|Valeriy,Kravchuk                             |
|       2 | 1|Valeriy,Kravchuk,1|Valeriy,Kravchuk                             |
+---------+-------------------------------------------------------------------+
2 rows in set (0.01 sec)

mysql> create view vvv1 as select item_id, group_concat(concat(precedence, '|',
 'Valeriy', ',', 'Kravchuk')) from item_author group by item_id;
Query OK, 0 rows affected (0.01 sec)

mysql> desc vvv1\G
*************************** 1. row ***************************
  Field: item_id
   Type: bigint(20) unsigned
   Null: NO
    Key:
Default: 0
  Extra:
*************************** 2. row ***************************
  Field: group_concat(concat(precedence, '|', 'Valeriy', ',', 'Kravchuk'))
   Type: longblob
   Null: YES
    Key:
Default: NULL
  Extra:
2 rows in set (0.01 sec)

So, is that LONGBLOB data type of the result a real problem for you? I think it is not completely correct (if group_concat_max_len has the default value of 1024, at least). But please check the manual also (http://dev.mysql.com/doc/refman/5.0/en/group-by-functions.html).
[7 Feb 2006 15:35] [ name withheld ]
Hello.

I don't think the longblob would be a problem. I had already solved the problem when I submitted the bug report by switching to Connector/J 3.0.16 GA.

Thank you.
[8 Feb 2006 7:59] Valerii Kravchuk
So, can we close this report as not a bug, given that it does not occur when using the proper Connector/J version?
[8 Feb 2006 13:19] [ name withheld ]
Hi.

The bug was originally submitted as a problem with Connector/J. Right now it appears that we have two workarounds, but the bug itself still exists. If, as I originally reported, it is a Connector/J problem, then it should be reassigned as a Connector/J bug.

Thanks
[9 Feb 2006 8:36] Valerii Kravchuk
So, that LONGBLOB data type for GROUP_CONCAT results is a possible (server) bug. See my previous test case. The result of GROUP_CONCAT is no longer than 1024 bytes by default anyway, so it should be a BLOB at most.

And I am not sure that having a tinyint column in CONCAT(..., ...) is a good reason to give the result a BLOB data type. It may seem OK according to the manual (http://dev.mysql.com/doc/refman/5.0/en/string-functions.html):

"CONCAT(str1,str2,...)

Returns the string that results from concatenating the arguments. May have one or more arguments. If all arguments are non-binary strings, the result is a non-binary string. If the arguments include any binary strings, the result is a binary string. A numeric argument is converted to its equivalent binary string form; if you want to avoid that, you can use an explicit type cast, as in this example:

SELECT CONCAT(CAST(int_col AS CHAR), char_col);"

But is it intended to give the LONGBLOB data type to a result that is a binary string? We have the VARBINARY data type for that!

Yet another simple test:

mysql> create view vvv4 as select cast('abc' as binary);
Query OK, 0 rows affected (0.01 sec)

mysql> desc vvv4;
+-----------------------+--------------+------+-----+---------+-------+
| Field                 | Type         | Null | Key | Default | Extra |
+-----------------------+--------------+------+-----+---------+-------+
| cast('abc' as binary) | varbinary(3) | NO   |     |         |       |
+-----------------------+--------------+------+-----+---------+-------+
1 row in set (0.25 sec)

mysql> create view vvv5 as select group_concat('1') from vvv4;
Query OK, 0 rows affected (0.00 sec)

mysql> desc vvv5;
+-------------------+--------------+------+-----+---------+-------+
| Field             | Type         | Null | Key | Default | Extra |
+-------------------+--------------+------+-----+---------+-------+
| group_concat('1') | varchar(341) | YES  |     | NULL    |       |
+-------------------+--------------+------+-----+---------+-------+
1 row in set (0.05 sec)

mysql> create view vvv6 as select group_concat(concat(1,'1')) from vvv4;
Query OK, 0 rows affected (0.01 sec)

mysql> desc vvv6;
+-----------------------------+----------+------+-----+---------+-------+
| Field                       | Type     | Null | Key | Default | Extra |
+-----------------------------+----------+------+-----+---------+-------+
| group_concat(concat(1,'1')) | longblob | YES  |     | NULL    |       |
+-----------------------------+----------+------+-----+---------+-------+
1 row in set (0.01 sec)

demonstrates this bug.
[3 May 2006 21:19] David vonThenen
Hi,

I am really interested in the resolution of this bug. I have what I think is a similar issue to the one described in this bug report, but I will let you be the judge of that. I thought I would post my issue, as it might reveal something new or it might be a new bug entirely.

My platform is Windows (WinXP), using Ver 14.12 Distrib 5.0.21.

mysql> status
--------------
mysql  Ver 14.12 Distrib 5.0.21, for Win32 (ia32)

Connection id:          3
Current database:       sample
Current user:           root@localhost
SSL:                    Not in use
Using delimiter:        ;
Server version:         5.0.21-community-nt
Protocol version:       10
Connection:             localhost via TCP/IP
Server characterset:    utf8
Db     characterset:    utf8
Client characterset:    utf8
Conn.  characterset:    utf8
TCP port:               3306
Uptime:                 10 min 43 sec

Threads: 1  Questions: 37  Slow queries: 1  Opens: 1  Flush tables: 1  Open tables: 11  Queries per second avg: 0.058
--------------

I have a table:

CREATE TABLE file (
id INT(10) UNSIGNED NOT NULL AUTO_INCREMENT,
data LONGTEXT,
PRIMARY KEY (id)
)

mysql> describe file;
+-------+------------------+------+-----+---------+----------------+
| Field | Type             | Null | Key | Default | Extra          |
+-------+------------------+------+-----+---------+----------------+
| id    | int(16) unsigned | NO   | PRI | NULL    | auto_increment |
| data  | longtext         | YES  |     | NULL    |                |
+-------+------------------+------+-----+---------+----------------+
2 rows in set (0.01 sec)

There are two records in this table. Both records contain the same data (to simplify things), and the data field in each record is 1302528 characters long. Verified by:

mysql> select id, char_length(data) from file;
+----+-------------------+
| id | char_length(data) |
+----+-------------------+
|  1 |           1302528 |
|  2 |           1302528 |
+----+-------------------+
2 rows in set (0.08 sec)

When I try to run a GROUP_CONCAT to concatenate the two data columns into one, the data that is returned is NOT the correct value. The expected value should be 2 x 1302528 in length, but it is not. Verified by:

mysql> select char_length( group_concat(data) ) from file;
+-----------------------------------+
| char_length( group_concat(data) ) |
+-----------------------------------+
|                             29461 |
+-----------------------------------+
1 row in set (5.72 sec)

PLEASE NOTE that I did change the following MySQL variables so that you can concat items this large, so that isn't the problem.

Adjusted variables in the my.ini:
max_allowed_packet=100M
group_concat_max_len=100M
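[Editor's note] The expected result length can be checked with simple arithmetic (my calculation, not from the server): two 1302528-character values joined by GROUP_CONCAT's default ',' separator should produce 2 * 1302528 + 1 = 2605057 characters, far more than the 29461 actually returned.

```java
// Back-of-the-envelope check of the expected GROUP_CONCAT result length
// for two equal-sized rows with the default ',' separator.
public class ExpectedLength {
    public static void main(String[] args) {
        int perRow = 1302528;                       // char_length(data) per row
        int rows = 2;
        int expected = rows * perRow + (rows - 1);  // one separator between rows
        System.out.println(expected);               // 2605057
    }
}
```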

If you want a copy of the database, table structure and the two inserted records, you can download it at: http://dev.checkyour6.net/sample.zip

Is this the same issue? or is this a different issue?
[3 May 2006 21:21] David vonThenen
Sorry, I should also mention that the "data" in the data columns is in fact binary data, in case that means anything special. If anyone is wondering what data is stored, it is actually libmysql.dll in binary format.
[13 Jul 2006 9:47] Bugs System
A patch for this bug has been committed. After review, it may
be pushed to the relevant source trees for release in the next
version. You can access the patch from:

  http://lists.mysql.com/commits/9114
[19 Jul 2006 8:05] Bugs System
A patch for this bug has been committed. After review, it may
be pushed to the relevant source trees for release in the next
version. You can access the patch from:

  http://lists.mysql.com/commits/9325
[25 Jul 2006 8:45] Bugs System
A patch for this bug has been committed. After review, it may
be pushed to the relevant source trees for release in the next
version. You can access the patch from:

  http://lists.mysql.com/commits/9530
[2 Aug 2006 18:59] Evgeny Potemkin
When calculating GROUP_CONCAT, all blob fields are transformed
to varchar when making the temp table.
However, a varchar has at most 2 bytes for its length.

Fixed in 5.0.25, 5.1.12
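[Editor's note] A sketch of the size mismatch Evgeny describes (illustrative figures derived from the length-prefix widths, not the actual server code): a 2-byte length field caps a VARCHAR value at 65535 bytes, while a LONGBLOB's 4-byte length field allows up to 4294967295 bytes, so converting the temp-table column to VARCHAR silently truncates longer blob data.

```java
// Maximum value lengths implied by a 2-byte (VARCHAR) vs 4-byte (LONGBLOB)
// length prefix; anything over the VARCHAR cap is lost in the conversion.
public class VarcharLimit {
    public static void main(String[] args) {
        long varcharMax  = (1L << 16) - 1;   // 2-byte length prefix -> 65535
        long longblobMax = (1L << 32) - 1;   // 4-byte length prefix -> 4294967295
        System.out.println(varcharMax);      // 65535
        System.out.println(longblobMax);     // 4294967295
    }
}
```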
[7 Aug 2006 6:46] Jon Stephens
Thank you for your bug report. This issue has been committed to our source repository of that product and will be incorporated into the next release.

If necessary, you can access the source repository and build the latest available version, including the bug fix. More information about accessing the source trees is available at

    http://www.mysql.com/doc/en/Installing_source_tree.html

Documented bugfix in 5.0.25 and 5.1.12 changelogs.