Bug #68145 GROUP_CONCAT with DISTINCT could return wrong result when rb-trees are on disk
Submitted: 22 Jan 2013 17:15 Modified: 22 Jan 2013 18:43
Reporter: Chehai WU
Status: Verified
Category: MySQL Server: DML  Severity: S2 (Serious)
Version: 5.5.29, 5.0.97, 5.1.69, 5.5.31, 5.7.1  OS: Any
Assigned to:  CPU Architecture: Any
Tags: group_concat

[22 Jan 2013 17:15] Chehai WU
Description:
GROUP_CONCAT with DISTINCT sometimes returns a wrong result when the red-black tree used to remove duplicate rows grows too big and is spilled into a temp file on disk.

I believe the current logic behind GROUP_CONCAT with DISTINCT ignores the fact that some red-black trees can be on disk.

For each row in the result set, the logic inserts the row into a red-black tree and then checks the number of elements in the tree: if the number of nodes did not change, the row must already exist in the tree and has already been concatenated; otherwise, the row is new and can be concatenated.

There are at least two problems here:

First, MySQL assumes that inserting a row into a red-black tree always succeeds (sql/item_sum.cc:3203). This is not true. MySQL imposes a size limit (16MB by default) on each red-black tree. Insertion into a tree that has reached the size limit returns an error; at that point the full tree is written to a temp file on disk and a new, empty tree is created, but the row itself is not inserted (sql/sql_class.h:3420). This means some rows are never inserted into any red-black tree, and based on such an incomplete tree MySQL can return a wrong result.

Second, checking whether the number of nodes in the red-black tree changes after inserting a row does not reliably determine whether the row is a duplicate. The insert only adds the row to the in-memory red-black tree and never consults the red-black trees that are already on disk (sql/sql_class.h:3422), so the in-memory tree can accept rows that also exist in the on-disk trees. This has two implications: 1) MySQL can still return a wrong result; 2) duplicate rows can accumulate and be written to disk. If the in-memory tree can only hold a few nodes because of the size limit, the duplicates can be numerous and the on-disk trees can consume a lot of space.
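To make the failure mode concrete, here is a minimal standalone C++ sketch of the logic described above. It is a toy model, not the server's code: tree_limit stands in for the 16MB size limit, std::set for the red-black tree, and a vector of sets for the trees flushed to disk. Because the count-based duplicate check never looks at the flushed trees, every value is concatenated again after a flush:

#include <cstddef>
#include <initializer_list>
#include <iostream>
#include <set>
#include <string>
#include <vector>

int main() {
    const std::size_t tree_limit = 3;            // stand-in for the 16MB tree size limit
    std::set<std::string> in_memory;             // stand-in for the red-black tree
    std::vector<std::set<std::string>> on_disk;  // stand-in for trees flushed to disk
    std::string result;

    auto concat_distinct = [&](const std::string& row) {
        if (in_memory.size() >= tree_limit) {
            // Tree is "full": flush it and start a new, empty one.
            // (Per the report, the real server also fails to insert the
            // current row at this point; inserting it into the fresh tree,
            // as done here, is already enough to show the duplicate.)
            on_disk.push_back(in_memory);
            in_memory.clear();
        }
        std::size_t before = in_memory.size();
        in_memory.insert(row);                   // never consults on_disk
        if (in_memory.size() != before) {        // "count changed => row is new"
            if (!result.empty()) result += "---";
            result += row;
        }
    };

    // A correct DISTINCT would concatenate each value exactly once.
    for (int pass = 0; pass < 2; ++pass)
        for (const char* row : {"a", "b", "c"})
            concat_distinct(row);

    std::cout << result << "\n";                 // prints a---b---c---a---b---c
}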

How to repeat:
1. Download the attached bug.sql

2. mysql -uroot < bug.sql

3. mysql -uroot bug

SET GLOBAL tmp_table_size = 512 * 1024 ;

SET GLOBAL max_heap_table_size = 512 * 1024;

SET GLOBAL group_concat_max_len = 20480;

exit;

4. mysql -uroot bug

select group_concat(distinct concat_ws(' ', t1.col0, t1.col2, t1.col3, t1.col4, t1.col5, t1.col6, t1.col7, t1.col8, t1.col9) separator "---") from test t1, test t2;

5. If you search for "45321 05182 45220 88257 72725 30915 20971 59573 13401" in the result, you will see a lot of duplicates. There should be no duplicates in the result.

6. If you have tmp_table_size = 16 * 1024 * 1024 and max_heap_table_size = 16 * 1024 * 1024, you will not see the problem, because no on-disk tree is used during the query.

Suggested fix:
In addition to the in-memory tree, consult the on-disk trees to determine whether a row is a duplicate. If it is not, insert it into the in-memory tree, and handle the case where the in-memory tree is flushed to disk. A rough sketch follows below.

I have very limited knowledge about MySQL internals. The above is just my suggestion.
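To make the suggestion concrete, here is a rough sketch continuing the toy model above. The helper name seen_before and the container choices are invented for illustration; this is not server code:

#include <set>
#include <string>
#include <vector>

// Hypothetical helper: a row counts as already seen if it is in the
// in-memory tree or in any tree that has been flushed to disk. In the
// real server the flushed trees are sorted runs in a temp file, so a
// per-row scan like this would be expensive; merging the runs and
// deduplicating during the merge would likely be more practical.
bool seen_before(const std::set<std::string>& in_memory,
                 const std::vector<std::set<std::string>>& on_disk,
                 const std::string& row) {
    if (in_memory.count(row) != 0)
        return true;
    for (const auto& flushed : on_disk)
        if (flushed.count(row) != 0)
            return true;
    return false;
}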
[22 Jan 2013 17:16] Chehai WU
MySQL dump file to reproduce the issue

Attachment: bug.sql (application/octet-stream, text), 10.63 KiB.

[22 Jan 2013 18:16] Valeriy Kravchuk
Bug is repeatable with 5.5.29 on Windows 7. See files attached (with different results) for 512K and 16M settings for the variables.
[22 Jan 2013 18:17] Valeriy Kravchuk
With 512K

Attachment: bug68145.txt (text/plain), 20.62 KiB.

[22 Jan 2013 18:17] Valeriy Kravchuk
With 16M

Attachment: bug68145_2.txt (text/plain), 5.87 KiB.

[22 Jan 2013 18:43] Sveta Smirnova
Thank you for the report.

Verified as described.

With version 5.7.1 same results for

SET tmp_table_size = 512 * 1024 ;
SET max_heap_table_size = 512 * 1024;

and

SET tmp_table_size = 16 * 1024 ;
SET max_heap_table_size = 16 * 1024;
[3 Jan 2017 15:07] Sergey Petrunya
See also BUG#84320