Bug #58081 Duplicate entry error when doing GROUP BY
Submitted: 9 Nov 2010 12:29  Modified: 3 Dec 2014 15:55
Reporter: Alexander Barkov
Status: Closed
Category: MySQL Server: Optimizer  Severity: S3 (Non-critical)
Version: 5.1.52, 5.5.8  OS: Any
Assigned to:  CPU Architecture: Any
Triage: Needs Triage: D3 (Medium)

[9 Nov 2010 12:29] Alexander Barkov
A GROUP BY query returns this error under certain circumstances:

Duplicate entry '107374182410737418241' for key 'group_key'

'group_key' is not a real column name in the table.
It looks like a name for the grouping column in the temporary table.

How to repeat:
set names latin1;
drop table if exists t1;
create table t1(a int) engine=myisam;
insert into t1 values (0),(0),(1),(0),(0);
select count(*) from t1, t1 t2 group by insert('', t2.a, t1.a,(@@global.max_binlog_size));

ERROR 1062 (23000): Duplicate entry '107374182410737418241' for key 'group_key'

Suggested fix:
The SELECT query should return results without any errors reported.
[9 Nov 2010 13:03] Valeriy Kravchuk
Verified with 5.1.52 also:

C:\Program Files\MySQL\MySQL Server 5.1\bin>mysql -uroot -proot -P3310 test
Welcome to the MySQL monitor.  Commands end with ; or \g.
Your MySQL connection id is 12
Server version: 5.1.52-community MySQL Community Server (GPL)

Copyright (c) 2000, 2010, Oracle and/or its affiliates. All rights reserved.
This software comes with ABSOLUTELY NO WARRANTY. This is free software,
and you are welcome to modify and redistribute it under the GPL v2 license

Type 'help;' or '\h' for help. Type '\c' to clear the current input statement.

mysql> set names latin1;
Query OK, 0 rows affected (0.02 sec)

mysql> drop table if exists t1;
Query OK, 0 rows affected (0.11 sec)

mysql> create table t1(a int) engine=myisam;
Query OK, 0 rows affected (0.05 sec)

mysql> insert into t1 values (0),(0),(1),(0),(0);
Query OK, 5 rows affected (0.00 sec)
Records: 5  Duplicates: 0  Warnings: 0

mysql> select count(*) from t1, t1 t2 group by insert('', t2.a,
    -> t1.a,(@@global.max_binlog_size));
ERROR 1062 (23000): Duplicate entry '107374182410737418241' for key 'group_key'
[8 Apr 2011 15:52] Valeriy Kravchuk
Bug #60808 was marked as a duplicate of this one.
[5 Aug 2011 19:35] Sveta Smirnova
Bug #62097 was marked as a duplicate of this one.
[29 Aug 2011 8:28] Olivier DASINI
I had the same problem:

mysql> SELECT DISTINCT(firstname), COUNT(firstname) FROM Member WHERE firstname <> '' GROUP BY firstname;
ERROR 1022 (23000): Can't write; duplicate key in table '/tmp/#sql_47ac_1'

Server version:	5.5.8-log MySQL Community Server (GPL)



[29 Aug 2011 10:21] Olivier DASINI
The workaround I found is to increase the size of the temporary table:

SET SESSION max_heap_table_size=536870912; 
SET SESSION tmp_table_size=536870912;

Now my query works!
[23 Nov 2011 12:52] Miguel Solorzano
Bug #62755 (http://bugs.mysql.com/bug.php?id=62755) was marked as a duplicate of this one.
[31 May 2012 16:27] Jonas Stenberg
I also had this problem. It surfaced when I increased the InnoDB log size.
[10 Jul 2014 12:30] Rafał Głowacz
One thing you can check before adjusting the MySQL config is your disk space.
I was getting this error when my disk got full. Freeing up some space removed the error.
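Following that tip, a quick way to check the free space on the filesystem holding MySQL's temporary directory (/tmp below is an assumed default, not taken from this report; confirm your actual path first):

```shell
# /tmp is an assumed tmpdir -- confirm the real one with:
#   mysql -e "SHOW VARIABLES LIKE 'tmpdir';"
df -h /tmp
```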
[3 Dec 2014 15:55] Paul Dubois
Noted in 5.7.5 changelog.

Queries that used GROUP BY INSERT() could produce spurious
duplicate-key errors.
[16 Jul 2015 2:43] Yoseph Phillips
I have seen this same error happen using MySQL 5.6.25.
In that case I worked out that the cause was a corrupt index on a column used in the GROUP BY clause (simple selects using that index would not return the row, while other indexes would return it).
[22 Dec 2016 0:25] Emanuel Calvo
We hit this same bug on 5.5.46, with a few comments:

- I had a GROUP BY query on a table with a single index on one column. The query succeeded when the SELECT list did not include the table's columns (*).
- I increased both max_heap_table_size and tmp_table_size as suggested by Olivier, and that fixed the issue.

The error message appears to be misleading. The problem is probably in the phase that writes out the tree in sql/uniques.cc when the tree uses more memory than 'max_heap_table_size', which would explain the confusing message.
[16 Jan 2018 16:28] Steve Chambers
Which MySQL version was this bug fixed in?
[8 May 2019 23:10] Jon Stephens

Per the comment above, this was fixed in MySQL 5.7.5. It does not seem to have been fixed in earlier major versions, so (IIUC) you will need to upgrade to MySQL 5.7 or 8.0 to get the fix.
[8 May 2019 23:11] Jon Stephens
Possibly related to this bug: BUG#90398.
[25 Jan 1:11] Thomas Smith
Note that "closed" does not mean fixed: it means this issue was *documented* in the 5.7.5 release notes, and is apparently too difficult to actually fix. So for anyone else who has made it this far in the ticket because they are still seeing this issue, here is another workaround.

As contributed above, increasing the values of both max_heap_table_size and tmp_table_size should remove the symptom, but that is not necessarily a suitable permanent solution. In our case we had a frequently running query that would sometimes only work with a 2 GB tmp_table_size, which eventually caused enough memory fragmentation to require rebooting the server. Not good.

An alternative: if you have identified a problem query, you can set the session variable "big_tables" to ON for that query and switch it back off afterwards. This forces MySQL to use on-disk tables for all temporary tables.
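A minimal sketch of that workaround (the table and column names are placeholders, not from this report; big_tables requires a live MySQL session):

```sql
-- Force on-disk temporary tables for just the problematic statement.
SET SESSION big_tables = ON;

-- Hypothetical problem query; substitute your own GROUP BY statement.
SELECT col, COUNT(*) FROM my_table GROUP BY col;

-- Restore the default so other queries keep using in-memory temp tables.
SET SESSION big_tables = OFF;
```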