Bug #71270 Failures to end bulk insert for partitioned tables handled incorrectly
Submitted: 2 Jan 2014 9:08
Modified: 6 Jan 2014 9:28
Reporter: Laurynas Biveinis (OCA)
Status: Verified
Category: MySQL Server: Storage Engine API
Severity: S3 (Non-critical)
Version: 5.6.15
OS: Any
CPU Architecture: Any
Tags: end_bulk_insert, ha_end_bulk_insert, handler, partition, TokuDB

[2 Jan 2014 9:08] Laurynas Biveinis
Description:
A debug server build will crash if, during a bulk insert into a partitioned table, one of the partitions returns a failure from the end_bulk_insert handler call.

The crash happens because a failing ha_partition::end_bulk_insert() call clears the m_bulk_insert_started bitmap before returning the failure to the caller.  The caller handles the failure, and at one point calls select_insert::abort_result_set(), which attempts end_bulk_insert again and hits DBUG_ASSERT(0) because m_bulk_insert_started has already been cleared.

How to repeat:
InnoDB's end_bulk_insert always succeeds, so let's use error injection instead (another option is to try MyISAM or another storage engine with a failing end_bulk_insert, for example TokuDB):

=== modified file 'sql/ha_partition.cc'
--- sql/ha_partition.cc 2013-10-30 23:22:43 +0000
+++ sql/ha_partition.cc 2014-01-02 08:22:44 +0000
@@ -4495,6 +4495,7 @@
       error= tmp;
   }
   bitmap_clear_all(&m_bulk_insert_started);
+  DBUG_EXECUTE_IF("ha_partition_end_bulk_insert_fail", error= 1;);
   DBUG_RETURN(error);
 }
 
and then MTR:
-----------------------------------
--source include/have_debug.inc
--source include/have_innodb.inc
--source include/have_partition.inc

SET debug="+d,ha_partition_end_bulk_insert_fail";

CREATE TABLE t1 (id INT PRIMARY KEY) ENGINE=InnoDB;
INSERT INTO t1 VALUES (1),(2),(3);

CREATE TABLE t2 (id INT PRIMARY KEY) ENGINE=InnoDB PARTITION BY HASH (id) PARTITIONS 2;

--error ER_GET_ERRNO
INSERT INTO t2 SELECT * FROM t1;

DROP TABLE t1, t2;

SET debug=default;
-----------------------------------

which gives the following stack trace:

Thread 1 (Thread 0x7f4a69c51700 (LWP 18252)):
#0  0x00007f4a762faf0c in __pthread_kill (threadid=<optimized out>, signo=6) at ../nptl/sysdeps/unix/sysv/linux/pthread_kill.c:62
#1  0x0000000000aa8ba9 in my_write_core (sig=6) at /home/laurynas/percona/src/5.6/mysys/stacktrace.c:422
#2  0x000000000071a4f3 in handle_fatal_signal (sig=6) at /home/laurynas/percona/src/5.6/sql/signal_handler.cc:248
#3  <signal handler called>
#4  0x00007f4a7573ef77 in __GI_raise (sig=sig@entry=6) at ../nptl/sysdeps/unix/sysv/linux/raise.c:56
#5  0x00007f4a757425e8 in __GI_abort () at abort.c:90
#6  0x00007f4a75737d43 in __assert_fail_base (fmt=0x7f4a7588ef58 "%s%s%s:%u: %s%sAssertion `%s' failed.\n%n", assertion=assertion@entry=0x1099659 "0", file=file@entry=0x1099810 "/home/laurynas/percona/src/5.6/sql/ha_partition.cc", line=line@entry=4485, function=function@entry=0x109cc80 <ha_partition::end_bulk_insert()::__PRETTY_FUNCTION__> "virtual int ha_partition::end_bulk_insert()") at assert.c:92
#7  0x00007f4a75737df2 in __GI___assert_fail (assertion=0x1099659 "0", file=0x1099810 "/home/laurynas/percona/src/5.6/sql/ha_partition.cc", line=4485, function=0x109cc80 <ha_partition::end_bulk_insert()::__PRETTY_FUNCTION__> "virtual int ha_partition::end_bulk_insert()") at assert.c:101
#8  0x0000000000e24275 in ha_partition::end_bulk_insert (this=0x7f4a5c0585b0) at /home/laurynas/percona/src/5.6/sql/ha_partition.cc:4485
#9  0x000000000062f0a3 in handler::ha_end_bulk_insert (this=0x7f4a5c0585b0) at /home/laurynas/percona/src/5.6/sql/handler.cc:4132
#10 0x00000000007b35ab in select_insert::abort_result_set (this=0x7f4a5c005d30) at /home/laurynas/percona/src/5.6/sql/sql_insert.cc:3870
#11 0x00000000007fa9d5 in handle_select (thd=0x3176dc0, result=0x7f4a5c005d30, setup_tables_done_option=1073741824) at /home/laurynas/percona/src/5.6/sql/sql_select.cc:116
#12 0x00000000007ceb2c in mysql_execute_command (thd=0x3176dc0) at /home/laurynas/percona/src/5.6/sql/sql_parse.cc:3513
#13 0x00000000007d61f0 in mysql_parse (thd=0x3176dc0, rawbuf=0x7f4a5c005010 "INSERT INTO t2 SELECT * FROM t1", length=31, parser_state=0x7f4a69c4fe40) at /home/laurynas/percona/src/5.6/sql/sql_parse.cc:6235
#14 0x00000000007c966a in dispatch_command (command=COM_QUERY, thd=0x3176dc0, packet=0x324cdf1 "INSERT INTO t2 SELECT * FROM t1", packet_length=31) at /home/laurynas/percona/src/5.6/sql/sql_parse.cc:1334
#15 0x00000000007c87ba in do_command (thd=0x3176dc0) at /home/laurynas/percona/src/5.6/sql/sql_parse.cc:1036
#16 0x000000000078f438 in do_handle_one_connection (thd_arg=0x3176dc0) at /home/laurynas/percona/src/5.6/sql/sql_connect.cc:982
#17 0x000000000078ef4e in handle_one_connection (arg=0x3176dc0) at /home/laurynas/percona/src/5.6/sql/sql_connect.cc:898
#18 0x0000000000b7e4c8 in pfs_spawn_thread (arg=0x31c7480) at /home/laurynas/percona/src/5.6/storage/perfschema/pfs.cc:1858
#19 0x00007f4a762f5f6e in start_thread (arg=0x7f4a69c51700) at pthread_create.c:311
#20 0x00007f4a758029cd in clone () at ../sysdeps/unix/sysv/linux/x86_64/clone.S:113

Suggested fix:
There are several ways to fix this.  Assuming that the DBUG_ASSERT(0) mentioned in the description is correct, i.e. duplicate ha_end_bulk_insert() calls are to be avoided, then clearing select_insert::bulk_insert_started after the first ha_end_bulk_insert() attempt, regardless of its result, appears to be the correct fix.
[2 Jan 2014 9:11] Laurynas Biveinis
Testcase and fix

(*) I confirm the code being submitted is offered under the terms of the OCA, and that I am authorized to contribute it.

Contribution: bug71720.patch (text/x-diff), 2.19 KiB.

[2 Jan 2014 9:19] Laurynas Biveinis
s/71720/71270 in the patch
[2 Jan 2014 10:11] MySQL Verification Team
Hello Laurynas,

Thank you for the bug report and contribution.
Verified as described.

Thanks,
Umesh
[6 Jan 2014 9:28] Laurynas Biveinis
The testcase credit goes to Rich Prohaska.

It seems that creating a testcase that uses MyISAM instead of error injection should be quite doable; we have already seen something similar in Percona Server RQG testing: https://bugs.launchpad.net/percona-server/+bug/1204871:

2013-07-20 15:52:08 17093 [Warning] Warning: Enabling keys got errno 137 on query018293.t1, retrying
2013-07-20 15:52:08 17093 [Warning] Warning: Enabling keys got errno 137 on query018293.t1, retrying
2013-07-20 15:52:08 17093 [Warning] Warning: Enabling keys got errno 137 on query018293.t1, retrying
2013-07-20 15:52:08 17093 [Warning] Warning: Enabling keys got errno 137 on query018293.t1, retrying
2013-07-20 15:52:08 17093 [Warning] Warning: Enabling keys got errno 137 on query018293.t1, retrying
2013-07-20 15:52:08 17093 [Warning] Warning: Enabling keys got errno 137 on query018293.t1, retrying
2013-07-20 15:52:08 17093 [Warning] Warning: Enabling keys got errno 137 on query018293.t1, retrying
2013-07-20 15:52:08 17093 [Warning] Warning: Enabling keys got errno 137 on query018293.t1, retrying
2013-07-20 15:52:08 17093 [Warning] Warning: Enabling keys got errno 137 on query018293.t1, retrying
mysqld: /bzr/5.6_val/Percona-Server-5.6.12-rc60.4/sql/ha_partition.cc:4438: virtual int ha_partition::end_bulk_insert(): Assertion `0' failed.

It appears one would need a partitioned MyISAM table and conditions like those in e.g. bug 57168, such as a too-small myisam_sort_buffer_size.