| Bug #16796 | BLOB does not work with user defined partitioning | | |
|---|---|---|---|
| Submitted: | 26 Jan 2006 2:59 | Modified: | 22 Apr 2006 15:28 |
| Reporter: | Jonathan Miller | Email Updates: | |
| Status: | Closed | Impact on me: | |
| Category: | MySQL Cluster: Cluster (NDB) storage engine | Severity: | S2 (Serious) |
| Version: | 5.1.6-alpha | OS: | Linux (Linux) |
| Assigned to: | Pekka Nousiainen | CPU Architecture: | Any |
[26 Jan 2006 6:27]
Jonas Oreland
Jeb, do you need DD for this? (i.e., I think it's a partitioning bug, not a DD bug.) Changed the title/category; please change it back if it is DD related.
[26 Jan 2006 12:10]
Jonathan Miller
Jonas,
Sorry, but I tried without DD this morning and here is what I get:

Errors are (from /home/ndbdev/jmiller/clones/mysql-5.1-new/mysql-test/var/log/mysqltest
mysqltest: At line 76: query 'INSERT INTO test.t1 VALUES (NULL, "Tested Remotely from T
024))' failed: 1504: Table has no partition for value 600
(the last lines may be the most important ones)

Then I put Disk Data back in and got:

Errors are (from /home/ndbdev/jmiller/clones/mysql-5.1-new/mysql-test/var/log/mysqltest-time) :
mysqltest: At line 76: query 'INSERT INTO test.t1 VALUES (NULL, "Tested Remotely from Texas, USA", 600, b'1',600.00,repeat('a',1*1024*1024))' failed: 2013: Lost connection to MySQL server during query
(the last lines may be the most important ones)

CORE ^^^^
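For context, the last range partition in the test case (see the description below) is VALUES LESS THAN (600), so the loop's first insert, with c3 = 600, has no partition to go to. A minimal sketch of that boundary case, reduced to the partitioning-relevant columns (my own simplification):

-- ranges cover only c3 < 600
CREATE TABLE test.t1 (
  c1 MEDIUMINT NOT NULL AUTO_INCREMENT,
  c3 INT NOT NULL,
  PRIMARY KEY (c1, c3))
ENGINE=NDB
PARTITION BY RANGE (c3)
  (PARTITION x1 VALUES LESS THAN (200),
   PARTITION x2 VALUES LESS THAN (400),
   PARTITION x3 VALUES LESS THAN (600));

-- c3 = 600 is outside every range; without DD the server reports
-- "1504: Table has no partition for value 600", while with DD the
-- backtrace in the description shows a crash inside my_error (nr=1504)
-- while reporting that same condition.
INSERT INTO test.t1 VALUES (NULL, 600);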
[26 Jan 2006 12:47]
Jonathan Miller
I changed the 600 to 601 and reran with DD. No core this time, but instead got:

Errors are (from /home/ndbdev/jmiller/clones/mysql-5.1-new/mysql-test/var/log/mysqltest-time) :
mysqltest: At line 76: query 'INSERT INTO test.t1 VALUES (NULL, "Tested Remotely from Texas, USA", 600,
024))' failed: 1296: Got error 311 'Unknown error code' from NDBCLUSTER
(the last lines may be the most important ones)
[26 Jan 2006 13:07]
Jonathan Miller
==2992==    at 0x1B909222: malloc (vg_replace_malloc.c:130)
==2992==    by 0x8570DB5: BaseString::assign(char const*) (BaseString.cpp:56)
==2992==    by 0x852E617: NdbDictionary::Datafile::setTablespace(char const*) (NdbDictionary.cpp:1152)
==2992==    by 0x832FA90: ndbcluster_alter_tablespace(THD*, st_alter_tablespace*) (ha_ndbcluster.cc:9200)
==2992==    by 0x8310EC3: mysql_alter_tablespace(THD*, st_alter_tablespace*) (sql_tablespace.cc:33)
==2992==    by 0x81EEF08: mysql_execute_command(THD*) (sql_parse.cc:4876)
==2992==    by 0x81F3863: mysql_parse(THD*, char*, unsigned) (sql_parse.cc:5695)
==2992==    by 0x81F3E2C: dispatch_command(enum_server_command, THD*, char*, unsigned) (sql_parse.cc:1765)
==2992==    by 0x81F5BB6: handle_one_connection (sql_parse.cc:1537)
==2992==    by 0xA25B7F: start_thread (in /lib/libpthread-2.3.5.so)
==2992==    by 0x97D9CD: clone (in /lib/libc-2.3.5.so)

060126 16:00:01 [ERROR] NDB Binlog: logging of blob table ./test/t1 is not supported
060126 16:00:02 [ERROR] NDB Binlog: logging of blob table ./test/t2 is not supported

==2992==
==2992== Invalid read of size 1
==2992==    at 0x1B90A0C6: strlen (mac_replace_strmem.c:189)
==2992==    by 0x85F3278: my_vsnprintf (my_vsnprintf.c:91)
==2992==    by 0x85C5888: my_error (my_error.c:99)
==2992==    by 0x8247DB9: write_record(THD*, st_table*, st_copy_info*) (sql_insert.cc:1145)
==2992==    by 0x824C562: mysql_insert(THD*, st_table_list*, List<Item>&, List<List<Item> >&, List<Item>&, List<Item>&, enum_duplicates, bool) (sql_insert.cc:514)
==2992==    by 0x81EA63A: mysql_execute_command(THD*) (sql_parse.cc:3267)
==2992==    by 0x81F3863: mysql_parse(THD*, char*, unsigned) (sql_parse.cc:5695)
==2992==    by 0x81F3E2C: dispatch_command(enum_server_command, THD*, char*, unsigned) (sql_parse.cc:1765)
==2992==    by 0x81F5BB6: handle_one_connection (sql_parse.cc:1537)
==2992==    by 0xA25B7F: start_thread (in /lib/libpthread-2.3.5.so)
==2992==    by 0x97D9CD: clone (in /lib/libc-2.3.5.so)
==2992== Address 0x258 is not stack'd, malloc'd or (recently) free'd
[14 Feb 2006 9:32]
Jonas Oreland
Pekka,
The problem is that the blob operations don't "use" setPartitionId from the base operation.
Remove the disk stuff from the test case, apply the patch (that's in Files), and you'll get 311 (undefined partition). This is on the main table, updating the blob header. And the API does _not_ send the partition id.
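For illustration, a reduced in-memory sketch of the case described above: the t1 definition from the test case in the description below, with the disk data clauses dropped and the non-essential columns omitted (the inserted value and blob size here are only illustrative):

CREATE TABLE test.t1 (
  c1 MEDIUMINT NOT NULL AUTO_INCREMENT,
  c2 TEXT NOT NULL,
  c3 INT NOT NULL,
  c6 LONGBLOB,
  PRIMARY KEY (c1, c3))
ENGINE=NDB
PARTITION BY RANGE (c3)
  (PARTITION x1 VALUES LESS THAN (200),
   PARTITION x2 VALUES LESS THAN (400),
   PARTITION x3 VALUES LESS THAN (600));

-- c3 = 100 falls inside partition x1, so the row itself has a valid partition;
-- the failure comes from the blob operations on the main table, which do not
-- carry the partition id. Per the report this fails with:
--   1296: Got error 311 'Unknown error code' from NDBCLUSTER  (311 = undefined partition)
INSERT INTO test.t1 VALUES (NULL, 'blob row', 100, repeat('a', 1024*1024));

With the fix, the blob operations should take the partition id from the base operation, and an insert like this should succeed.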
[23 Mar 2006 13:05]
Jonathan Miller
Starting to hit the 311 bug more and more. See http://bugs.mysql.com/bug.php?id=18443 for additional examples. Since this screws up the data, or at least the ability to update and delete, I am moving this to high priority.
/jeb
[17 Apr 2006 18:31]
Bugs System
A patch for this bug has been committed. After review, it may be pushed to the relevant source trees for release in the next version. You can access the patch from: http://lists.mysql.com/commits/5025
[22 Apr 2006 15:28]
Jon Stephens
Thank you for your bug report. This issue has been committed to our
source repository of that product and will be incorporated into the
next release.
If necessary, you can access the source repository and build the latest
available version, including the bugfix, yourself. More information
about accessing the source trees is available at
http://www.mysql.com/doc/en/Installing_source_tree.html
Additional info:
Documented bugfix in 5.1.9 changelog. Closed.

Description:

#0  0x00d61402 in __kernel_vsyscall ()
#1  0x00a2855f in pthread_kill () from /lib/libpthread.so.0
#2  0x082eb0bb in write_core (sig=16766) at stacktrace.c:220
#3  0x081d0312 in handle_segfault (sig=11) at mysqld.cc:2187
#4  <signal handler called>
#5  0x0091d173 in strlen () from /lib/libc.so.6
#6  0x085f3279 in my_vsnprintf (to=0x6775a9 "9¥\bPUl\b", n=276, fmt=Variable "fmt" is not available.) at my_vsnprintf.c:91
#7  0x085c5889 in my_error (nr=1504, MyFlags=600) at my_error.c:99
#8  0x08247dba in write_record (thd=0xb76d7ec0, table=0x8a65288, info=0x677800) at sql_insert.cc:1145
#9  0x0824c563 in mysql_insert (thd=0xb76d7ec0, table_list=0x8a46e20, fields=@0xb76d8388, values_list=@0xb76d83ac, update_fields=@0xb76d83a0, update_values=@0xb76d8394, duplic=DUP_ERROR, ignore=false) at sql_insert.cc:514
#10 0x081ea63b in mysql_execute_command (thd=0xb76d7ec0) at sql_parse.cc:3267
#11 0x081f3864 in mysql_parse (thd=0xb76d7ec0, inBuf=0x8a46d38 "INSERT INTO test.t1 VALUES (NULL, \"Tested Remotely from Texas, USA\", 600, b'1',600600600.600600,repeat('a',1*1024*1024))", length=120) at sql_parse.cc:5695
#12 0x081f3e2d in dispatch_command (command=COM_QUERY, thd=0xb76d7ec0, packet=Variable "packet" is not available.) at sql_parse.cc:1765
#13 0x081f5bb7 in handle_one_connection (arg=0xb76d7ec0) at sql_parse.cc:1537
#14 0x00a25b80 in start_thread () from /lib/libpthread.so.0
#15 0x0097d9ce in clone () from /lib/libc.so.6
(gdb)

How to repeat:

CREATE LOGFILE GROUP range_log
  ADD UNDOFILE './range_log/undofile.dat'
  INITIAL_SIZE 16M
  UNDO_BUFFER_SIZE = 3M
  ENGINE=NDB;

CREATE TABLESPACE range_ts
  ADD DATAFILE './range_ts/datafile.dat'
  USE LOGFILE GROUP range_log
  INITIAL_SIZE 15M
  ENGINE NDB;

CREATE TABLESPACE range_ts2
  ADD DATAFILE './range_ts2/datafile.dat'
  USE LOGFILE GROUP range_log
  INITIAL_SIZE 15M
  ENGINE NDB;

CREATE TABLE test.t1 (
  c1 MEDIUMINT NOT NULL AUTO_INCREMENT,
  c2 TEXT NOT NULL,
  c3 INT NOT NULL,
  c4 BIT NOT NULL,
  c5 DECIMAL(8,3),
  c6 LONGBLOB,
  PRIMARY KEY(c1,c3))
  TABLESPACE range_ts STORAGE DISK
  ENGINE=NDB
  PARTITION BY RANGE (c3) PARTITIONS 3
  (PARTITION x1 VALUES LESS THAN (200),
   PARTITION x2 VALUES LESS THAN (400),
   PARTITION x3 VALUES LESS THAN (600));

CREATE TABLE test.t2 (
  c1 MEDIUMINT NOT NULL AUTO_INCREMENT,
  c2 TEXT NOT NULL,
  c3 INT NOT NULL,
  c4 BIT NOT NULL,
  c5 DECIMAL(8,3),
  c6 LONGBLOB,
  PRIMARY KEY(c1))
  TABLESPACE range_ts2 STORAGE DISK
  ENGINE=NDB
  PARTITION BY RANGE (c1) PARTITIONS 3
  (PARTITION x1 VALUES LESS THAN (200),
   PARTITION x2 VALUES LESS THAN (400),
   PARTITION x3 VALUES LESS THAN (600));

let $j= 600;
--disable_query_log
while ($j)
{
  eval INSERT INTO test.t1 VALUES (NULL, "Tested Remotely from Texas, USA", $j, b'1',$j$j$j.$j$j,repeat('a',1*1024*1024));
  dec $j;
  eval INSERT INTO test.t2 VALUES (NULL, "MySQL AB Sweden Cluster Disk Data Testing", $j, b'1',$j$j$j.$j$j,repeat('a',1*1024*1024));
}
--enable_query_log

SELECT COUNT(*) FROM test.t1;
SELECT pk1, c2, c3, hex(c4) FROM test.t1 ORDER BY pk1 LIMIT 5;

DROP TABLE test.t1;
DROP TABLE test.t2;

ALTER TABLESPACE range_ts DROP DATAFILE './range_ts/datafile.dat' ENGINE = NDB;
ALTER TABLESPACE range_ts2 DROP DATAFILE './range_ts2/datafile.dat' ENGINE = NDB;
DROP TABLESPACE range_ts ENGINE = NDB;
DROP TABLESPACE range_ts2 ENGINE = NDB;
DROP LOGFILE GROUP range_log ENGINE = NDB;

#CREATE TABLE test.t1 (pk1 MEDIUMINT NOT NULL AUTO_INCREMENT, c2 TEXT NOT NULL, c3 INT NOT NULL, c4 BIT NOT NULL, PRIMARY KEY(pk1,c3)) TABLESPACE table_space1 STORAGE DISK ENGINE=NDB PARTITION BY HASH(c3) PARTITIONS 4;
#CREATE TABLE test.t2 (pk1 MEDIUMINT NOT NULL AUTO_INCREMENT, c2 TEXT NOT NULL, c3 INT NOT NULL, c4 BIT NOT NULL, PRIMARY KEY(pk1,c3)) TABLESPACE table_space2 STORAGE DISK ENGINE=NDB PARTITION BY KEY(c3) (PARTITION p0 ENGINE = NDB, PARTITION p1 ENGINE = NDB);
#End 5.1 test case