Bug #15787 MySQL crashes when archive table exceeds 2GB
Submitted: 15 Dec 2005 18:50  Modified: 7 Jul 2007 18:40
Reporter: Ed Pauley
Status: Closed
Category: MySQL Server: Archive storage engine  Severity: S2 (Serious)
Version: 5.0.16/5.0.18 BK/5.0.42  OS: Linux
Assigned to: Sergey Vojtovich  CPU Architecture: Any
Tags: bfsm_2007_06_21, regression

[15 Dec 2005 18:50] Ed Pauley
Description:
When loading a table that uses the Archive storage engine, the MySQL server crashes when the size of the table exceeds 2GB. The server also crashes when altering a large table from MyISAM to Archive. The .ARZ file was 2,147,483,647 bytes when the server crashed.

How to repeat:
Create a table using the Archive engine and attempt to load over 2GB of data into it. Alternatively, take an existing large MyISAM table (21GB in my case) and run an ALTER TABLE command to change the engine to ARCHIVE.
[16 Dec 2005 16:21] Miguel Solorzano
Did you mean a server crash, or that the table was marked as crashed?

Thanks in advance.
[16 Dec 2005 18:02] Ed Pauley
The MySQL server crashed, restarted, and the table was also marked as crashed.
[19 Dec 2005 10:50] Miguel Solorzano
Thank you for the bug report.

 
051218 10:28:33 [Note] /home/miguel/dbs/5.0/libexec/mysqld: ready for connections.
Version: '5.0.18-debug'  socket: '/tmp/mysql.sock'  port: 3306  Source distribution
[New Thread 1131862960 (LWP 5672)]

Program received signal SIGXFSZ, File size limit exceeded.
[Switching to Thread 1131862960 (LWP 5672)]
0xffffe410 in __kernel_vsyscall ()
(gdb) bt full
#0  0xffffe410 in __kernel_vsyscall ()
No symbol table info available.
#1  0x402a556b in __write_nocancel () from /lib/tls/libc.so.6
No symbol table info available.
#2  0x402561bf in _IO_new_file_write () from /lib/tls/libc.so.6
No symbol table info available.
#3  0x40254bc5 in new_do_write () from /lib/tls/libc.so.6
No symbol table info available.
#4  0x402563c1 in _IO_new_file_xsputn () from /lib/tls/libc.so.6
No symbol table info available.
#5  0x4024bdf2 in fwrite () from /lib/tls/libc.so.6
No symbol table info available.
#6  0x40035d92 in gzwrite () from /lib/libz.so.1
No symbol table info available.
#7  0x083669fd in ha_archive::real_write_row (this=0x8e87d70, buf=0x8e89690 "�003", 'A' <repeats 197 times>...,
    writer=0x8e5a610) at ha_archive.cc:602
        written = 0
<cut>
        _db_framep_ = (char **) 0x8e8a86c
#8  0x08366d1d in ha_archive::write_row (this=0x8e87d70, buf=0x8e89690 "�003", 'A' <repeats 197 times>...)
    at ha_archive.cc:654
        _db_func_ = 0x865752b "write_record"
        _db_file_ = 0x825d3e8 "\203�\213E\020\213P\004\213"
        rc = 140866541
        _db_level_ = 1131856824
        _db_framep_ = (char **) 0x1
#9  0x0825dbd9 in write_record (thd=0x8e58928, table=0x8e872f0, info=0x8e786c0) at sql_insert.cc:1100
        key = 0x0
        _db_func_ = 0x1 <Address 0x1 out of bounds>
        _db_file_ = 0x8e58c24 ""
        error = 0
        trg_error = 0
        _db_level_ = 149453240
        _db_framep_ = (char **) 0x8e58928
        __PRETTY_FUNCTION__ = "int write_record(THD*, TABLE*, COPY_INFO*)"
#10 0x08261e73 in select_insert::send_data (this=0x8e786a0, values=@0x8e58c24) at sql_insert.cc:2281
        _db_func_ = 0x8656608 "end_send"
        _db_file_ = 0x824a6f6 "\203�\212E�203�001\204�017\204\022\004"
        _db_level_ = 1131857016
        _db_framep_ = (char **) 0x61cf91d8
        error = false
#11 0x0824a7ea in end_send (join=0x8e78708, join_tab=0x8e79930, end_of_records=false) at sql_select.cc:10443
        error = 0
        _db_func_ = 0x8e41408 "\024\031�b�032�b��b��b(\211�b\033�bPh�b(p�b"
        _db_file_ = 0x4376c46c "\205�027@"
        _db_level_ = 1131857008
        _db_framep_ = (char **) 0x4376c474
#12 0x08246fec in evaluate_join_record (join=0x8e78708, join_tab=0x8e797c0, error=0, report_error=0x8e59394 "")
    at sql_select.cc:9768
        rc = NESTED_LOOP_OK
        found = true
        not_exists_optimize = false
        not_used_in_distinct = false
        found_records = 0
        select_cond = (COND *) 0x0
#13 0x082473ba in sub_select (join=0x8e78708, join_tab=0x8e797c0, end_of_records=false) at sql_select.cc:9659
        error = 0
<cut>
[12 Jan 2006 14:33] Bugs System
A patch for this bug has been committed. After review, it may
be pushed to the relevant source trees for release in the next
version. You can access the patch from:

  http://lists.mysql.com/commits/966
[23 Jan 2006 17:26] Sergey Vlasenko
Fix is available in 5.0.19
[24 Jan 2006 23:17] Jon Stephens
Thank you for your bug report. This issue has been committed to our
source repository of that product and will be incorporated into the
next release.

If necessary, you can access the source repository and build the latest
available version, including the bugfix, yourself. More information 
about accessing the source trees is available at
    http://www.mysql.com/doc/en/Installing_source_tree.html

Additional info:

Documented bugfix in 5.0.19 changelog. Closed.
[16 Aug 2006 23:16] Miguel Solorzano
See bug: http://bugs.mysql.com/bug.php?id=21675
[19 Jun 2007 19:14] Harrison Fisk
Test case to cause SIGXFSZ

Attachment: archive_backup.sql.gz (application/x-gzip, text), 302.05 KiB.

[19 Jun 2007 19:15] Harrison Fisk
This bug appears to have made a comeback. I can repeat a crash with MySQL 5.0.42 on Ubuntu 6.06 LTS. The message given is:

Program terminated with signal SIGXFSZ, File size limit exceeded.

The exact package I used was: mysql-enterprise-gpl-5.0.42-linux-i686.tar.gz

The test case I used is attached.  You can run it as:

zcat archive_backup.sql.gz | mysql test
[19 Jun 2007 21:27] Brian Aker
[brian@zim test]$ du -ms *
1       archive_test.ARM
2206    archive_test.ARZ
1       archive_test.frm

I did this with a build out of the latest BK. Used the script that Harrison provided. 

Compiled with  ./BUILD/compile-amd64-max
[20 Jun 2007 16:13] Brian Aker
Sergey, I tried this again this morning, and it worked. I am wondering whether this is a build problem.
[20 Jun 2007 16:24] Sergey Vojtovich
Brian,

IIRC BUILD/compile-amd64-max uses the bundled zlib, which is not affected. This likely happens only with a system-installed zlib. 5.1 is also unaffected, since we use azio instead of gzio.

And by the way, the very same problem was fixed as BUG#21675 under the assumption that the compressed data size is always less than or equal to the real data size. This is not always the case; sometimes we fall into the opposite situation.

Unfortunately there is no good way to estimate the compressed data size. The only solution I can think of is to copy the logic of zlib's deflateBound() function into the archive engine.
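The point above can be illustrated with a small Python sketch: random (incompressible) data deflates into output slightly larger than the input, which is exactly the case the BUG#21675 assumption missed. The bound function below is an approximation in the spirit of zlib's deflateBound(), not the engine's actual code:

```python
import os
import zlib

# Incompressible (random) data: deflate falls back to stored blocks,
# so the "compressed" output is slightly LARGER than the input.
data = os.urandom(100_000)
compressed = zlib.compress(data)
print(len(compressed) > len(data))  # True

def deflate_bound_approx(source_len):
    """Rough worst-case compressed size, modeled on zlib's deflateBound():
    input length plus small per-block and wrapper overhead."""
    return source_len + (source_len >> 12) + (source_len >> 14) + (source_len >> 25) + 13

# The bound safely covers the expansion seen above.
print(len(compressed) <= deflate_bound_approx(len(data)))  # True
```

This is why an engine that budgets file space by uncompressed row size can still overshoot its file-size limit on incompressible rows.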
[20 Jun 2007 16:35] Brian Aker
Do we have a test case where the original size is larger?

For builds, the build team is supposed to use the bundled zlib. Did we not do that for some binary?
[20 Jun 2007 17:17] Sergey Vojtovich
I guess it is the one provided by Harrison Fisk.

You're right, I hadn't noticed that the package was built by our team. So my idea of this problem is probably wrong.

I plan to investigate this problem tomorrow and will let you know the details.
[21 Jun 2007 10:32] Sergey Vojtovich
OK, so my idea was indeed wrong. But it would still be good to double-check it.

The problem was an incorrect "after push fix" for BUG#21675 by myself: I checked sizeof(z_off_t) instead of zoffset_size when determining the maximum allowed zfile size.

I believe we link against the system-installed zlib. At least in my case, for BUILD/compile-pentium-debug-max, ldd says:
        libz.so.1 => /usr/lib/libz.so.1 (0x00c5f000)
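The arithmetic behind the crash is the 32-bit signed z_off_t limit, which matches the .ARZ file size reported at the time of the crash. A minimal sketch of the kind of guard the fix implies (hypothetical function and variable names, not the actual ha_archive.cc code):

```python
# Maximum offset representable by a 32-bit signed z_off_t: 2^31 - 1,
# i.e. 2147483647 -- exactly the .ARZ size at which the server crashed.
Z_OFF_T_32_BIT_MAX = 2**31 - 1

def can_write(current_file_size, compressed_row_size,
              max_zfile_size=Z_OFF_T_32_BIT_MAX):
    """Return True if appending this compressed row keeps the file
    within the zlib offset limit (hypothetical check, for illustration)."""
    return current_file_size + compressed_row_size <= max_zfile_size

print(can_write(2_147_483_000, 500))   # True: still fits under the limit
print(can_write(2_147_483_000, 1000))  # False: write would pass 2GB
```

Without such a check against the proper limit (zoffset_size rather than sizeof(z_off_t)), the write proceeds and the process receives SIGXFSZ, as seen in the backtrace.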
[21 Jun 2007 10:49] Bugs System
A patch for this bug has been committed. After review, it may
be pushed to the relevant source trees for release in the next
version. You can access the patch from:

  http://lists.mysql.com/commits/29261

ChangeSet@1.2492, 2007-06-21 14:52:56+05:00, svoj@mysql.com +1 -0
  BUG#15787 - MySQL crashes when archive table exceeds 2GB
  
  Max compressed file size was calculated incorretly causing server
  crash on INSERT.
  
  With this patch we use proper max file size provided by zlib.
  
  No test case for this fix, since it requires huge table.
[21 Jun 2007 13:45] Brian Aker
It's hard to say whether or not this fixes it... how about adding a test case that only runs in the big test runs, or somehow running the tests against the final binaries?
[21 Jun 2007 13:48] Brian Aker
One other question, do we need to update our internal zlib library?
[21 Jun 2007 14:08] Sergey Vojtovich
It fixes the problem (tested). OK, I'll add archive_big.test, but I still believe it is way too big even for big runs: it will require about 4GB of disk space plus a considerable amount of time (about 5 minutes on my fast box).

No, we don't need to update zlib. That was actually done earlier within the scope of this bug: if the server is compiled with large file support, zlib is compiled with large file support too.
[22 Jun 2007 8:30] Sergey Vojtovich
Brian,

do you approve this patch with archive_big.test (I plan to add it shortly)? Do we need another reviewer?
[22 Jun 2007 14:10] Brian Aker
You're approved.
[24 Jun 2007 10:53] Bugs System
A patch for this bug has been committed. After review, it may
be pushed to the relevant source trees for release in the next
version. You can access the patch from:

  http://lists.mysql.com/commits/29459

ChangeSet@1.2492, 2007-06-24 19:44:54+05:00, svoj@mysql.com +3 -0
  BUG#15787 - MySQL crashes when archive table exceeds 2GB
  
  Max compressed file size was calculated incorretly causing server
  crash on INSERT.
  
  With this patch we use proper max file size provided by zlib.
  
  Affects 5.0 only.
[7 Jul 2007 16:34] Bugs System
Pushed into 5.1.21-beta
[7 Jul 2007 16:35] Bugs System
Pushed into 5.0.46
[7 Jul 2007 18:40] Paul Dubois
Noted in 5.0.46, 5.1.21 changelogs. (Moved 5.0.x entry from 5.0.19 to 5.0.46)

The server crashed when the size of an ARCHIVE table grew larger than 2GB.
[21 Aug 2008 5:55] Janne Pikkarainen
5.0.60 seems to be affected by this. I have an archive table with 2,147,483,647 bytes of data, and every time I try to access that table the whole MySQL server process comes down, crashing and burning. Even

---
USE myarchivedatabase;
SHOW TABLE STATUS;
---

is enough to trigger the bug.

MySQL is running on Gentoo Linux; the zlib version is 1.2.3, if that's of interest to anyone. I'm not a ricer, everything is compiled with just plain -O2. :-)
[21 Aug 2008 8:50] Shane Bester
Janne, I'm opening a new bug report for this discovery. There are various issues even on 5.0.66a, now that I've tested it a bit.
[21 Aug 2008 8:51] Janne Pikkarainen
Thank you very much! :)