Bug #3269 table corruption
Submitted: 23 Mar 2004 5:49 Modified: 27 Apr 2004 6:38
Reporter: Peter Normann
Status: No Feedback
Category: MySQL Server    Severity: S1 (Critical)
Version: 4.0.18           OS: Linux (SuSe Linux)
Assigned to:              CPU Architecture: Any

[23 Mar 2004 5:49] Peter Normann
Description:
A specific table keeps getting marked as crashed. A client tries to issue an update:

UPDATE page SET Name='Leverandører', Headline='', SubHead='Nedenfor finder du kontaktoplysninger på:', BodyText=' .... some really long (escaped) text goes here .... ', Menu='232' WHERE Page_ID='226';

The client then receives an 'Incorrect key file for table: 'page'. Try to repair it' error.

The table 'page' periodically gets marked as crashed. When I launch the mysql console and issue a CHECK TABLE page, it will repair itself - or so it seems. I issue a REPAIR TABLE page just the same, but the problem comes back within a day or two. I upgraded the MySQL server from 4.0.17, since the changelog suggested this was fixed, but alas, the problem persists.
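For reference, the check/repair cycle I go through is just this (a sketch of the statements mentioned above):

    CHECK TABLE page;
    REPAIR TABLE page;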

I am willing to produce backtraces and any other diagnostics, but I would need some guidance on how to do this.

How to repeat:
Uncertain.
[23 Mar 2004 7:41] MySQL Verification Team
After you repair the table and run that UPDATE statement, is the table crashed again?

If yes, upload the table to this report and we shall test it.
[24 Mar 2004 0:36] Peter Normann
No, the table is not crashed immediately after issuing the UPDATE statement and thus will work maybe a day or two more.

I'm thinking that maybe some earlier UPDATE or INSERT statement could cause the table to be marked as crashed, but really I'm in the dark here.
[24 Mar 2004 7:59] Peter Normann
Okay, the table is corrupted again. I could upload the table files, but I am restricted by the 200 KB limit. The gzipped files are approximately 474 KB... Do you need all the files (page.MYD, page.MYI, page.frm)?
[26 Mar 2004 2:36] Alexander Keremidarski
Yes, we need all three files. If you can't compress each of them below the 200 KB limit, tar them under a well-recognizable name like bug3269.tar.gz, upload the archive to ftp://support.mysql.com/pub/mysql/secret/ and let us know here.
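Something like this should do (a sketch; it assumes the three files are in your current directory):

    tar czf bug3269.tar.gz page.frm page.MYD page.MYI
    # upload bug3269.tar.gz with any FTP client to ftp://support.mysql.com/pub/mysql/secret/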
[26 Mar 2004 2:39] Peter Normann
Done :-)
[26 Mar 2004 6:59] Thomas Goik
Hi,

I have the same problem.
When my table (which has a fulltext index) crashes for the first time,
I stop the server, run myisamchk, and start the server again.

5 minutes later the table is crashed again!
All over again!

The only way to get away from this continuous crashing is to TRUNCATE the table
and run ALTER TABLE ... TYPE=MyISAM.
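Concretely, the statements are roughly this (a sketch, using my search table as the example):

    TRUNCATE TABLE tblSearch_quick;
    ALTER TABLE tblSearch_quick TYPE=MyISAM;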

Then the table is fine for the next 3-5 days until it starts all over again :-(

Using mysqld 4.0.16 (now 4.0.18, with the same error).

The data file size is around 280 MB and the index around 180 MB.

Thanks for any help.
[26 Mar 2004 7:17] MySQL Verification Team
As I said, we truly need fully repeatable test cases.

We understand the problems that you face, but without being 100% able to repeat what you get under a controlled environment, we cannot fix it.

This forum is truly meant only for 100% repeatable test cases.
[27 Mar 2004 6:38] MySQL Verification Team
Small note. 

I took a look at the file and could not determine what caused the corruption.

In short, we need a test case for corruption.

Here is how it can be done:

E.1.6 Making a Test Case If You Experience Table Corruption

If you get corrupted tables or if mysqld always fails after some
update commands, you can test if this bug is reproducible by doing the
following:

    * Take down the MySQL daemon (with mysqladmin shutdown).

    * Make a backup of the tables (to guard against the very unlikely
      case that the repair will do something bad).

    * Check all tables with myisamchk -s database/*.MYI. Repair any
      wrong tables with myisamchk -r database/table.MYI. It is even
      better to use mysqlcheck, as it does not require shutting down
      the MySQL server.

    * Make a second backup of the tables.

    * Remove (or move away) any old log files from the MySQL data
      directory if you need more space.

    * Start mysqld with --log-bin. See section 4.9.4 The Binary
      Log. If you want to find a query that crashes mysqld, you should
      use --log --log-bin.

    * When you have gotten a crashed table, stop the mysqld server.

    * Restore the backup.

    * Restart the mysqld server without --log-bin.

    * Re-execute the commands with mysqlbinlog update-log-file |
      mysql. The update log is saved in the MySQL database directory
      with the name hostname-bin.#.

    * If the tables are corrupted again or you can get mysqld to die
      with the above command, you have found a reproducible bug that
      should be easy to fix! FTP the tables and the binary log to
      ftp://support.mysql.com/pub/mysql/Incoming/ and enter it into our
      bugs system at http://bugs.mysql.com/. If you are a support
      customer, you can also email support@mysql.com to alert the MySQL
      team about the problem and have it fixed as soon as possible.

You can also use the script mysql_find_rows to just execute some of
the update statements if you want to narrow down the problem.
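Condensed into shell commands, the procedure above looks roughly like this. This is only a sketch: the data directory, backup paths, database name and binary log file name are placeholders, not values taken from this report.

    mysqladmin shutdown
    cp -a /var/lib/mysql/mydb /backups/mydb-before-repair   # first backup
    myisamchk -s /var/lib/mysql/mydb/*.MYI                  # silent check of all tables
    myisamchk -r /var/lib/mysql/mydb/page.MYI               # repair only the tables reported as wrong
    cp -a /var/lib/mysql/mydb /backups/mydb-after-repair    # second backup
    mysqld --log-bin &                                      # or add --log-bin to your usual startup script
    # ... use the server normally until a table is marked as crashed, then:
    mysqladmin shutdown
    # restore the after-repair backup, restart mysqld WITHOUT --log-bin, then replay the log:
    mysqlbinlog /var/lib/mysql/hostname-bin.001 | mysql mydb

If the replay corrupts the table again, the binary log plus the table files are the repeatable test case requested above.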
[1 Apr 2004 2:11] Thomas Goik
Hello,

I did the bin-log test!
The table did not get crashed by doing it the way it is described in the documentation.
The way I did it:
mysqlbinlog -r temp.txt
because piping mysqlbinlog directly into mysql returned an error
mysql < temp.txt
Then:
mysql> check table tblSearch_quick;
+------------------------+-------+----------+----------+
| Table                  | Op    | Msg_type | Msg_text |
+------------------------+-------+----------+----------+
| thomas.tblSearch_quick | check | status   | OK       |
+------------------------+-------+----------+----------+
1 row in set (1 min 10.31 sec)
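Put together, what I ran was roughly this (a sketch; the binary log file name is only an example, not the real name on my server):

    mysqlbinlog -r temp.txt hostname-bin.001
    mysql thomas < temp.txt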

What else can I do to find out why I get this error?
The error always comes when the table reaches a size bigger than 180 MB.
[14 Feb 2005 22:54] Bugs System
No feedback was provided for this bug for over a month, so it is
being suspended automatically. If you are able to provide the
information that was originally requested, please do so and change
the status of the bug back to "Open".