Bug #22108 bin-logs do get huge with same query
Submitted: 8 Sep 2006 8:04
Modified: 27 Dec 2006 14:43
Reporter: [ name withheld ]
Status: No Feedback
Category: MySQL Server
Severity: S2 (Serious)
Version: 4.1.21, 4.1.11-Debian_4sarge5-log
OS: Linux (debian sarge)
Assigned to:
CPU Architecture: Any
Tags: bin-logs, replication

[8 Sep 2006 8:04] [ name withheld ]
Description:
client: php4 4.3.10-16
mysql module: php4-mysql 4.3.10-16
apache: 1.3.33-6sarge1

the code:
UPDATE users SET date_mod=now() WHERE id=63603;
what happens:
the table is updated correctly
but the bin-log contains about 300,000 (three hundred thousand, counted with wc) entries like
SET TIMESTAMP=1157656550;
UPDATE users SET date_mod=now() WHERE id=63603;

the timestamp does not change, nor does the id; the entries are strictly the same, repeated 300,000 times.
the log position is incremented correctly.

on a normal day it produces about 360 bin-log files of 100 MB each.

structure of the updated table:
users | CREATE TABLE `users` (
 
   `id` int(8) NOT NULL auto_increment,
   `login` varchar(255) NOT NULL default '',
   `password` varchar(255) NOT NULL default '',
   `date_add` datetime NOT NULL default '0000-00-00 00:00:00',
   `date_mod` datetime NOT NULL default '0000-00-00 00:00:00',
   `partner` varchar(16) NOT NULL default '',
   `actif` int(1) NOT NULL default '0',
   `code_actif` varchar(32) NOT NULL default '',
   `last_login` datetime NOT NULL default '0000-00-00 00:00:00',
   `nb_login` int(8) NOT NULL default '0',
   `ip` varchar(15) NOT NULL default '',
   `last_ip` varchar(15) NOT NULL default '',
   `errors` varchar(255) NOT NULL default '',
   `oldlogin` varchar(255) NOT NULL default '',
   `info_client` varchar(255) NOT NULL default '',
   `date_extract` date NOT NULL default '0000-00-00',
   PRIMARY KEY  (`id`),
   KEY `password` (`password`),
   KEY `login` (`login`),
   KEY `date_extract` (`date_extract`)
 ) ENGINE=MyISAM DEFAULT CHARSET=latin1 PACK_KEYS=0 |
more details about the table:
SHOW TABLE STATUS LIKE 'users';
*************************** 1. row ***************************
           Name: users
         Engine: MyISAM
        Version: 9
     Row_format: Dynamic
           Rows: 99810
 Avg_row_length: 68
    Data_length: 6838736
Max_data_length: 4294967295
   Index_length: 72738816
      Data_free: 0
 Auto_increment: 100148
    Create_time: 2006-07-17 11:31:02
    Update_time: 2006-09-08 09:58:04
     Check_time: 2006-09-08 09:33:47
      Collation: latin1_swedish_ci
       Checksum: NULL
 Create_options: pack_keys=0
        Comment:
1 row in set (0.00 sec)

that's all folks

cedric

How to repeat:
this happens continuously

Suggested fix:
no clue
[8 Sep 2006 12:20] Valeriy Kravchuk
Thank you for the problem report. Please try to repeat with a newer version, 4.1.21 (MySQL binaries, please), and inform us about the results.
[25 Sep 2006 8:58] [ name withheld ]
An upgrade to 4.1.21 (binaries downloaded from mysql.com)
mysql-standard-4.1.21-pc-linux-gnu-i686-glibc23 

does not solve the problem
the upgrade was done by dumping on one side and importing on the other side.

if you need more info, please ask
[12 Oct 2006 12:13] Valeriy Kravchuk
Please, send the result of 

SELECT count(*) FROM users WHERE id=63603;

Send my.cnf from master. Do you have anything unusual in the error log of your master?
[12 Oct 2006 15:06] [ name withheld ]
as asked, here is the my.cnf file

Attachment: my.cnf (application/octet-stream, text), 3.48 KiB.

[12 Oct 2006 15:07] [ name withheld ]
mysql> SELECT count(*) FROM users WHERE id=63603;
+----------+
| count(*) |
+----------+
|        1 |
+----------+
1 row in set (0.00 sec)
[12 Oct 2006 15:08] [ name withheld ]
we are just using the bin-log feature for crash recovery
there is no replication in use.
[27 Nov 2006 14:43] Valeriy Kravchuk
Please try to repeat with a newer version, 4.1.22 (MySQL binaries). It has already been announced, and binaries should be available really soon.
[28 Dec 2006 0:00] Bugs System
No feedback was provided for this bug for over a month, so it is
being suspended automatically. If you are able to provide the
information that was originally requested, please do so and change
the status of the bug back to "Open".