Bug #5190 Condense repeated messages in the log
Submitted: 24 Aug 2004 20:10  Modified: 29 Dec 2005 16:40
Reporter: Jeremy Tinley
Status: Won't fix
Category: MySQL Server  Severity: S4 (Feature request)
Version: 4.0.20  OS: Linux (Red Hat ES)
Assigned to:  CPU Architecture: Any

[24 Aug 2004 20:10] Jeremy Tinley
Description:
In situations where you have duplicate server-id or out-of-memory errors on high-volume servers, error messages can fill up the $hostname.err log very, very quickly. With duplicate server-id errors, we managed to get a 12GB log file in a few hours.

How to repeat:
Any high-volume event that leads to errors. Same server-id replication and out of memory are the two that have occurred in my experience so far.

Suggested fix:
Write the error in the log once, then append (repeated $x times) so long as the event occurs within the same second.
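The suggested behavior can be sketched roughly as follows (a minimal illustration only, not MySQL server code; the class and all names here are invented for the sketch):

```python
import time

class RepeatCondensingLog:
    """Sketch of the suggested behavior: emit an error line once, count
    identical messages arriving within the same second, and flush one
    "last message repeated N times" line when the run ends."""

    def __init__(self, emit=print):
        self.emit = emit       # sink for formatted log lines
        self.last_msg = None   # last message emitted
        self.last_sec = None   # integer second of the last message
        self.repeats = 0       # identical messages suppressed so far

    def write(self, msg, now=None):
        sec = int(now if now is not None else time.time())
        if msg == self.last_msg and sec == self.last_sec:
            self.repeats += 1  # same message in the same second: suppress
            return
        self._flush()
        self.emit(msg)
        self.last_msg, self.last_sec, self.repeats = msg, sec, 0

    def _flush(self):
        if self.repeats:
            self.emit("last message repeated %d times" % self.repeats)
            self.repeats = 0
```

For example, four writes in one second (three identical, then a new message) would produce three output lines: the first message, one "repeated 2 times" line, and the new message.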
[24 Aug 2004 20:39] Guilhem Bichot
Hello,
Please, what is this "same server id" error message? Could you paste it here?
Thanks!
[24 Aug 2004 21:44] Jeremy Tinley
Actually, I have no record of it anymore. I deleted the logs to free the space up.  Configure replication while having 2 slaves use the same server-id and you should be able to generate the message.
[28 Oct 2004 22:01] Jeremy Tinley
Found it.

041028 16:58:47  Slave: received 0 length packet from server, apparent master shutdown:
041028 16:58:47  Slave I/O thread: Failed reading log event, reconnecting to retry, log 'master.001' position 5001
041028 16:58:47  Slave: connected to master 'repl@hostname:3306',replication resumed in log 'master.001' at position 5001

Repeat multiple times per second.
[28 Oct 2004 22:01] Jeremy Tinley
CLARIFICATION:  If you have 2 hosts with the same server-id in your replication setup, the above comment is what happens.
[29 Dec 2005 16:10] Valeriy Kravchuk
Thank you for the feature request. You have to look at your logs anyway, and in this particular case you have 3 different, although related, messages within one second.

As for your suggestion:

"Write the error in the log once, then append (repeated $x times) so long as the event occurs within the same second."

some kind of buffering would be needed, and checks of this kind would slow down the logging; also (it seems each thread would need a separate buffer...) the messages might be written in the wrong sequence. So, I do not think this feature should be implemented. Can you give an example of a logging facility that works this way?
[29 Dec 2005 16:40] Jeremy Tinley
Sure, syslog.

"Last message repeated n times"

Feb 15 14:57:16 servername dansguardian: nbpp::SocketException: Connection
reset by peer
Feb 15 14:58:35 servername last message repeated 4 times

Really, what is needed is a better error in the log about having the same server-id in a master/slave relationship. If Slave2 attempts to connect to a master that already has another slave, Slave1, connected with the same server-id, the master should refuse the connection from Slave2 and report a "duplicate server-id" error. The original slave, Slave1, should remain unaffected.
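The proposed rejection could be sketched like this (a hypothetical illustration of the requested behavior, not MySQL's actual handling; all names here are invented for the sketch):

```python
class SlaveRegistry:
    """Sketch of the requested master-side check: the first slave to
    register a server-id keeps its connection; a later slave presenting
    the same server-id is refused with a clear error."""

    def __init__(self):
        self.connected = {}  # server_id -> slave host

    def register(self, server_id, host):
        if server_id in self.connected:
            # Refuse the newcomer; the existing slave keeps replicating.
            raise ValueError("duplicate server-id %d (already used by %s)"
                             % (server_id, self.connected[server_id]))
        self.connected[server_id] = host

    def unregister(self, server_id):
        self.connected.pop(server_id, None)
```

Under this scheme the reconnect loop quoted above could not occur, because the second slave's connection attempt fails immediately with an explicit error instead of displacing the first slave.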

At the time of the bug posting, the above 3 log lines would be repeated multiple times per second.