Bug #9234 Relay_Master_Log_File not updated when master is slave of itself
Submitted: 16 Mar 2005 23:20 Modified: 18 Dec 2014 21:20
Reporter: Jeremy Tinley
Status: Closed
Category: MySQL Server: Replication
Severity: S4 (Feature request)
Version: 4.0.20
OS: Linux (Redhat AS4)
Assigned to: Mats Kindahl
CPU Architecture: Any

[16 Mar 2005 23:20] Jeremy Tinley
Description:
The Relay_Master_Log_File value is not being updated when the master is a slave of itself.

Our primary master database is configured to be a slave of itself.  We do this for failover purposes.  Replication should be in nearly the same place as the master when it fails, so when it recovers as the secondary and begins to read from the newly assigned master, it can pick up where it left off.

How to repeat:
Install a single MySQL 4.0.20 node.
Configure the machine to be a master database using log-bin.
Start the server.

Execute a few statements that will be logged in the binary log.

Now configure the machine to be a slave of itself, either by configuring my.cnf parameters or by using CHANGE MASTER TO.  Verify that the statements from the local binary log were read into the relay log.
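As a sketch, the CHANGE MASTER TO form of this step might look like the following (the host, port, credentials, and log file name here are assumed example values, not taken from the report):

```sql
-- Point the server at its own binary log (assumed credentials and file name)
CHANGE MASTER TO
    MASTER_HOST='127.0.0.1',
    MASTER_PORT=3306,
    MASTER_USER='repl',
    MASTER_PASSWORD='repl_password',
    MASTER_LOG_FILE='mysql-bin.000001',
    MASTER_LOG_POS=4;
START SLAVE;

-- Verify the I/O and SQL threads are running and reading the local binlog
SHOW SLAVE STATUS\G
```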

Now perform a FLUSH LOGS and execute more statements that will be written to the binary log.  The Relay_Master_Log_File value will not change; however, the Relay_Master_Log_Pos will.
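That observation can be reproduced with something like the following (the table name is a hypothetical example):

```sql
FLUSH LOGS;                       -- rotate to a new binary log file
INSERT INTO test.t1 VALUES (1);   -- hypothetical statement written to the binlog

-- Per this report: Relay_Master_Log_Pos advances,
-- but Relay_Master_Log_File still shows the old file name.
SHOW SLAVE STATUS\G
```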

Suggested fix:
It's my totally uneducated opinion that when the SQL thread reads an event with its own server id, it does not update the Relay_Master_Log_File value.
[17 Mar 2005 15:38] Jeremy Tinley
Here's a little background on why we have a master database be a slave of itself.

We have a primary and failover master (we'll call them A and B).  The active master is reachable via a virtual IP that floats between the two.  We have a series of database slaves that read from the current primary via the virtual IP.  When a failover occurs, a shared volume containing the binary logs is switched to the other server.

In order to sync B with A, B must be a slave of A.  When a failover occurs, A goes down and B assumes the master role.  When A returns to service, it must be a slave of B to catch any updates that happened while it was unavailable.  This is done automatically using heartbeat.  The problem, however, is that A does not know where to start replicating from without either a) clearing all the binary logs and starting from the beginning, or b) keeping track of where it left off.  The easiest way to keep track of where it left off is to make the master a slave of itself.  This ensures that the local master.info file records the very last position before A went down.  When it comes back up, the IO thread begins reading from the next file, the one that B just created.
[18 Dec 2014 21:20] Jeremy Tinley
Closing. This is obsolete and irrelevant now.