Bug #69771 SHOW SLAVE HOSTS only displays 1 Host if 2 Hosts have the same Server ID
Submitted: 18 Jul 2013 2:22
Modified: 19 Jul 2013 19:40
Reporter: Chris Calender
Status: Verified
Category: MySQL Server: Replication
Severity: S3 (Non-critical)
Version: 5.6.12
OS: Any
Assigned to:
CPU Architecture: Any
Tags: server_id, show slave hosts

[18 Jul 2013 2:22] Chris Calender
Description:
If you have 2 slaves set up in 5.6, replicating from the same master, and they have the same server_id, SHOW SLAVE HOSTS displays only 1 row, i.e. only 1 of the hosts.

How to repeat:
1. Set up master with 2 slaves.

(I initially had different server_ids for the slaves, and then changed them to see the results. This may or may not be necessary.)

mysql> show slave hosts;
+-----------+------+------+-----------+--------------------------------------+
| Server_id | Host | Port | Master_id | Slave_UUID                           |
+-----------+------+------+-----------+--------------------------------------+
|         5 |      | 3323 |         2 | d3a81d73-ef41-11e2-b7ff-089e01009f6a |
|         3 |      | 3322 |         2 | 915861e2-ef40-11e2-b7ff-089e01009f6a |
+-----------+------+------+-----------+--------------------------------------+
2 rows in set (0.00 sec)

2. Now, change Server_id=5 to Server_id=3 on the first slave, and restart that instance (one way to do this is sketched after the notes below).

3. Run SHOW SLAVE HOSTS:

mysql> show slave hosts;
+-----------+------+------+-----------+--------------------------------------+
| Server_id | Host | Port | Master_id | Slave_UUID                           |
+-----------+------+------+-----------+--------------------------------------+
|         3 |      | 3323 |         2 | d3a81d73-ef41-11e2-b7ff-089e01009f6a |
+-----------+------+------+-----------+--------------------------------------+
1 row in set (0.00 sec)

o Note that it returns only 1 row!
o Note that the row it returns is the newly changed one.  The original that never changed is the one that is now missing.
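
For step 2, here is a minimal sketch of one way to create the collision without editing the config file, assuming server_id can be changed at runtime and that the slave re-registers with the master when its I/O thread reconnects (the verification below simply edits the config file and restarts the instance instead):

-- on the first slave (currently registered as Server_id=5)
SET GLOBAL server_id = 3;  -- duplicate the other slave's server_id
STOP SLAVE;
START SLAVE;               -- the I/O thread reconnects and should re-register with the new id

-- then, on the master
SHOW SLAVE HOSTS;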

Suggested fix:
Display all hosts.

More specifically, I would say a primary key, or unique key, on all/most fields is necessary.  In other words, it should not be based solely on Server_id, because 'Server_id' could be the same on both slaves (I know it's not recommended).

Also, 'Host' could be the same, 'Master_id' *will* be the same (maybe exclude this from the key), and 'Slave_UUID' can even be the same (say a person copies the full datadir of slave #1 to seed slave #2, accidentally including the auto.cnf file - quite common when running multiple slaves).
[18 Jul 2013 2:23] Chris Calender
I forgot to add: 'Port' is actually the one that should always be different.  Perhaps just base it on that (or at least include it in the key). :)
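As a rough cross-check (not a fix): the master's processlist lists each connected slave's Binlog Dump thread, keyed by client host:port, so it should still show both slaves even when SHOW SLAVE HOSTS collapses them into one row. A minimal sketch:

-- on the master: each connected slave holds a Binlog Dump thread
SELECT id, user, host, command
  FROM information_schema.processlist
 WHERE command LIKE 'Binlog Dump%';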
[18 Jul 2013 2:29] Chris Calender
Also, for anyone interested: in my "very basic" tests, replication still worked fine.  Updates from the master were written to both slave #1 and slave #2, and SHOW SLAVE STATUS was always normal on both slaves.
[18 Jul 2013 16:31] MySQL Verification Team
Hello Chris,

Thank you for the bug report. 
Verified as described.

Thanks,
Umesh
[18 Jul 2013 16:32] MySQL Verification Team
// How to repeat

// Use sandbox to set up replication
// Download the tarball and place it in $HOME/downloads/

[ushastry@cluster-repo ~]$ more 5612.sh
#!/bin/bash
export SANDBOX_HOME=$HOME/sandboxes
export MYSQL5_6_12=mysql-5.6.12-linux-glibc2.5-x86_64
export TARBALL=$HOME/downloads/$MYSQL5_6_12.tar.gz

if [ ! -f $TARBALL ]
then
    echo "$TARBALL not found"
    exit 1
fi

export SANDBOX_BINARY=$HOME/usr/local/bin/
if [ ! -d $SANDBOX_BINARY ]
then
    mkdir -p $SANDBOX_BINARY
fi
if [ ! -d $SANDBOX_BINARY/5.6.12 ]
then
    cd $SANDBOX_BINARY
    tar -xzf $TARBALL
    if [ ! -d $MYSQL5_6_12 ]
    then
        echo "error expanding $TARBALL"
        exit 1
    fi
    mv $MYSQL5_6_12 5.6.12
fi

make_replication_sandbox --how_many_slaves=2 5.6.12

// Setup repl, start

[ushastry@cluster-repo ~]$ ./5612.sh
executing "clear" on slave 1
executing "clear" on slave 2
executing "clear" on master
installing and starting master
installing slave 1
installing slave 2
starting slave 1
...... sandbox server started
starting slave 2
....... sandbox server started
initializing slave 1
initializing slave 2
replication directory installed in $HOME/sandboxes/rsandbox_5_6_12

master [localhost] {root} ((none)) > show slave hosts;
+-----------+----------+-------+-----------+--------------------------------------+
| Server_id | Host     | Port  | Master_id | Slave_UUID                           |
+-----------+----------+-------+-----------+--------------------------------------+
|       101 | SBslave1 | 19078 |         1 | 3ce49fc3-f080-11e2-9dca-bc5b92cb43ef |
|       102 | SBslave2 | 19079 |         1 | 3fb7ce91-f080-11e2-9dcb-bf68c9b9452d |
+-----------+----------+-------+-----------+--------------------------------------+

2. Now, change Server_id=101 to Server_id=102 on the first slave, and restart that instance.

[ushastry@cluster-repo ~]$ more /data/ushastry/sandboxes/rsandbox_5_6_12/node1/my.sandbox.cnf|grep -i server-id
server-id=101
[ushastry@cluster-repo ~]$ vi /data/ushastry/sandboxes/rsandbox_5_6_12/node1/my.sandbox.cnf
[ushastry@cluster-repo ~]$ more /data/ushastry/sandboxes/rsandbox_5_6_12/node1/my.sandbox.cnf|grep -i server-id
server-id=102
[ushastry@cluster-repo ~]$

[ushastry@cluster-repo ~]$ sandboxes/rsandbox_5_6_12/node1/restart
.. sandbox server started

3. Run SHOW SLAVE HOSTS:

master [localhost] {root} ((none)) > show slave hosts;
+-----------+----------+-------+-----------+--------------------------------------+
| Server_id | Host     | Port  | Master_id | Slave_UUID                           |
+-----------+----------+-------+-----------+--------------------------------------+
|       102 | SBslave1 | 19078 |         1 | 3ce49fc3-f080-11e2-9dca-bc5b92cb43ef |
+-----------+----------+-------+-----------+--------------------------------------+
1 row in set (0.00 sec)

Indeed, the second slave (SBslave2), which never changed, is now the missing one ^^.
[19 Jul 2013 19:40] Chris Calender
To add a bit more to the issue:

"second slave(SBslave2) that never changed is now the missing one ^^."

If you change the 1st slave back to a distinct value and restart it, you'll see that slave appear in SHOW SLAVE HOSTS, but the original slave that never changed does not re-appear.  You need to restart the original for it to re-appear.

Just kind of strange overall for the one that never changed to be the one that "disappears" forever.
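A full restart of the original instance may not be strictly necessary: assuming a slave registers itself with the master when its I/O thread connects, restarting just the I/O thread on the original slave should be enough to make it re-appear. An untested sketch (only the full restart is confirmed above):

-- on the original slave that dropped out of SHOW SLAVE HOSTS
STOP SLAVE IO_THREAD;
START SLAVE IO_THREAD;  -- reconnecting should re-register it with the master

-- then, on the master
SHOW SLAVE HOSTS;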
[13 Jan 2017 0:19] Ceri Williams
As this issue is still present in 5.7, I thought I'd add that it is not specific to server-id but seems to be related to the thread.

# Server info 
> select @@hostname, @@report_host, @@server_id, @@server_uuid;
db1     db1     1       21c6727f-96e1-11e6-8e1e-525400593d78
db2     db1     2       e429a553-96e2-11e6-8df1-525400b7fe18
db3     db1     3       6ef9cf74-d858-11e6-8e52-525400b7fe19

# Master processlist
> select user, host, command from information_schema.processlist where user = "repl";
repl    10.0.1.15:47552 Binlog Dump
repl    10.0.1.14:54608 Binlog Dump

# Master slave hosts
> show slave hosts
3       db1     3306    1       6ef9cf74-d858-11e6-8e52-525400b7fe19

I have not been able to get it to repeat, but I did see that it was possible to have 0 rows returned for SHOW SLAVE HOSTS and still see a slave connected in the processlist:

# Stop 10.0.1.15
> stop slave

# Master slave hosts (zero rows)
> show slave hosts

# Master processlist (1 slave shows)
> select user, host, command from information_schema.processlist where user = "repl";
repl    10.0.1.14:54608 Binlog Dump