Bug #82850 | GTID setup documentation needs more clarification | ||
---|---|---|---|
Submitted: | 3 Sep 2016 8:53 | Modified: | 1 Aug 2018 13:19 |
Reporter: | Daniël van Eeden (OCA) | Email Updates: | |
Status: | Closed | Impact on me: | |
Category: | MySQL Server: Documentation | Severity: | S3 (Non-critical) |
Version: | 5.7 | OS: | Any |
Assigned to: | | CPU Architecture: | Any
Tags: | GTID |
[3 Sep 2016 8:53]
Daniël van Eeden
[3 Sep 2016 10:13]
MySQL Verification Team
Hello Daniël,

Thank you for the report and feedback.

Thanks,
Umesh
[12 Sep 2016 8:23]
Daniël van Eeden
https://dev.mysql.com/doc/refman/5.7/en/replication-gtids-failover.html
[14 Jul 2017 12:11]
David Moss
Posted by developer: Thanks for the feedback. This is in progress and I have implemented fixes for points 1, 2 and 4.
[1 Aug 2018 13:19]
Margaret Fisher
Posted by developer: Thanks for helping us to improve the MySQL documentation! I have now completed the rework of this section. Here's the new version with annotations referencing your comments. Please let me know in a new bug if you have any further suggestions.

Copying data and transactions to the slave

Executing the entire transaction history can be time-consuming when the source server has processed a large number of transactions previously, and this can represent a major bottleneck when setting up a new replication slave. To eliminate this requirement, a snapshot of the data set, the binary logs, and the global transaction information the source server contains can be imported to the new slave. The source server can be either the master or the slave, but you must ensure that the source has processed all required transactions before copying the data. [2 - Previous bug handler added that the source can be master or slave.]

There are several variants of this method, the difference being in the manner in which data dumps and transactions from binary logs are transferred to the slave, as outlined here:

Data Set

- Create a dump file using mysqldump on the source server. Set the mysqldump option --master-data (with the default value of 1) to include a CHANGE MASTER TO statement with binary logging information. Set the --set-gtid-purged option to AUTO (the default) or ON to include information about executed transactions in the dump. Then use the mysql client to import the dump file on the target server. [3 - These instructions are to use the binary log file-based positioning method rather than the GTID-based method, so I believe this should stand. The previous bug handler removed the instruction to set gtid_mode to ON to deal with this. I have added that the default of 1 is expected.]
- Alternatively, create a data snapshot of the source server using raw data files, then copy these files to the target server, following the instructions in Section 17.1.2.5, "Choosing a Method for Data Snapshots". If you use InnoDB tables, you can use the mysqlbackup command from the MySQL Enterprise Backup component to produce a consistent snapshot. This command records the log name and offset corresponding to the snapshot to be used on the slave. MySQL Enterprise Backup is a commercial product that is included as part of a MySQL Enterprise subscription. See Section 29.2, “MySQL Enterprise Backup Overview” for detailed information. [4 - Added MEB.]

- Alternatively, stop both the source and target servers, copy the contents of the source's data directory to the new slave's data directory, then restart the slave. If you use this method, the slave must be configured for GTID-based replication, in other words with gtid_mode=ON. [1 - Added to stop both source and target.]

Transaction History

If the source server has a complete transaction history in its binary logs (that is, the GTID set @@global.gtid_purged is empty), you can use these methods. [5 - Added this tip.]

- Import the binary logs from the source server to the new slave using mysqlbinlog, with the --read-from-remote-server and --read-from-remote-master options.

- Alternatively, copy the source server's binary log files to the slave. You can make copies from the slave using mysqlbinlog with the --read-from-remote-server and --raw options. These can be read into the slave by using mysqlbinlog > file (without the --raw option) to export the binary log files to SQL files, then passing these files to the mysql client for processing. Ensure that all of the binary log files are processed using a single mysql process, rather than multiple connections.
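As a sketch, the mysqldump variant described above could be run as follows. The host names, credentials, and file name here are illustrative assumptions, not part of the manual text:

```shell
# On the source server: dump all databases, including a CHANGE MASTER TO
# statement (--master-data defaults to 1) and executed-GTID information
# (--set-gtid-purged defaults to AUTO). "source-host" is a placeholder.
mysqldump --all-databases --triggers --routines --events \
  --master-data --set-gtid-purged=AUTO \
  --host=source-host --user=root -p > dump.sql

# On the new slave: import the dump with the mysql client.
mysql --host=slave-host --user=root -p < dump.sql
```

Both commands prompt for a password because of -p; in a script you would typically use a login path or option file instead.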
For example:

shell> mysqlbinlog copied-binlog.000001 copied-binlog.000002 | mysql -u root -p

For more information, see Section 4.6.8.3, “Using mysqlbinlog to Back Up Binary Log Files”. [6 - I think the option about updating the slave's binlog.index file was referring to the process to set up a slave from a snapshot as described at the end of the CHANGE MASTER TO documentation, for example:

CHANGE MASTER TO MASTER_LOG_FILE='copied-binlog.000001', MASTER_LOG_POS=4025;

However, the user is already using mysqlbinlog here, and it might be hard to identify the starting file position, so I've removed this option. 6.1 - As above, we are not trying to use GTIDs here. 6.2 - Added about using a single mysql process with an example.]
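Putting the binary-log copy variant together, the whole sequence might look like this sketch. The host name and binary log file names are assumed placeholders; the final query is an optional sanity check, not a step from the manual:

```shell
# On the slave: pull raw copies of the source server's binary logs.
# With --raw, mysqlbinlog writes output files using the original log names.
mysqlbinlog --read-from-remote-server --host=source-host --user=root -p \
  --raw copied-binlog.000001 copied-binlog.000002

# Replay all copied logs through a single mysql process, so that session
# state (such as temporary tables) carries across log file boundaries.
mysqlbinlog copied-binlog.000001 copied-binlog.000002 | mysql -u root -p

# Optionally compare this value against the source to confirm the slave
# has applied the expected transactions.
mysql -u root -p -e "SELECT @@global.gtid_executed;"
```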