Bug #84281 group replication can not work
Submitted: 20 Dec 2016 15:44    Modified: 10 Jan 2017 7:48
Reporter: song boomballa
Status: Not a Bug
Category: Tests: Group Replication    Severity: S7 (Test Cases)
Version: mysql-5.7.17    OS: CentOS (6.6)
Assigned to: MySQL Verification Team    CPU Architecture: Any
Tags: group replication

[20 Dec 2016 15:44] song boomballa
Description:

I followed the official instructions to test the new Group Replication feature, but I could not reproduce the results described in the manual.

Even though the first node initialized normally, the Group Replication tables in performance_schema are not populated as expected:

mysql> SELECT * FROM performance_schema.replication_group_members;
Empty set (0.00 sec)

And the remaining nodes cannot join the Group Replication group normally.

1. I do not understand why, after installing Group Replication exactly as described on the official website and initializing the first node, I find no information in the performance_schema.replication_group_members table. After that, node2 and node3 cannot join the group.

2. The official documentation (http://dev.mysql.com/doc/refman/5.7/en/group-replication-adding-instances.html) tells everyone to write the group_replication parameters into my.cnf using the loose-group_replication_XXX prefix:

transaction_write_set_extraction=XXHASH64
loose-group_replication_group_name="aaaaaaaa-aaaa-aaaa-aaaa-aaaaaaaaaaaa"
loose-group_replication_start_on_boot=off
loose-group_replication_local_address= "127.0.0.1:24903"
loose-group_replication_group_seeds= "127.0.0.1:24901,127.0.0.1:24902,127.0.0.1:24903"
loose-group_replication_bootstrap_group= off

But as long as the group_replication plugin is not installed, these parameters cannot be loaded properly.

3. With the plugin installed on node2 and node3, running START GROUP_REPLICATION produces this error:

2016-12-20T19:10:37.648782Z 4 [ERROR] Plugin group_replication reported: 'The group name option is mandatory'

So I still need to configure the options manually (SET GLOBAL group_replication_xxx = xxx;) or restart the server once more for them to load correctly. This is not very convenient.
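
One workaround sketch (not from the report): the loose-group_replication_* lines in my.cnf can be turned into SET GLOBAL statements mechanically, since these variables become settable once the plugin is installed. The heredoc below stands in for a real my.cnf; quoting may need adjustment for values containing shell metacharacters.

```shell
#!/bin/sh
# Derive SET GLOBAL statements from the loose- prefixed lines of a config,
# so the options can be applied right after INSTALL PLUGIN without a second
# server restart. The inline sample stands in for a real my.cnf file.
CNF=$(cat <<'EOF'
transaction_write_set_extraction=XXHASH64
loose-group_replication_group_name="aaaaaaaa-aaaa-aaaa-aaaa-aaaaaaaaaaaa"
loose-group_replication_start_on_boot=off
EOF
)
# Keep only the group_replication options, strip the loose- prefix, and
# wrap each line as a SET GLOBAL statement.
printf '%s\n' "$CNF" | grep '^loose-group_replication' \
  | sed -e 's/^loose-/SET GLOBAL /' -e 's/$/;/'
```

The generated statements can then be pasted into the mysql client immediately after INSTALL PLUGIN.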

How to repeat:
Environment node information description:

node1: 127.0.0.1 (server port 24081, group communication port 24901)
node2: 127.0.0.1 (server port 24082, group communication port 24902)
node3: 127.0.0.1 (server port 24083, group communication port 24903)

1. Download the MySQL binary tarball and extract it to the target directory:

① shell>wget http://dev.mysql.com/get/Downloads/MySQL-5.7/mysql-5.7.17-linux-glibc2.5-x86_64.tar.gz

② shell>tar -zxvf mysql-5.7.17-linux-glibc2.5-x86_64.tar.gz -C /opt/app

③
shell>cp -r mysql-5.7.17-linux-glibc2.5-x86_64 mysql_node1
shell>cp -r mysql-5.7.17-linux-glibc2.5-x86_64 mysql_node2
shell>cp -r mysql-5.7.17-linux-glibc2.5-x86_64 mysql_node3

④
shell>useradd mysql -s /sbin/nologin

⑤
shell>mkdir -p mysql_node1/{data,logs,tmp}&&chown -R mysql.mysql mysql_node1/{data,logs,tmp}
shell>mkdir -p mysql_node2/{data,logs,tmp}&&chown -R mysql.mysql mysql_node2/{data,logs,tmp}
shell>mkdir -p mysql_node3/{data,logs,tmp}&&chown -R mysql.mysql mysql_node3/{data,logs,tmp}

2.Edit the configuration file my.cnf

[node1]
port=24081
server_id=1
gtid_mode=ON
enforce_gtid_consistency=ON
master_info_repository=TABLE
relay_log_info_repository=TABLE
binlog_checksum=NONE
log_slave_updates=ON
log_bin=binlog
binlog_format=ROW

transaction_write_set_extraction=XXHASH64
loose-group_replication_group_name="dceed018-c471-11e6-9c3c-005056b8286c"
loose-group_replication_start_on_boot=off
loose-group_replication_local_address= "127.0.0.1:24901"
loose-group_replication_group_seeds= "127.0.0.1:24901,127.0.0.1:24902,127.0.0.1:24903"
loose-group_replication_bootstrap_group= off

[node2]
port=24082
server_id=2
gtid_mode=ON
enforce_gtid_consistency=ON
master_info_repository=TABLE
relay_log_info_repository=TABLE
binlog_checksum=NONE
log_slave_updates=ON
log_bin=binlog
binlog_format=ROW

transaction_write_set_extraction=XXHASH64
loose-group_replication_group_name="dceed018-c471-11e6-9c3c-005056b8286c"
loose-group_replication_start_on_boot=off
loose-group_replication_local_address= "127.0.0.1:24902"
loose-group_replication_group_seeds= "127.0.0.1:24901,127.0.0.1:24902,127.0.0.1:24903"
loose-group_replication_bootstrap_group= off

[node3]
port=24083
server_id=3
gtid_mode=ON
enforce_gtid_consistency=ON
master_info_repository=TABLE
relay_log_info_repository=TABLE
binlog_checksum=NONE
log_slave_updates=ON
log_bin=binlog
binlog_format=ROW

transaction_write_set_extraction=XXHASH64
loose-group_replication_group_name="dceed018-c471-11e6-9c3c-005056b8286c"
loose-group_replication_start_on_boot=off
loose-group_replication_local_address= "127.0.0.1:24903"
loose-group_replication_group_seeds= "127.0.0.1:24901,127.0.0.1:24902,127.0.0.1:24903"
loose-group_replication_bootstrap_group= off
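
With three nearly identical config files, copy-paste slips are easy to make. As an illustrative sketch (inline sample lines stand in for the real files), a quick consistency check: options that must be unique per node should have three distinct values, and options that must match across the group should collapse to one.

```shell
#!/bin/sh
# Sanity-check per-node config values. server_id, port and local_address
# must differ on every node; group_name and group_seeds must be identical.
check() {  # $1 = option name, $2 = expected number of distinct values
  n=$(printf '%s\n' "$ALL" | grep "^$1" | sort -u | wc -l)
  [ "$n" -eq "$2" ] && echo "$1: OK" || echo "$1: MISMATCH"
}
# Inline sample of the relevant lines from the three my.cnf files.
ALL=$(cat <<'EOF'
server_id=1
loose-group_replication_group_name="dceed018-c471-11e6-9c3c-005056b8286c"
server_id=2
loose-group_replication_group_name="dceed018-c471-11e6-9c3c-005056b8286c"
server_id=3
loose-group_replication_group_name="dceed018-c471-11e6-9c3c-005056b8286c"
EOF
)
check server_id 3                           # must differ per node
check loose-group_replication_group_name 1  # must match on all nodes
```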

3. Initialize the database

[node1]
shell>./bin/mysqld --defaults-file=/opt/app/mysql_node1/my.cnf --initialize-insecure  --basedir=/opt/app/mysql_node1 --datadir=/opt/app/mysql_node1/data
[node2]
shell>./bin/mysqld --defaults-file=/opt/app/mysql_node2/my.cnf --initialize-insecure  --basedir=/opt/app/mysql_node2 --datadir=/opt/app/mysql_node2/data
[node3] 
shell>./bin/mysqld --defaults-file=/opt/app/mysql_node3/my.cnf --initialize-insecure  --basedir=/opt/app/mysql_node3 --datadir=/opt/app/mysql_node3/data

4. Set up the quick-start scripts: copy mysql.server, adjust its basedir and datadir for each node, and start MySQL.

[node1]
shell>cp support-files/mysql.server /etc/init.d/mysql_node1
shell>/etc/init.d/mysql_node1 start

[node2]
shell>cp support-files/mysql.server /etc/init.d/mysql_node2
shell>/etc/init.d/mysql_node2 start

[node3]
shell>cp support-files/mysql.server /etc/init.d/mysql_node3
shell>/etc/init.d/mysql_node3 start
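
Since each node has its own init script, a small loop keeps the three start commands consistent (a sketch; echo prints the commands instead of running them, and the script names are the ones created above):

```shell
#!/bin/sh
# Print the start command for each node's own init script, so node2 and
# node3 are not accidentally started via the mysql_node1 script.
# Remove echo to actually invoke the scripts.
for n in 1 2 3; do
  echo /etc/init.d/mysql_node$n start
done
```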

5. Bootstrap Group Replication on the first node, node1

①. User Credentials

mysql> SET SQL_LOG_BIN=0;
Query OK, 0 rows affected (0.00 sec)

mysql> CREATE USER rpl_user@'%';
Query OK, 0 rows affected (0.00 sec)

mysql> GRANT REPLICATION SLAVE ON *.* TO rpl_user@'%' IDENTIFIED BY 'rpl_pass';
Query OK, 0 rows affected, 1 warning (0.00 sec)

mysql> SET SQL_LOG_BIN=1;
Query OK, 0 rows affected (0.00 sec)

mysql> CHANGE MASTER TO MASTER_USER='rpl_user', MASTER_PASSWORD='rpl_pass' FOR CHANNEL 'group_replication_recovery';
Query OK, 0 rows affected, 2 warnings (0.01 sec)

②. Launching Group Replication

mysql> INSTALL PLUGIN group_replication SONAME 'group_replication.so';
Query OK, 0 rows affected (0.00 sec)

mysql> SHOW PLUGINS;
+----------------------------+----------+--------------------+----------------------+---------+
| Name                       | Status   | Type               | Library              | License |
+----------------------------+----------+--------------------+----------------------+---------+
| binlog                     | ACTIVE   | STORAGE ENGINE     | NULL                 | GPL     |
| mysql_native_password      | ACTIVE   | AUTHENTICATION     | NULL                 | GPL     |

(...)

| group_replication          | ACTIVE   | GROUP REPLICATION  | group_replication.so | GPL     |
+----------------------------+----------+--------------------+----------------------+---------+

mysql> SET GLOBAL group_replication_bootstrap_group=ON;
Query OK, 0 rows affected (0.00 sec)

mysql> START GROUP_REPLICATION;
Query OK, 0 rows affected (1.05 sec)

mysql> SET GLOBAL group_replication_bootstrap_group=OFF;
Query OK, 0 rows affected (0.00 sec)

mysql> SELECT * FROM performance_schema.replication_group_members;       
Empty set (0.00 sec)

[node1  error_log]

2016-12-20T18:54:41.530901Z 4 [Note] Plugin group_replication reported: 'Group communication SSL configuration: group_replication_ssl_mode: "DISABLED"'
2016-12-20T18:54:41.531033Z 4 [Note] Plugin group_replication reported: '[GCS] Added automatically IP ranges 127.0.0.1/8,172.16.3.134/22 to the whitelist'
2016-12-20T18:54:41.531219Z 4 [Note] Plugin group_replication reported: '[GCS] SSL was not enabled'
2016-12-20T18:54:41.531243Z 4 [Note] Plugin group_replication reported: 'Initialized group communication with configuration: group_replication_group_name: "dceed018-c471-11e6-9c3c-005056b8286c"; group_replication_local_address: "127.0.0.1:24901"; group_replication_group_seeds: "127.0.0.1:24901,127.0.0.1:24902,127.0.0.1:24903"; group_replication_bootstrap_group: true; group_replication_poll_spin_loops: 0; group_replication_compression_threshold: 1000000; group_replication_ip_whitelist: "AUTOMATIC"'
2016-12-20T18:54:41.531497Z 6 [Note] Plugin group_replication reported: 'Detected previous RESET MASTER invocation or an issue exists in the group replication applier relay log. Purging existing applier logs.'
2016-12-20T18:54:41.541827Z 6 [Note] 'CHANGE MASTER TO FOR CHANNEL 'group_replication_applier' executed'. Previous state master_host='', master_port= 3306, master_log_file='', master_log_pos= 4, master_bind=''. New state master_host='<NULL>', master_port= 0, master_log_file='', master_log_pos= 4, master_bind=''.
2016-12-20T18:54:41.560505Z 9 [Note] Slave SQL thread for channel 'group_replication_applier' initialized, starting replication in log 'FIRST' at position 0, relay log '/opt/app/mysql_node1/logs/relay-bin-group_replication_applier.000001' position: 4
2016-12-20T18:54:41.560514Z 4 [Note] Plugin group_replication reported: 'Group Replication applier module successfully initialized!'
2016-12-20T18:54:41.560563Z 4 [Note] Plugin group_replication reported: 'auto_increment_increment is set to 7'
2016-12-20T18:54:41.560574Z 4 [Note] Plugin group_replication reported: 'auto_increment_offset is set to 1'
2016-12-20T18:54:41.560723Z 0 [Note] Plugin group_replication reported: 'state 0 action xa_init'
2016-12-20T18:54:41.576299Z 0 [Note] Plugin group_replication reported: 'Successfully bound to 0.0.0.0:24901 (socket=60).'
2016-12-20T18:54:41.576340Z 0 [Note] Plugin group_replication reported: 'Successfully set listen backlog to 32 (socket=60)!'
2016-12-20T18:54:41.576347Z 0 [Note] Plugin group_replication reported: 'Successfully unblocked socket (socket=60)!'
2016-12-20T18:54:41.576395Z 0 [Note] Plugin group_replication reported: 'Ready to accept incoming connections on 0.0.0.0:24901 (socket=60)!'
2016-12-20T18:54:41.576402Z 0 [Note] Plugin group_replication reported: 'connecting to 127.0.0.1 24901'
2016-12-20T18:54:41.576540Z 0 [Note] Plugin group_replication reported: 'client connected to 127.0.0.1 24901 fd 62'
2016-12-20T18:54:41.576784Z 0 [Note] Plugin group_replication reported: 'connecting to 127.0.0.1 24901'
2016-12-20T18:54:41.576843Z 0 [Note] Plugin group_replication reported: 'client connected to 127.0.0.1 24901 fd 64'
2016-12-20T18:54:41.576969Z 0 [Note] Plugin group_replication reported: 'connecting to 127.0.0.1 24901'
2016-12-20T18:54:41.577026Z 0 [Note] Plugin group_replication reported: 'client connected to 127.0.0.1 24901 fd 66'
2016-12-20T18:54:41.577181Z 0 [Note] Plugin group_replication reported: 'connecting to 127.0.0.1 24901'
2016-12-20T18:54:41.577232Z 0 [Note] Plugin group_replication reported: 'client connected to 127.0.0.1 24901 fd 68'
2016-12-20T18:54:41.577454Z 0 [Note] Plugin group_replication reported: 'connecting to 127.0.0.1 24901'
2016-12-20T18:54:41.577506Z 0 [Note] Plugin group_replication reported: 'client connected to 127.0.0.1 24901 fd 70'
2016-12-20T18:54:41.577670Z 0 [Note] Plugin group_replication reported: 'connecting to 127.0.0.1 24901'
2016-12-20T18:54:41.577719Z 0 [Note] Plugin group_replication reported: 'client connected to 127.0.0.1 24901 fd 72'
2016-12-20T18:54:41.577922Z 0 [Note] Plugin group_replication reported: 'state 4257 action xa_net_boot'
2016-12-20T18:54:41.577941Z 0 [Note] Plugin group_replication reported: 'getstart group_id c55d84f1'
2016-12-20T18:54:41.578154Z 0 [Note] Plugin group_replication reported: 'new state x_run'
2016-12-20T18:54:41.578207Z 0 [Note] Plugin group_replication reported: 'state 4330 action xa_net_boot'
2016-12-20T18:54:41.578217Z 0 [Note] Plugin group_replication reported: 'new state x_run'
2016-12-20T18:54:41.578229Z 0 [Note] Plugin group_replication reported: 'getstart group_id c55d84f1'
2016-12-20T18:54:42.577927Z 0 [Note] Plugin group_replication reported: 'Starting group replication recovery with view_id 14822600825777673:1'
2016-12-20T18:54:42.578143Z 12 [Note] Plugin group_replication reported: 'Only one server alive. Declaring this server as online within the replication group'
2016-12-20T18:54:42.581215Z 0 [Note] Plugin group_replication reported: 'This server was declared online within the replication group'
2016-12-20T18:54:42.581265Z 0 [Note] Plugin group_replication reported: 'Unsetting super_read_only.'
2016-12-20T18:54:42.581274Z 6 [Note] Plugin group_replication reported: 'A new primary was elected, enabled conflict detection until the new primary applies all relay logs'

6. Then set up the second node, node2, for Group Replication

[node2] 
mysql>SET SQL_LOG_BIN=0;
mysql>CREATE USER repl@'%';
mysql>GRANT REPLICATION SLAVE ON *.* TO repl@'%' IDENTIFIED BY 'repl';
mysql>FLUSH PRIVILEGES;
mysql>SET SQL_LOG_BIN=1;
mysql>CHANGE MASTER TO MASTER_USER='repl',MASTER_PASSWORD='repl' FOR CHANNEL 'group_replication_recovery';

mysql>INSTALL PLUGIN group_replication SONAME 'group_replication.so';

mysql> START GROUP_REPLICATION;
Query OK, 0 rows affected (5.53 sec)

mysql> SELECT * FROM performance_schema.replication_group_members;
Empty set (0.00 sec)

mysql> SHOW DATABASES LIKE 'test';
Empty set (0.00 sec)

[node2 error_log]
2016-12-20T19:04:54.068047Z 4 [Note] Plugin group_replication reported: 'Group communication SSL configuration: group_replication_ssl_mode: "DISABLED"'
2016-12-20T19:04:54.068192Z 4 [Note] Plugin group_replication reported: '[GCS] Added automatically IP ranges 127.0.0.1/8,172.16.3.134/22 to the whitelist'
2016-12-20T19:04:54.068652Z 4 [Note] Plugin group_replication reported: '[GCS] SSL was not enabled'
2016-12-20T19:04:54.068679Z 4 [Note] Plugin group_replication reported: 'Initialized group communication with configuration: group_replication_group_name: "dceed018-c471-11e6-9c3c-005056b8286c"; group_replication_local_address: "127.0.0.1:24902"; group_replication_group_seeds: "127.0.0.1:24901,127.0.0.1:24902,127.0.0.1:24903"; group_replication_bootstrap_group: false; group_replication_poll_spin_loops: 0; group_replication_compression_threshold: 1000000; group_replication_ip_whitelist: "AUTOMATIC"'
2016-12-20T19:04:54.068928Z 6 [Note] Plugin group_replication reported: 'Detected previous RESET MASTER invocation or an issue exists in the group replication applier relay log. Purging existing applier logs.'
2016-12-20T19:04:54.086184Z 6 [Note] 'CHANGE MASTER TO FOR CHANNEL 'group_replication_applier' executed'. Previous state master_host='', master_port= 3306, master_log_file='', master_log_pos= 4, master_bind=''. New state master_host='<NULL>', master_port= 0, master_log_file='', master_log_pos= 4, master_bind=''.
2016-12-20T19:04:54.104519Z 9 [Note] Slave SQL thread for channel 'group_replication_applier' initialized, starting replication in log 'FIRST' at position 0, relay log '/opt/app/mysql_node2/logs/relay-bin-group_replication_applier.000001' position: 4
2016-12-20T19:04:54.104551Z 4 [Note] Plugin group_replication reported: 'Group Replication applier module successfully initialized!'
2016-12-20T19:04:54.104570Z 4 [Note] Plugin group_replication reported: 'auto_increment_increment is set to 7'
2016-12-20T19:04:54.104574Z 4 [Note] Plugin group_replication reported: 'auto_increment_offset is set to 2'
2016-12-20T19:04:54.104692Z 0 [Note] Plugin group_replication reported: 'state 0 action xa_init'
2016-12-20T19:04:54.120776Z 0 [Note] Plugin group_replication reported: 'Successfully bound to 0.0.0.0:24902 (socket=63).'
2016-12-20T19:04:54.120813Z 0 [Note] Plugin group_replication reported: 'Successfully set listen backlog to 32 (socket=63)!'
2016-12-20T19:04:54.120819Z 0 [Note] Plugin group_replication reported: 'Successfully unblocked socket (socket=63)!'
2016-12-20T19:04:54.120850Z 0 [Note] Plugin group_replication reported: 'Ready to accept incoming connections on 0.0.0.0:24902 (socket=63)!'
2016-12-20T19:04:54.120863Z 0 [Note] Plugin group_replication reported: 'connecting to 127.0.0.1 24902'
2016-12-20T19:04:54.120976Z 0 [Note] Plugin group_replication reported: 'client connected to 127.0.0.1 24902 fd 65'
2016-12-20T19:04:54.121167Z 0 [Note] Plugin group_replication reported: 'connecting to 127.0.0.1 24902'
2016-12-20T19:04:54.121225Z 0 [Note] Plugin group_replication reported: 'client connected to 127.0.0.1 24902 fd 67'
2016-12-20T19:04:54.121351Z 0 [Note] Plugin group_replication reported: 'connecting to 127.0.0.1 24902'
2016-12-20T19:04:54.121405Z 0 [Note] Plugin group_replication reported: 'client connected to 127.0.0.1 24902 fd 69'
2016-12-20T19:04:54.121623Z 0 [Note] Plugin group_replication reported: 'connecting to 127.0.0.1 24902'
2016-12-20T19:04:54.121677Z 0 [Note] Plugin group_replication reported: 'client connected to 127.0.0.1 24902 fd 71'
2016-12-20T19:04:54.121835Z 0 [Note] Plugin group_replication reported: 'connecting to 127.0.0.1 24902'
2016-12-20T19:04:54.121886Z 0 [Note] Plugin group_replication reported: 'client connected to 127.0.0.1 24902 fd 73'
2016-12-20T19:04:54.122025Z 0 [Note] Plugin group_replication reported: 'connecting to 127.0.0.1 24902'
2016-12-20T19:04:54.122075Z 0 [Note] Plugin group_replication reported: 'client connected to 127.0.0.1 24902 fd 75'
2016-12-20T19:04:54.122214Z 0 [Note] Plugin group_replication reported: 'connecting to 127.0.0.1 24901'
2016-12-20T19:04:54.122264Z 0 [Note] Plugin group_replication reported: 'client connected to 127.0.0.1 24901 fd 77'
2016-12-20T19:04:55.218984Z 0 [Note] Plugin group_replication reported: 'state 4257 action xa_snapshot'
2016-12-20T19:04:55.219113Z 0 [Note] Plugin group_replication reported: 'new state x_recover'
2016-12-20T19:04:55.219135Z 0 [Note] Plugin group_replication reported: 'state 4277 action xa_complete'
2016-12-20T19:04:55.219263Z 0 [Note] Plugin group_replication reported: 'new state x_run'
2016-12-20T19:04:59.599108Z 0 [Note] Plugin group_replication reported: 'Starting group replication recovery with view_id 14822600825777673:2'
2016-12-20T19:04:59.599315Z 12 [Note] Plugin group_replication reported: 'Establishing group recovery connection with a possible donor. Attempt 1/10'
2016-12-20T19:04:59.610513Z 12 [Note] 'CHANGE MASTER TO FOR CHANNEL 'group_replication_recovery' executed'. Previous state master_host='', master_port= 3306, master_log_file='', master_log_pos= 4, master_bind=''. New state master_host='localhost.localdomain', master_port= 24081, master_log_file='', master_log_pos= 4, master_bind=''.
2016-12-20T19:04:59.613200Z 12 [Note] Plugin group_replication reported: 'Establishing connection to a group replication recovery donor 0350d3cf-c61e-11e6-a0c0-005056b8286c at localhost.localdomain port: 24081.'
2016-12-20T19:04:59.613430Z 14 [Warning] Storing MySQL user name or password information in the master info repository is not secure and is therefore not recommended. Please consider using the USER and PASSWORD connection options for START SLAVE; see the 'START SLAVE Syntax' in the MySQL Manual for more information.
2016-12-20T19:04:59.614032Z 14 [ERROR] Slave I/O for channel 'group_replication_recovery': error connecting to master 'repl@localhost.localdomain:24081' - retry-time: 60  retries: 1, Error_code: 1045
2016-12-20T19:04:59.614053Z 14 [Note] Slave I/O thread for channel 'group_replication_recovery' killed while connecting to master
2016-12-20T19:04:59.614059Z 14 [Note] Slave I/O thread exiting for channel 'group_replication_recovery', read up to log 'FIRST', position 4
2016-12-20T19:04:59.628505Z 15 [Note] Slave SQL thread for channel 'group_replication_recovery' initialized, starting replication in log 'FIRST' at position 0, relay log '/opt/app/mysql_node2/logs/relay-bin-group_replication_recovery.000001' position: 4
2016-12-20T19:04:59.628672Z 12 [ERROR] Plugin group_replication reported: 'There was an error when connecting to the donor server. Check group replication recovery's connection credentials.'
2016-12-20T19:04:59.628854Z 12 [Note] Plugin group_replication reported: 'Retrying group recovery connection with another donor. Attempt 2/10'
2016-12-20T19:05:59.629143Z 12 [Note] 'CHANGE MASTER TO FOR CHANNEL 'group_replication_recovery' executed'. Previous state master_host='localhost.localdomain', master_port= 24081, master_log_file='', master_log_pos= 4, master_bind=''. New state master_host='localhost.localdomain', master_port= 24081, master_log_file='', master_log_pos= 4, master_bind=''.
2016-12-20T19:05:59.631058Z 12 [Note] Plugin group_replication reported: 'Establishing connection to a group replication recovery donor 0350d3cf-c61e-11e6-a0c0-005056b8286c at localhost.localdomain port: 24081.'
2016-12-20T19:05:59.632227Z 16 [Warning] Storing MySQL user name or password information in the master info repository is not secure and is therefore not recommended. Please consider using the USER and PASSWORD connection options for START SLAVE; see the 'START SLAVE Syntax' in the MySQL Manual for more information.
2016-12-20T19:05:59.632761Z 16 [ERROR] Slave I/O for channel 'group_replication_recovery': error connecting to master 'repl@localhost.localdomain:24081' - retry-time: 60  retries: 1, Error_code: 1045
2016-12-20T19:05:59.632780Z 16 [Note] Slave I/O thread for channel 'group_replication_recovery' killed while connecting to master
2016-12-20T19:05:59.632785Z 16 [Note] Slave I/O thread exiting for channel 'group_replication_recovery', read up to log 'FIRST', position 4
2016-12-20T19:05:59.632878Z 12 [ERROR] Plugin group_replication reported: 'There was an error when connecting to the donor server. Check group replication recovery's connection credentials.'

7. Then set up the third node, node3, and attempt to join the group.

[node3]

mysql>SET SQL_LOG_BIN=0;
mysql>CREATE USER repl@'%';
mysql>GRANT REPLICATION SLAVE ON *.* TO repl@'%' IDENTIFIED BY 'repl';
mysql>FLUSH PRIVILEGES;
mysql>SET SQL_LOG_BIN=1;
mysql>CHANGE MASTER TO MASTER_USER='repl',MASTER_PASSWORD='repl' FOR CHANNEL 'group_replication_recovery';

mysql> START GROUP_REPLICATION;
ERROR 3092 (HY000): The server is not configured properly to be an active member of the group. Please see more details on error log.

[node3 error_log]
2016-12-20T19:10:37.648782Z 4 [ERROR] Plugin group_replication reported: 'The group name option is mandatory'

Restart and then execute:
mysql>reset master;
mysql> START GROUP_REPLICATION;
Query OK, 0 rows affected (2.56 sec)

[node3 error_log]
2016-12-20T19:13:25.086551Z 4 [Note] Plugin group_replication reported: 'Group Replication applier module successfully initialized!'
2016-12-20T19:13:25.086590Z 4 [Note] Plugin group_replication reported: 'auto_increment_increment is set to 7'
2016-12-20T19:13:25.086610Z 4 [Note] Plugin group_replication reported: 'auto_increment_offset is set to 3'
2016-12-20T19:13:25.086680Z 0 [Note] Plugin group_replication reported: 'state 4257 action xa_init'
2016-12-20T19:13:25.086721Z 0 [Note] Plugin group_replication reported: 'Successfully bound to 0.0.0.0:24903 (socket=64).'
2016-12-20T19:13:25.086744Z 0 [Note] Plugin group_replication reported: 'Successfully set listen backlog to 32 (socket=64)!'
2016-12-20T19:13:25.086757Z 0 [Note] Plugin group_replication reported: 'Successfully unblocked socket (socket=64)!'
2016-12-20T19:13:25.086783Z 0 [Note] Plugin group_replication reported: 'Ready to accept incoming connections on 0.0.0.0:24903 (socket=64)!'
2016-12-20T19:13:25.086784Z 0 [Note] Plugin group_replication reported: 'connecting to 127.0.0.1 24903'
2016-12-20T19:13:25.086893Z 0 [Note] Plugin group_replication reported: 'client connected to 127.0.0.1 24903 fd 66'
2016-12-20T19:13:25.089319Z 0 [Note] Plugin group_replication reported: 'connecting to 127.0.0.1 24903'
2016-12-20T19:13:25.089409Z 0 [Note] Plugin group_replication reported: 'client connected to 127.0.0.1 24903 fd 69'
2016-12-20T19:13:25.089557Z 0 [Note] Plugin group_replication reported: 'connecting to 127.0.0.1 24903'
2016-12-20T19:13:25.089631Z 0 [Note] Plugin group_replication reported: 'client connected to 127.0.0.1 24903 fd 68'
2016-12-20T19:13:25.089762Z 0 [Note] Plugin group_replication reported: 'connecting to 127.0.0.1 24903'
2016-12-20T19:13:25.089812Z 0 [Note] Plugin group_replication reported: 'client connected to 127.0.0.1 24903 fd 72'
2016-12-20T19:13:25.089938Z 0 [Note] Plugin group_replication reported: 'connecting to 127.0.0.1 24903'
2016-12-20T19:13:25.089986Z 0 [Note] Plugin group_replication reported: 'client connected to 127.0.0.1 24903 fd 74'
2016-12-20T19:13:25.090122Z 0 [Note] Plugin group_replication reported: 'connecting to 127.0.0.1 24903'
2016-12-20T19:13:25.090187Z 0 [Note] Plugin group_replication reported: 'client connected to 127.0.0.1 24903 fd 76'
2016-12-20T19:13:25.090321Z 0 [Note] Plugin group_replication reported: 'connecting to 127.0.0.1 24901'
2016-12-20T19:13:25.090373Z 0 [Note] Plugin group_replication reported: 'client connected to 127.0.0.1 24901 fd 78'
2016-12-20T19:13:26.162191Z 0 [Note] Plugin group_replication reported: 'state 4257 action xa_snapshot'
2016-12-20T19:13:26.162344Z 0 [Note] Plugin group_replication reported: 'new state x_recover'
2016-12-20T19:13:26.162360Z 0 [Note] Plugin group_replication reported: 'state 4277 action xa_complete'
2016-12-20T19:13:26.162482Z 0 [Note] Plugin group_replication reported: 'new state x_run'
2016-12-20T19:13:27.608612Z 0 [Note] Plugin group_replication reported: 'Starting group replication recovery with view_id 14822600825777673:5'
2016-12-20T19:13:27.608864Z 19 [Note] Plugin group_replication reported: 'Establishing group recovery connection with a possible donor. Attempt 1/10'
2016-12-20T19:13:27.626854Z 19 [Note] 'CHANGE MASTER TO FOR CHANNEL 'group_replication_recovery' executed'. Previous state master_host='', master_port= 3306, master_log_file='', master_log_pos= 4, master_bind=''. New state master_host='localhost.localdomain', master_port= 24081, master_log_file='', master_log_pos= 4, master_bind=''.
2016-12-20T19:13:27.629640Z 19 [Note] Plugin group_replication reported: 'Establishing connection to a group replication recovery donor 0350d3cf-c61e-11e6-a0c0-005056b8286c at localhost.localdomain port: 24081.'
2016-12-20T19:13:27.629852Z 21 [Warning] Storing MySQL user name or password information in the master info repository is not secure and is therefore not recommended. Please consider using the USER and PASSWORD connection options for START SLAVE; see the 'START SLAVE Syntax' in the MySQL Manual for more information.
2016-12-20T19:13:27.630391Z 21 [ERROR] Slave I/O for channel 'group_replication_recovery': error connecting to master 'repl@localhost.localdomain:24081' - retry-time: 60  retries: 1, Error_code: 1045
2016-12-20T19:13:27.630413Z 21 [Note] Slave I/O thread for channel 'group_replication_recovery' killed while connecting to master
2016-12-20T19:13:27.630427Z 21 [Note] Slave I/O thread exiting for channel 'group_replication_recovery', read up to log 'FIRST', position 4
2016-12-20T19:13:27.645486Z 22 [Note] Slave SQL thread for channel 'group_replication_recovery' initialized, starting replication in log 'FIRST' at position 0, relay log '/opt/app/mysql_node3/logs/relay-bin-group_replication_recovery.000001' position: 4
2016-12-20T19:13:27.645647Z 19 [ERROR] Plugin group_replication reported: 'There was an error when connecting to the donor server. Check group replication recovery's connection credentials.'
2016-12-20T19:13:27.645855Z 19 [Note] Plugin group_replication reported: 'Retrying group recovery connection with another donor. Attempt 2/10'

Suggested fix:

I cannot achieve the desired effect by following the official documentation, and I consider this a serious problem. I hope the manual can be made more rigorous and correct.

I am very interested in the new Group Replication feature, but the cluster does not work properly, and joining the group is not very convenient. I hope MySQL will keep improving this feature so that users can install and use it more easily.
[10 Jan 2017 7:48] MySQL Verification Team
Hi,

Thanks for your report, but I don't consider this a bug. Our documentation is not perfect, but it is getting better every day. Stating that "group replication can not work" or that "from our documentation you can't get it to work" just doesn't cut it, as a number of people have set up Group Replication using the same resources. We are, of course, working on making the documentation better.

If you need help setting up Group Replication, the bug system is not the proper place for that; you can get support either through the MySQL forums or through paid support provided by Oracle or third parties.

take care
Bogdan