Bug #101635 | group_replication_local_address port overflow | |
---|---|---|---
Submitted: | 17 Nov 2020 6:29 | Modified: | 24 Nov 2020 9:24
Reporter: | phoenix Zhang (OCA) | Email Updates: |
Status: | Verified | Impact on me: |
Category: | MySQL Server: Group Replication | Severity: | S3 (Non-critical)
Version: | 8.0.21, 8.0.22 | OS: | Any
Assigned to: | | CPU Architecture: | Any
Tags: | group_replication | |
[17 Nov 2020 6:29]
phoenix Zhang
[24 Nov 2020 8:33]
MySQL Verification Team
Hello phoenix Zhang,

Thank you for the report and feedback. Sorry for taking time on this, but I tried it on 8.0.21 and even the latest GA 8.0.22 and am not seeing any issues. Is there anything else I'm missing? Please let me know.

--

rm -rf 101635/
bin/mysqld --defaults-file=101635.cnf --initialize-insecure --basedir=$PWD --datadir=$PWD/101635 --log-error-verbosity=3
bin/mysqld --defaults-file=101635.cnf --basedir=$PWD --datadir=$PWD/101635 --core-file --socket=/tmp/mysql_ushastry.sock --port=3333 --log-error=$PWD/101635/log.err --mysqlx-port=33330 --mysqlx-socket=/tmp/mysql_x_ushastry.sock --log-error-verbosity=3 --secure-file-priv=/tmp/ 2>&1 &

[mysqld]
disabled_storage_engines="MyISAM,BLACKHOLE,FEDERATED,ARCHIVE,MEMORY"
server_id=1
gtid_mode=ON
enforce_gtid_consistency=ON
binlog_checksum=NONE
log_bin=binlog
log_slave_updates=ON
binlog_format=ROW
master_info_repository=TABLE
relay_log_info_repository=TABLE
transaction_write_set_extraction=XXHASH64
plugin_load_add='group_replication.so'
group_replication_group_name="aaaaaaaa-aaaa-aaaa-aaaa-aaaaaaaaaaaa"
group_replication_start_on_boot=off
group_replication_local_address="localhost:116002"
group_replication_group_seeds="localhost:116002,localhost:116003,localhost:116004"
group_replication_bootstrap_group=off

- bin/mysql -uroot -S /tmp/mysql_ushastry.sock
Welcome to the MySQL monitor.  Commands end with ; or \g.
Your MySQL connection id is 8
Server version: 8.0.22 MySQL Community Server - GPL
.
mysql> CREATE USER 'rpl_user'@'%' IDENTIFIED BY 'rpl_pass';
mysql> GRANT REPLICATION SLAVE ON *.* TO rpl_user@'%';
Query OK, 0 rows affected (0.01 sec)

mysql> CREATE USER 'rpl_user'@'localhost' IDENTIFIED BY 'rpl_pass';
Query OK, 0 rows affected (0.01 sec)

mysql> GRANT REPLICATION SLAVE ON *.* TO rpl_user@'localhost';
Query OK, 0 rows affected (0.01 sec)

mysql> set global group_replication_recovery_get_public_key=ON;
Query OK, 0 rows affected (0.00 sec)

mysql> SHOW GLOBAL VARIABLES LIKE 'group_replication%';
+-----------------------------------------------------+----------------------------------------------------+
| Variable_name | Value |
+-----------------------------------------------------+----------------------------------------------------+
| group_replication_advertise_recovery_endpoints | DEFAULT |
| group_replication_allow_local_lower_version_join | OFF |
| group_replication_auto_increment_increment | 7 |
| group_replication_autorejoin_tries | 3 |
| group_replication_bootstrap_group | OFF |
| group_replication_clone_threshold | 9223372036854775807 |
| group_replication_communication_debug_options | GCS_DEBUG_NONE |
| group_replication_communication_max_message_size | 10485760 |
| group_replication_components_stop_timeout | 31536000 |
| group_replication_compression_threshold | 1000000 |
| group_replication_consistency | EVENTUAL |
| group_replication_enforce_update_everywhere_checks | OFF |
| group_replication_exit_state_action | READ_ONLY |
| group_replication_flow_control_applier_threshold | 25000 |
| group_replication_flow_control_certifier_threshold | 25000 |
| group_replication_flow_control_hold_percent | 10 |
| group_replication_flow_control_max_quota | 0 |
| group_replication_flow_control_member_quota_percent | 0 |
| group_replication_flow_control_min_quota | 0 |
| group_replication_flow_control_min_recovery_quota | 0 |
| group_replication_flow_control_mode | QUOTA |
| group_replication_flow_control_period | 1 |
| group_replication_flow_control_release_percent | 50 |
| group_replication_force_members | |
| group_replication_group_name | aaaaaaaa-aaaa-aaaa-aaaa-aaaaaaaaaaaa |
| group_replication_group_seeds | 127.0.0.1:116002,127.0.0.1:116003,127.0.0.1:116004 |
| group_replication_gtid_assignment_block_size | 1000000 |
| group_replication_ip_allowlist | AUTOMATIC |
| group_replication_ip_whitelist | AUTOMATIC |
| group_replication_local_address | 127.0.0.1:116002 |
.
.
+-----------------------------------------------------+----------------------------------------------------+
58 rows in set (0.01 sec)

mysql> SET GLOBAL group_replication_bootstrap_group=1;
Query OK, 0 rows affected (0.00 sec)

mysql> CHANGE MASTER TO MASTER_USER='rpl_user', MASTER_PASSWORD='rpl_pass' FOR CHANNEL 'group_replication_recovery';
Query OK, 0 rows affected, 2 warnings (0.11 sec)

mysql> CHANGE MASTER TO GET_MASTER_PUBLIC_KEY=1;
Query OK, 0 rows affected (0.11 sec)

mysql> START GROUP_REPLICATION;

mysql> SELECT * FROM performance_schema.replication_group_members;
+---------------------------+--------------------------------------+-------------------+-------------+--------------+-------------+----------------+
| CHANNEL_NAME | MEMBER_ID | MEMBER_HOST | MEMBER_PORT | MEMBER_STATE | MEMBER_ROLE | MEMBER_VERSION |
+---------------------------+--------------------------------------+-------------------+-------------+--------------+-------------+----------------+
| group_replication_applier | 7c28c907-2a64-11eb-988b-02001701fbd2 | support-cluster04 | 3333 | RECOVERING | SECONDARY | 8.0.22 |
+---------------------------+--------------------------------------+-------------------+-------------+--------------+-------------+----------------+
1 row in set (0.00 sec)

mysql> SELECT * FROM performance_schema.replication_group_members;
+---------------------------+--------------------------------------+-------------------+-------------+--------------+-------------+----------------+
| CHANNEL_NAME | MEMBER_ID | MEMBER_HOST | MEMBER_PORT | MEMBER_STATE | MEMBER_ROLE | MEMBER_VERSION |
+---------------------------+--------------------------------------+-------------------+-------------+--------------+-------------+----------------+
| group_replication_applier | 7c28c907-2a64-11eb-988b-02001701fbd2 | support-cluster04 | 3333 | ONLINE | PRIMARY | 8.0.22 |
+---------------------------+--------------------------------------+-------------------+-------------+--------------+-------------+----------------+
1 row in set (0.00 sec)

--
regards, Umesh
[24 Nov 2020 8:56]
phoenix Zhang
Hi, because port 50466 is not in use on your machine, no error is reported when you set group_replication_local_address="localhost:116002". The problem is that although the group replication port is configured as 116002, the port actually used becomes 50466 (the value wraps around at 65536), which is confusing for the user.
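The arithmetic behind the reported behavior: 116002 does not fit in an unsigned 16-bit port field, so it silently wraps to 116002 - 65536 = 50466. A minimal sketch of that truncation (illustrative Python, not MySQL's actual code):

```python
# Illustrative only: a port value wider than 16 bits wraps around
# when stored in an unsigned 16-bit field.
def effective_port(configured_port: int) -> int:
    """Port after unsigned 16-bit truncation (equivalent to % 65536)."""
    return configured_port & 0xFFFF

print(effective_port(116002))  # 50466, the port GCS actually announces
print(effective_port(3333))    # 3333, in-range ports are unchanged
```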
[24 Nov 2020 9:24]
MySQL Verification Team
Thank you for the feedback. Verified as described.

regards, Umesh
[24 Nov 2020 9:28]
MySQL Verification Team
- Start any instance 5.7 or 8.0 with port=50466

rm -rf 101727/
bin/mysqld --initialize-insecure --basedir=$PWD --datadir=$PWD/101727 --log-error-verbosity=3
bin/mysqld --no-defaults --basedir=$PWD --datadir=$PWD/101727 --core-file --socket=/tmp/mysql_ushastry.sock --port=50466 --log-error=$PWD/101727/log.err --log-error-verbosity=3 --secure-file-priv="" --skip-name-resolve --performance-schema=ON 2>&1 &

-- Now attempt to set up a GR node with group_replication_local_address="localhost:116002" and observe the reported issue

cat 101635.cnf
[mysqld]
disabled_storage_engines="MyISAM,BLACKHOLE,FEDERATED,ARCHIVE,MEMORY"
server_id=1
gtid_mode=ON
enforce_gtid_consistency=ON
binlog_checksum=NONE
log_bin=binlog
log_slave_updates=ON
binlog_format=ROW
master_info_repository=TABLE
relay_log_info_repository=TABLE
transaction_write_set_extraction=XXHASH64
plugin_load_add='group_replication.so'
loose_group_replication_group_name="aaaaaaaa-aaaa-aaaa-aaaa-aaaaaaaaaaaa"
loose_group_replication_start_on_boot=off
loose_group_replication_local_address="127.0.0.1:116002"
loose_group_replication_group_seeds="127.0.0.1:116002,127.0.0.1:116003,127.0.0.1:116004"
loose_group_replication_bootstrap_group=off

rm -rf 101635/
bin/mysqld --defaults-file=101635.cnf --initialize-insecure --basedir=$PWD --datadir=$PWD/101635 --log-error-verbosity=3
bin/mysqld --defaults-file=101635.cnf --basedir=$PWD --datadir=$PWD/101635 --core-file --socket=/tmp/mysql_ushastry_8.sock --port=3333 --log-error=$PWD/101635/log.err --mysqlx-port=33330 --mysqlx-socket=/tmp/mysql_x_ushastry.sock --log-error-verbosity=3 --secure-file-priv=/tmp/ 2>&1 &

- mysql> START GROUP_REPLICATION;
ERROR 3092 (HY000): The server is not configured properly to be an active member of the group. Please see more details on error log.

tail -f 101635/log.err
2020-11-24T09:22:13.191555Z 0 [ERROR] [MY-011735] [Repl] Plugin group_replication reported: '[GCS] Unable to announce tcp port 50466. Port already in use?'
2020-11-24T09:22:13.191619Z 0 [ERROR] [MY-011735] [Repl] Plugin group_replication reported: '[GCS] Error joining the group while waiting for the network layer to become ready.'
2020-11-24T09:22:13.255728Z 0 [ERROR] [MY-011735] [Repl] Plugin group_replication reported: '[GCS] The member was unable to join the group. Local port: 50466'
2020-11-24T09:22:26.789007Z 8 [ERROR] [MY-011640] [Repl] Plugin group_replication reported: 'Timeout on wait for view after joining group'
2020-11-24T09:22:26.789115Z 8 [Note] [MY-011649] [Repl] Plugin group_replication reported: 'Requesting to leave the group despite of not being a member'
2020-11-24T09:22:26.789146Z 8 [ERROR] [MY-011735] [Repl] Plugin group_replication reported: '[GCS] The member is leaving a group without being on one.'
2020-11-24T09:22:26.789977Z 13 [Note] [MY-010596] [Repl] Error reading relay log event for channel 'group_replication_applier': slave SQL thread was killed
2020-11-24T09:22:26.790018Z 13 [Note] [MY-010587] [Repl] Slave SQL thread for channel 'group_replication_applier' exiting, replication stopped in log 'FIRST' at position 0
2020-11-24T09:22:26.793771Z 10 [Note] [MY-011444] [Repl] Plugin group_replication reported: 'The group replication applier thread was killed.'
2020-11-24T09:22:26.794156Z 9 [System] [MY-011566] [Repl] Plugin group_replication reported: 'Setting super_read_only=OFF.'
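The log above shows the truncated port 50466 colliding with the other instance. A plausible fix direction is to validate the port range when the address is parsed, instead of letting the value wrap silently. A hypothetical sketch of such a check (Python; `parse_local_address` is an invented helper for illustration, not MySQL code):

```python
# Hypothetical validation sketch: reject out-of-range ports up front
# instead of letting them wrap to port % 65536.
def parse_local_address(addr: str) -> tuple[str, int]:
    """Split 'host:port' and reject ports outside the 16-bit range."""
    host, sep, port_str = addr.rpartition(":")
    if not sep or not port_str.isdigit():
        raise ValueError(f"malformed address: {addr!r}")
    port = int(port_str)
    if not 0 < port <= 65535:
        raise ValueError(f"port {port} out of range 1-65535 in {addr!r}")
    return host, port

print(parse_local_address("127.0.0.1:3333"))   # ('127.0.0.1', 3333)
# parse_local_address("localhost:116002") would raise ValueError
# rather than silently announcing port 50466.
```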