Description:
After several months running Group Replication with 3 nodes (single-primary mode), I am seeing many warning logs about replication timestamp errors. They appear to have started after the upgrade from 8.0.11 to 8.0.12. The 3 nodes are synchronized with chrony and are up to date. A large table storing GPS data is permanently being updated and inserted into.
The errors:
2019-01-02T13:53:52.176882Z 16 [Warning] [MY-010956] [Server] Invalid replication timestamps: original commit timestamp is more recent than the immediate commit timestamp. This may be an issue if delayed replication is active. Make sure that servers have their clocks set to the correct time. No further message will be emitted until after timestamps become valid again.
2019-01-02T13:53:52.299659Z 16 [Warning] [MY-010957] [Server] The replication timestamps have returned to normal values.
2019-01-02T13:53:52.383469Z 16 [Warning] [MY-010956] [Server] Invalid replication timestamps: original commit timestamp is more recent than the immediate commit timestamp. This may be an issue if delayed replication is active. Make sure that servers have their clocks set to the correct time. No further message will be emitted until after timestamps become valid again.
2019-01-02T13:53:52.448596Z 16 [Warning] [MY-010957] [Server] The replication timestamps have returned to normal values.
2019-01-02T13:53:52.450277Z 16 [Warning] [MY-010956] [Server] Invalid replication timestamps: original commit timestamp is more recent than the immediate commit timestamp. This may be an issue if delayed replication is active. Make sure that servers have their clocks set to the correct time. No further message will be emitted until after timestamps become valid again.
2019-01-02T13:53:52.597896Z 16 [Warning] [MY-010957] [Server] The replication timestamps have returned to normal values.
2019-01-02T13:53:52.677348Z 16 [Warning] [MY-010956] [Server] Invalid replication timestamps: original commit timestamp is more recent than the immediate commit timestamp. This may be an issue if delayed replication is active. Make sure that servers have their clocks set to the correct time. No further message will be emitted until after timestamps become valid again.
...
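For reference, the flip-flopping between the two warnings can be quantified directly from the error log. Below is a minimal sketch (not part of my setup, just an illustration) that parses lines in the format shown above and computes how long each "invalid timestamps" interval lasted; the sample lines are taken from the excerpt:

```python
from datetime import datetime

# Sample lines copied from the error log excerpt above (messages truncated).
log_lines = [
    "2019-01-02T13:53:52.176882Z 16 [Warning] [MY-010956] [Server] Invalid replication timestamps: ...",
    "2019-01-02T13:53:52.299659Z 16 [Warning] [MY-010957] [Server] The replication timestamps have returned to normal values.",
    "2019-01-02T13:53:52.383469Z 16 [Warning] [MY-010956] [Server] Invalid replication timestamps: ...",
    "2019-01-02T13:53:52.448596Z 16 [Warning] [MY-010957] [Server] The replication timestamps have returned to normal values.",
]

def flap_durations(lines):
    """Return the duration in seconds of each MY-010956 -> MY-010957 interval."""
    durations = []
    start = None
    for line in lines:
        ts_str, rest = line.split(" ", 1)
        # MySQL 8.0 error log timestamps are UTC with microseconds and a 'Z' suffix.
        ts = datetime.strptime(ts_str, "%Y-%m-%dT%H:%M:%S.%fZ")
        if "[MY-010956]" in rest:
            start = ts
        elif "[MY-010957]" in rest and start is not None:
            durations.append((ts - start).total_seconds())
            start = None
    return durations

print(flap_durations(log_lines))  # sub-second intervals, e.g. ~0.12s and ~0.07s
```

As the output shows, the state flips between "invalid" and "normal" within fractions of a second, which suggests a sub-second skew between the original and immediate commit timestamps rather than a grossly wrong clock.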
My configuration is:
[mysqld]
innodb_buffer_pool_size = 21474836480 #default 128M
innodb-buffer-pool-instances= 8 #default 1
innodb_lock_wait_timeout= 120 #default 50
relay-log=snibdd003-relay-bin
datadir=/var/lib/mysql
socket=/var/lib/mysql/mysql.sock
log-error=/var/log/mysqld.log
pid-file=/var/run/mysqld/mysqld.pid
server_id=X
gtid_mode=ON
enforce_gtid_consistency=ON
master_info_repository=TABLE
relay_log_info_repository=TABLE
binlog_checksum=NONE
log_slave_updates=ON
log_bin
binlog_format=ROW
innodb_file_per_table=ON
group_replication_auto_increment_increment=3
group_replication_transaction_size_limit=0
binlog_expire_logs_seconds=864000
slow_query_log=1
slow_query_log_file=/var/log/mysqld-slow.log
#### Group Replication specific options
plugin_load=group_replication.so
group_replication=FORCE_PLUS_PERMANENT
transaction_write_set_extraction=XXHASH64
group_replication_group_name="xxxxxxxxxxx"
group_replication_start_on_boot=ON
group_replication_bootstrap_group=OFF
group_replication_single_primary_mode=ON
group_replication_local_address="A.B.C.D:33061"
group_replication_group_seeds="A.B.C.D:33061,A.B.C.E:33061,A.B.C.F:33061"
disabled_storage_engines="MyISAM,BLACKHOLE,FEDERATED,ARCHIVE"
super_read_only = 1
event_scheduler=ON
#Fix for proxysql
default-authentication-plugin=mysql_native_password
collation-server = utf8mb4_general_ci
#Tuning
max_connections = 1000
How to repeat:
The warnings are emitted permanently during normal write load; no special steps are needed.