Bug #118868 Make the use_gr_notifications option enabled by default to cut Router's metadata handling overhead
Submitted: 20 Aug 21:48  Modified: 21 Aug 6:23
Reporter: Przemyslaw Malkowski
Status: Verified
Category: MySQL Router  Severity: S4 (Feature request)
Version: 8.0, 8.4, 9.4  OS: Any
Assigned to:  CPU Architecture: Any
Tags: group replication, metadata cache, router

[20 Aug 21:48] Przemyslaw Malkowski
Description:
Thanks to https://dev.mysql.com/worklog/task/?id=10719, we have great new functionality that allows switching from TTL-based metadata cache refresh to event-based notifications using an X protocol listener.

The old behavior brings a lot of unnecessary overhead in terms of binlog writes to the `mysql_innodb_cluster_metadata`.`routers` table: regardless of whether anything has changed, it always executes UPDATE queries via the ClusterMetadata::update_router_last_check_in function:
https://github.com/mysql/mysql-server/blob/8.4/router/src/metadata_cache/src/cluster_metad...
With ROW-based replication, this puts the whole row into the binary log event twice each time (before and after images).
In version 8.4, these rows are many times bigger because of a major change: the router now stores its whole configuration in the metadata:
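For illustration, the periodic check-in is essentially a statement of this shape (a sketch inferred from the function name; the exact column list and WHERE clause are assumptions, not verified against the source):

```sql
-- Executed on every TTL tick by every router, even on an idle cluster;
-- with ROW-based replication the full row lands in the binlog twice.
UPDATE mysql_innodb_cluster_metadata.routers
   SET last_check_in = NOW()
 WHERE router_id = 1;  -- this router's id (hypothetical value)
```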
https://dev.mysql.com/doc/mysql-router/8.4/en/mysql-router-configuration-file-locations.ht...

With the router's use_gr_notifications=1 option, the overhead drops to an absolute minimum, as the metadata table is updated only when the cluster actually changes. I don't see any reason why this behavior should not be the default. The TTL-based refresh remains active as a fallback in case the X protocol listener malfunctions. If there are any potential drawbacks, please add them to the documentation.
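For reference, enabling the option today looks roughly like this in mysqlrouter.conf (a sketch; the section suffix and ttl value are placeholder assumptions, not taken from a real deployment):

```ini
# Enable event-based refresh via GR notifications; the ttl option
# stays in place as the fallback refresh interval.
[metadata_cache:mycluster]
use_gr_notifications=1
ttl=0.5
```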

How to repeat:
Add more 8.4 routers to an idle InnoDB Cluster and observe how much the binlog writes are amplified.
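One simple way to observe this (a sketch; no cluster-specific names assumed) is to compare binary log sizes before and after letting the idle cluster sit with several routers connected:

```sql
-- Note File_size for the current binlog, wait a few minutes with
-- N routers attached and TTL-based refresh active, then re-check:
SHOW BINARY LOGS;
-- ... wait ...
SHOW BINARY LOGS;
-- File_size keeps growing even though no cluster change occurred.
```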

Suggested fix:
Make use_gr_notifications enabled by default to minimize the write overhead.
[21 Aug 6:23] MySQL Verification Team
Hello Przemyslaw,

Thank you for the reasonable feature request!

Thanks,
Umesh