You already know about the Continuent Connector, the “secret sauce” that routes your application database traffic to the appropriate MySQL data source of your cluster.
Have you ever wondered how the Connector keeps track of the cluster configuration? How it always knows which host is the master (or masters in a Composite Multimaster topology), and which are slaves?
The Short Version
This information is actually held and maintained by the Managers, each of which monitors and takes care of its local MySQL node.
Each Connector maintains a single connection to one, and only one, Manager per data service. In a Composite Multimaster topology, that means one Manager chosen per site.
The Nitty Gritty
Every 3 seconds, the Manager sends a “ping” over this previously-established connection, which also serves as an opportunity to refresh the cluster state, just in case. As long as the Connector receives these pings, all is well.
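As a minimal sketch of this keepalive check (illustrative only, not the actual Connector implementation; the function name is hypothetical, the 3s and 30s values come from the description above):

```python
PING_INTERVAL = 3.0        # seconds between Manager pings
KEEPALIVE_TIMEOUT = 30.0   # --connector-keepalive-timeout default

def manager_is_alive(last_ping_time, now, timeout=KEEPALIVE_TIMEOUT):
    """The Manager is considered alive while its last ping is recent enough."""
    return (now - last_ping_time) <= timeout

# Last ping 9 seconds ago: fine. Last ping 31 seconds ago: stale.
print(manager_is_alive(100.0, 109.0))  # True
print(manager_is_alive(100.0, 131.0))  # False
```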
If a Connector does not hear from its chosen Manager for more than 30s (by default, defined by the
--connector-keepalive-timeout option), it will disconnect from that Manager and try to reach another one. Most of the time, the Connector discovers another available Manager, connects to it, and operations continue normally.
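The failover choice can be sketched roughly as follows. This is a hypothetical illustration, not Tungsten source code; `try_connect` is an invented stand-in for whatever reachability test the Connector actually performs:

```python
def find_new_manager(managers, stale, try_connect):
    """Return the first reachable Manager other than the stale one, else None."""
    for m in managers:
        if m != stale and try_connect(m):
            return m
    return None  # no Manager reachable

# Toy example: the Connector lost db1 but db2 still answers.
reachable = {"db2", "db3"}
print(find_new_manager(["db1", "db2", "db3"], "db1", lambda m: m in reachable))
```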
If the Connector is unable to reach any of the Managers (which could happen if the host is isolated from the network), it goes into the “ON HOLD” state, delaying any new connection requests while continuing to serve existing client connections.
If the Connector is still unable to reach a Manager after 30s (by default, defined by the
--connector-delay-before-offline option), it declares itself isolated, kills existing connections, and rejects new ones. This ensures there is no risk of writing data to a cluster node that may no longer hold the master role.
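Putting the pieces together, the Connector's behavior reads like a small state machine. Here is a simplified sketch of those transitions, assuming no other Manager is reachable once the pings stop (names are hypothetical; the two 30s defaults come from the options described above):

```python
KEEPALIVE_TIMEOUT = 30.0      # --connector-keepalive-timeout default
DELAY_BEFORE_OFFLINE = 30.0   # --connector-delay-before-offline default

def connector_state(seconds_since_last_ping, seconds_without_any_manager):
    """Simplified model of the Connector's state when its Manager goes quiet.

    ONLINE  -- pings arriving normally; traffic served as usual.
    ON HOLD -- no Manager reachable; new connections delayed,
               existing ones still served.
    OFFLINE -- isolated past the grace period; existing connections
               killed, new ones rejected.
    """
    if seconds_since_last_ping <= KEEPALIVE_TIMEOUT:
        return "ONLINE"
    if seconds_without_any_manager <= DELAY_BEFORE_OFFLINE:
        return "ON HOLD"
    return "OFFLINE"

print(connector_state(5, 0))    # ONLINE
print(connector_state(35, 10))  # ON HOLD
print(connector_state(35, 40))  # OFFLINE
```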
That’s how Continuent Clustering keeps your data safe!