Abnormal Restoration of the Primary Database
There are two scenarios for restoring a primary database that has been removed from the MGR cluster. In the first scenario, the data files of the primary database are not damaged and the database can be started normally. After it is started, it rejoins the cluster as a secondary database and continues to synchronize logs with the current primary database. In the second scenario, the data files of the removed primary database are damaged, or the database cannot be started for other reasons. In this case, you need to reinitialize the database, add it to the MGR cluster again, and synchronize the data again.
Primary Database Can Be Started Normally
- Add the primary database to the MGR again.
- View the primary and secondary roles of the original databases.
Log in to the mysql-1 database.
kubectl exec -it mysql-1 -n ns-mysql-test -- mysql -uroot -p
After logging in to the MySQL database, check the information about the primary and secondary databases of the MGR.
SELECT * FROM performance_schema.replication_group_members;
+---------------------------+--------------------------------------+-------------+-------------+--------------+-------------+----------------+
| CHANNEL_NAME              | MEMBER_ID                            | MEMBER_HOST | MEMBER_PORT | MEMBER_STATE | MEMBER_ROLE | MEMBER_VERSION |
+---------------------------+--------------------------------------+-------------+-------------+--------------+-------------+----------------+
| group_replication_applier | e44f902b-bf50-11ea-89d7-000000536bb6 | mysql-1     | 3306        | ONLINE       | PRIMARY     | 8.0.19         |
| group_replication_applier | e51ea27b-bf50-11ea-b256-000000536bb7 | mysql-2     | 3306        | ONLINE       | SECONDARY   | 8.0.19         |
| group_replication_applier | e5b471d9-bf50-11ea-8639-000000536bb8 | mysql-3     | 3306        | ONLINE       | SECONDARY   | 8.0.19         |
+---------------------------+--------------------------------------+-------------+-------------+--------------+-------------+----------------+
Confirm that the original primary database is mysql-1 and the secondary databases are mysql-2 and mysql-3.
Exit the database.
exit;
- Run the following command to delete the mysql-1 pod of the primary database (to simulate a fault on the primary database):
kubectl delete pod mysql-1 -n ns-mysql-test
pod "mysql-1" deleted
kubectl get pod -A
NAMESPACE       NAME                                   READY   STATUS    RESTARTS   AGE
kube-system     coredns-66bff467f8-9zmdb               1/1     Running   0          2d
kube-system     coredns-66bff467f8-f5ghl               1/1     Running   0          2d
kube-system     etcd-centos-10                         1/1     Running   0          2d
kube-system     kube-apiserver-centos-10               1/1     Running   0          2d
kube-system     kube-controller-manager-centos-10      1/1     Running   0          2d
kube-system     kube-ovn-cni-97f48                     1/1     Running   0          2d
kube-system     kube-ovn-controller-749cfd66d7-7n64n   1/1     Running   0          2d
kube-system     kube-ovn-pinger-c756s                  1/1     Running   0          2d
kube-system     kube-proxy-gmfmg                       1/1     Running   0          2d
kube-system     kube-scheduler-centos-10               1/1     Running   0          2d
kube-system     ovn-central-6458555b48-lkhp2           1/1     Running   0          2d
kube-system     ovs-ovn-hsr2f                          1/1     Running   0          2d
ns-mysql-test   mysql-2                                1/1     Running   0          46m
ns-mysql-test   mysql-3                                1/1     Running   0          25m
- Check the primary and secondary databases again.
Log in to the mysql-2 database.
kubectl exec -it mysql-2 -n ns-mysql-test -- mysql -uroot -p
Run the following command to view the information about the primary and secondary databases of the MGR:
SELECT * FROM performance_schema.replication_group_members;
+---------------------------+--------------------------------------+-------------+-------------+--------------+-------------+----------------+
| CHANNEL_NAME              | MEMBER_ID                            | MEMBER_HOST | MEMBER_PORT | MEMBER_STATE | MEMBER_ROLE | MEMBER_VERSION |
+---------------------------+--------------------------------------+-------------+-------------+--------------+-------------+----------------+
| group_replication_applier | e51ea27b-bf50-11ea-b256-000000536bb7 | mysql-2     | 3306        | ONLINE       | PRIMARY     | 8.0.19         |
| group_replication_applier | e5b471d9-bf50-11ea-8639-000000536bb8 | mysql-3     | 3306        | ONLINE       | SECONDARY   | 8.0.19         |
+---------------------------+--------------------------------------+-------------+-------------+--------------+-------------+----------------+
The command output indicates that mysql-2 has become the primary database.
Exit the mysql-2 database.
exit;
- Run the following commands to restore the deleted pod. (In the preceding step, the delete command was used to simulate a failure of the mysql-1 pod.)
kubectl apply -f ./mysql_deployment.yaml
namespace/ns-mysql-test unchanged
subnet.kubeovn.io/ns-mysql-test unchanged
pod/mysql-1 created
pod/mysql-2 configured
pod/mysql-3 configured
kubectl get pods -A
NAMESPACE       NAME                                   READY   STATUS    RESTARTS   AGE
kube-system     coredns-66bff467f8-9zmdb               1/1     Running   0          2d1h
kube-system     coredns-66bff467f8-f5ghl               1/1     Running   0          2d1h
kube-system     etcd-centos-10                         1/1     Running   0          2d1h
kube-system     kube-apiserver-centos-10               1/1     Running   0          2d1h
kube-system     kube-controller-manager-centos-10      1/1     Running   0          2d1h
kube-system     kube-ovn-cni-97f48                     1/1     Running   0          2d1h
kube-system     kube-ovn-controller-749cfd66d7-7n64n   1/1     Running   0          2d1h
kube-system     kube-ovn-pinger-c756s                  1/1     Running   0          2d1h
kube-system     kube-proxy-gmfmg                       1/1     Running   0          2d1h
kube-system     kube-scheduler-centos-10               1/1     Running   0          2d1h
kube-system     ovn-central-6458555b48-lkhp2           1/1     Running   0          2d1h
kube-system     ovs-ovn-hsr2f                          1/1     Running   0          2d1h
ns-mysql-test   mysql-1                                1/1     Running   0          22s
ns-mysql-test   mysql-2                                1/1     Running   0          61m
ns-mysql-test   mysql-3                                1/1     Running   0          40m
The command output indicates that mysql-1 has been restarted.
Because mysql-1 is restored on the original physical machine, the MGR configuration in its database and configuration file is not lost. You only need to run the MGR startup command on mysql-1 to add it back to the group.
- Run the following command on the physical machine where mysql-1 is located to log in to the MySQL database in the mysql-1 pod. Enter the password of the MySQL root user when prompted.
kubectl exec -it mysql-1 -n ns-mysql-test -- mysql -uroot -p
- Run the following command to enable the MGR function of mysql-1:
START GROUP_REPLICATION;
- View the information about the primary and secondary databases.
SELECT * FROM performance_schema.replication_group_members;
+---------------------------+--------------------------------------+-------------+-------------+--------------+-------------+----------------+
| CHANNEL_NAME              | MEMBER_ID                            | MEMBER_HOST | MEMBER_PORT | MEMBER_STATE | MEMBER_ROLE | MEMBER_VERSION |
+---------------------------+--------------------------------------+-------------+-------------+--------------+-------------+----------------+
| group_replication_applier | e44f902b-bf50-11ea-89d7-000000536bb6 | mysql-1     | 3306        | ONLINE       | SECONDARY   | 8.0.19         |
| group_replication_applier | e51ea27b-bf50-11ea-b256-000000536bb7 | mysql-2     | 3306        | ONLINE       | PRIMARY     | 8.0.19         |
| group_replication_applier | e5b471d9-bf50-11ea-8639-000000536bb8 | mysql-3     | 3306        | ONLINE       | SECONDARY   | 8.0.19         |
+---------------------------+--------------------------------------+-------------+-------------+--------------+-------------+----------------+
3 rows in set (0.00 sec)
The command output indicates that mysql-1 has been added to the group again and it serves as a secondary database.
- Manually switch the primary database to the specified target database.
As described in the preceding section, after mysql-1 became abnormal, mysql-2 was automatically promoted to the primary database. After mysql-1 is restored and rejoins the group, it serves as a secondary database, and mysql-2 remains the primary database. This section uses switching the primary role back to mysql-1 as an example to describe how to manually switch the primary database.
- Run the following command to log in to the MySQL database in the mysql-1 pod, and enter the password of the root user of the MySQL database as prompted:
kubectl exec -it mysql-1 -n ns-mysql-test -- mysql -uroot -p
- After logging in to the MySQL database, execute the show variables like '%server_%'; statement to view the server_uuid of the current MySQL database (that is, its MEMBER_ID in the MGR).
show variables like '%server_%';
+---------------------------------------------------+--------------------------------------+
| Variable_name                                     | Value                                |
+---------------------------------------------------+--------------------------------------+
| group_replication_recovery_ssl_verify_server_cert | OFF                                  |
| immediate_server_version                          | 999999                               |
| innodb_ft_server_stopword_table                   |                                      |
| original_server_version                           | 999999                               |
| server_id                                         | 1                                    |
| server_id_bits                                    | 32                                   |
| server_uuid                                       | e44f902b-bf50-11ea-89d7-000000536bb6 |
+---------------------------------------------------+--------------------------------------+
7 rows in set (0.01 sec)
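When scripting this lookup, the server_uuid can also be read non-interactively with the mysql client's -N (skip column names) and -e options. The sketch below is an assumption about how you would wire this up, not part of the original procedure: the kubectl invocation is shown as a comment (it assumes the pod names used in this section and a password supplied via the standard MYSQL_PWD variable), and the parsing step is demonstrated on a captured copy of the row so it can run anywhere.

```shell
# Live form (assumption; adapt credentials to your deployment):
#   uuid=$(kubectl exec -i mysql-1 -n ns-mysql-test -- \
#            mysql -uroot -N -e "SHOW VARIABLES LIKE 'server_uuid';" | awk '{print $2}')
# With -N the client prints a tab-separated "server_uuid<TAB><uuid>" row;
# the awk step is demonstrated here on a captured copy of that row.
row="$(printf 'server_uuid\te44f902b-bf50-11ea-89d7-000000536bb6')"
uuid="$(printf '%s\n' "$row" | awk '{print $2}')"
echo "$uuid"   # e44f902b-bf50-11ea-89d7-000000536bb6
```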
- Using the obtained server_uuid as the parameter, execute the following statement to switch the primary role to the target database:
SELECT group_replication_set_as_primary('<server_uuid of the target database>');
If the switchover takes a long time, you can execute the following statement to view the switchover progress:
SELECT event_name, work_completed, work_estimated FROM performance_schema.events_stages_current WHERE event_name LIKE "%stage/group_rpl%";
The following is an example of switching the primary database to mysql-1:
SELECT group_replication_set_as_primary('e44f902b-bf50-11ea-89d7-000000536bb6');
+--------------------------------------------------------------------------+
| group_replication_set_as_primary('e44f902b-bf50-11ea-89d7-000000536bb6') |
+--------------------------------------------------------------------------+
| Primary server switched to: e44f902b-bf50-11ea-89d7-000000536bb6         |
+--------------------------------------------------------------------------+
1 row in set (1.00 sec)

SELECT event_name, work_completed, work_estimated FROM performance_schema.events_stages_current WHERE event_name LIKE "%stage/group_rpl%";
Empty set (0.00 sec)
SELECT * FROM performance_schema.replication_group_members;
+---------------------------+--------------------------------------+-------------+-------------+--------------+-------------+----------------+
| CHANNEL_NAME              | MEMBER_ID                            | MEMBER_HOST | MEMBER_PORT | MEMBER_STATE | MEMBER_ROLE | MEMBER_VERSION |
+---------------------------+--------------------------------------+-------------+-------------+--------------+-------------+----------------+
| group_replication_applier | e44f902b-bf50-11ea-89d7-000000536bb6 | mysql-1     | 3306        | ONLINE       | PRIMARY     | 8.0.19         |
| group_replication_applier | e51ea27b-bf50-11ea-b256-000000536bb7 | mysql-2     | 3306        | ONLINE       | SECONDARY   | 8.0.19         |
| group_replication_applier | e5b471d9-bf50-11ea-8639-000000536bb8 | mysql-3     | 3306        | ONLINE       | SECONDARY   | 8.0.19         |
+---------------------------+--------------------------------------+-------------+-------------+--------------+-------------+----------------+
3 rows in set (0.00 sec)
The command output indicates that mysql-1 has been successfully switched to the primary database.
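The manual switchover steps above can be wrapped in a small script. This is a hedged sketch, not a tested procedure: the pod and namespace names match the examples in this section, the uuid is the one obtained earlier, and the kubectl line is left as a comment (it assumes root credentials are available inside the pod); only the statement construction is exercised here.

```shell
# Hedged sketch of scripting the switchover above; adapt names and
# credential handling to your deployment.
target_uuid="e44f902b-bf50-11ea-89d7-000000536bb6"   # server_uuid of mysql-1
stmt="SELECT group_replication_set_as_primary('${target_uuid}');"
echo "$stmt"   # SELECT group_replication_set_as_primary('e44f902b-bf50-11ea-89d7-000000536bb6');
# Live form (assumption):
#   kubectl exec -i mysql-1 -n ns-mysql-test -- mysql -uroot -p -e "$stmt"
```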
Primary Database Cannot Be Started Normally
If the original primary database cannot be restarted because its data files are damaged, it must be reinitialized and added to the MGR cluster as a new node. For details, see Adding a New Member to the MGR. After the node joins the group, data is synchronized again.
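For orientation, the rejoin typically follows the same pattern as adding any new member. The statements below are a hedged sketch of that flow, not the exact procedure from Adding a New Member to the MGR: the recovery user name repl and the password placeholder are assumptions, and the CHANGE MASTER ... FOR CHANNEL syntax matches MySQL 8.0.19 as used in this section. They are only printed via a heredoc here so the sketch itself runs anywhere.

```shell
# Hedged sketch: statements typically run inside the freshly initialized
# member before it rejoins the group. 'repl' and the password placeholder
# are assumptions; see "Adding a New Member to the MGR" for the real steps.
cat <<'EOF'
CHANGE MASTER TO MASTER_USER='repl', MASTER_PASSWORD='<recovery password>'
  FOR CHANNEL 'group_replication_recovery';
START GROUP_REPLICATION;
SELECT MEMBER_HOST, MEMBER_STATE, MEMBER_ROLE
  FROM performance_schema.replication_group_members;
EOF
```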