---
title: Configure RHEL FCI for SQL Server on Linux
description: Learn to configure a Red Hat Enterprise Linux (RHEL) shared disk failover cluster instance (FCI) for SQL Server on Linux high availability.
author: rwestMSFT
ms.author: randolphwest
ms.reviewer: vanto
ms.date: 11/18/2024
ms.service: sql
ms.subservice: linux
ms.topic: install-set-up-deploy
ms.custom:
---
[!INCLUDE SQL Server - Linux]
This guide provides instructions to create a two-node shared disk failover cluster for [!INCLUDE ssnoversion-md] on Red Hat Enterprise Linux. The clustering layer is based on the Red Hat Enterprise Linux (RHEL) HA add-on, built on top of Pacemaker. The [!INCLUDE ssnoversion-md] instance is active on either one node or the other.
> [!NOTE]
> Access to the Red Hat HA add-on and documentation requires a subscription.

As the following diagram shows, storage is presented to two servers. Clustering components - Corosync and Pacemaker - coordinate communications and resource management. One of the servers has the active connection to the storage resources and the [!INCLUDE ssnoversion-md]. When Pacemaker detects a failure, the clustering components are responsible for moving the resources to the other node.
:::image type="content" source="media/sql-server-linux-shared-disk-cluster-red-hat-7-operate/linux-cluster.png" alt-text="Diagram of Red Hat Enterprise Linux 7 shared disk SQL Server cluster.":::
For more information on cluster configuration, resource agent options, and management, see the RHEL reference documentation.
At this point, [!INCLUDE ssnoversion-md] integration with Pacemaker isn't as coupled as it is with WSFC on Windows. From within [!INCLUDE ssnoversion-md], there's no knowledge of the presence of the cluster; all orchestration is outside-in, and the service is controlled as a standalone instance by Pacemaker. Also, for example, the cluster DMVs `sys.dm_os_cluster_nodes` and `sys.dm_os_cluster_properties` return no records.
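
You can confirm this behavior from the node that's currently running the instance. The following check is a hypothetical sketch: it assumes the `sqlcmd` tool is installed and that `<loginName>` is a login with permission to query server state.

```bash
# These cluster DMVs exist, but return no rows, because Pacemaker manages the
# instance from the outside and SQL Server isn't cluster-aware on Linux.
sqlcmd -S localhost -U <loginName> -Q "SELECT * FROM sys.dm_os_cluster_nodes; SELECT * FROM sys.dm_os_cluster_properties;"
```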
To use a connection string that points to a server name instead of an IP address, you have to register, in your DNS server, the IP address used to create the virtual IP resource (as explained in the following sections) with the chosen server name.
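
Once that DNS record exists, clients can connect by name. The following sketch is illustrative only; `<virtualServerName>`, `<loginName>`, and `<password>` are placeholders for the DNS name you register and a login on the instance.

```bash
# Hypothetical connection through the DNS name registered for the virtual IP resource.
sqlcmd -S <virtualServerName> -U <loginName> -P '<password>' -Q "SELECT @@SERVERNAME;"
```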
The following sections walk through the steps to set up a failover cluster solution.
To complete the following end-to-end scenario, you need two machines to deploy the two-node cluster, and another server to configure the NFS server. The following steps outline how to configure these servers.
The first step is to configure the operating system on the cluster nodes. For this walkthrough, use RHEL with a valid subscription for the HA add-on.
- Install and set up [!INCLUDE ssnoversion-md] on both nodes. For detailed instructions, see Installation guidance for SQL Server on Linux.
- Designate one node as primary and the other as secondary, for purposes of configuration. These terms are used throughout this guide.
- On the secondary node, stop and disable [!INCLUDE ssnoversion-md]. The following example stops and disables [!INCLUDE ssnoversion-md]:

  ```bash
  sudo systemctl stop mssql-server
  sudo systemctl disable mssql-server
  ```
  > [!NOTE]
  > At setup time, a Server Master Key is generated for the [!INCLUDE ssnoversion-md] instance and placed at `/var/opt/mssql/secrets/machine-key`. On Linux, [!INCLUDE ssnoversion-md] always runs as a local account called `mssql`. Because it's a local account, its identity isn't shared across nodes. Therefore, you need to copy the encryption key from the primary node to each secondary node, so each local `mssql` account can access it to decrypt the Server Master Key.
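
  One way to copy the key is shown in the following sketch. It assumes root SSH access between the nodes; `<secondaryNode>` is a placeholder for the secondary node's hostname or IP address.

  ```bash
  # On the primary node: copy the encryption key to the secondary node.
  sudo scp /var/opt/mssql/secrets/machine-key root@<secondaryNode>:/var/opt/mssql/secrets/machine-key

  # On the secondary node: give ownership of the key back to the local mssql account.
  sudo chown mssql:mssql /var/opt/mssql/secrets/machine-key
  sudo chmod 600 /var/opt/mssql/secrets/machine-key
  ```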
- On the primary node, create a [!INCLUDE ssnoversion-md] login for Pacemaker, and grant the login permission to run `sp_server_diagnostics`. Pacemaker uses this account to verify which node is running [!INCLUDE ssnoversion-md].

  ```bash
  sudo systemctl start mssql-server
  ```

  Connect to the [!INCLUDE ssnoversion-md] `master` database with the `sa` account and run the following:

  ```sql
  USE [master];
  GO
  CREATE LOGIN [<loginName>] WITH PASSWORD = N'<password>';
  ALTER SERVER ROLE [sysadmin] ADD MEMBER [<loginName>];
  ```
  > [!CAUTION]
  > [!INCLUDE password-complexity]

  Alternatively, you can set the permissions at a more granular level. The Pacemaker login requires `VIEW SERVER STATE` to query health status with `sp_server_diagnostics`, and `setupadmin` and `ALTER ANY LINKED SERVER` to update the FCI instance name with the resource name by running `sp_dropserver` and `sp_addserver`.
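
  For illustration, the following sketch applies those granular grants with `sqlcmd` instead of adding the login to the `sysadmin` role. `<loginName>` is a placeholder, and you're prompted for the `sa` password.

  ```bash
  # Hypothetical alternative to sysadmin membership: grant only the permissions
  # listed above to the Pacemaker login.
  sqlcmd -S localhost -U sa -Q "GRANT VIEW SERVER STATE TO [<loginName>];"
  sqlcmd -S localhost -U sa -Q "ALTER SERVER ROLE [setupadmin] ADD MEMBER [<loginName>];"
  sqlcmd -S localhost -U sa -Q "GRANT ALTER ANY LINKED SERVER TO [<loginName>];"
  ```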
- On the primary node, stop and disable [!INCLUDE ssnoversion-md].
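
  The commands are the same as the ones used earlier on the secondary node:

  ```bash
  sudo systemctl stop mssql-server
  sudo systemctl disable mssql-server
  ```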
- Configure the hosts file for each cluster node. The hosts file must include the IP address and name of every cluster node.

  Check the IP address for each node. The following script shows the IP address of your current node:

  ```bash
  sudo ip addr show
  ```

  Set the computer name on each node. Give each node a unique name that is 15 characters or less. Set the computer name by adding it to `/etc/hosts`. The following script lets you edit `/etc/hosts` with `vi`:

  ```bash
  sudo vi /etc/hosts
  ```

  The following example shows `/etc/hosts` with additions for two nodes named `sqlfcivm1` and `sqlfcivm2`:

  ```
  127.0.0.1     localhost localhost4 localhost4.localdomain4
  ::1           localhost localhost6 localhost6.localdomain6
  10.128.18.128 sqlfcivm1
  10.128.16.77  sqlfcivm2
  ```
In the next section, you'll configure shared storage and move your database files to that storage.
There are various solutions for providing shared storage. This walkthrough demonstrates configuring shared storage with NFS. We recommend following best practices and using Kerberos to secure NFS. For an example, see RHEL7: Use Kerberos to control access to NFS network shares.
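
For illustration only, a Kerberos-protected export in `/etc/exports` might look like the following line; the path and subnet are placeholders, and the full Kerberos configuration is covered in the linked article.

```
# Hypothetical export that requires Kerberos authentication, integrity,
# and privacy (sec=krb5p) instead of relying on host-based trust.
/mnt/nfs 10.8.8.0/24(rw,sync,sec=krb5p,no_subtree_check)
```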
> [!WARNING]
> If you don't secure NFS, then anyone who can access your network and spoof the IP address of a SQL Server node will be able to access your data files. As always, make sure you threat model your system before using it in production. Another storage option is to use an SMB file share.

> [!IMPORTANT]
> Hosting database files on an NFS server with a version earlier than 4 isn't supported in this release. This includes using NFS for shared disk failover clustering, as well as for databases on nonclustered instances. We're working on enabling other NFS server versions in upcoming releases.

On the NFS server, perform the following steps:
- Install `nfs-utils`:

  ```bash
  sudo yum -y install nfs-utils
  ```

- Enable and start `rpcbind`:

  ```bash
  sudo systemctl enable rpcbind && sudo systemctl start rpcbind
  ```

- Enable and start `nfs-server`:

  ```bash
  sudo systemctl enable nfs-server && sudo systemctl start nfs-server
  ```
- Edit `/etc/exports` to export the directory you want to share. You need one line for each share you want. For example:

  ```
  /mnt/nfs 10.8.8.0/24(rw,sync,no_subtree_check,no_root_squash)
  ```

- Export the shares:

  ```bash
  sudo exportfs -rav
  ```
- Verify that the paths are shared and exported. Run the following from the NFS server:

  ```bash
  sudo showmount -e
  ```

- Add an exception in SELinux:

  ```bash
  sudo setsebool -P nfs_export_all_rw 1
  ```

- Open the firewall on the server:

  ```bash
  sudo firewall-cmd --permanent --add-service=nfs
  sudo firewall-cmd --permanent --add-service=mountd
  sudo firewall-cmd --permanent --add-service=rpc-bind
  sudo firewall-cmd --reload
  ```
Do the following steps on all cluster nodes.
- Install `nfs-utils`:

  ```bash
  sudo yum -y install nfs-utils
  ```

- Open up the firewall on clients and NFS server:

  ```bash
  sudo firewall-cmd --permanent --add-service=nfs
  sudo firewall-cmd --permanent --add-service=mountd
  sudo firewall-cmd --permanent --add-service=rpc-bind
  sudo firewall-cmd --reload
  ```

- Verify that you can see the NFS shares on client machines:

  ```bash
  sudo showmount -e <IP OF NFS SERVER>
  ```

- Repeat these steps on all cluster nodes.
For more information about using NFS, see the following resources:
- NFS servers and firewalld | Stack Exchange
- Mounting an NFS Volume | Linux Network Administrators Guide
- NFS server configuration | Red Hat Customer Portal
- On the primary node only, save the database files to a temporary location. The following script creates a new temporary directory, copies the database files to the new directory, and removes the old database files. As [!INCLUDE ssnoversion-md] runs as the local user `mssql`, you need to make sure that, after the data transfer to the mounted share, the local user has read-write access to the share.

  ```bash
  sudo su mssql
  mkdir /var/opt/mssql/tmp
  cp /var/opt/mssql/data/* /var/opt/mssql/tmp
  rm /var/opt/mssql/data/*
  exit
  ```
- On all cluster nodes, edit the `/etc/fstab` file to include the mount command:

  ```
  <IP OF NFS SERVER>:<shared_storage_path> <database_files_directory_path> nfs timeo=14,intr
  ```

  The following example shows the edit:

  ```
  10.8.8.0:/mnt/nfs /var/opt/mssql/data nfs timeo=14,intr
  ```

  > [!NOTE]
  > If you use a File System (FS) resource as recommended here, there's no need to preserve the mounting command in `/etc/fstab`. Pacemaker takes care of mounting the folder when it starts the FS clustered resource. With the help of fencing, it ensures the FS is never mounted twice.
- Run the `mount -a` command for the system to update the mounted paths.
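
  For example, the following is a minimal sketch that remounts everything listed in `/etc/fstab`, and then uses `nfsstat` (part of `nfs-utils`) to confirm the mount and the negotiated NFS protocol version, which must be 4 or later as noted earlier:

  ```bash
  # Remount everything defined in /etc/fstab.
  sudo mount -a

  # Optional: list NFS mounts and check the negotiated NFS version (vers=4.x).
  sudo nfsstat -m
  ```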
- Copy the database and log files that you saved to `/var/opt/mssql/tmp` to the newly mounted share `/var/opt/mssql/data`. This step only needs to be done on the primary node. Make sure that you give read-write permissions to the `mssql` local user.

  ```bash
  sudo chown mssql /var/opt/mssql/data
  sudo chgrp mssql /var/opt/mssql/data
  sudo su mssql
  cp /var/opt/mssql/tmp/* /var/opt/mssql/data/
  rm /var/opt/mssql/tmp/*
  exit
  ```
- Validate that [!INCLUDE ssnoversion-md] starts successfully with the new file path. Do this on each node. At this point, only one node should run [!INCLUDE ssnoversion-md] at a time. They can't both run at the same time, because they would both try to access the data files simultaneously (to avoid accidentally starting [!INCLUDE ssnoversion-md] on both nodes, use a File System cluster resource to make sure the share isn't mounted twice by the different nodes). The following commands start [!INCLUDE ssnoversion-md], check the status, and then stop [!INCLUDE ssnoversion-md]:

  ```bash
  sudo systemctl start mssql-server
  sudo systemctl status mssql-server
  sudo systemctl stop mssql-server
  ```
At this point, both instances of [!INCLUDE ssnoversion-md] are configured to run with the database files on the shared storage. The next step is to configure [!INCLUDE ssnoversion-md] for Pacemaker.
- On both cluster nodes, create a file to store the [!INCLUDE ssnoversion-md] username and password for the Pacemaker login. The following commands create and populate this file:

  ```bash
  sudo touch /var/opt/mssql/secrets/passwd
  echo '<loginName>' | sudo tee -a /var/opt/mssql/secrets/passwd
  echo '<password>' | sudo tee -a /var/opt/mssql/secrets/passwd
  sudo chown root:root /var/opt/mssql/secrets/passwd
  sudo chmod 600 /var/opt/mssql/secrets/passwd
  ```

  > [!CAUTION]
  > [!INCLUDE password-complexity]
- On both cluster nodes, open the Pacemaker firewall ports. To open these ports with `firewalld`, run the following command:

  ```bash
  sudo firewall-cmd --permanent --add-service=high-availability
  sudo firewall-cmd --reload
  ```

  If you're using another firewall that doesn't have a built-in high-availability configuration, the following ports need to be opened for Pacemaker to be able to communicate with other nodes in the cluster:

  - TCP: Ports 2224, 3121, 21064
  - UDP: Port 5405
- Install Pacemaker packages on each node:

  ```bash
  sudo yum install pacemaker pcs fence-agents-all resource-agents
  ```
- Set the password for the default user that is created when installing Pacemaker and Corosync packages. Use the same password on both nodes.

  ```bash
  sudo passwd hacluster
  ```
- Enable and start the `pcsd` service and Pacemaker. This allows the nodes to rejoin the cluster after a reboot. Run the following commands on both nodes:

  ```bash
  sudo systemctl enable pcsd
  sudo systemctl start pcsd
  sudo systemctl enable pacemaker
  ```
- Install the FCI resource agent for [!INCLUDE ssnoversion-md]. Run the following command on both nodes:

  ```bash
  sudo yum install mssql-server-ha
  ```
A STONITH device provides a fencing agent. Setting up Pacemaker on Red Hat Enterprise Linux in Azure provides an example of how to create a STONITH device for this cluster in Azure. Modify the instructions for your environment.
- On one of the nodes, create the cluster:

  ```bash
  sudo pcs cluster auth <nodeName1 nodeName2 ...> -u hacluster
  sudo pcs cluster setup --name <clusterName> <nodeName1 nodeName2 ...>
  sudo pcs cluster start --all
  ```
- Configure the cluster resources for [!INCLUDE ssnoversion-md], the File System, and the virtual IP, and push the configuration to the cluster. You need the following information:

  - SQL Server Resource Name: A name for the clustered [!INCLUDE ssnoversion-md] resource.
  - Floating IP Resource Name: A name for the virtual IP address resource.
  - IP Address: The IP address that clients use to connect to the clustered instance of [!INCLUDE ssnoversion-md].
  - File System Resource Name: A name for the File System resource.
  - device: The NFS share path.
  - directory: The local path that the share is mounted to.
  - fstype: The file share type (that is, `nfs`).

  Update the values from the following script for your environment. Run it on one node to configure and start the clustered service.

  ```bash
  sudo pcs cluster cib cfg
  sudo pcs -f cfg resource create <sqlServerResourceName> ocf:mssql:fci
  sudo pcs -f cfg resource create <floatingIPResourceName> ocf:heartbeat:IPaddr2 ip=<ipAddress>
  sudo pcs -f cfg resource create <fileShareResourceName> Filesystem device=<networkPath> directory=<localPath> fstype=<fileShareType>
  sudo pcs -f cfg constraint colocation add <floatingIPResourceName> <sqlServerResourceName>
  sudo pcs -f cfg constraint colocation add <fileShareResourceName> <sqlServerResourceName>
  sudo pcs cluster cib-push cfg
  ```
  For example, the following script creates a [!INCLUDE ssnoversion-md] clustered resource named `mssqlha`, and a floating IP resource with IP address `10.0.0.99`. It also creates a Filesystem resource, and adds constraints so all resources are colocated on the same node as the SQL Server resource.

  ```bash
  sudo pcs cluster cib cfg
  sudo pcs -f cfg resource create mssqlha ocf:mssql:fci
  sudo pcs -f cfg resource create virtualip ocf:heartbeat:IPaddr2 ip=10.0.0.99
  sudo pcs -f cfg resource create fs Filesystem device="10.8.8.0:/mnt/nfs" directory="/var/opt/mssql/data" fstype="nfs"
  sudo pcs -f cfg constraint colocation add virtualip mssqlha
  sudo pcs -f cfg constraint colocation add fs mssqlha
  sudo pcs cluster cib-push cfg
  ```
  After the configuration is pushed, [!INCLUDE ssnoversion-md] will start on one node.
- Verify that [!INCLUDE ssnoversion-md] is started:

  ```bash
  sudo pcs status
  ```
  The following example shows the results when Pacemaker has successfully started a clustered instance of [!INCLUDE ssnoversion-md]:

  ```
  fs        (ocf::heartbeat:Filesystem):  Started sqlfcivm1
  virtualip (ocf::heartbeat:IPaddr2):     Started sqlfcivm1
  mssqlha   (ocf::mssql:fci):             Started sqlfcivm1

  PCSD Status:
    sqlfcivm1: Online
    sqlfcivm2: Online

  Daemon Status:
    corosync: active/disabled
    pacemaker: active/enabled
    pcsd: active/enabled
  ```