
Consolidating SQL Server Instances by Clustering and Stacking

Stacked SQL Cluster Instances

NOTES:

  • Windows Failover Clustering comprising two nodes.
  • Two SQL Server Failover Cluster Instances. This configuration optimizes the hardware. IN01 is preferred on Node1 and IN02 is preferred on Node2.
  • Port Numbers: IN01 listens on port 1435 and IN02 listens on port 1436.
  • High Availability. Both nodes back up each other. Failover is automatic in case of failure.
  • Quorum Mode is Node and Disk majority.
  • Backup LAN in place and routine backups configured using Veritas.

Introduction

It is not uncommon for developers and project managers to demand a new instance of SQL Server for every new application or service. While technologies like virtualization and the Cloud have made spinning up new instances a breeze, some age-old techniques built into SQL Server also make it possible to achieve short turnaround times when a new database is needed for a new service or application. A DBA can get there by designing and deploying a large SQL Server cluster capable of supporting most of the SQL Server databases required by the organization. This kind of consolidation brings added advantages such as lower license costs, better governance, and ease of administration. In this article, we highlight some considerations we have had the opportunity to experience when using clustering and stacking as a means of consolidating SQL Server databases.

Clustering

Windows Server Failover Clustering (WSFC) is a well-known High Availability solution that has survived many versions of Windows Server and in which Microsoft intends to keep investing. SQL Server Failover Cluster Instances (FCIs) rely on WSFC. Both the Standard and Enterprise Editions of SQL Server support Failover Cluster Instances, but the Standard Edition is limited to two nodes. Consolidating databases on a single SQL Server FCI gives benefits such as:

  • HA by Default — All databases deployed on a clustered SQL Server instance are highly available by default! Once a clustered instance is built, new deployments are taken care of in terms of HA ahead of time.
  • Ease of Administration – Fewer DBAs can spend their time configuring, monitoring and, when necessary, troubleshooting ONE clustered instance supporting many applications. Properly documenting the instance is also much easier when dealing with one large environment. Likewise, configuring an Enterprise Backup solution to handle all databases in your environment is simpler because the configuration has to be done only once when using consolidated instances.
  • Compliance – Key requirements such as patching and even hardening can be met once, with minimal downtime, across a large number of databases in a single administrative effort. In our shop, we have used Transaction Log Shipping between clustered instances at two data centres to ensure databases are protected from the risk of disasters.
  • Standardization – Enforcing such standards as naming conventions, access management, Windows Authentication, auditing and Policy-Based Management is much easier when dealing with just one or two environments, depending on the size of your shop.

Listing 1: Extract Information About Your Instance

-- Extract Instance Details
-- Includes a Column to Check Whether Instance is Clustered 

SELECT SERVERPROPERTY('MachineName') AS [MachineName]
, SERVERPROPERTY('ServerName') AS [ServerName]
, SERVERPROPERTY('InstanceName') AS [Instance]
, SERVERPROPERTY('IsClustered') AS [IsClustered]
, SERVERPROPERTY('ComputerNamePhysicalNetBIOS') AS [ComputerNamePhysicalNetBIOS] 
, SERVERPROPERTY('Edition') AS [Edition]
, SERVERPROPERTY('ProductLevel') AS [ProductLevel]
, SERVERPROPERTY('ProductVersion') AS [ProductVersion]
, SERVERPROPERTY('ProcessID') AS [ProcessID]
, SERVERPROPERTY('Collation') AS [Collation]
, SERVERPROPERTY('IsFullTextInstalled') AS [IsFullTextInstalled]
, SERVERPROPERTY('IsIntegratedSecurityOnly') AS [IsIntegratedSecurityOnly]
, SERVERPROPERTY('IsHadrEnabled') AS [IsHadrEnabled]
, SERVERPROPERTY('HadrManagerStatus') AS [HadrManagerStatus]
, SERVERPROPERTY('IsXTPSupported') AS [IsXTPSupported];

Stacking

SQL Server supports up to 50 stand-alone instances on one server and up to 25 Failover Cluster Instances on a Windows Server Failover Cluster. Different versions of SQL Server can be stacked in the same environment to provide a robust platform supporting different applications. In such a configuration, upgrading databases can take the form of simply promoting them from one SQL Server instance to the next version in the very same cluster until the hardware ages. One key consideration when stacking SQL Server is that you must allocate memory to each instance in such a way that the total amount allocated does not exceed the memory available on the server, leaving enough for the operating system. The other point is to ensure that the SQL Server service account of each instance has the Lock Pages in Memory privilege. Assigning Lock Pages in Memory ensures that when SQL Server acquires memory, the Operating System does not attempt to reclaim it when other processes on the server need memory. Setting up a dedicated SQL Server service account, configuring MAX_SERVER_MEMORY, and granting the Lock Pages in Memory privilege are an essential trio when stacking SQL Server instances.
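As a minimal sketch (the memory value below is purely illustrative and would be sized per instance and per server), MAX_SERVER_MEMORY can be capped on each stacked instance using sp_configure so that the combined allocations leave headroom for the operating system:

-- Run on each stacked instance; adjust the value so the sum across
-- all instances still leaves memory for the OS (64 GB is only an example)
EXEC sys.sp_configure 'show advanced options', 1;
RECONFIGURE;
EXEC sys.sp_configure 'max server memory (MB)', 65536;
RECONFIGURE;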

Microsoft charges a few thousand dollars per pair of CPU cores. Stacking SQL Server instances allows you to leverage this licensing model by having instances share the same set of CPUs (sweating the asset). We already mentioned that you can stack different versions of SQL Server, thus taking care of legacy applications still running versions older than, for example, SQL Server 2016. When using different Editions of SQL Server, you may want to consider using Processor Affinity as described by Glenn Berry in this article. Processor Affinity can also be used to control how CPU resources are shared among instances, just as you control memory. Stacking also addresses security concerns for applications that must use the SA account, for example, or configuration concerns for applications that require a dedicated instance or such options as a specific collation. Concern about the performance of the shared TempDB is another reason you may want to stack rather than lump all databases on one clustered instance.
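Below is a minimal sketch of pinning an instance to a subset of cores with T-SQL; the CPU range shown is an assumption and would depend on how you choose to split cores between the stacked instances:

-- Pin this instance to CPUs 0 through 7, leaving the remaining cores
-- for the other stacked instance (the range is only an example)
ALTER SERVER CONFIGURATION SET PROCESS AFFINITY CPU = 0 TO 7;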

It is worth noting that the value of clustering highlighted earlier extends even further with stacking. For example, when patching a cluster node that hosts several FCIs, all the FCIs on that node can be patched in one go.

Points to Note

When using clustering, certain conventions will make administering and managing the environment a little easier and help you sweat the assets better. We shall touch on a few of them briefly:

  • Current Client Tools — You may find yourself getting unusual errors when trying to manage a SQL Server 2016 instance using SQL Server Management Studio 2012, and the errors do not specifically tell you that the problem is the client tool version. We typically install SQL Server Management Studio 17.3 on any client we intend to use to connect to our instances.
  • Naming Conventions — A naming convention makes it easy to be sure which instance you are working on at any point in time. Using aliases, you can further reduce the burden of remembering long instance names for end users who need access to the database.
  • Preferred Node – Setting a preferred node for each SQL Server role in Failover Cluster Manager is a good way to make sure the processing power of all your Cluster Nodes is being utilized. In our shop, after setting up preferred nodes, we configured each role to fail back between 0500 HRS and 0600 HRS in case there is an inadvertent failover.
  • Transaction Log Shipping – When configuring Disaster Recovery for FCIs it makes sense to identify all UNC paths using virtual names, not the names or IP Address of the Cluster Nodes. This ensures that things continue to function properly if a failover occurs. It is also very important to ensure that the SQL Server Agent Accounts on both sites have full control on these paths.

Listing 2: Configure Monitoring for Transaction Log Shipping Using Email

-- Create Table to Store Log Shipping Data

create table msdb.dbo.log_shipping_report 
(status bit, 
is_primary bit, 
server sysname, 
database_name sysname, 
time_since_last_backup int, 
last_backup_file nvarchar (500), 
backup_threshold int, 
is_backup_alert_enabled bit, 
time_since_last_copy int,
last_copied_file nvarchar(500), 
time_since_last_restore int, 
last_restored_file nvarchar(500), 
last_restored_latency int, 
restore_threshold int, 
is_restore_alert_enabled bit); 
go

-- Create an SQL Agent Job with the Following Script
-- This will send an Email at Intervals determined by the job Schedule
-- The Job Should be Created on the Log Shipping Secondary Clustered Instance
-- This Job Requires that Database Mail is Enabled

truncate table msdb.dbo.log_shipping_report 
go

insert into msdb.dbo.log_shipping_report 
EXEC sp_help_log_shipping_monitor; 
go

/*
select [server]
, database_name [database]
, time_since_last_copy [Last Copy Time]
, last_copied_file [Last Copied File]
, time_since_last_restore [Last Restore Time]
, last_restored_file [Last Restored File]
, restore_threshold [Restore Threshold]
, restore_threshold - time_since_last_restore [Restore Latency] 
from msdb.dbo.log_shipping_report; 
go 
*/

DECLARE @tableHTML NVARCHAR(MAX) ;
DECLARE @SecServer SYSNAME ;

SET @SecServer = @@SERVERNAME

SET @tableHTML =
N'<H1><font face="Verdana" size="4">Transaction Logshipping Status from Secondary 
Server ' + @SecServer + N'</H1>' +
N'<p style="margin-top: 0; margin-bottom: 0"><font face="Verdana" size="2">Please 
find below status of Secondary databases: </font></p> &nbsp;' +
N'<table border="1" style="BORDER-COLLAPSE: collapse" borderColor="#111111" 
cellPadding="0" width="2000" bgColor="#ffffff" borderColorLight="#000000" 
border="1"><font face="Verdana" size="2">' +
N'<tr><th><font face="Verdana" size="2">Secondary Server</th>
<th><font face="Verdana" size="2">Secondary Database</th>
<th><font face="Verdana" size="2">Last Copy Time</th>' +
N'<th><font face="Verdana" size="2">Last Copied File</th><th>
<font face="Verdana" size="2">Last Restore Time</th>' +
N'<th><font face="Verdana" size="2">Last Restored File</th><th>
<font face="Verdana" size="2">Restore Threshold</th>
<th><font face="Verdana" size="2">Restore Latency</th>' +
CAST ( ( SELECT td = lsr.server, '',
td = lsr.[database_name], '',
td = lsr.time_since_last_copy, '',
td = lsr.last_copied_file, '',
td = lsr.time_since_last_restore, '',
td = lsr.last_restored_file, '',
td = lsr.restore_threshold, '',
td =
case
when lsr.restore_threshold - lsr.time_since_last_restore < 0 then '<td bgcolor="#FFCC99"><b><font face="Verdana" size="1">' + 'CRITICAL' + '</font></b></td>'
when lsr.restore_threshold - lsr.time_since_last_restore < 20 and lsr.restore_threshold - lsr.time_since_last_restore > 0 then '<td bgcolor="#FFBB33"><b><font face="Verdana" size="1">' + 'WARNING' + '</font></b></td>'
when lsr.restore_threshold - lsr.time_since_last_restore > 20 then '<td bgcolor="#21B63F"><b><font face="Verdana" size="1">' + 'OK' + '</font></b></td>'
end, ''
FROM msdb.dbo.log_shipping_report as lsr
ORDER BY lsr.[database_name]
FOR XML PATH('tr'), TYPE ) AS NVARCHAR(MAX) ) +
N'</table>' + '&nbsp;';

EXEC msdb.dbo.sp_send_dbmail
@recipients='[email protected]',
@copy_recipients='[email protected]',
@subject = 'Transaction Log Shipping Report',
@body = @tableHTML,
@body_format = 'HTML' ;

Disk Drives

One side effect of stacking SQL Server instances and making provision for several databases is the tendency to run out of drive letters. We circumvented this problem by configuring Volume Mount Points. Each disk assigned to a cluster role is configured as a mount point, and a drive letter is necessary for only one or two drives per instance. An important point to note when using volume mount points on a cluster is that in future, when you need to add more mount points or perform similar maintenance tasks, it will be necessary to put BOTH the primary drive that owns the drive letter and the mount point into maintenance mode on the cluster.

In our case, we named each Volume Mount Point based on the Cluster Role it was assigned to. With so many drives to deal with, you will definitely need to work out a way for both you and the Storage Administrator to identify each disk uniquely, so that maintaining the disks at the storage level, for example, is not much of a hassle.

Listing 3: Monitor Disk Space Usage When Using Volume Mount Points

-- The Following Script Will Show Disk Space Usage from Within SQL Server 
-- It is Especially Helpful When Using Volume Mount Points
-- Volume Mount Point Space Usage Can Also Be Monitored from Computer Management (OS Level)

SELECT DISTINCT vs.volume_mount_point 
, vs.file_system_type 
, vs.logical_volume_name
, CONVERT(DECIMAL(18,2), vs.total_bytes/1073741824.0) AS [Total Size (GB)]
, CONVERT(DECIMAL(18,2), vs.available_bytes/1073741824.0) AS [Available Size (GB)]
, CAST(CAST(vs.available_bytes AS FLOAT)/ CAST(vs.total_bytes AS FLOAT) AS DECIMAL(18,2)) 
* 100 AS [Space Free %]
FROM sys.master_files AS f WITH (NOLOCK)
CROSS APPLY sys.dm_os_volume_stats(f.database_id, f.[file_id]) AS vs OPTION (RECOMPILE);

Database Deployment

In our case, our strategy was to ensure new databases followed our standard. Older databases were handled with a little more care, since we were consolidating and upgrading at the same time. Database Migration Assistant helped tell us which databases would definitely not be compatible with our hallowed SQL Server 2016 instance, and we left those in peace (some with compatibility levels as low as 100). Each deployed database should have its own volumes for data and log files, sized according to the database. Using separate volumes for each database is another step towards a very well-organized environment, which is important considering the potential complexity of this consolidated setup. The last statement also implies that when you allow an application to create its own databases, you must, as the DBA, relocate the data files after the deployment is done, because the application will create its files in the instance's default locations, the same locations used by the model database.
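As a quick check before relocating anything (a minimal sketch; InstanceDefaultDataPath and InstanceDefaultLogPath are standard SERVERPROPERTY options available from SQL Server 2012 onwards), you can see where the instance will place files for any database created without explicit file paths:

-- Default locations used for databases created without explicit file paths
SELECT SERVERPROPERTY('InstanceDefaultDataPath') AS [DefaultDataPath]
, SERVERPROPERTY('InstanceDefaultLogPath') AS [DefaultLogPath];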

Listing 4: Relocating User Databases

-- 1. Set the database offline
-- Be sure to replace DB_NAME with the actual database name

ALTER DATABASE DB_NAME SET OFFLINE

-- 2. Move the file or files to the new location.
-- This means actually copying the datafiles at OS level
-- You may also need to grant the SQL Server Service account full permissions on the data files
-- 3. For each file moved, run the following statement.

ALTER DATABASE DB_NAME MODIFY FILE ( NAME = logical_name, FILENAME = 'new_path\os_file_name')

-- 4. Bring the database back online

ALTER DATABASE DB_NAME SET ONLINE 

-- 5. Verify the file change:

SELECT name, physical_name AS CurrentLocation, state_desc
FROM sys.master_files
WHERE database_id = DB_ID(N'DB_NAME');

Access Management

You will agree that in our consolidated environment, we could end up with a very long list of server-level objects such as logins. Using Windows Groups will help shorten this list and simplify Access Management on each clustered instance. Typically, you will need groups created in Active Directory for Application Admins who need access, Application Service Accounts, business users who need to pull reports and, of course, Database Administrators. One key benefit of using Windows Groups is that access can be granted or revoked simply by managing the membership of these groups right in Active Directory.

It is probably obvious by now that this benefit in the area of Access Management is only possible with Windows Authentication. SQL Server logins cannot be managed in groups.
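As a simple sketch (the domain group CONTOSO\SQL_Report_Users and the database SalesDB are hypothetical names used only for illustration), access for a whole team can be granted through a single Windows Group login:

-- Create one login for the Active Directory group rather than per-user logins
CREATE LOGIN [CONTOSO\SQL_Report_Users] FROM WINDOWS;
GO
-- Map the group to a database user and grant read access in one of the databases
USE SalesDB;
GO
CREATE USER [CONTOSO\SQL_Report_Users] FOR LOGIN [CONTOSO\SQL_Report_Users];
ALTER ROLE db_datareader ADD MEMBER [CONTOSO\SQL_Report_Users];
GO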

Listing 5: Instance Logins, Database Users, and their Roles

create table #userlist (
[Server Name] varchar(20)
,[Database Name] varchar(50)
,[Database User] varchar(50)
, [Database Role] varchar(50)
, [Instance Login] varchar(50)
, [Status] varchar(15)
)
go
insert into #userlist
exec sp_MSforeachdb @command1 ='
USE [?]
IF ''?'' NOT IN (''tempdb'',''model'',''msdb'',''master'')
BEGIN
select @@servername as instance_name
, ''?'' as database_name
, rp.name as database_user
, mp.name as database_role
, sp.name as instance_login
, case when sp.is_disabled = 1 then ''Disabled''
       when sp.is_disabled = 0 then ''Enabled''
  end [login_status]
from sys.database_principals rp
left outer join sys.database_role_members drm on (drm.member_principal_id = rp.principal_id)
left outer join sys.database_principals mp on (drm.role_principal_id = mp.principal_id)
left outer join sys.server_principals sp on (rp.sid=sp.sid)
where rp.type_desc in (''WINDOWS_GROUP'',''WINDOWS_USER'',''SQL_USER'')
END'
go
select * from #userlist
go
drop table #userlist

Conclusion

We have examined at a very high level the benefits that can be gained by clustering and stacking SQL Server instances as a means to achieving consolidation, cost optimization and ease of management. If you find yourself capable of purchasing large hardware, you can explore this option and reap the benefits we have described above.
