SQL Server 2008 Clustering – Crib Sheet

August 25, 2012

A few months ago I was tasked with setting up a SQL Server 2008 cluster in an active/active configuration.

This was required in order to migrate an existing application from one of our smaller offices over to our global data centre. Other than a couple of prior cluster installations done by my colleague, we had no real documentation to follow, and I quickly realised that SQL clustering theory is something you learn or read as a chapter and, unless you actually implement it, quickly forget or confuse with other HA/DR solutions such as database mirroring and replication.

I was slightly apprehensive about the complexity of this task and decided, before so much as logging into the servers, that I should brush up on my SQL Server clustering theory. Armed with this refreshed theory, I could then tackle the installation and configuration safe in the knowledge that I had a solid understanding of SQL clustering concepts.

Below are my crib-sheet notes that I created for SQL Server Clustering.

I find them really useful to refer to just prior to walking into a meeting, for quick hard facts about what clustering is and is not – they have given me the confidence to challenge project managers’ and architects’ plans and project proposals!



Clustering General Concepts

  • Windows OS clustering requires Windows Server Enterprise Edition. Standard is not enough.
  • However, SQL Server 2008 clustering can be set up using SQL Standard Edition (limited to two nodes).
  • SQL clustering depends on, and is built on top of, Windows clustering. This should be set up and configured by the server team, who should provide you with the Windows cluster name and virtual IP (VIP).
  • The Failover Cluster Management tool can run a cluster validation test, which verifies that the Windows cluster is set up correctly. This needs to be rock-solid, and the validation report must pass 100% for Microsoft to support your installation.
  • The Windows Cluster service runs on each node.
  • The SQL Server service runs on only one node at any time. When it fails over, the service is started on the other node.
  • A single set of hard drives is configured for both servers to use – this could be your single point of failure.
  • There is a single set of system and user databases per instance. If these databases fail, all nodes fail.
  • All clustered services run under a domain account. The Browser service is not necessary.
  • Nodes poll each other constantly to detect failure. All nodes poll the quorum, and the quorum decides.
  • Clustering does not provide load-balancing.
  • You cannot add SQL features to an existing SQL cluster. Ensure that you install all the required features during the setup! – http://support.microsoft.com/kb/2547273
  • Quorum drive – used for cluster-specific settings and log data. Must be shared and available to all nodes (512 MB–1 GB).
  • The quorum is basically just log files. If you change a setting on the active node, the change is replicated to the other nodes. Its primary purpose is to record any changes made on the active node.
  • If multiple SQL instances are installed, each instance must be configured to listen on its own IP address only, or each instance must use a different port.
  • Each node must have at least two network cards:

– Public network – clients connect to the cluster on this
– Private network – cluster nodes communicate on this; carries the heartbeat signal

  • SQL 2008 is cluster aware; however, the front-end app may not be. Users would have to log in again after a failover.
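A quick way to verify the cluster from inside SQL Server itself is via `SERVERPROPERTY` and the cluster DMV. A minimal sketch using sqlcmd – the virtual instance name `cluster1\inst1` is a placeholder for your own:

```
REM Is this instance clustered, and which physical node is it on right now?
sqlcmd -S cluster1\inst1 -Q "SELECT SERVERPROPERTY('IsClustered') AS IsClustered, SERVERPROPERTY('ComputerNamePhysicalNetBIOS') AS ActiveNode"

REM List all cluster nodes this instance can run on
sqlcmd -S cluster1\inst1 -Q "SELECT NodeName FROM sys.dm_os_cluster_nodes"
```

`IsClustered` returns 1 for a clustered instance, and `ComputerNamePhysicalNetBIOS` tells you the current physical node without needing the Failover Cluster Management tool.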

SSAS is cluster aware;

  • Should be installed in its own cluster group.
  • SSAS is resource-hungry and should be allocated dedicated cluster nodes.

SSRS is not Cluster aware;

  • Put the SSRS databases (ReportServer and ReportServerTempDB) on the clustered SQL instance.
  • Use standalone SSRS web servers and scale out using NLB.
  • Failover clustering is supported only for the report server database; you cannot run the Report Server Windows service or Web service as part of a failover cluster – the RS services themselves run outside cluster control.
  • To host a report server database on a SQL Server failover cluster, the cluster must already be installed and configured. You can then select the failover cluster as the server name when you create the report server database.
  • When you install Reporting Services, choose the ‘Install but do not configure the report server’ option on the Report Server Installation Options page. You then point RS on each web server to the same databases on the same cluster.
  • RS scale out notes:  http://msdn.microsoft.com/en-us/library/ms159114(v=sql.105).aspx
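After a ‘files only’ SSRS install, the report server is normally pointed at the clustered database with the Reporting Services Configuration tool; the connection can also be scripted with rsconfig.exe. A sketch only – the server, database and account names below are placeholders:

```
REM Point the local report server at the ReportServer DB on the clustered instance
rsconfig -c -s cluster1\inst1 -d ReportServer -a Windows -u DOMAIN\rs_service -p <password>
```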

SSIS is not cluster aware;

  • If SSIS is used, it must be installed on each node and the XML settings file updated.
  • Disable Windows power-saving features – they can cause a failover.
  • Use the same product key on all your cluster nodes.
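The XML settings file in question is MsDtsSrvr.ini.xml on each node. The key change is the `<ServerName>` element, which ships as “.” and must point at the clustered instance instead – a sketch of the relevant fragment, with `cluster1\inst1` as a placeholder:

```xml
<?xml version="1.0" encoding="utf-8"?>
<DtsServiceConfiguration xmlns:xsd="http://www.w3.org/2001/XMLSchema"
                         xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance">
  <StopExecutingPackagesOnShutdown>true</StopExecutingPackagesOnShutdown>
  <TopLevelFolders>
    <Folder xsi:type="SqlServerFolder">
      <Name>MSDB</Name>
      <!-- Point at the clustered SQL instance, not "." -->
      <ServerName>cluster1\inst1</ServerName>
    </Folder>
  </TopLevelFolders>
</DtsServiceConfiguration>
```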

Resources and Resource Groups

  • Resources: each resource maps to the drive or service it controls. Resources support dependency chaining, which determines start-up order.
  • Resource groups: the unit that fails over from one node to another.
  • Resource groups are independent of one another. No dependencies between groups!
  • Each SQL instance is placed into its own resource group. This is so instances failover independently.
  • SQL 2008 – resource groups can be created from within the install wizard; no need to rely on the Windows admins.
  • The following are examples of Resources:

– SQL Agent Service
– SQL Database Service
– Shared Drives
– Network Share
– IP Address
– Host Name

SQL Instances on a Cluster

  • Each SQL Instance has its own

– Host name (cluster name) – unique to instance
– IP Address
– Hard drives

  • Each instance needs its own logical disk on the shared array.
  • Logical disks cannot be shared among instances of SQL 2008.
  • Active/active cluster: run one instance on each node.
  • Selecting ‘new failover cluster’ for each instance would give you this result:

– cluster1\inst1
– cluster2\inst2

  • If you want to add more SQL instances, it is always a ‘new failover cluster’ in the setup wizard.
  • One instance per one Cluster resource group.
  • You can install two instances under the same cluster name; however all instances are then on the same node – if you fail over to node 2, inst1 and inst2 fail over together:

– cluster1\inst1
– cluster1\inst2

  • TIP: Install all the instances on one node first, and then move on to the next node – it could save time on patching.


  • A whole bunch of IPs are required for clustering. Easiest to show this in a diagram.
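Whichever node is physically active, clients always connect through each instance’s virtual network name. A quick sanity check with sqlcmd (the instance names are the placeholders from above):

```
REM @@SERVERNAME returns the virtual name; ComputerNamePhysicalNetBIOS the real node
sqlcmd -S cluster1\inst1 -Q "SELECT @@SERVERNAME AS VirtualName, SERVERPROPERTY('ComputerNamePhysicalNetBIOS') AS PhysicalNode"
sqlcmd -S cluster2\inst2 -Q "SELECT @@SERVERNAME AS VirtualName, SERVERPROPERTY('ComputerNamePhysicalNetBIOS') AS PhysicalNode"
```

After a failover, `VirtualName` stays the same while `PhysicalNode` changes – which is exactly why applications should only ever be given the virtual name.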

Active/Active Configuration

[Diagram: Active/Active SQL 2008 Cluster Configuration]

Active/Passive Configuration

[Diagram: Active/Passive SQL 2008 Cluster Configuration]


  • The shared array must have at least two logical partitions. It is recommended to align each LUN with a physical disk in Windows:

– 1 partition used for SQL DB’s
– 1 partition used for Quorum

  • Each node in the cluster is required to use the same drive letter when referring to the drives on the SAN.
  • SQL 2008 binaries are installed on the local disk of each node.


  • Clustering mitigates downtime; however, there will always be some downtime. A failover effectively shuts down the services on one node, then fires up the services on the other node, which can take up to a minute or more depending on your environment. Downtime is not equal to zero.
  • Nodes constantly poll each other. If the passive node does not receive a ‘heartbeat’ from the active node after a predetermined delay, failover is initiated.
  • On failover, the passive node becomes active and:

– Takes control of the shared array and reads the quorum
– Takes control of the virtual network name (VN) and virtual IP (VIP)
– Performs a SQL Server start-up, going through the database recovery steps

  • The time taken for the above process depends on system speed and the number of uncommitted transactions to roll forward or back.
  • The new active node now announces itself on the network with the VN and VIP, and clients can reconnect.
  • Manual failover: use the Failover Cluster Management tool to move a resource group from the active node to the passive node.
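The same manual failover can be done from the command line with cluster.exe. The group name below is a placeholder – run `cluster group` on its own first to list the real names:

```
REM List resource groups, their state and current owner node
cluster group

REM Move the SQL Server group to the other node (manual failover)
cluster group "SQL Server (INST1)" /move:NODE2
```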

Maintaining the Cluster

  • To connect to the cluster using SQL tools (e.g. Management Studio), you must specify the VN/VIP.
  • To connect to the cluster using Windows tools (e.g. System Monitor), you must specify the node name.
  • Use the Failover Cluster Management tool to check which node is active.
  • Windows Explorer will only show the drives (quorum and data) on the active node.
  • Event logs are identical on all nodes, replicated between the active and passive nodes.
  • Terminal Services – if you connect using the virtual cluster name/IP, you will connect to the active node. If a failover occurs during this time, Terminal Services and Explorer can become confused; log out and back in.
  • Cluster logs: c:\windows\cluster\ (on Windows Server 2008 the log is generated on demand into the Reports subfolder).
  • Do not use the Services control panel to stop/start the SQL services – it is not cluster aware. Always use SQL Server Configuration Manager or the Failover Cluster Management tool.
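On Windows Server 2008 the cluster log is not written continuously; it is generated on demand. A sketch, assuming cluster.exe is on the path:

```
REM Generate Cluster.log on every node into %windir%\Cluster\Reports
cluster log /g
```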


  • Installing a service pack or hotfix is fairly straightforward:
  • Move all SQL instances onto one node
  • First upgrade the node that is not the active node
  • Fail over to the upgraded node (making it active)
  • Upgrade the other node
  • Fail back to the original node
  • Once you’ve installed the SP, reboot the active node, then reboot the passive node
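After applying the service pack, it is worth confirming the build level through the virtual name (the instance name is a placeholder):

```
REM ProductLevel shows e.g. SP1/SP2; ProductVersion the exact build number
sqlcmd -S cluster1\inst1 -Q "SELECT SERVERPROPERTY('ProductVersion') AS Version, SERVERPROPERTY('ProductLevel') AS ServicePackLevel"
```

Fail over after patching each node and re-run the query to confirm both nodes report the same build.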


My primary reference was chapter 20 of ‘Wrox Professional Microsoft SQL Server 2008 Administration’. I can’t recommend this book, and specifically this chapter, highly enough to anyone who needs a start in SQL clustering.

Another great source for the Clustering installation steps is Edwin Sarmiento’s article on mssqltips.com – http://www.mssqltips.com/sqlservertip/1687/install-sql-server-2008-on-a-windows-server-2008-cluster-part-1/

The above list is by no means exhaustive, and I plan to add, update and amend it as I go along. Any comments, please let me know. Thanks for reading.

