Archive for the ‘Clustering’ Category

What is CAU? Cluster Update Automation with CAU

#CAU is a great new feature, but how does it fit into your infrastructure?

I already have a WSUS server, I use SCCM, and I use WSUS for my DTAP environment. So do I need another WSUS server, or can I reuse the old one?

WSUS 3.0 SP2 (on W2K8R2): not yet compatible with Windows Server 2012.

You can’t use SCCM to pull the updates.

So basically, install a downstream WSUS server (or a primary WSUS) for CAU. If you have more WSUS servers, you can sync the updates with PowerShell so that all your servers hold the same information, as in the sketch below.
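
A minimal sketch of such a sync through the WSUS administration API; the server name WSUS02 and the port are placeholders for your own environment.

# Load the WSUS administration assembly (installed with the WSUS console)
[void][Reflection.Assembly]::LoadWithPartialName("Microsoft.UpdateServices.Administration")
# Connect to the downstream WSUS server (no SSL, port 80 in this sketch)
$wsus = [Microsoft.UpdateServices.Administration.AdminProxy]::GetUpdateServer("WSUS02", $false, 80)
# Kick off a synchronization so it pulls the same updates as its upstream server
$subscription = $wsus.GetSubscription()
$subscription.StartSynchronization()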

 

  • Single-click launch of cluster-wide updating operation
  • Or a single PS cmdlet
  • “Updating Run”
  • Physical or VM clusters
  • CAU scans, downloads and installs applicable updates on each node
  • Restarts node as necessary
  • One node at a time
  • Repeats for all cluster nodes
  • Customize pre-update & post-update behavior with PS scripts (see the sketch after this list)
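
As an example of that last bullet, a hedged sketch of an Updating Run with pre- and post-update scripts; the UNC paths and script names are placeholders for your own scripts.

# Sketch: run CAU with your own scripts before and after each node is updated
Invoke-CauRun -ClusterName CONTOSO-FC1 `
    -PreUpdateScript \\FileSrv\Scripts\Drain-Backups.ps1 `
    -PostUpdateScript \\FileSrv\Scripts\Verify-Services.ps1 `
    -MaxRetriesPerNode 2 -Force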

 

  • Updates (GDRs) from Windows Update or WSUS
  • Hotfixes (QFEs) from a local File Share
  • Simple customization that installs almost any software update off a local File Share

  • Adds the CAU clustered role (see the sketch after this list)
  • Just like any other clustered workload
  • Resilience to planned and unplanned failures
  • Not mutually exclusive with on-demand updating
  • Analogy: Windows Update scan on your PC with AU auto-install
  • But possible conflicts with Updating Runs in progress
  • “Configured, but on hold” functionality
  • Compatible with VCO Prestaging
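
To enable this self-updating mode, something like the following; the schedule values are illustrative only.

# Sketch: add the CAU clustered role in self-updating mode,
# scheduled for the third Tuesday of each month
Add-CauClusterRole -ClusterName CONTOSO-FC1 `
    -CauPluginName Microsoft.WindowsUpdatePlugin `
    -DaysOfWeek Tuesday -WeeksOfMonth 3 `
    -MaxRetriesPerNode 3 -EnableFirewallRules -Force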


PowerShell usage:

Sample: fill in the cluster name and the WSUS share.

 

Invoke-CauScan -ClusterName CONTOSO-FC1 -CauPluginName Microsoft.WindowsUpdatePlugin, Microsoft.HotfixPlugin -CauPluginArguments @{}, @{ 'HotfixRootFolderPath' = '\\CauHotfixSrv\shareName'; 'HotfixConfigFilePath' = '\\CauHotfixSrv\shareName\DefaultHotfixConfig.xml' } -RunPluginsSerially -Verbose
Invoke-CauRun -ClusterName CONTOSO-FC1 -CauPluginName Microsoft.WindowsUpdatePlugin, Microsoft.HotfixPlugin -CauPluginArguments @{ 'IncludeRecommendedUpdates' = 'True' }, @{ 'HotfixRootFolderPath' = '\\CauHotfixSrv\shareName'; 'HotfixConfigFilePath' = '\\CauHotfixSrv\shareName\DefaultHotfixConfig.xml' } -MaxRetriesPerNode 2 -StopOnPluginFailure -Force

 

Options: RunPluginsSerially, StopOnPluginFailure, SeparateReboots

  • CAU supports only Windows Server 2012 clusters
  • The CAU tools can be installed on a Windows 8 client via the RSAT package

Make CAU the only tool updating the cluster.
Concurrent updates by other tools (e.g., WSUS, WUA, SCCM) might cause downtime.
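
If you need to check whether an Updating Run is already in progress before other tools or maintenance touch the cluster, a quick sketch:

# Sketch: show the status of any Updating Run in progress on the cluster
Get-CauRun -ClusterName CONTOSO-FC1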

For a WSUS-based deployment:

WSUS 4.0: needs a workaround with Beta builds (only): http://social.technet.microsoft.com/wiki/contents/articles/7891.how-wsus-and-cluster-aware-updating-are-affected-by-windows-server-8-beta-updates.aspx
WSUS 3.0 SP2 (on W2K8R2): not yet compatible with Windows Server 2012

Think about firewalls on the nodes!
Windows Firewall (Beta builds), or a non-Windows firewall: create a firewall rule and enable it for domain scope, the wininit.exe program, dynamic RPC endpoints, and the TCP protocol.
Windows Firewall (RC builds): enable the "Remote Shutdown" firewall rule group for the Domain profile, or pass the -EnableFirewallRules parameter to the Invoke-CauRun, Add-CauClusterRole or Set-CauClusterRole cmdlets (see the sketch below).
Make sure GPOs agree.
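
A sketch of both options; the rule group name assumes an English-language OS.

# Option 1: enable the "Remote Shutdown" rule group on each node yourself
Enable-NetFirewallRule -DisplayGroup "Remote Shutdown"
# Option 2: let CAU enable the rules for you during the run
Invoke-CauRun -ClusterName CONTOSO-FC1 -EnableFirewallRules -Force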

CAU: Understand and Troubleshoot Guide: http://www.microsoft.com/download/en/details.aspx?id=29015

CAU Scenario Overview: http://technet.microsoft.com/en-us/library/hh831694.aspx

CAU Windows PowerShell cmdlets
Running ‘Update-Help’ downloads the full cmdlet help for the CAU cmdlets, as shown below.
Online: http://go.microsoft.com/fwlink/p/?LinkId=237675
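
For example (the CAU module name is ClusterAwareUpdating):

# Download the latest help for the CAU cmdlets, then read it
Update-Help -Module ClusterAwareUpdating
Get-Help Invoke-CauRun -Full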

Starting with Cluster-Aware Updating: Self-Updating: http://blogs.technet.com/b/filecab/archive/2012/05/17/starting-with-cluster-aware-updating-self-updating.aspx

Posted August 25, 2012 by Robert Smit in Cluster Update Automation, Clustering


Virtual Machine Density Flexibility in Windows Server 2008 R2 Failover Clustering

Recently, Windows Server 2008 R2 Failover Clustering changed its support statement for the maximum number of virtual machines (VMs) that can be hosted on a failover cluster: from 64 VMs per node to 1,000 VMs per cluster. This article reflects the new policy in Hyper-V: Using Hyper-V and Failover Clustering.

Supporting 1,000 VMs per cluster enables increased flexibility to utilize hardware that has the capacity to host more VMs per physical server, while maintaining the high availability and management components that Failover Clustering provides.

Number of Nodes in Cluster          Max VMs per Node   Avg VMs per Active Node   Max VMs in Cluster
 2 nodes (1 active + 1 failover)          384                  384                      384
 3 nodes (2 active + 1 failover)          384                  384                      768
 4 nodes (3 active + 1 failover)          384                  333                     1000
 5 nodes (4 active + 1 failover)          384                  250                     1000
 6 nodes (5 active + 1 failover)          384                  200                     1000
 7 nodes (6 active + 1 failover)          384                  166                     1000
 8 nodes (7 active + 1 failover)          384                  142                     1000
 9 nodes (8 active + 1 failover)          384                  125                     1000
10 nodes (9 active + 1 failover)          384                  111                     1000
11 nodes (10 active + 1 failover)         384                  100                     1000
12 nodes (11 active + 1 failover)         384                   90                     1000
13 nodes (12 active + 1 failover)         384                   83                     1000
14 nodes (13 active + 1 failover)         384                   76                     1000
15 nodes (14 active + 1 failover)         384                   71                     1000
16 nodes (15 active + 1 failover)         384                   66                     1000

 

Note: There is no requirement to have a node without any VMs allocated as a “passive node”. All nodes can host VMs and keep the equivalent of 1 node of capacity unallocated (in total, across all the nodes) to allow for placement of VMs if a node fails or is taken out of active cluster membership for activities like patching or maintenance.

It is important to perform proper capacity planning that takes into consideration the capabilities of the hardware and storage to host VMs and the total resources that the individual VMs require, while still keeping enough reserve capacity to host VMs in the event of a node failure and so prevent memory overcommitment. The same base guidance on Hyper-V configuration and on the maximum number of VMs supported per physical server still applies: no node can host more than 384 running VMs at any given time, with no more than 4 virtual processors per VM and no more than 8 virtual processors per logical processor. Review this TechNet article on VM limits and requirements: Requirements and Limits for Virtual Machines in Hyper-V in Windows Server 2008 R2. The sketch below shows how these limits combine into the table above.
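
To make the table's arithmetic concrete, a small sketch combining the two limits; the function name is mine, while the 384 and 1,000 limits come from the policy above.

# Sketch: maximum supported VMs for an N-node cluster, reserving one
# node's worth of capacity for failover
function Get-MaxClusterVMs {
    param(
        [int]$NodeCount,
        [int]$MaxVMsPerNode    = 384,    # Hyper-V per-host limit
        [int]$MaxVMsPerCluster = 1000    # per-cluster support limit
    )
    $activeNodes = $NodeCount - 1        # keep one node of spare capacity
    [Math]::Min($activeNodes * $MaxVMsPerNode, $MaxVMsPerCluster)
}

Get-MaxClusterVMs -NodeCount 4    # 1000 (3 x 384 = 1152, capped at 1000)
Get-MaxClusterVMs -NodeCount 2    # 384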

Here are some Frequently Asked Questions:

1. Is there a hotfix or service pack required to have this new limit? 

a. No. This support policy change is based on extra testing we performed to verify that the cluster retains its ability to detect health issues and fail over VMs at these densities. There are no changes or updates required.

2. 64 VMs per node on a 16 node cluster equals 1024 VMs, so aren’t you actually decreasing the density for a 16 node cluster? 

a. No. The previous policy was 64 VMs per node in addition to one node’s equivalent of reserve capacity, which is 15 nodes x 64 VMs = 960 VMs with the spare capacity of a passive node. This policy slightly increases the density for a 16-node cluster, more than doubles it for an 8-node cluster, and more than quadruples it for a 4-node cluster.

3. Does this include Windows Server 2008 clusters?

a.  This change is only for Windows Server 2008 R2 clusters.

4. Why did you make this change?

a. We are responding to our customers’ requests for flexibility in the number of nodes and the number of VMs that can be hosted. For VMs running workloads that place relatively small demands on VM and storage resources, customers would like to place more VMs on each server to maximize their investments and lower management costs. Other customers want the flexibility of having more nodes and fewer VMs.

5. Does this mean I can go and put 250 VMs on my old hardware?

a. Understanding the resources that your hardware can provide and the requirements of your VMs is still the most important part of identifying the capacity of your cluster or of specific Hyper-V servers. Available RAM and CPU resources are relatively easy to calculate, but another important part of the equation is the capacity of the SAN/storage: not just how many GB or TB of data it can store, but whether it can handle the I/O demands with reasonable performance. 1,000 VMs can produce a significant amount of I/O demand, and the exact amount depends on what is running inside the VMs. Monitoring storage performance is important to understand the capacity of the solution; a sketch of that follows.
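
A sketch of such monitoring with performance counters; the interval and sample count are arbitrary choices.

# Sketch: sample disk latency on a Hyper-V host to gauge whether the
# storage keeps up with the VMs' I/O demand
$counters = "\PhysicalDisk(*)\Avg. Disk sec/Read",
            "\PhysicalDisk(*)\Avg. Disk sec/Write"
Get-Counter -Counter $counters -SampleInterval 5 -MaxSamples 12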

Source: http://blogs.msdn.com/b/clustering/archive/2010/06/28/10031803.aspx
