HP 3PAR / Windows Server 2008/2012 Boot from SAN Guide

Use Case: 

  • Store operating systems on the SAN; this generally provides higher availability, redundancy, and recoverability, depending on the RAID and SAN configuration.
  • In diskless server builds to reduce power consumption by having no internal disks.
  • Blade architectures, where internal disks aren’t large enough to hold the application and OS (less of an issue now, given the density of modern 3.5”/2.5” disks).

Benefits:

  • Minimize system downtime. If a critical component such as a processor, memory module, or host bus adapter fails and needs to be replaced, you need only swap the hardware and reconfigure the HBA’s BIOS, switch zoning, and host-port definitions on the storage processors.
  • Enable rapid deployment scenarios.
  • Boot from SAN alleviates the necessity for each server to have its own direct-attached disk, eliminating internal disks as a potential point of failure. Thin diskless servers also take up less rack space, require less power, and are generally less expensive because they have fewer hardware components.
  • Centralised management: when operating system images are stored on networked disks, all upgrades and fixes can be managed from a central location, and changes made to disks in a storage array are readily accessible by each server. (This also benefits capacity planning, as you typically have a holistic view of your SAN environment.)
  • All the boot information and production data stored on local SAN ‘A’ can be replicated to local SAN ‘B’ (see 3PAR Peer Persistence) or to one at a geographically dispersed disaster recovery site. If a disaster destroys functionality of the servers at the primary site, the remote site can take over with minimal downtime.
  • Recovery from server failures is simplified in a SAN environment. With the help of snapshots, mirrors of a failed server can be recovered quickly by booting from the original copy of its image. As a result, boot from SAN can greatly reduce the time required for server recovery.

Risks:

  • With older Windows operating systems (Windows 2003) it was recommended that the boot LUN be on a separate SCSI bus from the shared LUNs, to avoid SCSI-bus resets disrupting I/O and causing a BSOD. In Windows 2008/2012 this is not an issue; boot LUNs can share the same bus/path.
  • Financial risk: understand that CAPEX costs can be higher than if you were to boot off your local disks (additional HBAs, cabling and SFPs). If you calculate the £-per-GB cost of a typical high-end SAN versus the cost of mirrored local drives, it can be a lot more. Do you have enough physical capacity in your array to support this? If not, you will need to buy more disks to increase capacity and throughput/IOPS.

Potential Design Impact:

  • If hosts/nodes swap out pages frequently, this can result in heavy I/O traversing your storage fabric, which may negatively impact services (especially latency-critical apps). This might not be apparent with a few servers, but consider the effect of many servers with a BFS (Boot From SAN) requirement. It can be mitigated to some extent by moving page files to local disks or installing more memory. If this is a SQL server, investigate the ‘Lock Pages in Memory’ option to prevent SQL from paging workloads out unnecessarily (let SQL manage its working set size, and check the buffer-pool-to-RAM ratio too).
  • Migrating the OS to BFS can in some situations negatively impact the array or fabric switches, potentially causing contention (check ISL fan-in ratios). This is more of an issue with iSCSI/NFS than FC, due to the nature of the protocol. Ensure that your core/access fabric switches have enough bandwidth to supply I/O demands. In some situations even the storage processors could be overwhelmed (check host queue-depth settings, which allow the host to throttle back I/O).
  • Check that you have enough FC/FCoE/iSCSI/NFS uplinks to service any high-I/O workloads. The most commonly chosen solution is to increase array-side cache, but this is often the most costly option and doesn’t really address the root cause of any latency or throughput constraints.
  • Be mindful of boot storms after an outage or in VDI deployments; you may have to boot tier 1 apps selectively in phases (bear in mind tier 1 application dependencies such as DNS, LDAP, or Active Directory servers, which need to be started first). Review your tier 1 app service dependencies and their IOPS requirements (see the point above for throughput considerations).

Key Points (3PAR):

  • The boot LUN should be the lowest-ordered LUN number exported to the host (3PAR recommended); however, some arrays assign LUN 0 to the controller, in which case LUN 1 can be used.
  • NOTE: With the introduction of the Microsoft Storport driver, booting from a SAN has become less problematic. Refer to http://support.microsoft.com/kb/305547.
  • For the initial boot, restrict the host to a single-path connection to the 3PAR array: only a single path should be available on the HP 3PAR StoreServ Storage, and a single path on the host to the VLUN that will be the boot volume. (This can be changed after the host has booted and you have installed the MPIO driver.)
  • It goes without saying: check that your SAN, FC switches, servers and HBA cards are running the latest firmware.
  • Ensure appropriate zoning techniques are applied (see my best practice guide)
  • If you are using clustering ensure nodes in a cluster have sole access to the boot LUN (1:1 mapping), using LUN masking (array side).
  • Server side HBA configuration (This can vary depending on HBA vendor – check your documentation)
  • Use soft zoning (zoning per pWWN); this is generally a requisite for HP 3PAR, and in terms of booting from SAN it provides more flexibility. However, if the HBA card fails you will need to update the LUN masking and soft-zoning configurations.

 3PAR: Creating & Exporting Virtual Volumes

Virtual volumes are the only data layer visible to hosts. After devising a plan for allocating space for host servers on the HP 3PAR StoreServ Storage, create the VVs for eventual export as LUNs to the Windows Server 2012/2008 host server.

You can create volumes that are provisioned from one or more common provisioning groups (CPGs). Volumes can be fully provisioned from a CPG or can be thinly provisioned. You can optionally specify a CPG for snapshot space for fully-provisioned volumes. (Don’t forget that if your requirements change and you need to convert these volumes to thinly-provisioned volumes, or vice versa, you can use the 3PAR tune operation.)

Using the HP 3PAR Management Console:

  1. From the menu bar, select: Actions→Provisioning→Virtual Volume→Create Virtual Volume
  2. Use the Create Virtual Volume wizard to create a base volume.
  3. Select one of the following options from the allocation list: ‘Fully Provisioned’ / ‘Thinly Provisioned’
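
The same can be sketched from the HP 3PAR CLI. As a rough, hedged example (the CPG name, volume name and size below are illustrative, not from this guide):

```shell
# Thinly-provisioned 50 GiB virtual volume from an example CPG
createvv -tpvv FC_r5_CPG win2012_boot_vv 50g

# Or fully provisioned from the same CPG (omit -tpvv)
# createvv FC_r5_CPG win2012_boot_vv 50g

# Verify the new volume
showvv win2012_boot_vv
```

Check the CLI reference for your HP 3PAR OS version, as option names can vary between releases.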

Next, perform soft zoning / LUN masking; see the key point mentioned earlier about presenting only a single path to the host. After you have installed the MPIO drivers (post OS install), present the additional paths.
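As a sketch, the array-side host definition and single-path export might look like the following on the 3PAR CLI (the hostname, WWN placeholder and persona value are illustrative; confirm the correct persona for your OS and 3PAR OS version):

```shell
# Define the host with a single initiator WWN only (single path for the initial boot)
createhost -persona 2 win2012-host01 10000000C9XXXXXX

# Export the boot volume as the lowest-ordered LUN (LUN 0)
createvlun win2012_boot_vv 0 win2012-host01

# Confirm the export
showvlun -host win2012-host01
```

The remaining initiator WWNs are added later with createhost -add, after the OS and MPIO driver are installed.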

Configuring Brocade HBA to boot from SAN:

  1. Check and enable the HBA BIOS (the BIOS must be disabled for any arrays that are not configured for boot from SAN).
  2. Enable one of the following boot LUN options:
  • Auto Discover—When enabled, boot information, such as the location of the boot LUN, is provided by the fabric (this is the default value).
  • Flash Values—The HBA obtains the boot LUN information from flash memory.
  • First LUN—The host boots from the first LUN visible to the HBA that is discovered in the fabric.
  3. Select a boot device from the discovered targets.
  4. Save changes and exit.

Configuring Emulex HBA to boot from SAN:

  1. Boot the Windows Server 2012/2008 system following the instructions in the BootBios update manual.
  2. Press Alt+E. For each Emulex adapter, set the following parameters:
  3. Select Configure the Adapter’s Parameters.
  4. Select Enable or Disable the BIOS; for SAN boot, ensure that the BIOS is enabled.
  5. Press Esc to return to the previous menu.
  6. Select Auto Scan Setting; set the parameter to First LUN 0 Device; press Esc to return to the previous menu.
  7. Select Topology.
  8. Select Fabric Point to Point for fabric configurations.
  9. Select FC-AL for direct connect configurations.
  10. Press Esc to return to the previous menu if you need to set up other adapters. When you are finished, press x to exit and reboot.

Configuring Qlogic HBA to boot from SAN:

Note: use the QLogic Fast!UTIL utility to configure the HBA. Record the Adapter Port Name (WWPN) for creating the host definition in the 3PAR IMC (although if the server is zoned correctly, you should see the HBA pWWNs when adding a new host).

  1. Boot the server; as the server is booting, press Alt+Q or Ctrl+Q when the HBA BIOS prompt appears.
  2. In the Fast!UTIL utility, click Select Host Adapter and then select the appropriate adapter.
  3. Click Configuration Settings→Adapter Settings.
  4. In the Adapter Settings window, set the following:
  5. Host Adapter BIOS: Enabled
  6. Spinup Delay: Disabled
  7. Connection Option: 0 for direct connect, 1 for fabric
  8. Press Esc to exit this window.
  9. Click Selectable Boot Settings. In the Selectable Boot Settings window, set Selectable Boot Device to Disabled.
  10. Press Esc twice to exit; when you are asked whether to save NVRAM settings, click Yes.

Connecting Multiple Paths for Fibre Channel SAN Boot

After the Windows Server 2012/2008 host completely boots up and is online, connect additional paths to the fabric or the HP 3PAR disk storage system directly by completing the following tasks.

  1. On the HP 3PAR StoreServ Storage, issue createhost -add <hostname> <WWN> to add the additional paths to the defined HP 3PAR StoreServ Storage host definition.
  2. On the Windows Server 2012/2008 host scan for new devices.
  3. Reboot the Windows Server 2012/2008 system.
  4. Install the following patch: KB2849097.
  5. Set up multipathing, and install the following patches: KB2406705 and KB2522766.
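
The device scan in step 2 can be performed from an elevated command prompt, for example with diskpart (a sketch; you can equally rescan from Disk Management or Device Manager):

```shell
REM Rescan the host for newly presented LUNs (elevated prompt)
diskpart
DISKPART> rescan
DISKPART> exit
```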

Windows Server 2008, Server 2012 implementation steps:

On the first Windows Server 2012 or Windows Server 2008 reboot following an HP 3PAR array firmware upgrade, whether a major upgrade or an MU update within the same release family, the Windows server will mark the HP 3PAR LUNs “offline.”

This issue occurs only in the following configurations:

  1. HP 3PAR LUNs on Windows standalone servers.
  2. HP 3PAR LUNs that are used in Microsoft Failover Clustering and are not configured as “shared storage” on the Windows failover cluster. If HP 3PAR LUNs that are used in Microsoft Failover Clustering are configured as shared storage, then they will not experience the same problem (that is, be marked offline) as in a Windows standalone-server configuration.

When the HP 3PAR LUNs are marked offline, you must follow these steps so that applications can access the HP 3PAR LUNs again:

  1. Click Computer Management→Disk Management.
  2. Right-click each of the HP 3PAR LUNs.
  3. Set the LUN online.

HP recommends the execution of Microsoft KB2849097 on every Windows Server 2008/2012 host connected to a HP 3PAR array prior to performing an initial array firmware upgrade. Subsequently, the script contained in KB2849097 will have to be rerun on a host each time new HP 3PAR LUNs are exported to that host.

KB2849097 is a Microsoft PowerShell script designed to modify the Partmgr Attributes registry value that is located at: HKLM\System\CurrentControlSet\Enum\SCSI\<device>\<instance>\DeviceParameters\Partmgr.

NOTE: The following procedure will ensure proper execution of KB2849097, which will prevent the HP 3PAR LUNs from being marked offline when the Windows server is rebooted following an array firmware upgrade.

Save the following script as ‘.ps1’ on your system:

$val = 0
$vendor = Read-Host "Enter Vendor String"
$devIDs = Get-ChildItem "HKLM:\SYSTEM\CurrentControlSet\Enum\SCSI\Disk*Ven_$vendor*\*\Device Parameters\"

foreach ($id in $devIDs)
{
    $error.Clear()
    $regpath = $id.PSPath + "\Partmgr\"
    Set-ItemProperty -Path $regpath -Name Attributes -Value $val -ErrorAction SilentlyContinue

    if ($error) # didn't find the path, create it and try again
    {
        New-Item -Path $id.PSPath -Name Partmgr
        Set-ItemProperty -Path $regpath -Name Attributes -Value $val -ErrorAction SilentlyContinue
        $error.Clear()
    }

    Get-ItemProperty -Path $regpath -Name Attributes -ErrorAction SilentlyContinue | Select Attributes | fl | Out-String -Stream
}

Windows Server 2008/2012 requires the PowerShell execution policy to be changed to RemoteSigned to allow execution of external scripts. This must be done before the script is executed. To change the PowerShell execution policy, open the PowerShell console and issue the following command:

Set-ExecutionPolicy RemoteSigned

You might be prompted to confirm this action by pressing y.

The next step is to save the script as a .ps1 file to a convenient location and execute it by issuing the following command in a PowerShell console window:

C:\ps_script.ps1

The above command assumes that the script has been saved to C:\ under the name ps_script.ps1.

You will then be prompted to provide a Vendor String, which is used to distinguish between different vendor types. The script will only modify those devices whose Vendor String matches the one that has been entered into the prompt. Enter 3PAR in the prompt to allow the script to be executed on all HP 3PAR LUNs currently presented to the host as shown in the output below:

Enter Vendor String: 3PAR

The script will then run through all HP 3PAR LUNs currently presented to the host and set the Attributes registry value to 0. In order to verify that the Attributes value for all HP 3PAR LUNs were properly modified, issue the following command:

Get-ItemProperty -Path "HKLM:\SYSTEM\CurrentControlSet\Enum\SCSI\Disk*Ven_3PARdata*\*\Device Parameters\Partmgr" -Name Attributes

The Attributes value should be set to 0, as shown in the example below:

PSPath       : Microsoft.PowerShell.Core\Registry::HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Enum\SCSI\Disk&Ven_3PARdata&Prod_VV\5&381f35e2&0&00014f\Device Parameters\Partmgr
PSParentPath : Microsoft.PowerShell.Core\Registry::HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Enum\SCSI\Disk&Ven_3PARdata&Prod_VV\5&381f35e2&0&00014f\Device Parameters
PSChildName  : Partmgr
PSDrive      : HKLM
PSProvider   : Microsoft.PowerShell.Core\Registry
Attributes   : 0

If Attributes reads 0, you are good to go.

Setting up Multipathing (Windows 2008/2012)

For high-availability storage with load balancing of I/O and improved system and application performance, Windows Server 2012/2008 requires the native Microsoft MPIO and the StorPort miniport driver.

To configure Microsoft MPIO for HP 3PAR storage and resolve issues with MPIO path failover, it is recommended that hotfixes KB2406705 and KB2522766 be installed on all versions of Windows Server 2008 up to and including Windows Server 2008 R2 SP1.

Windows Server 2008, Windows Server 2008 SP1, and Windows Server 2008 SP2 also require that hotfix KB968287 be installed to resolve an issue with MPIO path failover. All three patches (KB2522766, KB968287, KB2406705) are required for non-R2 versions of Windows Server 2008. Only two patches (KB2522766 and KB2406705) are required for R2 versions of Windows Server 2008.

  1. If you have not already done so, check HBA vendor documentation for any required support drivers, and install them.
  2. If necessary, install the StorPort miniport driver.
  3. If the MPIO feature is not enabled, open the Server Manager and install the MPIO feature. This will require a reboot.
  4. After rebooting, open the Windows Administrative Tools and click MPIO.
  5. In the MPIO Devices tab, click the Add button; the Add MPIO Support popup appears.
  6. In the Device Hardware ID: text box, enter 3PARdataVV, and click OK.
  7. Reboot as directed. (You can also use the MPIO CLI, mpclaim, to add 3PARdataVV.)

The command is:

mpclaim -r -i -d "3PARdataVV"
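
After the reboot, you can sanity-check the claim from the same utility. A hedged sketch (consult the mpclaim documentation for your Windows version):

```shell
REM List the MPIO-controlled disks; the 3PAR devices should now appear here
mpclaim -s -d
```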

Configuring MPIO for Round Robin

Note from HP: a Windows Server 2008 server connected to an HP 3PAR StoreServ Storage running HP 3PAR OS 2.2.x or later requires that the multipath policy be set to Round Robin.

Windows Server 2012 or Windows Server 2008 R2 servers do not need to change the multipath policy, as it defaults to Round Robin. If the server is running any supported Windows Server 2008 version prior to Windows Server 2008 R2, and the server is connected to an HP 3PAR array running HP 3PAR OS 2.2.x, the multipath policy will default to failover and must be changed to Round Robin. However, if the OS version on the HP 3PAR array is HP 3PAR OS 2.3.x or later, then you must use HP 3PAR OS host persona 1 for Windows Server 2008 R2 or host persona 2 for Windows Server 2008 non-R2 so that the multipath policy defaults to Round Robin.

To verify the default MPIO policy, follow these steps:

  1. In the Server Manager, click Diagnostics; select Device Manager. Expand the Disk drives list.
  2. Right-click an HP 3PAR drive to display its Properties window and select the MPIO tab.
  3. Select Round Robin from the drop-down menu.
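
The same policy can also be applied from the command line with mpclaim. A hedged sketch (policy numbering per Microsoft's mpclaim documentation, where 2 = Round Robin; verify on your Windows version):

```shell
REM Set the system-wide default load-balance policy to Round Robin (2)
mpclaim -l -m 2

REM Confirm the policy on a specific MPIO disk (disk numbers come from mpclaim -s -d)
mpclaim -s -d 0
```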

Reference Documents:

HP 3PAR Windows Server 2008 /2012 Implementation Guide

4 comments

  1. Hello,

    Do you know if currently HP 3PAR StoreServ 7200 supports FC direct attach to standard Emulex/QLogic/Brocade FC HBAs?

    Thanks,

    Nadav

    1. Hi Nadav,
      Are you talking about directly connecting a host (Server) to a 7200?
      The controller node ports operate in the following modes:
      ‘Target’ – Hosts
      ‘Initiator’ – Disk Cages (in the case of SAS)
      ‘Initiator RCFC’ – Remote Copy over FC (Peer to Peer / 3PAR to 3PAR)

      So yes this could be achieved by changing the connection type on the FC ports on the StoreServ nodes to ‘point’ & Connection mode to ‘Host’.
      Is connecting a host directly to the StoreServ supported? I can’t answer for HP, but I suspect not.

      Why ?
      You should be using soft-zoning techniques utilising fabric switches and as follows – even ports to fabric A, odd ports to fabric B (see my guide here: http://goo.gl/iyRr2F)
      The way I understand it, the StoreServ uses node port virtualisation to mirror cache I/O (Persistent Cache) to provide redundancy in the event of node failure – you can’t achieve this using a direct connection.
      This feature also allows you to scale a 2 node system to an 8 node system non-disruptively.

      hope this helps

  2. Hi,
    I have an issue using one volume for multiple Windows Server 2012 R2 Hyper-V hosts. I installed MPIO on 4 Hyper-V hosts and store all the VMs on this volume, but the volume often errors, and when it does, all the VMs stored on it fail.
    I ran checkhealth on the 3PAR but everything is OK.
    Do you know what might cause my issue?
    Please help me, this issue is a real headache.
    Thanks

    1. Hi, what kind of errors do you experience?
      Here are a couple of things to check:
      – Have you isolated the issue down to one host in particular, or does the problem affect all of them?
      – Check the Windows Server 2008/2012 implementation guide in case you missed anything, and be sure to run the PowerShell script provided to prevent volumes from being marked offline: http://h20628.www2.hp.com/km-ext/kmcsdirect/emr_na-c03290621-14.pdf
      – I have experienced odd issues with write requests, but that was down to me not following the deployment guide.

      Hope this helps
      Thanks Gareth
