VMware vSphere 5.5

Virtualising SCO UNIX OpenServer 6 with vSphere 5.5


Rock solid reliability with SCO Unix and VMware vSphere

OpenServer® Release 6.0 is an impressive operating system for low-cost, commodity hardware that features large file support and support for a broad array of modern applications. Xinuos now owns the rights to the 'SCO' name and has released several UNIX variants, specifically OpenServer and UnixWare. These systems are widely used in small offices, point-of-sale (POS) systems and back-office database server deployments.

Using OpenServer 6 as a base, UnXis has built an optimized virtual appliance for VMware (OpenServer 6.0.0V, version 1.0.0). This virtual appliance uses a subset of existing and updated device drivers that provide optimal performance in vSphere environments, and ships with VMware Tools pre-installed.

OpenServer 6 is currently not officially supported on vSphere 5.5. However, older releases of SCO OpenServer and UnixWare are supported. You can find the guest OS compatibility matrix here: http://www.vmware.com/resources/compatibility/search.php

SCO OpenServer 6: UNIX on Intel x86

New Features:

  • Multi-Processor Support. OpenServer 6 increases multiprocessor support from 4 to 32 processors, taking advantage of the power of more modern hardware. SVR5 is a hardened kernel that runs on low-cost, industry-standard servers and is capable of near-linear scaling as resources are added to the system.
  • Ipfilter for Firewall and NAT Functionality. Ipfilter technology allows OpenServer 6 to be configured as a firewall or NAT gateway.
  • Multi-Threaded Kernel. By incorporating SVR5 technology into OpenServer Release 6, the kernel is now multi-threaded and supports more modern applications.
  • Includes KDE. With the addition of KDE, OpenServer 6 now has a modern, full-featured desktop, enabling a greater ease of use.
  • Increased Memory Support. Memory support increases from 4 GB to 64 GB in OpenServer 6. This enables the product to run and support more powerful applications and hardware.
  • IPsec. Encrypts IP traffic for security and implements Virtual Private Network (VPN) functionality.
  • OpenSSH and OpenSSL. OpenSSH allows logging into and executing commands on a remote computer, providing secure, encrypted communications between two untrusted hosts over an insecure network; OpenSSL supplies the underlying cryptographic libraries.
  • Kernel Privileges. The SVR5 kernel provides a fine-grain privilege mechanism. Using fine-grain privileges, the system can grant a subset of root powers to binaries, allowing them to achieve specific objectives without exposing the system to potential abuse/exploits.
  • Supports NFS v3 with TCP. Network File System (NFS) is an industry standard protocol for sharing files across networks. NFS v3 adds support for large files and NFS over the TCP protocol.
  • Encrypted File System and Archives. This file system encrypts data stored on disk; the data can then be decrypted using private keys.
  • VxFS Filesystem. The high-performance VxFS filesystem is journaled, ensuring data integrity in the event of a crash.
  • Hot-Plug Memory Support. With OpenServer 6 you can add additional memory into the system without a system reboot.

vSphere Supported Features:

  • vMotion
  • Storage vMotion
  • DRS – Distributed Resource Scheduler
  • HA – High Availability

VMware System Resource Requirements:

Reviewing the documentation, there aren't many restrictions, and even the smallest environments should be able to support this.

  • A datastore with at least 11 GB of free disk space to hold the OpenServer 6.0.0V for VMware virtual machine. Note that additional disk space can be configured for your virtual machine once the import is complete. A quick pre-check of these requirements is sketched after this list.
  • A virtual switch to which the OpenServer 6.0.0V for VMware virtual network adapter can connect.
  • At least 1 GB of free memory. Note that the amount of memory used by your virtual machine can be increased once the import has completed.
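The datastore free-space figure above is easy to sanity-check before importing. Below is a minimal sketch using pyVmomi (the vSphere Python SDK, assumed to be installed); the vCenter hostname and credentials are placeholders, and the unverified-SSL context is for lab use only.

    # Hypothetical pre-check: confirm a datastore has the ~11 GB needed for the appliance.
    import ssl
    from pyVim.connect import SmartConnect, Disconnect
    from pyVmomi import vim

    context = ssl._create_unverified_context()      # lab use only; skips certificate validation
    si = SmartConnect(host='vcenter.example.com', user='administrator@vsphere.local',
                      pwd='password', sslContext=context)
    content = si.RetrieveContent()

    # Walk every datastore in the inventory and report its free space.
    view = content.viewManager.CreateContainerView(content.rootFolder, [vim.Datastore], True)
    for ds in view.view:
        free_gb = ds.summary.freeSpace / (1024 ** 3)
        flag = 'OK' if free_gb >= 11 else 'too small'
        print('%-20s %6.1f GB free  [%s]' % (ds.summary.name, free_gb, flag))

    view.DestroyView()
    Disconnect(si)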

SCO OpenServer 6 Editions:

                  Starter Edition (Trial Entitlement)   Enterprise Edition
    Users         2                                      10
    Memory        1 GB                                   4 GB
    CPUs          1                                      4
    Restrictions  Special user bump                      None

First, download the OpenServer 6.0.0V ISO (which contains the compressed OVF file), available here: link


Deployment Instructions:

  1. After you have downloaded the ISO image, extract it to reveal the OVF file.
  2. Launch the VMware vSphere Web Client and import the OVF file: select your vCenter Server instance -> Actions -> Deploy OVF Template.
  3. On the Source screen, browse to the location of the OVF template, select Open, then click Next on the OVF Template Details screen.
  4. On the End User License Agreement screen, read and accept the EULA.
  5. Review the template details and select Next.
  6. On the Name and Location screen, specify a unique name for your virtual machine and its inventory location.
  7. Select the host or cluster for the appliance deployment.
  8. Select the datastore for the virtual appliance configuration files (this shouldn't be an NFS datastore, as the appliance requires fully provisioned disks; SCO OpenServer itself does, however, support NFS v3 as an operating system).
  9. Select a virtual network.
  10. Review the summary screen, select Next and then Finish.

If all goes well, the appliance should be imported into your vSphere environment.
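As an alternative to the web client wizard above, the same OVF can be deployed from the command line with VMware's ovftool. The sketch below simply shells out to ovftool from Python; the OVF path, vCenter address, datastore, network and cluster names are all placeholders, so adjust them (and verify the flags against your ovftool version) before use. ovftool will prompt for the vCenter password.

    # Hypothetical scripted deployment of the extracted OVF using VMware ovftool.
    # ovftool must be installed locally; every name below is a placeholder.
    import subprocess

    cmd = [
        'ovftool',
        '--acceptAllEulas',                    # accept the EULA non-interactively
        '--name=SCO-OpenServer-6',             # VM name in the vCenter inventory
        '--datastore=datastore1',              # block datastore (avoid NFS, see step 8)
        '--network=VM Network',                # port group for the appliance NIC
        '/tmp/osr600v/openserver600v.ovf',     # path to the extracted OVF file
        'vi://administrator@vcenter.example.com/Datacenter/host/Cluster01',
    ]
    subprocess.run(cmd, check=True)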

OS Setup Instructions:

  1. Power on virtual machine.
  2. Enter keyboard locale settings.
  3. Accept end user license agreement (EULA).
  4. Select license scenario (option 5 for a seven-day trial).
  5. Enter FQDN hostname (sco6.gemini.com).
  6. Enter an IP address / subnet mask & default gateway.
  7. Enter IP address of primary name server.
  8. Review network setup summary.
  9. Select mail system.
  10. Select language locale.
  11. Enter the root password (either define one yourself or have a pronounceable password picked for you).
  12. Select system security level (default: Traditional).
  13. Select your SCO login setting.
  14. Select time zone.
  15. Set name of mail server.
  16. The server will reboot.
  17. Log in & have fun! The KDE desktop also includes the Mozilla Thunderbird mail client.

Reference documents:

http://ftp.sco.com/pub/openserver600v/600v/iso/openserver600v_v100vm/osr600v_vmware_1.0.0_GSG.html
http://www.sco.com/products/unix/virtualization/faq.html

VMware VSAN – Storage Virtualisation

VMware Virtual SAN (VSAN)

VMware have clearly laid out plans for the complete convergence of the IT infrastructure, as seen with the release of NSX and VSAN at VMworld 2013. I’ve highlighted some important features of VMware’s new storage solution ‘VSAN’.

VMware VSAN is fully integrated with VMware vSphere, automatically aggregating local host storage in a cluster so that it can be presented as block-level shared storage for virtual machines. Its function is to provide both HA (High Availability) and scale-out storage features.

This is leaps ahead of the VMware VSA (Virtual Storage Appliance) released in 2011, which was aimed at SMBs with 2-3 hosts.

Here is a quick reminder of some of the strict requirements & limitations we had with the VSA.

  • Does not support memory overcommitment.
  • Once installed you cannot add additional storage to the VSA cluster.
  • The VSA can only be configured in a two or three node cluster (which can’t be changed after install).
  • You cannot run vCenter on the back-end storage supported by VSA cluster.
  • Requires a minimum of 4 NICs (two for front-end and two for back-end communication).
    • Total IP addresses required for a 2-node cluster = 11.
    • Total IP addresses required for a 3-node cluster = 14.
  • Requires greenfield hosts (they cannot be hosting VMs prior to the VSA install).

VSAN Requirements:

vCenter Server

  • VSAN requires a vCenter Server running vSphere 5.5 (VSAN is also supported on the new vCenter Server Appliance!).
  • VSAN is configured and monitored using the vSphere 5.5 Web Client.

Host/Storage:

  • Each vSphere 5.5 host participating in the VSAN cluster requires a disk controller (SAS/SATA – RAID Controller).
  • Passthrough mode is required because VSAN ‘talks’ directly to the SSD and HDD.
  • Disks to be used should not have any RAID configuration applied (Parity/Striping is looked after by the VSAN).
  • Check to make sure that your controller is listed in the HCL.
  • Each host participating in the VSAN cluster must have at least one SSD and one HDD. (The SSD provides read/write cache for I/O to the backing HDDs, similar to conventional cache on a storage processor. Note: the SSDs do not contribute to the size of the VSAN datastore; they are used for cache operations only.) A rough usable-capacity estimate based on this is sketched after the note below.

Note: the beta version will have an 8-host limit; this figure will be adjusted when VSAN reaches GA.
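Because the SSDs are cache-only, a back-of-the-envelope estimate of usable VSAN capacity comes from the magnetic disks alone, divided by the number of copies each object keeps. The numbers below are made up purely for illustration:

    # Rough, illustrative capacity estimate for a VSAN cluster (example figures only).
    hosts = 4                 # hosts contributing storage
    hdds_per_host = 5         # magnetic disks per host (SSDs are cache only, not counted)
    hdd_size_gb = 900         # capacity per HDD in GB
    ftt = 1                   # NumberOfFailuresToTolerate (covered under Configurable Options below)

    raw_gb = hosts * hdds_per_host * hdd_size_gb
    usable_gb = raw_gb / (ftt + 1)    # each replica consumes a full copy of the data

    print('Raw VSAN datastore capacity : %d GB' % raw_gb)
    print('Approx. usable with FTT=%d   : %d GB (ignoring metadata and slack)' % (ftt, usable_gb))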

Host Network Interface Cards:

  • Each host participating in the VSAN cluster must have at least one 1Gb NIC; however, VMware recommends 10Gb CNAs for VMs with high workloads.
  • VSAN is supported on both standard virtual switches as well as distributed virtual switches.
  • A VMkernel port must be created for VSAN communication; this is used for inter-cluster-node communication and read/write operations for virtual machines residing on parent hosts belonging to a VSAN cluster.
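Once the VMkernel adapter exists, it needs to be tagged for Virtual SAN traffic. The sketch below assumes the esxcli vsan namespace that ships with ESXi 5.5 and simply wraps it from Python on the host (or adapt it to your remote-execution tool of choice); 'vmk1' is a placeholder for whichever VMkernel adapter you created for VSAN.

    # Hedged sketch: tag an existing VMkernel adapter for VSAN traffic by wrapping
    # esxcli from Python on the ESXi 5.5 host. 'vmk1' is a placeholder interface name.
    import subprocess

    vmk = 'vmk1'

    # Enable VSAN traffic on the chosen VMkernel interface.
    subprocess.check_call(['esxcli', 'vsan', 'network', 'ipv4', 'add', '-i', vmk])

    # List the interfaces currently carrying VSAN traffic to confirm the change.
    subprocess.check_call(['esxcli', 'vsan', 'network', 'list'])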

VSAN Performance

As with any storage solution, understanding your IOPS requirement is paramount, and to achieve the necessary IOPS you need to understand your workloads. vSphere VSAN uses SSDs as a read and write cache for I/O, which significantly improves performance, although we are yet to see any real-world numbers.

SSD based Read Caching on the VSAN:

This is a cache of locally accessed disk blocks for virtual machines, specifically the blocks used by the application running in the virtual machine. The cached blocks for a virtual machine are not necessarily held on the SSD of the host where the virtual machine is running. vSphere VSAN mirrors a directory of cached blocks between hosts in a VSAN cluster; if the virtual machine needs cached data that is not local to its vSphere host, the interconnect (VMkernel port) is used to retrieve the cached blocks from the host whose SSD does hold them. Hence VMware recommends using 10Gb CNAs to reduce latency.

Note: If the required disk blocks are not in cache on any of the hosts, the data is retrieved directly from the backing HDDs.

SSD Write Caching on the VSAN:

The SSD is also used as a write cache to reduce I/O latency. Because write operations go to SSD storage, a copy of the data must exist elsewhere in the VSAN cluster in case of node failure, so that write operations committed to cache are not lost. When a write operation is initiated by an application in the virtual machine, it is mirrored to both the local cache and to remote hosts in the VSAN cluster.

Note: A write operation must be committed to the SSD on the hosts holding its replicas before it is acknowledged; the data is destaged to the HDDs later.

Availability

vSphere VSAN uses RAIN (Reliable Array of Independent Nodes) – object level redundancy, so in a converged infrastructure you can survive the loss of an underlying component (NIC port(s), disk, vSphere host).

In the past, HA admission control gave us three enforcement policies: (a) 'Host Failures the Cluster tolerates', (b) 'Percentage of cluster resources reserved…' and (c) 'Specify failover hosts'. vSphere administrators can now also define how many host, network or disk failures a virtual machine can tolerate in a VSAN cluster.

Note: In the event of node failure there is no need for all of the data to be migrated to other nodes in the cluster (copies of virtual machines (replicas) reside on multiple nodes in the cluster).

Note: For an object to be accessible in VSAN, more than 50% of its components must be accessible.

Configurable Options

NumberOfFailuresToTolerate – This allows us to define the number of failures (host, network or disk) the cluster can tolerate while still maintaining availability. If this is set, it specifies that configurations must contain at least NumberOfFailuresToTolerate + 1 replicas.

Note: VMware state that ‘any disk failure on a single host is treated as a “failure” for this metric. Therefore, the object cannot persist if there is a disk failure on host01 and another disk failure on host 02 when you have NumberOfFailuresToTolerate set to 1.’
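To make the replica arithmetic concrete, here is a small worked example of how NumberOfFailuresToTolerate translates into replicas, witnesses and the minimum host count (assuming a stripe width of 1; the 2n+1 host figure follows from the 'more than 50% of components accessible' rule noted earlier):

    # Worked example: how NumberOfFailuresToTolerate (FTT) drives replica and host counts.
    # Assumes a stripe width of 1; witness counts grow with striping in practice.
    def vsan_object_layout(ftt):
        replicas = ftt + 1            # full data copies of the object
        witnesses = ftt               # witness components acting as tie-breakers
        min_hosts = 2 * ftt + 1       # hosts needed to place all components separately
        return replicas, witnesses, min_hosts

    for ftt in (0, 1, 2, 3):
        replicas, witnesses, min_hosts = vsan_object_layout(ftt)
        print('FTT=%d -> %d replica(s), %d witness(es), minimum %d host(s)'
              % (ftt, replicas, witnesses, min_hosts))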

Number of Disk Stripes per Object – This defines the number of physical disks across which each replica of a storage object is distributed. A value higher than 1 might result in better performance if read caching is not effective, but it will also result in greater use of system resources. VMware state that the default disk stripe number of 1 should meet the requirements of most, if not all, virtual machine workloads, and recommend changing the disk striping value only for a small number of high-performance virtual machines.

Flash Read Cache Reservation – The amount of flash reserved as read cache for the storage object. This is specified as a percentage of the logical size of the virtual machine disk storage object. If cache reservation is not defined the VSAN scheduler manages ‘fair’ cache allocation.

Object Space Reservation – This defines the percentage of the logical size of the storage object that should be reserved on the HDDs during initialisation. VSAN datastores are thin provisioned by default; the ObjectSpaceReservation is the amount of disk space reserved on the VSAN datastore, specified as a percentage of the VM disk.

Note: VMware state that if you provision a virtual machine and select the disk format to be either lazy zeroed thick or eager zeroed thick, this setting overrides the ObjectSpaceReservation setting in the virtual machine storage policy.
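Since both reservations above are expressed as a percentage of the object's logical size, the arithmetic is simple. An illustrative example with made-up numbers:

    # Illustrative only: what the percentage-based reservations mean for a single VMDK.
    vmdk_logical_gb = 100          # logical size of the virtual disk object
    flash_read_cache_pct = 10      # FlashReadCacheReservation (% of logical size)
    object_space_pct = 20          # ObjectSpaceReservation (% of logical size)

    ssd_reserved_gb = vmdk_logical_gb * flash_read_cache_pct / 100.0
    hdd_reserved_gb = vmdk_logical_gb * object_space_pct / 100.0

    print('SSD read cache reserved : %.1f GB' % ssd_reserved_gb)   # 10.0 GB of flash
    print('HDD space reserved      : %.1f GB' % hdd_reserved_gb)   # 20.0 GB reserved up front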

Force provisioning (disabled by default) – This is an override to forcibly provision an object even if the capabilities in the virtual machine storage policy cannot be satisfied. VSAN will attempt to bring that object into compliance if and when resources become available.

Virtual Machine Policies & Compliance:

When a VSAN datastore is created, a set of policies is defined to identify the capabilities of the underlying environment (high-performance disks, NL disks, etc.). The administrator uses a virtual machine storage policy to classify the application workload requirements. Using VM storage policies, administrators can specify a set of required storage capabilities for a virtual machine, or more specifically a set of requirements for the application running in the virtual machine. When a virtual machine is deployed, if the requirements in the VM storage policy attached to the virtual machine can be met by the VSAN datastore, the VSAN datastore will be identified as compliant.
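Conceptually, compliance is a comparison of what the policy requests against what the datastore advertises. The toy sketch below is not the vSphere/SPBM API, just an illustration of that matching step with invented capability names and values:

    # Toy illustration of policy compliance - NOT the vSphere/SPBM API.
    # A VM storage policy lists required capabilities; a datastore advertises what it can offer.
    datastore_capabilities = {
        'numberOfFailuresToTolerate': 1,
        'stripeWidth': 12,                 # maximum stripes the disk layout allows
        'flashReadCacheReservation': 100,  # maximum reservable percentage
    }

    vm_storage_policy = {
        'numberOfFailuresToTolerate': 1,
        'stripeWidth': 2,
        'flashReadCacheReservation': 10,
    }

    def is_compliant(policy, capabilities):
        # Compliant when every requested value is within what the datastore can provide.
        return all(policy[key] <= capabilities.get(key, 0) for key in policy)

    print('Compliant' if is_compliant(vm_storage_policy, datastore_capabilities) else 'Non-compliant')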

Here is the whitepaper released by VMware: VMware_Virtual_SAN_Whats_New.pdf

I’m looking forward to revisiting this post after the public beta is released with some more information on how it works.

You can register for the beta here: http://www.vmware.com/vsan-beta-register


3PAR StoreServ Zoning Best Practice Guide for vSphere

Hopefully it will benefit others who are looking at implementing an HP 3PAR StoreServ solution. Thanks for posting it, Craig!

3PAR StoreServ 7400 – Zoning Best Practice Guide


VMFocus

This is an excellent guide written by Gareth Hogarth, who recently implemented a 3PAR StoreServ and was concerned about the lack of information from HP in relation to zoning. Being a 'stand-up guy', Gareth decided to perform a lot of research and has put together the '3PAR StoreServ Zoning Best Practice Guide' below.

This article focuses on zoning best practices for the StoreServ 7400 (4 node array), but can also be applied to all StoreServ models including the StoreServ 10800 8-node monster.

3PAR StoreServ Zoning Best Practice Guide

Having worked on a few of these, I found that a single document on StoreServ zoning best practice doesn't really exist. There also appear to be conflicting arguments on whether to use Single Initiator – Multiple Target zoning or Single Initiator – Single Target zoning. The information herein can be used as a guideline for all 3PAR supported host presentation types (VMware, Windows…
