VMware vSphere

VCP 6.5 Study Notes – Exam Number 2V0-622

As with most certification renewals, it can be difficult to find time to study. I’ve put together a condensed set of notes I used for the VCP6.5-DCV exam.

There are some fairly comprehensive resources out there, and if you are new to the VMware VCP I would highly recommend building a lab and running through each of the sections identified in the official exam guide.

If you don’t have a lab, have a look at the VMware Hands On Labs – a fantastic resource, and one I use regularly when I need to refresh my knowledge, given the large remit of technologies I need to cover.

Hope this guide is useful to anyone taking their VCP6.5-DCV exam. My condensed set of notes can be downloaded via PDF – here: vcp6.5-dcv – study notes v1.0

Official VMware VCP6.5 DCV Exam Guide : https://www.vmware.com/content/dam/digitalmarketing/vmware/en/pdf/certification/vmw-vcp65-dcv-2v0-622-guide.pdf

VCDX Defence Dates

The 2017 defence dates have now been released. Here is the official statement from Karl Childs.

VMware Certified Design Expert (VCDX) application deadlines and defense dates for 2017! The calendar will have the most current information soon. Be sure to always confirm the deadline dates as you prepare to submit your application.

The dates listed below apply for every region. Locations include the Palo Alto, US office, the Staines, UK office, and the Sydney, Australia office. Depending on panelist availability, some tracks may not be available on all dates. That detail will be listed in the official calendar, so always be sure to confirm the information there.

We will also be releasing v6 of all tracks, starting with VCDX6-DCV – look for that information to be live soon.

The schedule is:

Dec 2, 2016 – applications due
Feb 13-17, 2017 – defenses

Mar 10, 2017 – applications due
May 22-26, 2017 – defenses

June 9, 2017 – applications due
Aug 21-25, 2017 – defenses

Sep 22, 2017 – applications due
Dec 4-8, 2017 – defenses

Note:  Dates may be subject to change based on physical location availability or other unforeseen issues. Candidates should never book travel until they have received an invitation to defend for the confirmed defense week. Actual physical locations and dates will be updated in the calendar and on the invitations to defend.

vRealize 7 Orchestrator Deployment

This is part of a series of posts looking at the deployment of the VMware vRealize product suite, commencing with vRealize Orchestrator.

VMware vRealize Orchestrator

VMware vRealize Orchestrator is a development and process-automation platform that provides a library of extensible workflows to allow you to create and run automated, configurable processes to manage VMware products as well as other third-party technologies. vRealize Orchestrator automates management and operational tasks of both VMware and third-party applications such as service desks, change management systems, and IT asset management systems.

Platform Architecture

Orchestrator is composed of three distinct layers:

  • An orchestration platform that provides the common features required for an orchestration tool
  • A plug-in architecture to integrate control of subsystems
  • A library of workflows

Orchestrator is an open platform that can be extended with new plug-ins and libraries, and can be integrated into larger architectures through a REST API.
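As a hedged illustration of that REST integration point, the sketch below builds the request to start a workflow. The appliance hostname, credentials and workflow ID are placeholders, and the /vco/api/workflows/{id}/executions endpoint should be verified against the vRO REST API documentation for your version:

```python
# Hypothetical sketch: starting a vRO workflow via the REST API.
# Host, credentials and workflow ID are placeholders.
import base64
import json
import urllib.request


def workflow_url(base, workflow_id):
    """Build the execution URL for a given workflow ID."""
    return "{}/workflows/{}/executions".format(base, workflow_id)


def start_workflow(base, workflow_id, parameters, username, password):
    """POST an execution request; vRO returns 202 Accepted on success."""
    token = base64.b64encode("{}:{}".format(username, password).encode()).decode()
    req = urllib.request.Request(
        workflow_url(base, workflow_id),
        data=json.dumps({"parameters": parameters}).encode(),
        headers={"Authorization": "Basic " + token,
                 "Content-Type": "application/json"},
        method="POST")
    return urllib.request.urlopen(req)


if __name__ == "__main__":
    base = "https://vro.example.local:8281/vco/api"  # placeholder appliance
    print(workflow_url(base, "1234-abcd-5678"))
```

In a lab with self-signed certificates you would also need to supply an SSL context that skips verification; in production, trust the appliance certificate properly instead.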

vRO Architecture

A standard set of plug-ins is provided; third-party plug-ins can also be installed to extend the platform.

Orchestrator comes preconfigured with an embedded PostgreSQL database, which is suitable for small to medium scale environments. External databases are also supported (review the VMware Product Interoperability Matrix for the list of supported external databases).

vRO Appliance Components:

  • SUSE Linux Enterprise Server 11 Update 3 for VMware 64-bit edition
  • Embedded PostgreSQL
  • In-Process ApacheDS LDAP (only recommended for Dev/Test purposes)
  • Orchestrator/Process automation engine

After the appliance has been deployed we can set up the authentication provider to use directory services or vSphere authentication. Note that, according to the documentation, LDAP authentication is deprecated. The default authentication mechanism uses the in-process ApacheDS LDAP, which is fine for testing purposes; for production you should change this to vCenter SSO authentication. VMware recommends using localised authentication providers to avoid long LDAP response times. Similarly, narrowing the LDAP search path to a specific OU should also help.

PostgreSQL Database

PostgreSQL comes baked into the deployment and is suitable for small and medium-scale production purposes. An external database is recommended for large-scale deployments. Orchestrator supports external database deployments of Oracle, Microsoft SQL Server and PostgreSQL. For this implementation I will be using the embedded database, but should you want to use an external database you will need to set this up as a separate workflow.


Once vRealize Orchestrator has been deployed, connectivity is established via the vRO Control Center, a web UI (https://ipOfvROappliance:8283). From the Control Center we will perform some basic configuration and then connect using the vRealize Orchestrator Workflow Designer tool. This will allow us to connect the vRO instance to vCenter. Once connected to vCenter as an extension, we can create and manage workflows from the vSphere Web Client.

Deploying the vRealize Orchestrator Appliance


Prerequisites:

  • VMware vCenter Server deployed and running.
  • Enough compute and storage resources to support the vRO appliance.
  • If using the vSphere Web-UI, install the Client Integration plug-in, as this is required to deploy the appliance.

Deployment: Follow the deployment procedure found on page 26 of the Install and Configuration Guide (note this references the v6 documentation but is essentially the same for v7.x).

The password for the root account of the Orchestrator Appliance expires after 365 days. You can increase the expiry time for an account by logging in to the Orchestrator Appliance as root and running passwd -x number_of_days name_of_account. If you want the Orchestrator Appliance root password to never expire, run passwd -x 99999 root.

Initial Configuration

Here we want to set the authentication mode to vSphere (as we are adopting the simple deployment). Configure the database to use PostgreSQL (embedded).

1a. Open your browser to https://ipOfvROappliance:8281/vco

1b. Under Configure the Orchestrator Server select ‘Orchestrator Control Center’.



2. Login to the vRO Control Center.

3. Welcome to the vRO Control Center.

4. Select Configure Authentication Provider. For this deployment we will use vSphere (PSC SSO domain).

4a. Set the host address to your vCenter server and accept the certificate warnings.


4b. Restart the services when prompted.

5. Configure the Database. I will be using the embedded PostgreSQL db in this deployment.


6. Restart the services


7. Next we want to navigate back to the vCO start page https://ipOfvROappliance:8281/vco

7a. Download and install the Orchestrator Client.

8. Open the Client and login with your vCenter Admin SSO user (administrator@vsphere.local)


9. First up we want to connect this instance of vRO to an endpoint such as vCenter. To do this we need to create our first workflow.

10. Select ‘Workflows’ icon (blueprint) and expand Library -> vCenter -> Configuration.


10a. Select ‘Add a vCenter Server instance’.

10b. Select ‘Start workflow’ (green play button).

10c. Enter your vCenter server hostname/IP address as well as the HTTPS port (443). The location should be set to /sdk. As I am not using any CA signed certificates I will select ‘yes’ to ignore any warnings.


10d. Enter the vCenter admin user/password and select submit.


11. Once the workflow has processed you should be able to view the vCenter server endpoint and resources from the inventory object.


12. The next task (optional) is to register the vRO instance with vCenter as an extension. This will allow us to use the vSphere Web Client to manage and create workflows.

12a. Select ‘Workflows’ -> ‘vCenter’ -> ‘Register vCenter Orchestrator as a vCenter server extension’.

12b. Start the workflow to register vRO with the vCenter server instance.

12c. Set the vCenter instance as: https://FQDNofVC:443/sdk. Select submit to complete the task.


13. To confirm the task has completed, log in to the vSphere Web Client and select vRealize Orchestrator.


14. There you have it, vRealize Orchestrator deployed. In future blog posts we will cover some basics around creating workflows before moving onto the deployment of vRealize Automation.


WSFC/MSCS vSphere 6.x Enhancements

For those that aren’t aware, VMware released an updated Microsoft WSFC Setup and Deployment Guide for vSphere 6.x.
In a previous blog post I covered Microsoft Clustering Design Implications in vSphere 5.x. Fundamentally the deployment of WSFC has not changed significantly. However, there are a couple of new features that I wanted to cover here.
New Features and Requirements:
  • vMotion supported for cluster of virtual machines across physical hosts (CAB deployment) with passthrough RDMs. Note, you must use VM-hardware version 11.
    • VMware recommends updating the heart-beat timeout ‘SameSubnetThreshold’ registry value to 10. Additional info can be found on MS Failover Clustering and NLB Team Blog and in VMware’s updated WSFC Setup and Deployment Guide.
    • The vMotion network must be 10Gbps.
      • A 1Gbps Ethernet link for vMotion of MSCS virtual machines is not supported.
        • Fair enough, but most customer deployments using 10GbE also share that link with other workloads, using NIOC to prioritise traffic to production workloads. So it’s not clear whether the minimum requirement is a dedicated 10GbE link or simply more bandwidth than 1GbE can provide.
    • vMotion is supported for Windows Server 2008 SP2 and above. Windows Server 2003 is not supported.
    • SCSI bus sharing mode set to Physical.
  • ESXi 6.0 supports PSP_RR for Windows Server 2008 SP2 and above releases (same as ESXi 5.5 but with restrictions)
    • Shared disk quorum or data must be provisioned to guest in PassThrough RDM mode only
  • All hosts must be running ESXi 6.x
    • Mixed mode operating with older ESXi revisions not supported.
    • Rolling upgrades of cluster hosts from previous versions of ESXi to ESXi 6.x are not supported.
  • MSCS (Windows Server Failover Clustering (WSFC)) is supported with VMware Virtual SAN (VSAN) version 6.1 and later. See VSAN 6.1 What’s New!
  • In vSphere 6.0, VMware introduced support for using Windows Server Failover Clustering or Microsoft Server Failover Clustering to protect a Windows-based vCenter Server.
  • Modifying the MSCS heartbeat time-out: An MSCS virtual machine can stall for a few seconds during vMotion. If the stall time exceeds the heartbeat time-out interval, then the guest cluster considers the node down and this can lead to unnecessary failover.
    • VMware recommends changing the DWORD ‘SameSubnetThreshold’ registry value in each WSFC node to 10.
  • VMware also warns against deploying WSFC in vSphere environments with memory overcommitment. Memory overcommitment (and, worse, active memory reclamation such as compression and swapping) can cause virtual machine I/O latency to increase, potentially causing failover. Set memory reservations if you are concerned this may affect your WSFC/MSCS nodes.
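A quick sanity check of the heartbeat guidance above: WSFC declares a node down after SameSubnetThreshold consecutive missed heartbeats, sent every SameSubnetDelay milliseconds. The 1000 ms delay used below is the commonly documented default, but this is an assumption – verify it for your Windows release:

```python
# Tolerated network outage before WSFC declares a node down.
# The SameSubnetDelay default of 1000 ms is assumed; check your Windows version.
SAME_SUBNET_DELAY_MS = 1000


def tolerated_outage_seconds(same_subnet_threshold, delay_ms=SAME_SUBNET_DELAY_MS):
    """Missed heartbeats (threshold) multiplied by the heartbeat interval."""
    return same_subnet_threshold * delay_ms / 1000


print(tolerated_outage_seconds(5))   # Windows default threshold -> 5.0 seconds
print(tolerated_outage_seconds(10))  # VMware-recommended value -> 10.0 seconds
```

Raising the threshold from 5 to 10 therefore roughly doubles the outage a node will ride through during a vMotion stall before a failover is triggered.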

Not Supported / Limitations:

  • No Storage vMotion for VMs that are configured with shared disks.
  • No support for WSFC on NFS.
  • Running WSFC nodes on different ESXi versions (Pity as this would have been ideal for ESXi 5.x to ESXi 6.x upgrades).
  • Can’t use WSFC in conjunction with VMware FT.
  • NPIV not supported.
  • Server 2012 Storage Spaces are not supported.

Infrastructure Design & Project Framework

Successfully planning, designing and implementing a virtualisation project can be a very rewarding experience. Whether you are working alone or in a team, you may find the task initially daunting, be unsure of where to start or lack an appropriate framework from which to work. Hopefully this information will support you, whether you have just been given the task or have already completed a virtualisation project and want to identify ways to make the next upgrade or implementation more efficient.

Infrastructure design is a deep subject with many facets and interlinking dependencies between design choices. The four pillars, referred to as compute (see my compute design post), storage (see my storage design post), networking and management, can be very complex to integrate successfully when considering all the options. A great deal of emphasis should be placed on understanding design decisions, as poor planning can lead to additional costs, the project not meeting organisational goals and, ultimately, a failure to deliver. Through each part of the design process it is important that you validate design decisions against the requirements identified through the information gathering process.

Furthermore, design decisions should be continually evaluated against infrastructure qualities such as availability, manageability, performance, recoverability and security.

Project Framework

Use the following key points/stages to plan and build your project:

1. Information Gathering
2. Current State, Future State and Gap Analysis
3. Conceptual, Logical & Physical Design Process
4. Migration and Implementation Planning
5. Functional Testing / Quality Assurance
6. Continuous Improvement
7. Monitoring Performance, Availability and Capacity

1. Information Gathering: Information should be gathered from stakeholders / C-level executives, application owners and subject matter experts to define and identify:

  • The project scope / project boundaries, for example: upgrade the VMware vSphere infrastructure at the organisation’s central European offices only.
  • Project goals, what is it the organisation wants to achieve? For example reduce physical server footprint by 25% before the end of the financial year.
  • Service Level Agreements (SLA), Service Level Objectives (SLO), Recovery Time Objectives (RTO), Recovery Point Objectives (RPO) : [Maximum Tolerable Downtime MTD].
  • Key Performance Indicators (KPI), relating to application response times.
  • Any requirements, both functional and non-functional, e.g. regulatory compliance – HIPAA, SOX, PCI etc. Understand the impact on the design of meeting HIPAA compliance (a US standard, but acknowledged under the EU ISO/IEC 13335-1:2004 information protection guidelines), which states that data in communication must be encrypted (HTTPS, SSL, IPsec, SSH). A functional requirement specifies something the design must do, for example support 5,000 virtual machines, whereas a non-functional requirement specifies how the system should behave, for example: workloads deemed business critical must not be subject to resource starvation (CPU, memory, network, disk) and must be protected using appropriate mechanisms.
  • Constraints: Limit design choices based on data consolidated from the information gathering exercise. An example could be that you need to use the organisation’s existing NFS storage solution. A functional requirement may be that the intended workload you need to virtualise is MS Exchange. Currently, virtualising MS Exchange on NFS is not supported – if the customer had a requirement to virtualise MS Exchange but only had an NFS-based storage solution, the proposal would lead to an unsupported configuration. Replacing the storage solution may not be feasible and may be out of scope for financial reasons.
  • Risks: Put simply, a risk is defined by the probability of a threat, the vulnerability of an asset to that threat, and the impact it would have if it occurred. Risks throughout the project must therefore be recorded and mitigated, regardless of which aspect of the project they apply to. An example risk: a project is aimed at a datacentre that doesn’t have enough capacity to meet the anticipated infrastructure requirements. The datacentre facilities team is working on adding additional power but, due to planning issues, may not be able to meet the deadlines set by the customer. This risk would therefore need to be documented and mitigated to minimise or remove the chance of it occurring.
  • Assumptions: The identification or classification of a design feature without validation. For example: in a multi-site proposal, the assumption that the bandwidth available for datastore replication is sufficient to support the stated recovery time objectives. If the site link has existing responsibilities, how will the inclusion of additional replication traffic affect existing operations? During the design phase you may identify additional assumptions, each of which must be documented and validated before proceeding.

2. Current State, Future State and Gap Analysis:

  • Identifying the current state can be done by conducting an audit of the existing infrastructure, obtaining infrastructure diagrams and system documentation, and holding workshops with SMEs and application owners.
  • A future state analysis is performed after the current state analysis and typically outlines where the organisation will be at the end of the project’s lifecycle.
  • A gap analysis outlines how the project will move from the current state to the future state and more importantly, what is needed by the organization to get there.

3. Conceptual, Logical & Physical Design Process:

  • A conceptual design identifies how the solution is intended to achieve its goals either through text, graphical block diagrams or both.
  • A logical design focuses on the relationships between the infrastructure components – typically it does not contain vendor names or physical details such as the amount of storage or compute capacity available.
  • A physical design shows a detailed description of what solutions have been implemented in order to achieve the project goals. For example: How the intended host design would mitigate against a single point of failure.
  • Get stakeholder approval on design decisions before moving to the implementation phase. Throughout the design process you should continually evaluate design decisions against the goal requirements and the infrastructure qualities (Availability, Manageability, Performance, Recoverability, Security).
    • Availability: Typically concerned with uptime, calculated as a percentage based on the organisation’s service level agreements (SLA). The key point is mitigating single points of failure across all components; your aim is to build resiliency into the design. Availability is calculated as a percentage or ‘nines’ value: [Availability % = ((minutes in a year – average annual downtime in minutes) / minutes in a year) × 100].
    • Manageability: Concerned with the operating expenditure of the proposed solution or object. How well will the solution scale, and how easily can it be managed, implemented, upgraded and patched?
    • Performance: How will the system deliver the required performance metrics? These are typically aligned to the organisation’s KPIs and focus on workload requirements: response times, latency etc.
    • Recoverability: RTO/RPO/MTD. Recovery Time Objective: the time frame associated with service recovery. Recovery Point Objective: how much data loss is acceptable. Maximum Tolerable Downtime: a value derived from the business which defines the total amount of time that a business process can be disrupted without causing unacceptable consequences.
    • Security: Compliance and access control. How best can you protect the asset or workload from intruders or DoS attacks? More importantly, what are the consequences/risks of your design decisions?
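The availability formula above can be sketched in code (the downtime figure is illustrative):

```python
MINUTES_PER_YEAR = 365 * 24 * 60  # 525,600 (ignoring leap years)


def availability_pct(annual_downtime_minutes):
    """Availability % = ((minutes in a year - downtime) / minutes in a year) x 100."""
    return (MINUTES_PER_YEAR - annual_downtime_minutes) / MINUTES_PER_YEAR * 100


# Roughly 52.56 minutes of downtime per year corresponds to "four nines"
print(round(availability_pct(52.56), 2))  # 99.99
```

Working the formula backwards like this is useful when validating an SLA: it shows how little annual downtime each additional ‘nine’ actually permits.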

4. Migration and Implementation Planning:

  • Identify low-risk virtualisation targets within the organisation and migrate these first. This is beneficial in achieving early ROI, building confidence and assisting other operational aspects of future workload migrations.
  • Work with application owners to create milestones and migration schedules.
  • Arrange downtime outside of peak operating hours, and ensure you have up-to-date and fully documented rollback and recovery procedures.
  • Do not simply accept and adopt best practices; understand why they are required and their impact on the design.

Additional Guidelines: 

  • Create service dependency mappings: These are used to identify the impact of something unexpected and how best to protect the workload in the event of disaster. DNS for example plays an important role in any infrastructure – if this was provided through MS Active Directory in an all virtualised environment, what impact would the failure of this have on your applications, end users, external customers? How can you best mitigate the risks of this failing?
  • Plan for performance, then capacity: if you base your design decisions on capacity alone, you may find that as the infrastructure grows you start experiencing performance-related issues. This is primarily attributed to poor storage design – having insufficient drives to meet the combined workload I/O requirements.
  • Analyse workload performance and include a capacity planning percentage to account for growth.
  • What are the types of workloads to be virtualised – Oracle, SQL, Java etc.? Ensure you understand and follow best practices for virtualised environments, reviewing and challenging them where appropriate. Oracle, for example, has very strict guidelines on what it deems cluster boundaries, which can impact your Oracle licensing agreement.
  • Don’t assume something cannot be virtualised due to an assumed issue.
  • Benchmarking applications before they are virtualised can be valuable in determining a configuration issue post virtualisation.
  • When virtualising new applications, check with the application vendor for any virtualisation recommendations. Be mindful of oversubscribing resources to workloads that won’t necessarily benefit from them. ‘Right-sizing’ virtual machines is an important part of your virtualisation project; this can be challenging, as application vendors often set specific requirements around CPU and memory.
    • For existing applications be aware of oversized virtual machines and adjust resources based on actual usage.
  • What mechanisms will you use to guarantee predictable levels of performance during periods of contention? See vSphere NIOC and SIOC.
  • VARs/partners may be able to provide the necessary tools to assess current workloads; examples include VMware Capacity Planner (which can capture performance information for Windows/Linux), iostat, Windows Perfmon, vscsiStats and vRealize Operations Manager.
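The growth-percentage guideline above can be sketched as a simple compound projection. The workload figures below are made up for illustration; in practice the inputs would come from a workload assessment:

```python
def projected_capacity(current_usage, annual_growth_pct, years):
    """Compound current usage forward by a yearly growth percentage."""
    return current_usage * (1 + annual_growth_pct / 100) ** years


# e.g. 10 TB of storage growing at 20% per year over a 3-year plan
print(round(projected_capacity(10, 20, 3), 2))  # 17.28 TB
```

Even a modest annual growth rate compounds quickly, which is why the design should include headroom rather than sizing to the current footprint.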

5. Functional Testing / Quality Assurance:

This is a very important part of your design, as it allows you to validate your configuration decisions and ensure that configurational aspects of the design are implemented as documented. This stage is also used to ensure the design meets both functional and non-functional requirements. Essentially, the process maps the expected outcome against actual results.

  • Functional Testing is concerned with exercising core component function. For example, can the VM/Workload run on the proposed infrastructure.
  • Non-functional testing is concerned with exercising application functionality using a combination of invalid inputs, unexpected operating conditions and other “out-of-bound” scenarios. This testing is designed to evaluate the readiness of a system according to criteria not covered by functional testing. Test examples include vSphere HA, FT, vMotion, performance, security…

6. Continuous Improvement:

The ITIL framework is aimed at maximising the ability of IT to provide services that are cost-effective and meet the expectations and requirements of the organisation and its customers. This is supported by streamlining service delivery and supporting processes, and by developing and documenting repeatable procedures. The ITIL CSI (Continual Service Improvement) framework provides a simple seven-step process to follow.

Stage 1: Define what you should measure
Stage 2: Define what you currently measure
Stage 3: Gather the data
Stage 4: Processing of the data
Stage 5: Analysis of the data
Stage 6: Presentation of the information
Stage 7: Implementation of corrective action

  • Workloads rarely remain static. The virtualised environment will need constant assessment to ensure service levels are met and KPIs are being achieved. You may have to adjust memory and CPU as application requirements increase or decrease. Monitoring is an important part of the process and can help you identify areas which need attention. Use built-in alarms, which can easily be set to alert you to an issue, to identify storage latency and high vCPU ready times.
  • Establish a patching procedure (hosts, vApps, VMs, appliances, third-party extensions).
  • Use vSphere Update Manager to upgrade hosts, VMware Tools and virtual appliances. This goes deeper than just the hypervisor – ensure storage devices, switches, HBAs and firmware are kept up to date and in line with vendor guidelines.
  • Support proactive performance adjustments and tuning, analyse issues : determine the root cause, plan corrective action, remediate then re-assess.
  • Document troubleshooting procedures.
  • Use automation to reduce operational overheads.
  • Maintain a database of configuration items (these are components that make up the infrastructure), their status, lifecycle, support plan, relationships and which department assumes responsibility for them when something goes wrong.

7. Monitoring Performance, Availability and Capacity:

  • Ensure the optimal and cost-effective use of the IT infrastructure to meet current and future business needs. Match resources to workloads that require a specific level of service. Locate business-critical workloads on datastores backed by tier 1 replicated volumes, on infrastructure that mitigates single points of failure.
  • Make use of built-in tools for infrastructure monitoring and have a process for managing / monitoring service levels.
  • Monitor not only the virtual machines but the underlying infrastructure, using built-in tools already mentioned above, to monitor latency.
  • Performance and capacity reports should include hosts/clusters, datastores and resource pools.
  • Monitor and report on usage trends at all levels, compute, storage and networking.
  • Use scripts for monitoring environment health (see Alan Renouf’s vCheck script).
  • A comprehensive capacity plan uses information gathered from day-to-day tuning of VMware performance, current demand, modeling and application sizing (future demand).

Additional Service Management Tasks:

  • Integrate the virtual infrastructure into your configuration and change management procedures.
  • Ensure staff are trained to support the infrastructure – investment here is key in ensuring a) staff are not frustrated supporting an environment they don’t understand and b) the business gets the most out of their investment.
  • Develop and schedule maintenance plans to ensure the environment can be updated and is running optimally.
  • Plan and perform daily, weekly and monthly maintenance tasks. For example: search for unconsolidated snapshots; review VMFS volumes for space in use and available capacity (anything with less than 10% available space should be reviewed); check logical drive space on hosts; and check whether any temporary VMs can be turned off or deleted. For monthly maintenance tasks, create a capacity report for the environment and distribute it to IT and management, update your VM templates, and review the VMware website for patches, vulnerabilities and bug fixes.
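The datastore review above (flag anything with less than 10% free) is easy to script. This is a hedged sketch over plain data rather than a live vSphere query – the datastore names are made up, and in practice the inventory would come from PowerCLI or the vSphere API:

```python
def low_space_datastores(datastores, min_free_pct=10.0):
    """Return the names of datastores whose free space falls below the threshold."""
    flagged = []
    for name, capacity_gb, free_gb in datastores:
        if capacity_gb > 0 and (free_gb / capacity_gb) * 100 < min_free_pct:
            flagged.append(name)
    return flagged


inventory = [("tier1-ds01", 2048, 512),   # 25% free - fine
             ("tier2-ds01", 1024, 80)]    # ~7.8% free - review
print(low_space_datastores(inventory))    # ['tier2-ds01']
```

Dropping a check like this into a scheduled report turns the monthly review into a repeatable procedure rather than a manual trawl through the datastore view.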

Reference Documentation:

Conceptual, Logical, Physical: It Is Simple, by John A. Zachman
Leveraging ITIL to Manage Your Virtual Environment, by Laurent Mandorla, Fredrik Hallgårde, BearingPoint, Inc.
Performance Best Practices for VMware vSphere
ITIL v3 Framework, Service Management Guide
Control Objectives for Information and Related Technology (COBIT) framework by ISACA
Oracle Databases on VMware vSphere Best Practices Guide
VMware vSphere Monitoring Performance Guide