Storage Virtualization VMWare

My first VMworld 2012 – A wrap

VMworld 2012 was held in San Francisco, USA in August this year. This was my first (and hopefully not last) VMworld conference, and I have a lot of memorable moments from the whole trip – probably too many to list here – but nonetheless it was a fantastic experience. The flight over and back was full, which is not much fun for a relatively tall fella like myself.

I met some brilliant minds in the industry, got to go to my first NFL game ever (thanks @vStewed!), and recorded a podcast with a good colleague of mine, Calvin Zito (@HPStorageGuy). You can listen to the podcast at this link.

 

Some of my favourite sessions:

STO1430: Tracking Down Storage Performance Issues: A Customer’s Perspective http://goo.gl/uTtH8

Speakers: Keith Aasen (NetApp) and Scott Elliott (Christie Digital)

 

STO2980: vSphere 5 Storage Best Practices @ http://goo.gl/pbP0g

Speakers: Chad Sakac (EMC Corporation) and Vaughn Stewart (NetApp)

 

VSP1800: vSphere Performance Best Practices http://goo.gl/I6lPc

Speaker: Peter Boone, VMware

 

More than 300 sessions are available for download @

http://www.vmworld.com/community/sessions/2012/

 

Now, onto some of the major new announcements that are worth a mention:

 

vRAM goes vByeBye

Probably the biggest announcement made at VMworld 2012 was that VMware has decided to drop the vRAM licensing model that was introduced last year with vSphere 5.0.

Under that model, each vSphere edition carried a vRAM entitlement per license, and if your VMs needed more virtual memory than your ESXi host's licenses covered, you had to buy enough additional licenses to cover the total vRAM in use. Personally I wasn't a big fan of it; like a lot of VMware customers, I preferred the simpler per-socket approach.
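To put some purely illustrative numbers on it (the entitlement figures varied by edition and were revised during the 5.0 lifecycle, so treat these as assumptions): with, say, a 96 GB vRAM entitlement per Enterprise Plus license, a two-socket host whose VMs collectively consumed 250 GB of vRAM would have needed ⌈250 / 96⌉ = 3 licenses, whereas the per-socket model needs only 2 for that same host.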

 

Monster VMs now with more monster!!

The new release of ESXi has a bigger, faster, stronger CPU virtualization method. VMware refers to this method as “virtualized hardware virtualization”, or “VHV” for short, and it offers guest operating systems running inside the VM “near native access to the physical CPU.”

New maximums offered with this release of vSphere include 64 vCPUs, up to 1 TB of RAM and 1 million IOPS out of a single VM – all of this whilst still keeping the efficiencies we have all come to love with vSphere.
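If you want to expose hardware-assisted virtualization to a guest (for nested hypervisors, for example), my understanding is that VHV is switched on per VM in 5.1 with the setting below – treat it as a hedged sketch, as it also requires virtual hardware version 9 and corresponds to the “Expose hardware assisted virtualization to the guest OS” tick box in the Web Client:

vhv.enable = "TRUE"

Add the line to the VM's .vmx while it is powered off, and verify against the official documentation for your build.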

 

Shared Storage? Doesn’t matter!

Another important announcement is the ability to live-migrate a virtual machine between two separate physical servers without those hosts being attached to the same storage. The shared-storage requirement has been around since vMotion first shipped with ESX Server, but VMware has now redesigned the operation so that the memory state, virtual disks and metadata files describing a powered-on VM can all be transferred between hosts using only their DAS.

 

No more desktop client…. 

vSphere 5.1 now comes with a new web-based client that you can use instead of the Windows-based desktop application or the Linux-based virtual appliance console. This new web console snaps into vCloud Director, the cloud orchestration tool sold by VMware. Pity for iOS users that this new web client requires Adobe Flash 🙁

I am curious as to how VCP certification will change given that the questions revolve heavily around the desktop client GUI.

 

Storage vMotion goes parallel!

Storage vMotion now supports up to four parallel disk copies per VM. Nuff said.

See What’s New in VMware vSphere 5.1 – Storage for more.

 

vCenter Single-Sign-On (SSO) 

vCenter Single Sign-On is a new feature in 5.1 which means you no longer log in directly to vCenter Server, but instead authenticate against a security domain defined in your vSphere environment.

In previous versions of vSphere you logged into vCenter Server directly, and the supplied username and password were authenticated against the Active Directory configured for vCenter Server.

A point to note is that vCenter SSO is an additional component in the vSphere suite, but it is required before any other vSphere 5.1 component (ESXi excepted) is installed or upgraded to 5.1. The idea is that it runs as an additional service alongside the vCenter service, so there is no need to start re-architecting or working out the impact it may have on your virtual environment.

 

In closing.

VMworld 2012 was simply outstanding; the quality of the sessions and the Solutions Exchange was remarkable. I look forward to the next one I have the opportunity to attend, and especially to meeting more people and catching up with those in the industry we have come to love.

For a quick overview of the What's New content, please visit this link.

 

HP VMWare

HP Get Virtual Guarantee Whitepaper for VMware environments

Further to my earlier post on what VM density is, in which I mentioned HP's “Get Virtual Guarantee” programme:

Here is a link to the whitepaper on this – have a read and consider it. Virtual environments are a lot more fun when they are performing optimally.

 

Ping me if you have any questions @andrecarpenter or andre.carpenter@hp.com

VMWare

vCloud Networking Poster

This is an excellent addition to any virtualisation professional's toolkit. A good friend of mine wrote the following about this poster here, so I will quote from his post.

“The poster is a reference to all things related to vSphere Standard Switch (VSS), vSphere Distributed Switch (VDS), and Virtual Extensible Local Area Network (VXLAN) technology. It provides you information on the different components, terminologies and parameters of VSS, VDS, and VXLAN. It also explains the advanced features of VDS and discusses some best practices.”

Download the PDF from here

Virtualization VMWare

vSphere 5.1 How-to’s and Troubleshooting Links

For those who have been eager and upgraded to vSphere 5.1, here are some relevant KB articles should you get stuck.

 

Configuration

Troubleshooting

 

Virtualization VMWare

VMware vSphere 5.1 released!

 

VMware vSphere 5.1 has finally been officially released.

For me, one of the things I have been looking forward to most is the revamped vSphere Web Client, which was announced at VMworld 2012 a few weeks back.

 

With the desktop version going away, the new web-based client means that I (and other Mac OS X users) can now enjoy the freedom of not having to spin up a Windows virtual machine just to get access to the Windows-based desktop client.

 

Below are the related download and documentation links:

 

vSphere Licensed Downloads (valid download account required).

ESXi 5.1.0 Installable

vCenter Server 5.1.0 and modules

VMware vCloud Director 5.1.0

VMware vCenter Site Recovery Manager 5.1.0

VMware vCenter Infrastructure Navigator 1.2.0

VMware vCenter Operations Management 5.0.3

VMware vCenter Configuration Manager 5.5.1

vSphere Data Protection 5.1.0 

vSphere Replication 5.1.0

vSphere Storage Appliance 5.1.0 

vCloud Networking and Security 5.1.0

vCenter Orchestrator Appliance 5.1.0

 

vSphere Free Downloads

HP Custom Image for ESXi 5.1.0

ESXi™ 5.1 Reference Poster

vSphere PowerCLI 5.1

vSphere CLI 5.1

vSphere Management Assistant 5.1

 

Release Notes

VMware vSphere® 5.1 Release Notes

vCloud Director 5.1 Release Notes

VMware vCenter Site Recovery Manager 5.1 Release Notes

VMware vCenter Infrastructure Navigator 1.2 Release Notes

vSphere Command-Line Interface 5.1 Release Notes

What’s New In ESXCLI 5.1

 

Documentation

VMware vSphere 5.1 Documentation

ESXi and vCenter Server Product Documentation Archives

Configuration Maximums for VMware vSphere 5.1

vSphere Command-Line Interface Documentation page

VMware vSphere PowerCLI Documentation page

vSphere Management Assistant Documentation page

VMware vSphere Replication Documentation page

VMware vCenter Update Manager Documentation page

VMware vCenter Orchestrator Documentation page

VMware vSphere Storage Appliance Documentation page

 

Knowledge Base

Installing vCenter Server 5.1 best practices

Methods of upgrading to vCenter Server 5.1

Upgrading to vCenter Server 5.1 best practices

Methods of installing ESXi 5.1

Installing or upgrading to ESXi 5.1 best practices

Methods of upgrading to ESXi 5.1

Location of ESXi 5.1 log files

Upgrading vCenter Server, ESX/ESXi hosts, and vShield Edge Appliances for vCloud Director 5.1

Network health check feature limitations in vSphere 5.1

Understanding vSphere 5.1 network rollback and recovery

Manually configuring HA slot sizes in vSphere 5.1

Upgrade paths from vSphere editions to VMware vCloud Suite 5.1

Network port requirements for vCloud Director 5.1

Installing vCloud Director 5.1 best practices

Supported web browsers in vCloud Director 5.1

Installing and configuring a vCloud Director 5.1 database

Supported guest operating systems in vCloud Director 5.1

Upgrading to vCloud Director 5.1 best practices

 

Licensing

vCloud Suite Licensing

VMware vSphere 5 Licensing, Pricing and Packaging

 

 

HP Virtualization VMWare

HP Customized version of ESXi 5.1 is now available!


Download:

https://my.vmware.com/web/vmware/details?downloadGroup=HP-ESXI-5.1.0-GA-10SEP2012&productId=28

Release Notes:

http://h10032.www1.hp.com/ctg/Manual/c03537464.pdf

An updated version of the HP Customized image for ESXi 5.0 U1 (October 2012) is also available:

Download:

https://my.vmware.com/web/vmware/details?downloadGroup=HP-ESXI-5.0.0-U1-15MAR2012_V2&productId=229

Release Notes:

http://h10032.www1.hp.com/ctg/Manual/c03537272.pdf

HP Storage Virtualization VMWare

What is VM Density?

What is it?

VM density is (and has long been) used to refer to one of the following three performance-related measures:

  • The number of VMs that can run on a certain number of physical CPUs
  • The number of VMs or workloads running on a host
  • The number of VMs or workloads running on a single datastore.

What are the business benefits for me?

Storage CapEx – In order to cater for a given workload I/O requirement you need the right mix of spindles to meet it. Running more workloads on fewer spindles can decrease the initial CapEx spend, and TCO is reduced for a variety of reasons (a rough worked example follows these points).

Server CapEx – Getting the best efficiency and an optimal configuration allows the customer to run more VMs on fewer physical servers. Customers may not need to spend more CapEx buying extra servers, which provides particular TCO benefits around power, maintenance and licensing.

Licensing benefits – By having fewer hosts, customers generally spend less on host licensing.
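As a rough, hedged illustration of the storage point (per-spindle figures vary widely by drive type and workload): if you assume roughly 175 IOPS per 15K SAS spindle, a 30,000 IOPS workload needs on the order of 30,000 / 175 ≈ 170 spindles before RAID write penalties are even considered. Any technology that lets you serve the same work from fewer spindles flows straight through to CapEx and TCO.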

 

So how do I optimise my VM density? Enter HP 3PAR

At HP, we introduced the HP 3PAR Get Virtual Guarantee earlier this year, a programme that guarantees you will double your VM density, from an IOPS perspective, over your legacy solution if you shift to 3PAR.

So here is a hypothetical situation to explain the guarantee:

If you are currently running an environment with workloads of say 30,000 IOPS (collectively), we guarantee we can provide an array that can handle at least 60,000 IOPS for those workloads.

Cool huh?

Small print – if you are running your virtual machines on storage such as Fusion-io cards or SSDs, then this guarantee will not apply.

 

What's the secret sauce behind this?

HP 3PAR wide striping – Each physical drive within an HP 3PAR array is broken up into chunklets, which are in turn assembled into RAID sets, so when data (VMs in this example) is written, the blocks are striped across every single spindle in the array using these chunklets. This approach aggregates the storage, which of course pools the IOPS and allows better performance.
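To put some hedged numbers on that (the chunklet size depends on the array generation – 256 MB on the earlier models and, if memory serves, 1 GB on the newer StoreServ arrays): with 256 MB chunklets, a 100 GB VMDK is carved into roughly 400 chunklets, so on an array with, say, 128 spindles every drive ends up holding a handful of them and every drive contributes IOPS to that single VMDK.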

HP 3PAR has been designed from the ground up for virtual and cloud environments; with its VAAI support, custom ASIC and thin provisioning capabilities all contributing, it really is one of the best VMware integration stories on the market.

Refer back to my specific blog post on HP 3PAR Wide Striping here

 

Migration?

Interested but not sure how you might transition to HP 3PAR? Get in touch – HP Technology Consulting can provide expert storage consulting on how to get the job done.

 

Virtualization VMWare

What's new in vSphere 5.1 – A dive in

There have been a number of improvements made to the base hypervisor, most notably around Auto Deploy.

For those who are not familiar with Auto Deploy, or have not had a chance to play with it, what it essentially gives you is the power to rapidly deploy new vSphere hosts into your environment and bring them up to a patch level you define. Put into a cloud-computing context, and infrastructure as a service in particular, this is a BIG step forward – provisioning is becoming more and more automated, and time to market is a key metric when measuring how well your cloud computing business is running.

Without further ado, let’s dive into the new announcements.

 

vSphere 5.1 – Platform

The vSphere 5.1 platform has undergone a number of enhancements, including:

  • Local ESXi shell users automatically get full shell access. It is no longer necessary to share a single root account, which improves your audit trail.
  • SNMPv3 is now supported, bringing authentication and privacy (encryption) to the host-monitoring infrastructure – see the sketch after this list.
  • Auto Deploy now offers a stateless caching mode that caches the boot image in order to permit a host to boot using the last known-good image should the Auto Deploy infrastructure be unavailable. I see this as a potential turning point in the adoption of Auto Deploy. A dedicated boot device is required for this feature to function.
  • Auto Deploy can now be leveraged for stateful installs. This may be beneficial to accounts that already have PXE in place but want to continue using traditional stateful methods.
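As a rough sketch of the SNMP point above, the host agent is driven entirely from esxcli in 5.1 (the option names below are from my notes, so verify them against the esxcli reference for your build):

# enable the host SNMP agent
esxcli system snmp set --enable true
# choose the SNMPv3 authentication and privacy protocols (users and v3 targets are configured with further options)
esxcli system snmp set --authentication SHA1
esxcli system snmp set --privacy AES128
# review the resulting configuration
esxcli system snmp get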

 

vCenter 5.1

vCenter Server is the primary point of management for most environments, and it too has been enhanced and tuned for this new release. Some of the new additions include:

  • The vSphere Web Client is now the primary point of management. It was noted during a session at VMworld last week that the vSphere Client will no longer see development or receive new features.
  • An interesting new feature of the Web Client is the ability to pause a task and resume it from the “Work in Progress” section. This is helpful if you need to gather additional information to complete a task without cancelling it and starting over.
  • The Web Client does NOT need to be installed on the same server as vCenter Server, and you can scale out your vCenter services across servers.
  • Support for OpenLDAP and NIS authentication using the Web Client (not the traditional vSphere Client) – this will make Linux-only environments happy.
  • Single Sign-On. Read the PDF for more (the traditional vSphere Client is not supported).
  • The Web Client can track multiple vCenter Servers and inventory objects using the updated Inventory Service, so you can now manage multiple vCenter environments from a single pane of glass without using linked mode, unless you wish to share permissions and licenses.

 

vSphere 5.1 – Performance

Outside of the obvious scalability improvements (64 vCPUs, 256 pCPUs, >1M IOPS), vSphere has undergone a number of refinements in order to improve performance and management:

vSphere can now attempt to reduce the memory overhead of a VM by swapping the overhead memory reservation of each VM out to disk. This can increase overall consolidation ratios and improve VM-per-host densities, but it comes with the requirement that the system swap location be manually configured by the administrator in order to leverage the feature.

Use the following CLI command in your kickstart install, or run it post-install:

esxcli sched swap system set -d true -n <datastore name>
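If memory serves there is a matching get verb to confirm the setting took effect – worth checking against the esxcli reference for your build:

esxcli sched swap system get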

 

If you have previously read and implemented the recommendations in the Best Practices for Performance Tuning of Latency-Sensitive Workloads in vSphere VMs technical white paper, you will know that it can be a manual and administratively intensive process (outside of PowerCLI). vSphere 5.1 now offers a checkbox that applies the relevant .vmx settings to the VM for you, saving a number of manual steps.

 

The traditional vMotion and Storage vMotion (svMotion) have been combined into one operation, offering the ability to perform a vMotion of a VM that does NOT leverage common shared storage.

This means that two servers using direct-attached storage (DAS) can vMotion a VM between them.

Consider this feature beneficial for migration scenarios, but there is a catch! The svMotion operation occurs across the “Management Network” vmkernel interface.

So if you are using an HP BladeSystem/Virtual Connect infrastructure, you may want to review your design if you have followed any of the Virtual Connect guides that call a 100 Mbit Management FlexNIC a “best practice”. A 1 GbE management interface is what I recommend.
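A quick way to sanity-check this from the host itself – both are standard esxcli namespaces, shown here as a hedged sketch:

# list physical uplinks with their negotiated link speed (check the vmnic backing your management port group)
esxcli network nic list
# list vmkernel interfaces to confirm which vmk carries the Management Network
esxcli network ip interface list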

 

While on the svMotion topic: vSphere 5.1 has changed from performing serial disk migrations of the VMDKs within a VM to a parallel method when the VMDKs reside on distinct datastores. So let's take a look at the storage stuff.

 

 

vSphere 5.1 – Storage

Storage is commonly the least understood topic, and it receives the least exciting but most useful features. I won't cover the new disk format as it is primarily View-related; however, there are other areas of improvement.

 

  • High Availability will now restart VMs that encounter a Permanent Device Loss (PDL) state (5.0 U1 did too). Please understand that a PDL is much less common than an All-Paths-Down (APD) state, where HA does NOT respond – but we may yet get there in the future. HA responding to PDLs is a step in the right direction.
  • 16Gb FC HBAs are now supported. Where vSphere 5.0 supported 16Gb HBAs in 8Gb mode, vSphere 5.1 enables the full 16Gb throughput. An interesting tidbit confirmed by Emulex reps on the VMworld show floor: running a 16Gb HBA in 8Gb mode will outperform a similar 8Gb HBA thanks to the 16Gb HBA's ASIC improvements in I/O processing.
  • SMART monitoring has also been introduced using esxcli (but NOT vCenter) in order to examine disk error characteristics. This has been targeted at SSD monitoring, but it can only be leveraged from the command line – see the example after this list.
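A hedged example of pulling SMART data for a device – the device identifier is whatever naa/t10 name the first command reports for your disk:

# find the device identifiers on the host
esxcli storage core device list
# query SMART attributes (media wearout, reallocated sectors, etc.) for one device
esxcli storage core device smart get -d naa.xxxxxxxxxxxxxxxx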

 

  • Storage I/O Control (SIOC) can now automatically detect and set the congestion threshold at the 90 percent throughput mark. This is done using the SIOC injector, which measures latency against throughput and can dynamically tune the threshold to the characteristics of the underlying disks. It is very much a “set it and forget it” feature that dynamically adjusts to a changing environment.

 

  • Additionally, the underlying SIOC injector has undergone improvement in where it measures the latency characteristics. Instead of leveraging the datastore latency metric, which effectively ignores the storage stack above the datastore level, the new SIOC injector leverages a new value coined VmObservedLatency that measures higher up the virtualized storage stack, as seen by the actual VMs, in order to more accurately reflect the performance characteristics experienced by the application or user.

 

  • The SIOC injector now also has the ability to detect common underlying disk striping configurations in order to avoid svMotioning VMs across datastores backed by the same spindles on the back end of the array. The VMware vSphere Storage DRS Interoperability white paper includes recommendations on when and when not to enable I/O load balancing in an SDRS cluster, but obviously these recommendations were not always being followed.

 

 

vSphere 5.1 – Networking

Networking is another interesting topic, and the vast majority of improvements are focused on the vSphere Distributed Switch (vDS). I should call out that if you have Enterprise Plus licensing you should take a serious look at the vDS, as the classic vSS (vSphere Standard Switch) is unlikely to evolve further – it is effectively at its maximum feature potential.

  • Network Health Check (VLAN, MTU and failover team validation) is a very welcome addition as I have seen customer environments encounter HA events (and unplanned VM downtime) due to misconfigured teaming and/or switchports.  You want this feature!!

 

  • vDS management network rollback and recovery is the catalyst that will calm fears of a cluster-wide failure due to accidental misconfiguration in a fully vDS design. If a change occurs and the management network loses connectivity, the vDS will automatically roll back the last change(s). A very impressive live demo of this feature was shown at VMworld. This removes one of the last hurdles for what I see as the beginning of majority adoption of the vDS over the vSS.

 

  • vDS distributed port auto-expand – while a nice touch in itself, the PDF has some helpful information on selecting the best vDS “port binding” method for your environment. Static binding is the default and the likely best candidate for the majority of environments out there. Consider that a traditional server has a fixed cabling configuration into a physical switch – the cables do not move. This is akin to static binding: a fixed configuration that does not depend on vCenter to power on VMs.

 

  • Dynamic binding is deprecated.

 

  • Ephemeral binding is a “plug-and-pray” method with no fixed binding; you therefore lose vCenter performance history and stats and increase troubleshooting complexity. Not recommended for most.
  • There are a number of other great features, but I want to point out one last new feature that mitigates a risk that has been hiding under the radar in most environments: the BPDU filter. If your VMware environment is connected to a network that leverages the Spanning Tree Protocol (STP), then prior to vSphere 5.1 it was possible to take host VM networking offline even if you followed VMware's own switchport configuration guidelines.

 

  • VMware recommends that hosts do NOT participate in STP, achieved by enabling PortFast and BPDU Guard on the host-facing switchports to prevent accidental layer 2 bridging loops from causing a network disruption. The problem is that a VM with two or more vNICs attached could potentially bridge interfaces and introduce a loop. When this loop is introduced, BPDU packets are sent out, and a properly configured switch will err-disable the attached port, taking the VM offline – and eventually every other vmnic attached to the switch, by nature of VMware's failover capabilities. Consider this a denial-of-service risk.

 

  • Now with vSphere 5.1 you can enable this advanced setting, Net.BlockGuestBPDU, which is disabled by default on both the vSS and vDS. This is the only one of these features that I can see has made its way into the vSS, and I would highly recommend that any environment using STP, with no intention of leveraging VM-based bridging by design, enable this setting – a hedged example follows.
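For reference, a sketch of enabling it per host from the command line (the same setting can also be changed under the host's advanced settings in the client):

# enable the BPDU filter on this host (1 = drop BPDU frames generated by guests)
esxcli system settings advanced set -o /Net/BlockGuestBPDU -i 1
# confirm the current value
esxcli system settings advanced list -o /Net/BlockGuestBPDU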