Virtualization VMware

vSphere 5.1 How-to’s and Troubleshooting Links

For those who have been eager and upgraded to vSphere 5.1, here are some relevant KB articles should you get stuck.

 

Configuration

Troubleshooting

 

Virtualization VMware

VMware vSphere 5.1 released!

 

VMware vSphere 5.1 has finally been officially released.

For me, one of the biggest things I have been looking forward to is the revamped vSphere Web Client, which was announced at VMworld 2012 a few weeks back.

 

With the desktop version going away, the new web-based client means that I (and other Mac OS X users) can now enjoy the freedom of not spinning up a Windows virtual machine just to get access to the Windows-based desktop client.

 

Below are the related download and documentation links:

 

vSphere Licensed Downloads (valid download account required).

ESXi 5.1.0 Installable

vCenter Server 5.1.0 and modules

VMware vCloud Director 5.1.0

VMware vCenter Site Recovery Manager 5.1.0

VMware vCenter Infrastructure Navigator 1.2.0

VMware vCenter Operations Management 5.0.3

VMware vCenter Configuration Manager 5.5.1

vSphere Data Protection 5.1.0 

vSphere Replication 5.1.0

vSphere Storage Appliance 5.1.0 

vCloud Networking and Security 5.1.0

vCenter Orchestrator Appliance 5.1.0

 

vSphere Free Downloads

HP Custom Image for ESXi 5.1.0

ESXi™ 5.1 Reference Poster

vSphere PowerCLI 5.1

vSphere CLI 5.1

vSphere Management Assistant 5.1

 

Release Notes

VMware vSphere® 5.1 Release Notes

vCloud Director 5.1 Release Notes

VMware vCenter Site Recovery Manager 5.1 Release Notes

VMware vCenter Infrastructure Navigator 1.2 Release Notes

vSphere Command-Line Interface 5.1 Release Notes

What’s New In ESXCLI 5.1

 

Documentation

VMware vSphere 5.1 Documentation

ESXi and vCenter Server Product Documentation Archives

Configuration Maximums for VMware vSphere 5.1

vSphere Command-Line Interface Documentation page

VMware vSphere PowerCLI Documentation page

vSphere Management Assistant Documentation page

VMware vSphere Replication Documentation page

VMware vCenter Update Manager Documentation page

VMware vCenter Orchestrator Documentation page

VMware vSphere Storage Appliance Documentation page

 

Knowledge Base

Installing vCenter Server 5.1 best practices

Methods of upgrading to vCenter Server 5.1

Upgrading to vCenter Server 5.1 best practices

Methods of installing ESXi 5.1

Installing or upgrading to ESXi 5.1 best practices

Methods of upgrading to ESXi 5.1

Location of ESXi 5.1 log files

Upgrading vCenter Server, ESX/ESXi hosts, and vShield Edge Appliances for vCloud Director 5.1

Network health check feature limitations in vSphere 5.1

Understanding vSphere 5.1 network rollback and recovery

Manually configuring HA slot sizes in vSphere 5.1

Upgrade paths from vSphere editions to VMware vCloud Suite 5.1

Network port requirements for vCloud Director 5.1

Installing vCloud Director 5.1 best practices

Supported web browsers in vCloud Director 5.1

Installing and configuring a vCloud Director 5.1 database

Supported guest operating systems in vCloud Director 5.1

Upgrading to vCloud Director 5.1 best practices

 

Licensing

vCloud Suite Licensing

VMware vSphere 5 Licensing, Pricing and Packaging

 

 

HP Virtualization VMware

HP Customized version of ESXi 5.1 is now available!

Download:

https://my.vmware.com/web/vmware/details?downloadGroup=HP-ESXI-5.1.0-GA-10SEP2012&productId=28

Release Notes:

http://h10032.www1.hp.com/ctg/Manual/c03537464.pdf

An updated HP Customized version of ESXi 5.0 U1 (October 2012) is also available:

Download:

https://my.vmware.com/web/vmware/details?downloadGroup=HP-ESXI-5.0.0-U1-15MAR2012_V2&productId=229

Release Notes:

http://h10032.www1.hp.com/ctg/Manual/c03537272.pdf

HP Storage Virtualization VMware

What is VM Density?

What is it?

VM density is used (and has long been used) in three performance-related contexts:

  • The number of VMs that can run on a certain number of physical CPUs
  • The number of VMs or workloads running on a host
  • The number of VMs or workloads running on a single datastore

What are the business benefits for me?

Storage CapEx – To cater for a given workload's I/O requirement you need the right mix of spindles. Running more workloads on fewer spindles can decrease the initial CapEx spend, and TCO is also reduced for a variety of reasons.

Server CapEx – The best efficiency and an optimal configuration allow the customer to run more VMs on fewer physical servers. Customers may not need to spend more CapEx buying extra servers, which provides particular TCO benefits such as power, maintenance and licensing.

Licensing benefits – With fewer hosts, customers generally spend less on host licensing.

 

So how do I optimise my VM density? Enter HP 3PAR

At HP, we introduced the HP 3PAR Get Virtual Guarantee earlier this year, a program that guarantees you will double your VM density from an IOPS perspective over your legacy solution if you shift to 3PAR.

So here is a hypothetical situation to explain the guarantee:

If you are currently running an environment with workloads of say 30,000 IOPS (collectively), we guarantee we can provide an array that can handle at least 60,000 IOPS for those workloads.

Cool huh?

Small print – If you are running your virtual machines on something like Fusion-io or SSDs, then this guarantee will not apply.

 

What's the secret sauce behind this?

HP 3PAR wide striping – Each physical drive within an HP 3PAR array is broken up into chunklets, which in turn are assembled into RAID sets. When data (VMs in this example) is written, the blocks are striped across every single spindle in the array using these chunklets. This approach aggregates the storage, which of course pools the IOPS, allowing better performance.

HP 3PAR has been designed from the ground up for virtual and cloud environments; with its VAAI, ASIC and thin provisioning capabilities contributing, it really is one of the best VMware integration stories on the market.

Refer back to my specific blog post on HP 3PAR wide striping here.

 

Migration?

Interested but not sure how you might transition to HP 3PAR? Get in touch; HP Technology Consulting can provide expert storage advice on how to get the job done.

 

Virtualization VMware

What's new in vSphere 5.1 – A dive in.

There have been a number of improvements made to the base hypervisor, most notably around Auto Deploy.

For those who are not familiar with Auto Deploy or have not had a chance to play with it, it essentially gives you the power to rapidly deploy new vSphere hosts into your environment and bring them up to a patch level you define. In a cloud-computing context, and for infrastructure as a service in particular, this is a BIG step forward: provisioning is becoming more and more automated, and time to market is a key metric when measuring how well your cloud-computing business is running.

Without further ado, let’s dive into the new announcements.

 

vSphere 5.1 – Platform

The vSphere 5.1 platform has undergone a number of enhancements, including:

  • Local ESXi shell users automatically get full shell access. It is no longer necessary to share a single root account, which enhances your audit trail.
  • SNMPv3 is now supported, bringing authentication and SSL support to the host-monitoring infrastructure (see the sketch after this list).
  • Auto Deploy now offers a stateless caching mode that caches the boot image, permitting a host to boot using the last known-good image should the Auto Deploy infrastructure be unavailable. I see this as a potential turning point in the adoption of Auto Deploy. A dedicated boot device is required for this feature to function.
  • Auto Deploy can now also be leveraged for stateful installs. This may be beneficial to accounts that already have PXE in place but want to continue using traditional stateful methods.
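
A hedged sketch of enabling the SNMP agent from the ESXi shell. The esxcli system snmp namespace is new in 5.1, and the v3 protocol options shown are assumptions; verify them with esxcli system snmp set --help:

esxcli system snmp set --enable true                            # enable the SNMP agent
esxcli system snmp set --authentication SHA1 --privacy AES128   # assumed SNMPv3 auth/privacy protocol options
esxcli system snmp get                                          # confirm the running configuration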

 

vCenter 5.1

vCenter Server is the primary point of management for most environments, and it too has been enhanced and tuned for this new release. Some of the new additions include:

  • The vSphere Web Client is now the primary point of management. It was noted during a session at VMworld last week that the traditional vSphere Client will no longer see development or receive new features.
  • An interesting new feature of the Web Client is the ability to pause a task and resume it from the “Work in Progress” section. This is helpful if you need to gather additional information to complete a task without cancelling it and starting over.
  • The Web Client does NOT need to be installed on the same server as vCenter Server, so you can scale out your vCenter services across servers.
  • Support for OpenLDAP and NIS authentication using the Web Client (not the traditional vSphere Client); this will make Linux-only environments happy.
  • Single Sign-On. Read the PDF for more (the traditional vSphere Client is not supported).
  • The Web Client can track multiple vCenter Servers and inventory objects using the updated Inventory Service, so you can now manage multiple vCenter environments from a single pane of glass without using Linked Mode, unless you wish to share permissions and licenses.

 

vSphere 5.1 – Performance

Outside of the obvious scalability improvements (64 vCPUs, 256 pCPUs, >1M IOPS), vSphere has undergone a number of refinements to improve performance and manageability.

vSphere can now attempt to reduce the memory overhead of a VM by swapping the overhead memory reservation of each VM out to disk. This can increase overall consolidation ratios and improve VM-per-host density, but it comes with the requirement that the system swap file be manually created by the administrator.

Use the following CLI command in your kickstart install, or run it post-install:

esxcli sched swap system set -d true -n <datastore name>
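
To confirm the setting took effect, the matching get command should work (hedged; the get subcommand is assumed to mirror the set shown above):

esxcli sched swap system get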

 

If you have previously read and implemented the recommendations in the Best Practices for Performance Tuning of Latency-Sensitive Workloads in vSphere VMs technical white paper, you will know that it can be a manual, administratively intensive process (outside of PowerCLI). vSphere 5.1 now offers a checkbox that applies the .VMX settings for you, saving a number of manual steps.

 

The traditional vMotion and Storage vMotion (svMotion) have been combined into one operation, offering the ability to perform a vMotion of a VM that does NOT leverage common shared storage.

This means that two servers using direct attached storage (DAS) can vMotion a VM between them.

Consider this feature beneficial for migration scenarios but there is a catch!  The svMotion operation occurs across the “Management Network” vmkernel interface.

So if you are using an HP BladeSystem/Virtual Connect infrastructure, you may want to review your design if you have followed any of the Virtual Connect guides that say it is a “best practice” to use a 100Mbit management FlexNIC. A 1GbE management interface is what I recommend.

 

While on the svMotion topic: vSphere 5.1 now migrates the VMDKs within a VM in parallel rather than serially, provided the VMDKs reside on distinct datastores. So let's take a look at the storage side.

 

 

vSphere 5.1 – Storage

Storage is commonly the least understood topic and tends to receive the least exciting but most useful features. I won't cover the new disk format as it is primarily View-related; however, there are other areas of improvement.

 

  • High Availability will now restart VMs that encounter a Permanent Device Loss (PDL) state (5.0 U1 did too). Please understand that a PDL is much less common than an All-Paths-Down (APD) state, where HA does NOT respond, but we may yet get there in the future. HA responding to PDLs is a step in the right direction.
  • 16Gb FC HBAs are now supported. Where vSphere 5.0 supported 16Gb HBAs in 8Gb mode, vSphere 5.1 enables the full 16Gb throughput. An interesting tidbit confirmed by Emulex reps on the VMworld show floor: leveraging a 16Gb HBA in 8Gb mode will outperform a similar 8Gb HBA due to the 16Gb HBA's ASIC improvements in I/O processing.
  • SMART monitoring has also been introduced using esxcli (but NOT vCenter) in order to examine disk error characteristics. It is targeted at SSD monitoring but can only be leveraged from the command line, as sketched below.
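
A hedged example using the 5.1 esxcli namespace (the device identifier below is hypothetical; list your own first):

esxcli storage core device list                                                # find the device identifier (naa.*)
esxcli storage core device smart get -d naa.600508b1001c4d6fb1e9fe3372a1b4d5   # hypothetical device ID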

 

  • Storage I/O Control (SIOC) now has the ability to automatically detect and set the congestion threshold at the 90% throughput mark. This is done using the SIOC injector, which measures latency against throughput and can dynamically tune the threshold to the characteristics of the underlying disks. It is very much a “set it and forget it” feature that dynamically adjusts to a changing environment.

 

  • Additionally, the underlying SIOC injector has undergone improvement in where it measures latency. Instead of leveraging the datastore latency metric, which effectively ignores the storage stack above the datastore level, the new SIOC injector leverages a new value coined VmObservedLatency that measures higher up the virtualized storage stack, as detected by the actual VMs, in order to more accurately reflect the performance experienced by the application or user.

 

  • The SIOC injector now also has the ability to detect common underlying disk striping configurations in order to avoid svMotioning VMs across datastores backed by the same spindles on the back end of the array. The VMware vSphere Storage DRS Interoperability white paper includes recommendations on when and when not to enable I/O load balancing in an SDRS cluster, but obviously these recommendations were not always being followed.

 

 

vSphere 5.1 – Networking

Networking is another interesting topic, and the vast majority of improvements are focused on the vSphere Distributed Switch (vDS). I should call out that if you have Enterprise Plus licensing you should take a serious look at the vDS, as the classic vSS (vSphere Standard Switch) is unlikely to evolve further and is effectively at its maximum feature potential.

  • Network Health Check (VLAN, MTU and failover team validation) is a very welcome addition as I have seen customer environments encounter HA events (and unplanned VM downtime) due to misconfigured teaming and/or switchports.  You want this feature!!

 

  • vDS management network rollback and recovery is the catalyst that will calm fears of a cluster-wide failure due to accidental misconfiguration of a fully vDS design. If a change occurs and the management network loses connectivity, the vDS will automatically roll back the last change(s). A very impressive live demo of this feature was shown at VMworld. This removes one of the last hurdles to what I see as the beginning of majority support for the vDS instead of the vSS.

 

  • vDS Distributed Port Auto Expand – while a nice touch in itself, the PDF has some helpful information on selecting the best vDS “Port Binding” method for your environment. The Static Binding method is the default and likely the best candidate for the majority of environments out there. Consider that a traditional server has a fixed cabling configuration into a physical switch; the cables do not move. This is akin to static binding: a fixed configuration that does not depend on vCenter to power on VMs.

 

  • Dynamic Binding is deprecated.

 

  • Ephemeral is a “plug-and-pray” method with no fixed binding, but you therefore lose vCenter performance history and stats and increase troubleshooting complexity. Not recommended for most.
  • There are a number of other great features, but I want to point out one last new feature that mitigates a risk that has been hiding under the radar across most environments: the BPDU filter. If your VMware environment is connected to a network that leverages the Spanning Tree Protocol (STP), then prior to vSphere 5.1 it was possible to take a host's VM networking offline even when following VMware's own switchport configuration guidelines.

 

  • VMware recommends that hosts should NOT participate in STP, with PortFast and BPDU Guard enabled on host-facing switchports to prevent accidental layer-2 bridging loops from causing a network disruption. The problem is that a VM with two or more vNICs attached could potentially bridge interfaces and introduce a loop. When this loop is introduced, BPDU packets are sent out, and a properly configured switch will err-disable the attached port, taking the VM offline and eventually all other vmnics attached to the switch, by nature of VMware's failover capabilities. Consider this a denial-of-service risk.

 

  • Now with vSphere 5.1 you can enable the advanced setting Net.BlockGuestBPDU, which is disabled by default on both the vSS and vDS. This is the only new feature that I can see that has made its way into the vSS, and I would highly recommend that any environment that uses STP and has no intention of leveraging VM-based bridging by design enable this setting, as sketched below.
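
A hedged sketch of enabling it from the ESXi shell (the option path is assumed from the setting name; verify it on your build):

esxcli system settings advanced set -o /Net/BlockGuestBPDU -i 1   # 1 = filter BPDU frames from guests
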
Virtualization VMware

What's new in VMware vSphere 5.1?

HP VMware

Meeting Calvin Zito

I have “virtually” known Calvin for some time now, having subscribed to and read his blog. I actually sent him a message when I joined HP just to say “G'day”, and over the months we stayed in contact and organised to potentially meet at VMworld 2012 in San Francisco (depending on travel approvals). We organised to meet, and I finally had the privilege of meeting Calvin at VMworld and recording a joint podcast with him that introduces me to his readers (thanks Calvin!). I also learnt that Calvin lived in New Zealand once upon a time and shares a love for the All Blacks!

Calvin is a great advocate for HP Storage and posts regularly on his blog about interesting events around HP Storage, VMware and similar.

You can read his blog post about me here, and within it is a link to the podcast I did with him. We are planning to work on more blogs/podcasts in the near future, so watch this space.

Note: Calvin referred to me as an “Aussie” and not a New Zealander in his post – will need to get this fixed 🙂


HP Storage Thin VMware

Building the business case to choose HP 3PAR – The Ninja’s way.

One of the cool things I get to do in the HP storage consulting business is show customers the potential savings they can realise simply by moving across to an HP 3PAR storage array. This assessment, nicknamed the Capacity Savings Assessment, is backed by HP's Get Thin Guarantee and can really show what the move to 3PAR can do for your business.

This assessment can help you evaluate your data utilisation rates and provide reports on capacity, power, cooling and even floor space savings.

This can introduce a range of benefits to your business, such as a smaller footprint, a smaller electricity bill, less management overhead, and so on.

 

Here is what some pages from the sample report look like:

So how does it do this? It scans the host's filesystem and reports allocated vs. used vs. free space. To report accurate potential thin savings, the environment must not be thin provisioned already (no point in thinning out an already thin environment, right?).

Currently supported hosts are Windows (any flavour), Unix, Linux and VMware vSphere 4/5.

Key reporting includes Capacity Utilisation, Disk Space Usage, Summary of the 3PAR configuration and a comparison of a 3PAR configuration with the current environment.

From a VMware perspective, the newest release no longer uses VMware PowerCLI to obtain the information; instead it utilises vSphere's native SDK, and it can scan NFS datastores.

 

 

 

 

To help justify a business case of what this means in $ value, consider the following page:

Now I ran this on my home lab, so the statistics aren’t staggering but I hope you get the point.

The aim here: going thin means less storage, which can mean less power and lower datacentre costs, which of course can mean cost savings!

So onto the technical stuff.

What network ports does it use?

Windows: port 135 (RPC/DCOM)

Linux: port 22 (SSH)

ESX: port 443 (HTTPS)

 

How much network traffic will there be?

Network traffic resulting from a capacity scan is marginal. Testing shows less than 4KB per scanned host.

 

What permissions does it require?

Windows: User account must have Read Security and Remote Enable permissions on WMI namespace root\cimv2

Linux/Unix: User Account must be enabled for SSH login and have permission to run the df and lshal commands.

ESX server: User account must have vCenter browse privilege.

 

Will it add overhead?

No; the discovery tool is lightweight, and its execution overhead is marginal, typically less than 5% CPU utilisation.

 

Intrigued? Why not contact me and organise a free assessment of your storage and VMware environment. If you are not in my area, contact me anyway, as we have ninja champions all over the world.

 

 

 

HP Storage Virtualization VMware

HP 3PAR Management plugin for VMware vCenter

 

What is it?

The HP 3PAR Management Software Plug-In for VMware vCenter is a vSphere vCenter management console plug-in that allows easy identification of HP 3PAR virtual volumes used by virtual machines and datastores.

 

What does it do?

It provides a single-pane view of the virtual machines and the 3PAR virtual volumes they are attached to. It can show capacity, usage, and thin and thick properties on a per-volume basis, as well as the disk type the virtual volumes are made up of. The beauty of this plugin is that you do not need to log in to the InForm console to view the virtual volumes mapped to your ESXi hosts. Note: it does not allow you to provision or manipulate the storage from this GUI; the InForm Management Console is still required for that!

 

A peek at the interface

 

This screenshot really does say it all: you can see how much you are saving by using the 3PAR thin suite functionality, and you can also see the name of the virtual volume that the VMFS is housed on, making it easier for administrators to locate and map virtual machines, troubleshoot, and provide reports.

And of course, it shows you whether or not you are getting the most out of your EZT VMDKs by showing the status of the zero-detection engine on the array, ensuring your 3PAR stays thin!

And best of all, the plugin is free!

For more information on this great plugin please visit here or contact me

 

HP Storage Virtualization VMware

T10 UNMAP in VMware vSphere and 3PAR

What is T10 UNMAP?

UNMAP is a SCSI command used to reclaim space from blocks that have been deleted by a virtual machine (OS or application).

In vSphere 5, UNMAP is used for space reclamation of deleted data after common operations.

This is particularly beneficial and important in thinly provisioned environments as it allows the storage array to realise these are unwanted or unused blocks and to return them to the free capacity pool.

 

What makes UNMAP important?

HP 3PAR thrives on its thin suite, and it supports UNMAP as of InForm OS 3.1.1. However, the first release of vSphere 5.0 had issues with unexpected timeouts when the UNMAP command was issued from the ESXi host during an operation.

So when an operation like a Storage vMotion occurs or a virtual machine is deleted, a copy or movement of data is kicked off, essentially leaving deleted blocks behind; HP 3PAR can only realise this once UNMAP is issued to reclaim those blocks. Since administrators use Storage vMotion on a day-to-day basis, the impact can be huge.

 

So what do you do?

Disable it until VMware release a fix – expected in the next patch release.

Use a manual command such as sdelete on Windows or dd on Linux to write zeros at the filesystem level; the 3PAR's ASICs will pick these zeros up so long as zero detect is enabled, as sketched below.
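
A minimal sketch of the dd approach inside a Linux guest (the mount point and size are hypothetical; grow a zero-filled file over the free space, then delete it):

dd if=/dev/zero of=/mnt/data/zerofile bs=1M count=10240   # write ~10GB of zeros (hypothetical size)
rm /mnt/data/zerofile                                     # delete the file; zero detect can then reclaim the blocks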

 

How can UNMAP be disabled?

Support for UNMAP in our storage arrays is enabled by default and cannot be disabled by customers. In vSphere, support for UNMAP is also enabled by default, but it can be disabled via the command-line interface (note: 0 disables the setting, 1 enables it):

esxcfg-advcfg -s 0 /VMFS3/EnableBlockDelete
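
An equivalent esxcli form, worth verifying against your build, is:

esxcli system settings advanced set --int-value 0 --option /VMFS3/EnableBlockDelete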

 

This can be completed automatically by installing ESXi 5.0 Patch 02. For more information, see VMware ESXi 5.0 Patch Image Profile ESXi-5.0.0-20111204001-standard (2009330).

 

Summary

  • This issue only affects thin provisioned arrays in the 3PAR family
  • UNMAP is a SCSI command standardized within the T10 SCSI command set; it is not specifically a vSphere 5 feature
  • This issue only occurs when using ESXi 5.0 and 3PAR arrays that have the latest firmware
  • Customers can still reclaim space without UNMAP using the 3PAR array's zero-detect functionality, should they disable UNMAP
  • A patch is available that resolves this issue

Performance VMware

vBenchmark – A Quick look

What is vBenchmark?

vBenchmark is a simple-to-use tool from VMware that measures your virtual environment: how much physical RAM you are saving by virtualising your servers, how long it takes to provision a server, and what HA, Storage vMotion and vMotion are doing for you from a downtime-savings perspective.

It can obtain performance metrics across one or multiple vCenter Servers.

 

What can it do for me?

It can give you a view of what going virtual as a business decision is doing for you, and it is good for justifying the decision to move to a virtualised environment. Cost savings on hardware is the first and most prominent benefit that virtualisation can offer, not to mention the green factor. vBenchmark gives you figures and statistics from both a resource and a business perspective.

 

 What does it look like?

I set vBenchmark up on my laptop as a virtual machine to take a look at the interface. Whilst my laptop isn't the most performant system around, it did give me the ability to gain an overview of the tool.

 

The Console

 

The web interface

Where can I get it from?

http://labs.vmware.com/flings/vbenchmark

 

Overall, a very useful tool, provided you have historical data in your VCDB to populate it. It works brilliantly and can assist in trending and future-proofing your virtual environment.

Virtualization VMware

VMware vSphere 5.0 Whitepapers


I was sending these links to a VMware vSphere 5 newcomer, as they give great overviews of some of the new features in vSphere 5.0, and I thought it would save some time for folk collating the same.

Enjoy!!

 

What’s New in vSphere 5.0

What’s New in VMware vSphere 5.0: VMware vCenter

What’s New in VMware vSphere 5.0: Platform Whitepaper

What’s New in VMware vSphere 5.0: Performance Whitepaper

What’s New in VMware vSphere 5.0: Networking Whitepaper

What’s New in VMware vSphere 5.0: Storage Whitepaper

What’s New in VMware vSphere 5.0: Availability Whitepaper

What’s New in VMware Data Recovery 2.0 Technical Whitepaper

What’s New in VMware vCenter Site Recovery Manager 5 Technical Whitepaper

What’s New in VMware vCloud Director 1.5 Technical Whitepaper

VMware vSphere Storage Appliance Technical Whitepaper

Virtualization VMware

Cold migration in VMware – What is it?

So the topic of what a cold migration is (in VMware vMotion speak) came up in a conference call today with a customer.

It is a migration strategy for when there are CPU compatibility constraints between certain revisions of CPUs; it basically means there is no path to use vMotion as such, so there will be associated outages.

When does it occur?

As mentioned, cold migration is a strategy or decision to migrate virtual machines between different revisions of CPU (whether by manufacturer or model). An example might be going from an AMD chip to an Intel chip. There are also cases where moving between CPUs from the same vendor requires an outage, so cold migration would be an option there too. More information on what to check and how to check it can be found here.

What happens? 

Quite simply, the virtual machine is powered off on the source host and powered on at the destination, so there is an outage; but as long as both ESXi servers have visibility of the same shared storage, cold migration can be very fast and the virtual machine downtime kept to a minimum.

The difference to vMotion

Biggest difference: vMotion is (typically) performed without any downtime for the virtual machine, whereas cold migration requires an outage to power the virtual machine down and power it back up on the destination host.

Another notable difference: cold migration happens over the management network and not the VMkernel interface (which vMotion uses).

That is cold migration in a nutshell!

Virtualization VMware

VCP 510 exam – My thoughts

Having recently sat the VCP 5 exam, I thought I would offer some tips and study advice. Overall, there are 85 multiple-choice questions and you have 90 minutes to complete them, and there is a lot more focus on new features, troubleshooting and configuration than in previous versions of the exam, which were usually based around limitations and maximums.

 

 

Networking

  • VMkernel security – how (and where) to do it.
  • Load balancing policies
  • Path Selection policies – quite a few questions on these.
  • Traffic shaping
  • vDS – lots on this topic. What features are unique to the vDS and what aren't.
  • Promiscuous mode vs forged transmits – about two questions involving these
  • Restarting the management network – how to do it in vSphere 5
  • Securing your host – turning off SSH, etc.
  • Uplinks – what they are and what they do. Relationship with vSwitches
  • CNA – a question around image profiles and driver certification
  • iSCSI and the implications of changing certain parameters such as CHAP

Storage

  • New features: VMFS3 vs VMFS5, migration to VMFS5 – what changes and what doesn't. Maximum file size supported
  • VSA – how to configure. Valid states of a VSA
  • RDM – physical compatibility vs virtual
  • Storage Profiles – learn what they do.
  • VMkernel interaction with the storage array – what the array does and what the kernel does
  • VAAI – benefits of VAAI and what a supported array can do.
  • Troubleshooting storage performance – what counters to look at

Advanced Features

  • HA – What it does.
  • FT – Why you would use it – use cases
  • DRS – Ports for DRS and HA
  • vMotion/EVC – Where to configure and requirements around CPU. NPIV and vMotion compatibility
  • Resource Pools, shares, limits and reservations – Lots of questions around these, learn what increasing and decreasing each element does and the effect.
  • Performance tuning and troubleshooting – they give you line graphs and ask you to understand and diagnose the issue; scenario-based troubleshooting (i.e. an image shows an error: what is the cause?), esxtop
  • Memory conservation – TPS vs ballooning
  • Upgrading from ESX3 to ESXi5 – One question on this, basically you can’t do it.
  • Upgrading from ESX4 to ESXi5 – rules, methods and things you need to look out for.
  • How to back up an ESXi host before upgrading
  • Understanding alarm warnings and alerts and how to configure
  • vApps and IP allocations, and what objects they contain.
  • Log file configuration – increasing sizes, etc.
  • vCenter Server – What extra capabilities does it give you over managing a host directly.
  • Auto Deploy – learn image profiles and how to use them.
  • VSA – Quite a few on this, how to upgrade from earlier version was one question
  • Modifying Users and permissions and the impacts

Hope that helps.

Virtualization VMware

Thin vs Thick: VMFS formats.

This took me a little while to get my head around the concepts.

But here is my understanding:

Thin – In this format, the size of the VMDK file on the datastore is equal to the amount of data used within the VM itself, with space zeroed out just prior to I/O being written. So, for example, if you create a 200GB virtual disk and populate it with 100GB worth of data, the VMDK will be 100GB in size and will grow as more data is added to it.

Thick – The VMDK file on the datastore is the full size of the virtual disk you provisioned, but no pre-zeroing takes place at creation time as it does in the eagerzeroedthick format. So, for example, if you create a 200GB virtual disk and write 100GB worth of data to it, the VMDK will still appear as 200GB in size but only contain 100GB worth of data.

Eagerzeroedthick – The “truly” thick virtual disk: the size of the VMDK file on the datastore is equal to the virtual disk size that is provisioned, and all blocks are zeroed at creation. If you create a 200GB virtual disk and write 100GB worth of data, the VMDK will be 200GB and contain 100GB worth of data and 100GB of zeros.

Which format is the best? There are pros and cons for each. Thin format requires more monitoring and can't be used with RDMs, while thick/eagerzeroedthick are not as space-efficient as thin, so you might not see as much space savings when implementing those types.
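
A hedged sketch of creating each format from the ESXi shell with vmkfstools (the size and datastore path are hypothetical):

vmkfstools -c 200g -d thin /vmfs/volumes/datastore1/demo/thin.vmdk
vmkfstools -c 200g -d zeroedthick /vmfs/volumes/datastore1/demo/thick.vmdk
vmkfstools -c 200g -d eagerzeroedthick /vmfs/volumes/datastore1/demo/ezt.vmdk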

Virtualization VMware

Enabling SSH and SFTP on ESXi 5.x Host

So I had just built an ESXi 5 VM when I wanted to upload some ISOs into a datastore; alas, SSH is turned off by default in ESXi 5.

So, the first part is to turn it on; you need to be physically at your ESXi console in order to do this part.

At the ESXi console screen:

Log on using the root account.

Select “Troubleshooting Options” from the menu.

In the next menu, select “Enable SSH”; you will notice that it says ‘Disabled’ in the right-hand pane.

Press Enter to change it to Enabled.

That's it! You can now quit out of there and go on to the next part, which is to get the SFTP server running; truth is, it is missing by default in ESXi.

So let's get it.

SSH into your ESXi box using the root account, then run:

cd /sbin                                                                     # change to the right directory
wget http://thebsdbox.co.uk/wp-content/uploads/2010/08/sftp-server.tar.gz   # download the sftp-server files
tar xzvf sftp-server.tar.gz                                                  # extract into the current directory (/sbin)
rm sftp-server.tar.gz                                                        # remove the archive now that it is extracted

Log out. That's it! You should now be able to SFTP files to and from your ESXi 5 host, as in the example below.
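
A quick hedged usage example from a client machine (the host name, ISO and datastore paths are hypothetical):

sftp root@esxi01.example.com
sftp> put ubuntu-12.04.iso /vmfs/volumes/datastore1/ISO/ubuntu-12.04.iso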

HP Virtualization VMware

HP 3PAR recovery manager for vSphere.

What is it?

HP 3PAR Recovery Manager Software for VMware vSphere enables the protection and rapid recovery of virtual machines and VMware datastores.


What can it do?

It provides virtual copy management and allows you to take LUN-level snapshots of virtual machines and datastores through the vSphere management GUI by using array-based snapshots that are quick, space efficient, and virtual machine aware.

The plugin makes it possible to create hundreds of virtual copies. The number of virtual copies to retain and the retention period for each virtual copy can easily be specified.

This plug-in can do granular restores at the VMFS level, the virtual machine level, or the individual file level.

Neat stuff!

If you want to know more, get in touch and email me or visit http://www.hp.com/go/3PAR

 

 

HP Storage VMware

HP 3PAR Dynamic Optimisation and VMware vSphere

In a nutshell, HP 3PAR Dynamic Optimisation Software is a software license/product enabled on the storage array itself that can provide a non-disruptive way to make changes to storage volumes hosted on the HP 3PAR Storage System.

Storage administrators can move volumes between different drive types or tiers (SSD, Fibre Channel, SATA/Nearline), or re-level volumes as new drives are added into the array, all without outages or impacting any hosts the system is busy serving I/O to.
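
A hedged sketch from the 3PAR InForm CLI (the CPG and volume names are hypothetical, and the tunevv syntax should be verified against your InForm OS release):

tunevv usr_cpg CPG_NL_r6 vv_esx_datastore01   # non-disruptively move the volume's user space to a Nearline CPG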

So how is this good for virtual environments? It can be used to move running VMs between different tiers without impacting what the virtual machines are doing.

Similarly, as new drives are added to the array, the LUN that ESX is using can be striped across the new drives on the fly, without taking an outage at the ESX server level. VMware's vMotion technology offers somewhat similar functionality, but at the host layer.

Dynamic Optimisation works at the storage layer and can be used to optimise storage service levels, while VMware vMotion can be used to optimise CPU utilisation across multiple hosts. Very similar to Storage vMotion, but all on the array itself!

Pretty cool!

HP Virtualization VMware

http://www.vmware.com/a/vmmark/ – Impressive Benchmarks by HP

Personally, I love seeing stuff and tests like these.

Whilst technologists representing other companies may be quick to defend why their particular company's hardware didn't get the top score they would have hoped for, I try to take another angle when approaching these sorts of benchmarks.

Why? I’ll break down the reasons why I think these are good to have.

Competitive – Simply put, without some form of benchmark or competitor to design your products against, technology wouldn't get as sophisticated as it has. Server virtualization hasn't always been as prevalent or as heavily utilised as it is in recent years, and these benchmarking results show there are some worthy competitors to HP in the server market. It wouldn't be as much fun if it were a one-horse race; this keeps the engineering teams from Fujitsu, Dell, HP and the rest returning to the drawing board to make servers better and better.

And for a virtualization geek, this is exciting.

Trending – We can see how well servers do now and compare again in five years' time. There may be a gradual improvement in scores over those five years, or they may just increase exponentially.

Reviews – Simply put, someone looking to buy a server for virtualization purposes has a great source of information on the best performing models as a starting point to purchasing the right server. It also provides the consumer with an idea of just what elements affect server performance.

Well done to the top four server vendors – Fujitsu, Dell, HP and Cisco

Virtualization VMware

vSwitch vs Distributed vSwitch – What you need to know.

Content lost due to a hacking incident; they actually left the heading intact on this one. Will need to write this again 🙁

Virtualization VMware

vSphere 5.0 and the new licensing scheme

OK, so licensing in vSphere 4 was straightforward: licenses were bought per CPU socket, and you could run unlimited virtual machines (VMs) on the host until it crashed and burned (if you so desired).

Things have changed in vSphere 5.0: whilst still working on a per-socket basis, licenses now also come with a set amount of virtual RAM, or vRAM, that can be allocated to VMs.

This could result in a customer spending unnecessary dollars on additional licenses to remain compliant with the new vSphere 5 licensing scheme. If the customer buys hosts that can hold a large amount of RAM, these license costs can prove very significant.

To quote VMware from their white paper on the matter:

"VMware vSphere 5 is licensed on a per-processor basis with a
vRAM entitlement. Each VMware vSphere 5 processor license
comes with an entitlement to a certain amount of vRAM capacity,
or memory configured to virtual machines. Unlike in vSphere 4.x
where core and physical RAM entitlements are tied to a server
and cannot be shared among multiple hosts, the vRAM entitlements
of vSphere 5 licenses are pooled, i.e. aggregated, across all vSphere
servers managed by a vCenter Server instance or multiple vCenter
Servers instances in Linked Mode"
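
As a hedged worked example (entitlement sizes varied by edition and were revised after launch, so treat these numbers as hypothetical): if each processor license carried a 32GB vRAM entitlement, a two-socket host would need two licenses and would contribute 64GB to the pooled vRAM. If the VMs across the pool were configured with a combined 80GB of vRAM, a third license would be required to stay compliant, even if physical RAM were plentiful.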

 

More information on the licensing scheme can be found on VMware's website at http://www.vmware.com/files/pdf/vsphere_pricing.pdf