Happy New Year to you! I trust that if you are reading this, you survived the Christmas and New Year break and hopefully took some time off to spend with family and friends.
Last year brought a ramp-up in the adoption of cloud services among many organisations – probably not as much as a lot of folk thought, so perhaps not enough to warrant the term 'shift', but certainly the maturing understanding of what cloud computing is (and what it is not) led to more and more firms asking themselves: how can we do this better and more cost-effectively?
This is largely due to the massive number of vendors in the ICT universe now marketing their offerings as "cloud-ready" – Microsoft, Oracle, Amazon, and HPE are some of the notable larger players aligning to this new phenomenon, along with smaller vendors now claiming to offer a path to adopting cloud computing.
One of the pitfalls of heading down the cloud path is that companies are finding out that some legacy services simply cannot transform. More often than not, this is due to the fact that the hardware platforms that exist in newer public cloud providers do not resemble their on-premises infrastructure and architecture.
The fallout from this difference is that their most critical applications are no longer performing as expected, proven or even supported, leaving their business at risk. So a lot of companies want cloud, but simply cannot (or perhaps will not) transform to adopt it, due to this risk.
Let me pose a question to you – if you are considering cloud but have held off until now due to the unknown risks associated with the change: what is the fundamental reason for the reluctance?
Perhaps cost is an inhibitor, or perhaps it is the unpredictability of the transformation and how things are going to look and perform on the other side that has kept your business from travelling down the cloud road.
Whatever it is, it is not always possible to lift and shift all services to the cloud – on a conservative view, for every single CPU associated with an Oracle Database, there are at least three CPUs of other workloads tied to that database which rely on it. Put in this context, the task of transforming your business to be cloud-enabled seems enormous!
My next post will cover what Oracle is doing in regards to this transformation and managing change and risk by providing predictability and like-for-like performance for your Oracle database environment as well as other workloads you may have.
I will further discuss Oracle’s view on the journey to cloud and how it can help you:
- Streamline Enterprise IT on-premises to be cloud ready should you ever need it
- Expand your Private Cloud
- Deploy a Hybrid Cloud
- Bring Public Cloud onto your Premises
- Lift and Shift to Public Cloud
Recently, Oracle held Oracle OpenWorld in San Francisco, USA. For those who do not know what that is (perhaps a small group of you), Oracle OpenWorld is our big annual event for business decision-makers, IT management, and line-of-business end users, showcasing new Oracle product features and other news.
It also gives our customers, partners and employees the chance to hear directly from the Oracle Executive team and product teams on company direction and product strategy.
Although I did not attend personally, I was still excited and proud to learn that one of our customers, Weta Digital, from my hometown of Wellington, New Zealand, was making the trip over to San Francisco to speak on stage about their ground-breaking VFX business and how Oracle ZFS provides the low latency and disk performance needed to keep up with their intense rendering operations.
So who is Weta?
Weta Digital is a world-class visual effects company based in Wellington, New Zealand. If you have not heard of them before, you might have seen some of the cutting-edge visual effects they have produced for blockbuster movies such as The Lord of the Rings trilogy, The Hobbit trilogy and King Kong – and to top it off, they have five Visual Effects Oscars to their credit to validate their world-class status.
But producing the magic to bring these blockbusters to life is a time consuming process, and in the movie world time is crucial. The Media and Entertainment industry giants place incredible demands on Weta to meet tight production milestones or else their movie slips. And this can cost the giants…. big.
So how does Weta Digital gain more time back? Processing large movie files requires a powerful storage solution to handle the intense digital rendering requirements of the post-production stage of a movie.
Large block size for tuning
The ZFS Storage Appliance gives Weta Digital the ability to use a 1MB record size to enhance the streaming media writes coming into the appliance. By writing these larger records, large files such as media files are broken into big chunks and written extremely quickly.
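The effect of a larger record size can be sketched with some simple arithmetic. This is an illustrative sketch only, not ZFS internals, and the 50 MB frame size is a made-up example:

```python
# Illustrative sketch: fewer, larger records means less per-record
# overhead when streaming big media files to the filesystem.

def records_needed(file_bytes: int, record_size: int) -> int:
    """Number of records required to store a file of file_bytes."""
    return -(-file_bytes // record_size)  # ceiling division

frame = 50 * 1024 * 1024                       # hypothetical 50 MB frame
default = records_needed(frame, 128 * 1024)    # 128 KB records -> 400
large = records_needed(frame, 1024 * 1024)     # 1 MB records   -> 50

print(default, large)  # 8x fewer records to allocate and track per file
```

The exact gain depends on the workload, but for large sequential media writes the bigger record size clearly cuts down the bookkeeping per file.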
If you read my last blog post, you will remember that the DRAM cache handles up to 90% of all IO thrown at the ZFS appliance. With DRAM performing up to 1,000 times faster than flash, you can understand how incredible IOPS figures are achieved from the appliance.
Multi-core performance means all cores work simultaneously to deliver the data and directory services Weta consumes – whether running the compression engine to deliver a better return on the storage investment, or calculating the optimal way to serve data out to the client.
The combination of the ZFS high-speed, DRAM-centric architecture, tuneable record size and SMP architecture is what allows Weta to achieve the performance and flexibility required for the media workloads they throw at the appliance.
What this means is that faster response times and quicker rendering completion ultimately give time back to the Weta rendering team to continue working. With thousands of servers in their rendering farm, this allows Weta to deliver superior visual effects for blockbuster movies on time.
Watch the video below to hear Kathy Gruzas, CIO of Weta Digital, talk about what this means for her business and her team.
This is a two-part blog post that will provide you with an overview of Oracle ZFS and how it relates to VMware environments. The first part (this blog post) will introduce Oracle ZFS storage and discuss the architectural fundamentals and how they benefit VMware workloads. The second part will look at VMware specific support, integration and why this appliance proves to be an excellent platform to host your virtual workloads. With that, let’s introduce Oracle ZS!
What is Oracle ZFS
The Oracle ZFS Storage Appliances are a set of highly scalable, resilient storage systems that provide a platform for mixed, high-performing workload types. Built on an SMP architecture designed to make full use of all CPUs and threads, and using the ZFS file system with its multi-layer caching architecture to run applications as quickly and efficiently as possible, the appliances support multiple protocols ranging from file-level NFS and SMB to block-level Fibre Channel and iSCSI. Depending on your requirements, the appliances come in two flavours – ZS3-2 and ZS4-4 – with the key differences between these systems being predominantly hardware-related (CPU cores, maximum supported cache and disk, etc.). Take a look at the summary below to identify the major differences:
Retrieved from http://www.oracle.com/us/products/servers-storage/storage/nas/oracle-zfs-storage-appliance-ds-2373139.pdf
The Virtual Architecture of Oracle ZFS
There may be a misconception that Oracle ZFS Storage Appliances are designed only for Oracle Database workloads, given the unique capabilities customers gain when combining Oracle storage with the Oracle Database, which allow them to perform some extended operations. Whilst those capabilities ARE real, the misconception that other workloads do not work well on these appliances is NOT true. In fact, given the random nature of virtualisation and cloud workloads, ZFS storage is a great choice for both Oracle and non-Oracle workloads. Why? Let's start by looking at the architecture of the ZS appliances – Hybrid Storage Pools, data integrity and high-performance RAID striping.
Hybrid Storage Pools
A Hybrid Storage Pool (HSP) is the virtual pool layered over the whole collection of drives at the controller level. Each controller in a ZS owns a collection of drives which can be added to a virtual pool, or HSP. From there, you can carve out block or file shares for your clients. Nice and easy!
End-to-end data integrity
Every block of data is checksummed to protect against silent data corruption – the file system employs a 256-bit checksum for every block. Instead of storing the checksum with the block itself, ZFS stores it in the block's parent: every parent block pointer contains the checksums of all its children, so the entire pool can validate that the data is both accurate and recoverable.
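The parent-checksum idea can be sketched in a few lines of Python – a toy Merkle-style tree, not the actual ZFS on-disk format, with SHA-256 standing in for ZFS's 256-bit checksum:

```python
# Toy sketch of "checksum stored in the parent": each parent records
# the hashes of its children, so corruption in a child block is
# detected by its parent rather than trusted on its own say-so.
import hashlib

def checksum(data: bytes) -> str:
    return hashlib.sha256(data).hexdigest()  # a 256-bit checksum

class Node:
    def __init__(self, data: bytes, children=None):
        self.data = data
        self.children = children or []
        # the parent stores each child's checksum, not the child itself
        self.child_sums = [checksum(c.data) for c in self.children]

    def verify(self) -> bool:
        return all(
            checksum(c.data) == s and c.verify()
            for c, s in zip(self.children, self.child_sums)
        )

leaf = Node(b"movie frame data")
root = Node(b"metadata", [leaf])
assert root.verify()
leaf.data = b"silently corrupted"   # simulate bit rot on disk
assert not root.verify()            # the parent's stored checksum catches it
```

Because every pointer carries its child's checksum, a validation walk from the top of the tree can vouch for every block beneath it.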
RAID-Z – High-performance striping
Oracle ZFS Storage combines the capabilities of RAID volume management and a file system, which allows intelligent decisions to be made about block placement, resilvering (RAID rebuilds), data repairs, etc. For example, if a disk needs to be replaced, only the 'live' data needs to be copied to the new drive, which can reduce rebuild times. As another example, if a drive were to misbehave and start handing back corrupt data, the metadata kept within ZFS allows the system to identify and correct problems on the fly, transparently to the application.
Fundamentally, ZFS offers software-based RAID called RAID-Z, with redundancy coming in four flavours: mirrored, RAID-Z1, RAID-Z2 and RAID-Z3:
Mirrored – writes are duplicated in full across two or three drives, depending on your redundancy requirements.
RAID-Z1 protects data against a single drive failure by storing data redundantly among multiple drives. It is similar to standard RAID 5, but does not have the write penalty that RAID 5 encounters.
RAID-Z2 is similar to RAID 6 and offers double parity to tolerate two concurrent disk failures; performance is equivalent to RAID-Z1.
RAID-Z3 is similar to RAID-Z1 and RAID-Z2, but adds a third parity point for extra protection. This tolerates up to three drive failures, with performance similar to RAID-Z1 and RAID-Z2.
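The single-parity case can be illustrated with XOR arithmetic – a deliberately simplified sketch (real RAID-Z uses variable-width stripes and lives inside the file system), but the reconstruction maths is the same:

```python
# Simplified single-parity sketch in the spirit of RAID-Z1: an XOR
# parity strip lets any one missing data strip be reconstructed.
# All strips are assumed to be the same length here.

def xor_parity(strips):
    out = bytearray(len(strips[0]))
    for strip in strips:
        for i, b in enumerate(strip):
            out[i] ^= b
    return bytes(out)

data = [b"AAAA", b"BBBB", b"CCCC"]   # data strips on three drives
parity = xor_parity(data)            # parity strip on a fourth drive

# drive holding data[1] fails: rebuild its strip from the
# surviving data strips plus the parity strip
rebuilt = xor_parity([data[0], data[2], parity])
assert rebuilt == data[1]
```

Double and triple parity use more involved codes than plain XOR, but the principle is the same: enough redundant information is kept to solve for the missing strips.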
All RAID types can be tailored to the workload, but overall the architecture is specifically designed for high-bandwidth, low-latency application requirements – which is why workloads such as Splunk, Microsoft SQL Server, Oracle RAC and other database types work so well!
But how does all this relate to virtualisation and cloud environments? Why is it so beneficial? Let's explain in the next section.
Virtualisation and Private Cloud Workloads
Virtualisation and cloud workloads have fundamentally changed the type of stress placed on storage systems. Traditional storage architectures that relied on disk spindles (either conventional or flash) for performance no longer meet the requirements of these new types of workloads. Pooled, shared resources mean that CIOs and CTOs want to maximise their technology investment by increasing the utilisation and efficiency of the infrastructure. This shift in mindset has driven the rise of private cloud environments and converged infrastructure. Legacy storage infrastructure struggles with these newfound workloads, resulting in some companies retrofitting their architecture to meet the demands. So how does Oracle do it?
We include a massive dynamic random-access memory (DRAM) cache and an operating system that optimises its use, allowing up to 90 percent of I/O to be served from the fastest possible medium – one considerably faster than flash drives.
Serving Virtual workload IO’s via Hybrid Storage Pools
Back in 2008, Sun (later acquired by Oracle) devised a way to serve storage out of DRAM, resulting in faster response times for tier-one application workloads. How does this happen?
When IO requests come in, our intelligent adaptive cache manages the IO from any workload type. This smart management results in up to 90% of incoming IO requests being served out of fast DRAM, giving extremely fast response times.
Once the IO has been served, the data remains in DRAM and is retained or demoted to disk based on how recently and how frequently it is used – the Adaptive Replacement Cache intelligently adjusts its contents to suit the workload. So when data is requested from the ZFS appliance, it first looks to DRAM; if the data is there, it can be retrieved extremely quickly and returned to the requesting application. The even cooler thing here is that there are multiple levels of acceleration within the ZFS storage stack, which determine how best to service each IO request. The first level is the ARC, or Adaptive Replacement Cache, which resides in DRAM; the second level (consulted when the data is not present in the first) is the L2ARC, which resides on SSD – offering a slower response time, but still as quick as you could expect from SSDs.
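The two-level lookup order can be sketched as a toy read path. The class and tier names here are my own, and the real ARC replacement policy (balancing recency and frequency) is far more sophisticated than a pair of dictionaries – this only shows the order in which the tiers are consulted:

```python
# Toy two-tier read path: check the DRAM tier (ARC), then the SSD
# tier (L2ARC), then fall through to the backing "disk".

class TieredCache:
    def __init__(self, disk):
        self.arc = {}     # DRAM tier: fastest
        self.l2arc = {}   # SSD tier: slower, larger
        self.disk = disk  # backing store: slowest

    def read(self, block_id):
        if block_id in self.arc:
            return self.arc[block_id], "ARC"
        if block_id in self.l2arc:
            data = self.l2arc[block_id]
            self.arc[block_id] = data          # promote to DRAM on hit
            return data, "L2ARC"
        data = self.disk[block_id]
        self.arc[block_id] = data              # cache for the next read
        return data, "disk"

cache = TieredCache(disk={1: b"cold block"})
assert cache.read(1)[1] == "disk"   # first read misses both tiers
assert cache.read(1)[1] == "ARC"    # second read served from DRAM
```

The point of the layering is simply that repeat reads of warm data never pay the disk penalty twice.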
One might argue that various AFA models on the market today offer similar functionality and smarts within a system, which is correct; the difference is that the SSDs that typically serve their IO are not as close to the CPU as the ARC, so the ARC delivers a quicker response time.
SMP for performance
Oracle ZFS runs on a symmetric multiprocessing (SMP) operating system. A ZFS appliance is capable of running thousands of CPU threads simultaneously, with full access to all I/O devices, controlled by a single operating system instance that treats all processors equally. This prevents the system from running into CPU bottlenecks that may impact storage and, subsequently, VM performance. (Holy Kahuna!)
Achieving higher rates of VM Density
To achieve a better ROI on hardware in virtual and cloud environments, Oracle ZS excels at achieving high VM density ratios. How does it do this?
VM density for the purpose of this article can be defined as number of virtual machines housed on the datastore and does not relate to the number of virtual machines residing on server architecture.
Generally speaking, the main challenge in achieving high VM density is IO bottlenecks; the DRAM-centric architecture employed in the ZFS subsystem dramatically increases the number of VMs one can deploy per system, lowering costs and increasing overall efficiency! ZFS offers multiple storage profiles (mirrored, single-parity, double-parity and triple-parity) depending on your application performance and availability requirements.
Reducing storage footprint with De-duplication
Generally speaking, virtual workloads are great candidates for de-duplication: if you run multiple VMs of the same type on the same datastore within ZFS, they share many identical blocks within their VMDKs. The de-duplication engine recognises this and stores only one copy of each identical block, resulting in higher ROI and efficiency for your infrastructure. ZFS also offers compression to further reduce your storage footprint and maximise your investment.
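The core dedup mechanism can be sketched as a hash-indexed block store – a toy model, not the actual ZFS dedup table:

```python
# Toy block-deduplication sketch: identical blocks (say, shared OS
# files inside many similar VMDKs) are stored once physically and
# referenced by their hash thereafter.
import hashlib

class DedupStore:
    def __init__(self):
        self.blocks = {}   # hash -> block data (stored once)
        self.refs = []     # logical view: one hash per write

    def write(self, block: bytes):
        h = hashlib.sha256(block).hexdigest()
        self.blocks.setdefault(h, block)  # physical copy only if new
        self.refs.append(h)               # logical reference always

store = DedupStore()
for _ in range(10):                # ten VMs writing an identical block
    store.write(b"windows system file block")
store.write(b"unique app data")

assert len(store.refs) == 11       # 11 logical writes...
assert len(store.blocks) == 2      # ...but only 2 physical blocks kept
```

The more alike the VMs on a datastore are, the closer the physical footprint collapses towards a single copy of the shared blocks.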
Analysing using Oracle DTrace Analytics – What is it?
Oracle DTrace provides visual, real-time storage analytics for your virtual environment, allowing customers to better utilise storage resources by identifying, troubleshooting and resolving storage bottlenecks. Sometimes these bottlenecks result in customers throwing more hardware at the problem to keep up; by leveraging these smart analytics, customers can squeeze the most out of their existing hardware before investing in new hardware. This is offered in the base default configuration, and Oracle ZFS can provide even further statistics using Advanced DTrace.
To enable Advanced DTrace, browse in the ZFS Storage Appliance GUI to Configuration > Preferences and tick the box entitled "Make available advanced analytics statistics". Note that this should only be used for troubleshooting purposes and should not be left on; you can get a fairly sufficient level of analytics without it turned on:
Once this has been ticked, it will enable an extended set of granular statistics available for the ZFS Storage appliance.
What does Oracle DTrace look like?
From the example below, you can gauge the level of detail it can provide on system resources; this is the dashboard view, and in practice you can drill down further.
In terms of NFS connectivity, the following analytic counters are available for monitoring/troubleshooting etc:
- NFSv3 operations per second of type read broken down by latency
- NFSv3 operations per second of type write broken down by latency
- NFSv3 operations per second broken down by size
- NFSv3 operations per second broken down by type of operation
- NFSv3 operations per second of type read broken down by size
- NFSv3 operations per second of type write broken down by size
This also provides a good mechanism to show that all CPU resources in the system are being used (thanks, SMP!). Another example is diving a level deeper to determine where these read/write IOs are coming from, what particular files they are attributed to, etc. – being able to observe these operations on a per-VM basis is extremely helpful.
Stay tuned for the next part in which I dive into some of the VMware related aspects of the ZFS appliance such as VASA, VAAI as well as the Storage Manager Plugin for VMware!
Happy New Year! I hope that my readers have had a good break and are fully refreshed to hit 2016 running – I know for me it is going to be a defining year after 2015. (It's now March, and 2015 seems a long time ago!) I have not blogged for a while and have been fairly absent from Twitter as well as LinkedIn. So where did I fall off the radar to? HIATUS!
For the last six months, I have been taking a hiatus (going to be honest here and say I was not familiar with this word originally):
(Retrieved 27th February from http://www.dictionary.com/browse/hiatus)
This resulted in resigning from my job and making a "bucket list" that included seeing parts of Australia and New Zealand I had never seen before. Sightseeing, hiking, wine tasting in different regions, driving, kayaking, cycling, swimming and salmon fishing were some of the activities I did whilst on my break.
So what is next?
I have had a lot of people ask me what is next for me, and I am pleased to announce that I have agreed to join the team at Oracle Australia, focusing on all things storage cloud, analytics and virtualisation.
My role as Principal Technologist will see me working with customers of all sizes across all sectors in Australia and New Zealand, helping them plan new IT services through Oracle cloud, as well as working with channel partners to deliver top solutions to their customers.
Simply, The People, Technology and Possibilities:
Having spoken to a lot of the people I will be working with in this role, I gained a clear view and appreciation that there are some very dedicated and enthusiastic people on board, which is truly exciting and invigorating – when you are surrounded by that enthusiasm and those positive attitudes, how can one not be inspired?
I spoke to the leadership team and got the feel that it was the right type of leadership to work for (extremely important to me); there is so much opportunity to build a really successful business through partners and customers. The truth is, the group is a start-up business within a massive corporation which gives off a real sense of start-up culture and vibe.
If we look at the most recent Integrated Systems Magic Quadrant published by Gartner as of August 2015, we see that Oracle along with VCE, Nutanix and HP are real contenders in the converged infrastructure market which is rising to be the future of storage platforms as we know it.
Yes, there are still companies that are focusing just on the next evolution of storage (flash), but some of these contenders will struggle as customers are no longer wanting just storage and are looking for real-world integrated packages that are proven and backed by strong ventures. There will be storage startups exiting the market this year.
I like to build and grow businesses, and the Oracle storage business has so much potential to grow, with some pretty impressive tech – so evangelising and educating customers and partners on why this technology can really help with their IT challenges seemed like a fun way to spend my 9-5.
Lastly, Oracle is a huge multinational and traditionally not known for storage – so the opportunity to learn other platforms and technologies beyond storage was attractive. Whilst I do not think I will end up as a DBA type, I do know that their application stack sits in the enterprise space, so being exposed to these platforms would be a breath of fresh air.
vExpert 2016 – Evangelist
I am proud to be recognised again in this elite group by VMware – this is my fourth time receiving the Evangelist award. Although I have not been posting as much content as in previous years, I plan to step it up in 2016, as a new opportunity has arisen which I plan to use to engage the community.
What is really cool is that not all vExperts are alike – they are no longer posting similar content, thanks to VMware diversifying their portfolio. This is resulting in the mass generation of rich content covering all aspects of VMware! We have vExperts in server virtualisation, desktop virtualisation, network virtualisation, automation etc., and the list has grown phenomenally since I was first awarded a few years back. A special shout-out to Corey Romero for his hard work in keeping this program running.
For a more complete list please visit http://blogs.vmware.com/vmtn/2016/02/vexpert-2016-award-announcement.html
That's it for now; my next posts will focus on Oracle ZFS and some of the best practices and learnings I come up with around it! Stay tuned!
Today is a very exciting day at Veeam for those who use NetApp ONTAP and/or EMC VNX\VNXe primary storage in their environments. We are announcing further additions to our existing storage integration capabilities, allowing customers to:
- Use storage snapshots to create complete, isolated copies of your production environment in just a few clicks, for fast and easy testing and troubleshooting—available for VMware vSphere with HP 3PAR\StoreVirtual, NetApp ONTAP, and EMC VNX\VNXe primary storage.
- Perform VMware vSphere backups faster and with reduced impact on your virtual environment by backing up directly from NFS primary storage using Veeam’s proprietary NFS client.
- Completely eliminate the additional impact from backup activities on your production storage by retrieving virtual machine data from NetApp SnapMirror or SnapVault secondary storage systems, instead of from the primary storage system.
Also, earlier this year (actually about a month ago) we announced EMC VNX\VNXe snapshot integration, which allows Veeam Backup & Replication v9 to read snapshots from EMC VNX and VNXe arrays. These can be snapshots set up on a schedule via Unisphere, or within the Veeam Backup & Replication interface itself. Once Veeam has access to the storage array snapshot, customers can dive in, browse those snapshots and launch these powerful restore options without any agents:
It should be noted that this particular integration is limited to snapshots of NFS shares provided by the VNX\VNXe only – yes, this means no FC support… yet.
Why is this important for your business?
For me personally, this gives me added discussion points when speaking to our global alliance partners and their customers – leveraging storage snapshots is the best way to deliver availability for the modern data centre, and it is how I see customers taking their existing RPO/RTO capabilities into overdrive to achieve RTPOs of 5 minutes or better.
Storage and backup used to be treated completely separately: customers had a storage administrator and a backup administrator, each with their own agenda and arguably sharing no synergy. We then saw an evolution where storage and backup technologies converged yet were still treated as a pair of separates (oxymoron?) – hardware was procured that way, and it drove customers to look for synergies to maximise their ROI on each. Enter virtualisation and shared resources, and we now have a clear need to treat storage and backup as one. Customers need an availability strategy that touches both storage and backup and meets the business needs of the company it serves, and they are demanding more from their infrastructure. Veeam bridges that gap between storage and backup, tapping into storage infrastructure resources to enable fast, efficient and smart restores for tier-one applications. It's that simple.
All of these features will be available in Veeam Availability Suite v9 onwards due for release later this year.
For our official release on v9 and EMC integration please read http://www.veeam.com/news/new-veeam-availability-suite-v9.html
And for a more in depth article – check out my colleague Luca Dell’Oca’s post here
So as you know, VMware has recently gone GA with vSphere 6. Along with the new features, bells and whistles it brings, there is also a new requirement on the certification front! VCP5.5 was the last technical cert that I sat, and hopefully the last for a while given my current role and focus. But that is not why I am writing this post: whilst the release of vSphere 6 brings a new cert and cool features to the world of virtualisation, it is with great pleasure that I announce Veeam fully supports vSphere 6 in Veeam Availability Suite v8, with the following enhancements:
- Support for VMware Virtual Volumes (VVols) and Virtual SAN (VSAN) 2.0
- Quick Migration to VVol datastores
- Cross-vCenter vMotion awareness
- Storage Policy-Based Management (SPBM) policy backup and restore
- Support for backup and replication of Fault Tolerant VMs
- vSphere 6 tags integration
- Hot add transport for SATA virtual disks
- Monitoring and reporting support of VVols
This latest update to Veeam Availability Suite v8 is the largest non-version-increasing update in Veeam history, with more than 1,000 change sets applied. In addition to full support for VMware vSphere 6, it also includes complete integration with Veeam Endpoint Backup FREE, a new free product that enables IT to back up Windows-based desktops, laptops and even a small number of physical servers. How cool is that!?
VeeamON is our global event focused on availability for the modern data centre. It's a way for customers and partners to hear from Veeam's various product, technical and management teams about what we are doing in the availability space, and gives them a one-of-a-kind experience filled with education and networking opportunities.
This year, it's going to be bigger and better, and will once again be run in the city of sin – Las Vegas, at the Aria. It is going to be epic… Not convinced? Watch the following teaser.
And now go visit http://go.veeam.com/veeamon for more information on how to get there!!!
Veeam has recently added "Veeam Endpoint Backup FREE" to our portfolio of products, and I am excited to share with you what Veeam Endpoint Backup is and what it is not. The release of this product showcases the innovation we drive at Veeam. So what does this mean for virtualisation practitioners, home users and others? Let's start with what it is.
What it is
Veeam Endpoint Backup Free is a standalone product designed to allow users to back up Windows-based desktops and laptops, because while we believe that the modern data center should be virtual, we recognize that some devices – like desktops and laptops – only exist in a physical form factor. With this new product we will now make it possible for users to back up these types of devices. With Veeam Endpoint Backup FREE, you can easily back up your machine to an external hard drive, NAS share or a Veeam backup repository. And if you ever need to get your data back, there are multiple easy recovery options available.
What it is not
Veeam Endpoint Backup Free is designed to back up Windows-based desktops and laptops – it is NOT an enterprise physical backup server solution. It represents a new market for Veeam, and one that we are very excited about.
Best of all, Veeam Endpoint Backup Free is (if you haven’t caught on already) 100% FREE!
Where can I get it?
Please visit http://go.veeam.com/endpoint to request a download link; you should get a response within a couple of days.
How simple is it?
I recently installed Veeam Endpoint Backup onto a VM and configured it to back up to my home NAS; here was the process. The first step is to click the "Configure Backup" link in the top right of the application – this launches the trusty old wizard interface that all Windows kids adore, which takes you through the various steps of configuration. The following modes are available:
Entire Computer – backs up all volumes presented to the local computer; this does not include Samba shares.
Volume Level Backup – backs up a selection of volumes (think C:, D:, etc.).
File Level Backup – backs up a sub-selection of folders and/or files within a volume.
Please feel free to get in touch with me if you have any questions; in addition, there are the Veeam Endpoint forums, where other users post questions and solutions to any queries they may have.
Just a quick post to wish you all a very merry Xmas and a prosperous New Year – Eat and drink everything in front of you today!
See you all soon.
We have just updated and released the "Veeam Backup & Replication v8 for VMware: General Overview" poster, which covers deployment methods, the components of Veeam Backup & Replication v8 and, probably most useful, the requirements of all the different moving parts that make up our software.
Use it as quick reference – Hang in the office, any shared space, home lab, bedroom, toilet whatever…
It’s that time again – applications for the vExpert 2015 award are now open. It feels like only yesterday that the 2014 vExpert announcement was made; I was lucky and privileged enough to be included in that list.
There is no point regurgitating what the vExpert programme is and what tracks you can apply for so I thought I would take a different angle this year.
Why is being a vExpert important?
I am going to go out on a limb here and state that being a vExpert is not important; it does not define who you are or how you should do business. It is an award from VMware that acknowledges your contribution to an ever-growing virtualisation community of like-minded technology enthusiasts and users. I think back to Grant Orchard‘s post (one of my all-time favourites of his) on providing value in your employment (http://grantorchard.com/general/opinion/want-valued-something-valuable/) – please read it; it provides an honest and fair view of the VCDX program and what value it could (or could not) bring to your career.
The key takeaway I took from his write-up is that it is an achievement; it doesn't necessarily mean you are "better" than anyone else, more knowledgeable, or more valued. I think the same way about being a vExpert – it's an acknowledgment of your efforts. Keep it at that and remain humble; those who do are, I think, more respected by their peers.
Please visit http://blogs.vmware.com/vmtn/2014/11/vexpert-2015-applications-open.html for more information on vExpert programme.
2014 vExperts: Fastrack application: http://bit.ly/1ikZ8hi (For previous vExperts)
New vExperts: 2015 vExpert application: http://bit.ly/LMJqB5
Thanks for reading and get applying!
Time and time again, I get asked by partners and customers what the value proposition is behind Veeam Backup & Replication integrating with various storage arrays for offload operations. Why is there value there? I usually respond by stating that Veeam integrates with specific SAN arrays to let them perform the “heavy lifting” during a backup job, resulting in much faster backup and restore operations. But why are quicker backup and restore options so important?
At Veeam, we speak about the 3 C’s of backup challenges: Cost, Complexity and Capabilities. These challenges are inherent to backup architectures and environments considered “legacy” – why? The obvious reason is that these environments have grown organically and were never really designed for the type of workloads we are seeing in today’s world. What is troubling is that customers simply get used to these environments and subsequently “accept” the pitfalls of having technology retrofitted for newer trends and workloads – particularly customers that are new to virtualisation, for whom backup could be an issue they don’t realise they need to address.
For example, if a customer wants to restore a single virtual machine, how would they go about that with a conventional backup solution? With Veeam Backup & Replication it is extremely simple. Veeam has great momentum in the market, especially when positioned with primary storage arrays such as HP 3PAR or NetApp FAS systems and their unique integration points. These arrays represent the “heavy lifter” I mentioned earlier in this post – a more suitable device handling specific tasks. For those who are familiar with virtualisation, think of the vStorage APIs for Array Integration (VAAI), where certain storage operations are offloaded to compliant storage arrays – this is very similar.
How do we do it? Veeam Software is an innovative provider of data protection solutions for VMware vSphere and Windows Server Hyper-V environments, and offers integration with HP 3PAR InForm OS and NetApp ONTAP-based arrays as of version 8 of our backup and replication platform. Veeam Explorer for SAN Snapshots enables IT administrators to recover whole virtual machines and application data directly from SAN snapshots, including HP 3PAR snapshots and NetApp SAN snapshots. This capability enables administrators to quickly restore any part of a virtual machine, or all of it, directly from SAN snapshots, which can be taken throughout the day with very little impact on production systems. This enables short recovery point objectives (RPOs) for the most common recovery scenarios: users accidentally deleting data, users deleting emails, and system updates gone wrong. It in turn provides a better return on investment in both your backup infrastructure and your storage infrastructure: hardware utilisation increases, making sure you get maximum use out of your investment, and costs are reduced because you do not have to fork out for more CPU power to handle backup jobs.
Here are the key capability points for Veeam and storage array integration. It is fast – customers are able to recover an entire VM or individual items in two minutes or less. It is flexible – storage administrators can restore exactly what they need, quickly and easily: a full virtual machine, individual guest files, individual Microsoft Exchange items or Microsoft Active Directory objects. In addition, it is agent-free; there are absolutely no agents to deploy on virtual hosts or within virtual machines, so upgrading the software in the future is straightforward. With this architecture and ease of use, IT administrators see a reduction in the complexity and management of their backups.
So more aggressive RPOs using compliant SAN-based snapshots mean that quicker restore options and lower RTOs are achievable using Veeam.
Note: Veeam currently supports HP 3PAR and HP StoreVirtual for snapshot integration; NetApp FAS systems will be supported in version 8, which is due out in the next month or so.
This year, Veeam is holding its first annual conference, “VeeamON”, focused on the evolution of virtualization, the shift to the cloud and, in particular, data protection – and why, now more than ever, data availability should be on the focus list of every single CIO and CDO across the globe. Why is data protection important? Let’s take a look at what workloads looked like 20 years ago compared to now.
In the beginning….
Historically, backup really revolved around tape – usually a single tape backup drive connected to a computer that would back up days’ and weeks’ worth of precious company data to a single tape. Backups could naturally take hours to complete, as did restores, which were rarely tested and, in the event of data loss, had the potential to take hours just to locate the right tape, load it into the tape unit and restore. Some may argue that the process of backing up and restoring back then was fairly straightforward, as these were purpose-built devices with one job, but the efficiencies in doing so were absent.
But in hindsight, there were (and still are) pitfalls in traditional backup methods. Modern-day features such as deduplication, compression and incremental block changes, which have become the norm in a storage and backup administrator’s vocabulary, were not available to the business back then, so things took time to complete. Eventually these features came, but usually through having to purchase a new set of physical hardware or an external device to perform the tasks, which at times could prove costly. Virtualization was not the norm back then, so servers were not portable containers and bare-metal restores were 100% part of daily life for the various IT teams. One thing that has not changed from legacy backup times is the reason why backups had to happen: to protect a company’s intellectual property in the form of data, and to ensure this data was online and accessible after a disaster – availability.
The need for data availability
During my tenure at HP, I wrote a two-part article entitled “Would you like a side of Disaster Recovery with that?”, which aimed to discuss the sometimes misunderstood (and unappreciated) area of disaster recovery – what it means and why it is important. I touched on the point that not having data available and online can be (and is) a very costly exercise, depending on which industry vertical your business operates in. You can read the two-part post at the following links:
The important message I wanted to convey through these writings was that availability has a cost associated with it when things go wrong and SLAs are no longer being met due to hardware and/or software failures – not to mention other unforeseen incidents such as human error (perhaps someone accidentally pulled the wrong cable out – it happens!). Most storage and backup advocates know that a lot of our day jobs revolve around making sure SLAs are met, whether you are a company providing a service or a vendor selling technology that helps meet those SLAs. They all tie back to availability and how quickly it is restored.
The landscape is evolving.
Take a look at the forecast released by Cisco entitled ‘Cisco Global Cloud Index: Forecast and Methodology, 2012–2017’ – in particular Figure 5, Workload Distribution, and its accompanying Table 3 – and just observe how massive the shift from traditional workloads to virtual workloads is becoming. Of course this varies from country to country, but as a general trend you can see the IT landscape is dealing more and more with virtual environments as opposed to physical ones. In fact, this study verifies that virtual workloads are now the norm, and that location, location, location also matters – whether they are based on-premises or off-site, perhaps in a cloud provider environment, these workloads still need to be protected on a local (site) basis as well as an off-site (replica) basis. With the newest version of our flagship product, Veeam Backup & Replication, Veeam is positioned perfectly to manage virtual workloads no matter where they reside.
Now, hopefully that sets the context of why VeeamON is important to IT leaders – CIOs, CDOs, CSOs and CTOs alike. So here are the details:
What is it?
VeeamON is a three-day event focused on modern-day data centre availability – not just backup, not just DR, but all-encompassing – and will feature some well-known industry speakers and analysts from companies such as Gartner and ESG. It is being held October 6th–8th at the Cosmopolitan hotel in Las Vegas. The event will host a series of quality sessions focused on modern-day availability solutions and trends in the modern data centre that help form the “Always-On” story – think back to the briefs I wrote about and why it is so important to remain always on! Whether you are business focused or technical, there will be valuable content catering for both business and partner streams, so you can choose from a great variety of content.
Where is it?
The Cosmopolitan in Las Vegas, USA
When is it?
The event is happening between October 6th – October 9th 2014
But wait… there’s more!
Veeam is giving away five free regular passes and one free VIP pass to VeeamON, all you have to do is to go here -> http://go.veeam.com/veeamon-free-pass and enter in your details.
It’s that easy!
Cisco Global Cloud Index: Forecast and Methodology, 2012–2017. Retrieved 30th July 2014 from http://www.cisco.com/c/en/us/solutions/collateral/service-provider/global-cloud-index-gci/Cloud_Index_White_Paper.html
A small change in the updated whitepaper from HP and VMware for vSphere 5 when using the Round Robin multi-pathing policy (which is the recommended best practice):
The old recommendation was to use IOPS=100, which has now changed to IOPS=1.
Make sure host persona 11 is selected!
Issue the following command on your ESXi host(s) that are being served storage from the 3PAR.
esxcli storage nmp satp rule add -s "VMW_SATP_ALUA" -P "VMW_PSP_RR" -O "iops=1" -c "tpgs_on" -V "3PARdata" -M "VV" -e "HP3PAR Custom iSCSI/FC/FCoE ALUA Rule"
(The -V, -M, -P and -c arguments were missing above; they are restored here per HP’s published best-practice claim rule for 3PAR arrays.)
This week, I had the privilege of attending HP’s storage summit in Macau, China. The event focused on enterprise storage market trends, product announcements from HP Discover and where HP Storage are heading in terms of their infrastructure stack and strategy. The event hosted approximately 350 customers and I felt it was received extremely well by all. I got to catch up with old friends and have some great chats over dinners and drinks which was an added bonus.
Veeam APJ at the Summit
This year, Veeam Asia-Pacific and Japan was a Gold sponsor of the event. Veeam Backup & Replication v7 has great product integration with and support for HP 3PAR StoreServ snapshots, as well as HP StoreVirtual snapshot support, giving customers the ability to tap into and leverage array-based snapshots through the Veeam GUI. Don Williams, VP of Asia Pacific and Japan at Veeam Software, and I each did a video interview talking about the HP and Veeam alliance and where the value of working with both HP and Veeam lies. UPDATE: Video added ->
Highlights for me
- For me, the networking with the HP Storage staff was invaluable. Having worked with a lot of these folk in my last role, it made me realise even more how passionate and enthusiastic they are about where HP Storage is going and the Converged Infrastructure stack story.
- Speaking to a lot of customers and having some really fruitful discussions was another highlight for me; the usual question I got was “What does Veeam do?”. Meeting these questions with a great conversation around Veeam, virtualisation and HP was simply outstanding.
- Trying to discuss the value of Veeam and HP integration with non-English-speaking customers was a challenge – makes me think I should probably learn a new language! Thankfully, we had two technical resources on board who spoke Japanese and Chinese to aid in this area.
- Keynotes – as ever, it’s a unique experience to hear from the global leaders of a company, and it was good to see David Scott, SVP of HP Storage, as well as my good friend Dale Degan, WW Product Marketing Manager for Software Defined Storage, make the trip down to present.
A fun week, lots of information and a successful first outing in my new role, I can’t wait for the next one!
Next week, HP is holding a two-day event in Macau, starting on June 25th, that offers focused storage solution sessions. HP is a very special Alliance partner for Veeam and I am honoured to be attending and representing the Veeam/HP partnership. This will be my first event as the new Veeam Alliances Manager for Asia-Pacific & Japan, so I am really excited to be attending.
Join HP partners and customers to hear from various HP technical experts and leadership as they speak about the recent announcements from last week’s HP Discover event held in Vegas.
Veeam APAC is also proud to be a Gold sponsor of this event, and will have a booth on-site with the latest and greatest information on our software suite and its integration with HP storage products. If you are heading to this event and wish to catch up, please get in contact -> email@example.com.
Look forward to seeing you there!
For more information->
So I got this error message today:
Cannot use CBT: Soap fault. A specified parameter was not correct. deviceKeyDetail: '<InvalidArgumentFault xmlns="urn:internalvim25" xsi:type="InvalidArgument"><invalidProperty>deviceKey</invalidProperty></InvalidArgumentFault>', endpoint: ''
Virtual machines running on ESX/ESXi hosts can identify and track disk sectors that have changed. This feature is called Changed Block Tracking (CBT).
After much research and trying to fix this manually, I came across the following solution, which turned out to be easier than I thought! What we need to do is reset CBT, as in this instance the changed blocks aren’t being identified and the VMs are effectively being processed without CBT.
Thanks to http://www.veeam.com/kb1113 for the images/resolution.
1. Power off the VM
2. Right click the VM, click “Edit settings”, find the “Options” tab and click “Configuration Parameters”
3. Set the “ctkEnabled” value to false
4. Set the “scsi0:x.ctkEnabled” value to false for each disk of the VM in question
5. Open the source folder and remove any -CTK.VMDK files
6. Power on the VM and power it off again
7. Set the “scsi0:x.ctkEnabled” value back to true for each disk of the VM in question
8. Power the VM on
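Once CBT is re-enabled, it is worth confirming the flags actually stuck in the VM’s .vmx file. Here is a minimal sketch of that check – it greps a sample .vmx written locally, since on a real ESXi host you would grep the VM’s actual file under /vmfs/volumes/; the file contents and paths below are illustrative only.

```shell
# Write a sample .vmx fragment (illustrative - on ESXi, grep the real file
# at /vmfs/volumes/<datastore>/<vm>/<vm>.vmx instead)
cat > sample.vmx <<'EOF'
ctkEnabled = "TRUE"
scsi0:0.ctkEnabled = "TRUE"
scsi0:1.ctkEnabled = "TRUE"
EOF

# The VM-level flag plus one flag per disk should all read TRUE
grep -ci 'ctkEnabled' sample.vmx   # -> 3
```

If any disk is missing its scsi0:x.ctkEnabled entry, repeat steps 7 and 8 for that disk.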
OK, this is really important: when Update 1 for ESXi 5.5 was released, customers connecting to datastores via the NFS protocol experienced intermittent connectivity and, in particular, All Paths Down (APD) conditions.
What is ‘All Paths Down’?
For those unfamiliar with this “condition”, APD occurs on an ESXi host when a storage device is removed in an uncontrolled manner from the host, or the device simply fails, and the VMkernel essentially panics. The datastore stops accepting any I/O from the virtual machines for the duration of the APD condition. The result is Windows virtual machines blue-screening and filesystems becoming read-only for Linux VMs. This can be permanent or temporary; either way, bad stuff happens.
But alas, a patch has just been released for this.
This patch resolves the following issues:
PR1242103: When you run ESXi 5.5 Update 1, the ESXi host intermittently loses connectivity to NFS storage and an All Paths Down (APD) condition to NFS volumes is observed. During the duration of the APD condition and after, the array still responds to ping and the netcat tests are also successful. There is no evidence to indicate a physical network or a NFS storage array issue.
Entries similar to the following are logged in the vobd.log file (for a volume named 12345678-abcdefg0, as an example):
YYYY-04-01T14:35:08.075Z: [APDCorrelator] 9414268686us: [esx.problem.storage.apd.start] Device or filesystem with identifier [12345678-abcdefg0] has entered the All Paths Down state.
YYYY-04-01T14:36:55.274Z: No correlator for vob.vmfs.nfs.server.disconnect
YYYY-04-01T14:36:55.274Z: [vmfsCorrelator] 9521467867us: [esx.problem.vmfs.nfs.server.disconnect] 192.168.1.1/NFS-DS1 12345678-abcdefg0-0000-000000000000 NFS-DS1
YYYY-04-01T14:37:28.081Z: [APDCorrelator] 9553899639us: [vob.storage.apd.timeout] Device or filesystem with identifier [12345678-abcdefg0] has entered the All Paths Down Timeout state after being in the All Paths Down state for 140 seconds. I/Os will now be fast failed.
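If you suspect you are hitting this, a quick grep of vobd.log surfaces the APD correlator events. The sketch below runs against a local sample file built from the entries above; on an ESXi host you would point it at /var/log/vobd.log instead.

```shell
# Build a sample log from the entries shown above (illustrative)
cat > vobd.log <<'EOF'
YYYY-04-01T14:35:08.075Z: [APDCorrelator] [esx.problem.storage.apd.start] Device or filesystem with identifier [12345678-abcdefg0] has entered the All Paths Down state.
YYYY-04-01T14:36:55.274Z: No correlator for vob.vmfs.nfs.server.disconnect
YYYY-04-01T14:37:28.081Z: [APDCorrelator] [vob.storage.apd.timeout] Device or filesystem with identifier [12345678-abcdefg0] has entered the All Paths Down Timeout state.
EOF

# Count APD events - on a live host: grep -c 'All Paths Down' /var/log/vobd.log
grep -c 'All Paths Down' vobd.log   # -> 2
```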
Where can I get it?
Earlier this year, I tweeted that 2014 is a great year to re-baseline and reinvent yourself. What I didn’t know then, but realise now, is how much I was going to do this. So what has been reinvented, and what has changed for me, personally and professionally?
A new wife – On the 15th of February, I married my long-term girlfriend Elise in a small ceremony in Victoria, Australia. As with any wedding, it was a special day, as we were joined by friends and family from around the globe to celebrate.
A new degree – Last week I flew back to my homeland of New Zealand to graduate with the degree of Master of Management from the Massey University School of Business. This is special for me, as it was approximately six years ago that I was forced to withdraw from the very last paper due to relocating to Australia.
Last year I finally completed that paper, submitting just before I went overseas to the UK. I remember being nervous and unsure at the time: one minute I felt I had done enough and my report was good enough, and the next I began to question whether I should review it once more and add more. I am glad I didn’t, as I would never have submitted.
A new role – Last Monday (the 2nd), I started at Veeam Software as their Alliances Manager for Asia-Pacific and Japan, looking after the business for some of our top alliance partners. I have always admired Veeam, not only for the incredible software capabilities they offer, but for the people and culture I have come to know during my time in my presales roles over the years. As part of the APAC team, I am tasked with driving business and fostering healthy business relationships right across the region with these alliances, which I am truly excited about. Furthermore, Veeam has always been on the bleeding edge of virtualisation and is well respected within the industry; this is appealing as it will keep me close to the virtualisation community and the outstanding people I have met through it.
A new pet – Elise and I adopted our first pet, a kitten named Miller, after seeing a friend post on Facebook that he had found a group of kittens in a box beside a very busy road in Melbourne. He has created havoc in our household – there is no longer peace and quiet – but he is a welcome addition.
We are now halfway through 2014; for me, change has come in many forms, as you have read. I look forward to the remainder of the year and, hopefully, more exciting changes.
So I have spent the best part of a day configuring Microsoft Office 365 to use my Cloud-land domain name.
The hardest part (and it was not actually hard) was creating DNS records within the domain portal to verify that I did indeed own the domain name.
One of the best parts is that I can now use Microsoft Lync with my own domain name. I have yet to explore the Microsoft Azure path; either way, I bet Ben Diqual (@bendiq) would be proud!
Try it, Add me on Lync -> firstname.lastname@example.org
HP Insight control plugin for vCenter is the go-forward VMware vCenter management integration plug-in for HP Storage. In addition to managing all HP block storage arrays, this vCenter plug-in also manages HP servers and virtual connect networking with the Server module. HP Insight Control for VMware vCenter Server integrates with both VMware vSphere Client and VMware’s new vSphere Web Client.
HP Insight Control for VMware vCenter enables the VMware administrator to quickly obtain contextual information about HP storage directly from vCenter. Available as a no-charge download, this functionality provides the VMware administrator the ability to easily see and manage how virtual machines relate to data stores and individual disk volumes.
By providing the ability to clearly view and directly manage these relationships between virtual machines, data stores and HP Storage, the VMware administrator’s productivity increases, as does the ability to ensure quality of service.
Features and Benefits
- Fully compatible with the latest version of VMware, vSphere 5.5
- Monitor and manage the physical / virtual relationships between VMware Virtual Machines, ESX servers and HP Storage
- Map the VMware virtual environment to HP storage and provide detailed contextual storage information
- Create / expand / delete VMware data stores
- Create virtual machines from a template
- Clone virtual machines from an existing virtual machine
- Delete an unassigned volume
- Integrated with vSphere Client and the new vSphere Web Client
- Visualize complex relationships between VMs and storage.
- Easily manage peer persistence for HP 3PAR StoreServ
What’s New in HP Insight Control for VMware vCenter
- HP Insight Control Storage Module for VMware vCenter v.7.3.1 April 2014
- Support for the new MSA 1040
- Support for HP 3PAR Inform OS 3.1.3
- Bug fixes
Download here: SW Depot.
Note: Licensing for the storage module is enabled as part of the download and installation sequence. The server module requires Insight Control licenses.
Do you have a VCP from VMware? If so, read on! Maintaining currency in the expertise gained and proven by VMware certifications is just as important as earning the certification initially. If your skills are not current, your certification loses value. The technical and business communities expect that VMware certified professionals are current on the latest technologies and capable of configuring and implementing VMware products with the highest level of skill. To ensure that all certification holders meet these expectations, VMware is instituting a re-certification policy for current VCPs starting today – March 10, 2014. To recertify, VMware Certified Professional (VCP) holders must pass any VCP or higher-level exam within two years of earning their most recent VCP certification. For more information please visit http://mylearn.vmware.com/mgrReg/plan.cfm?plan=46667&ui=www_cert
What are they?
The HP 3PAR Application Suite of solutions eases administration by providing rapid, efficient online backup and recovery of application-specific databases. These software packages enhance the functionality of HP 3PAR Virtual Copy software to allow dozens of VMware VM, Hyper-V VM, Exchange, SQL or Oracle snapshots to be kept online economically, allowing for extended or more frequent recovery points. Using an easy-to-use graphical user interface, these snapshots can then be used to quickly restore Exchange, SQL or Oracle instances or databases, or to non-disruptively back them up to tape for near-continuous data protection.
HP has just announced the availability of the following Recovery Manager application suites for 3PAR (along with Oracle, SQL and Exchange releases):
What’s new in these two particular product releases?
HP 3PAR Recovery Manager for VMware v2.5
• Support for 3PAR OS 3.1.2MU2, 3.1.2MU3, 3.1.3(pre-enabled)
• Support for VMware ESXi 5.1u1, 5.5
• Integrated with Insight Control for VMware vCenter – No more standalone RMV package
• RM for VMware is a part of ICV4C plug-ins which also includes Server component, Storage component
• New web client support for IC4VC Plugin integration – supports all new features
• Host Explorer for VMware is no longer part of the RMV package
• Standalone VI client – still supported
• Updated Documentation – Linking objects through hyperlinks in the UI. Users can jump easily between objects.
• Support for RM Diagnostics Tool
HP 3PAR Recovery Manager for Hyper-V v2.0
• Support for 3PAR OS 3.1.2MU2, 3.1.2MU3, 3.1.3 (pre-enabled)
• Support for Windows Server 2012 R2
• Support for 3PAR VSS HWP 2.3
• Enhanced restore of Virtual Machines using repository backups
• Ability to take snapshot on remote array using 3PAR Remote Replication
• Replication modes supported – Synchronous and Periodic Asynchronous
• RMHV (International version) now supports Japanese local environment (localised Windows versions)
HP 3PAR Application Suite for VMware links
HP 3PAR Application Suite for Hyper-V links
Download : HP 3PAR Recovery Manager for Hyper-V
HP Insight Control for VMware vCenter server, which is available as a single download, seamlessly integrates the manageability features of HP ProLiant, HP BladeSystem, HP Virtual Connect and HP Storage into the VMware vCenter console.
By integrating HP Converged Infrastructure management features directly into VMware vCenter, administrators gain insight into and control of the HP infrastructure supporting their virtualized environment – reducing the time it takes to make important change decisions, manage planned and unplanned downtime, and perform lifecycle management.
Today HP Insight Control for VMware vCenter Server 7.3 with support for HP OneView is available!
What’s new in HP Insight Control for VMware vCenter Server 7.3?
- HP OneView support: HP OneView and HP Insight Control licensed hosts are managed under one single VMware plug-in: HP Insight Control for VMware vCenter Server 7.3. All previous features are now available for HP OneView managed hosts.
- Grow a cluster consistently and reliably with HP OneView profiles/HP OneView reference host in 5 easy steps. When engaged, the process fully automates the application of an HP OneView profile on selected servers, server provisioning is then used to deploy the selected ESXi build and the VMware host networking configuration is applied at the end to match the server Virtual Connect profile configuration.
- Coordinated and schedulable firmware deployment – now uses HP OneView or HP SUM to deploy a firmware baseline. The process can be scheduled, and host VMs are evacuated and returned to their normal state using VMware maintenance mode to maximize availability
- Cluster-level host networking configuration – host networking configuration (introduced in 7.2) mitigates networking configuration differences between the host and Virtual Connect profiles; this feature is now expanded so that a simple click yields the same effect at the VMware cluster level
- New HP Enclosure view delivers greater simplification in understanding the relationship between the physical and virtual
- Simplified initial configuration of the plug-in credentials aiming at reducing initial configuration errors
- Enhanced Storage Management
- Full HP 3PAR Recovery Manager for VMware Integration
- HP 3PAR StoreServ Peer Persistence management
- Enhanced provisioning wizard for volumes
- Simplified Configuration Management
- Visualization of peer persistence configuration
- Graphical relationship representation between VMs and volumes
*As a reminder, VMware is moving away from the classic application-based client to the new web client; as such, new features are only implemented in the new web client.
Web Resources (updates will be live on Feb 18th, download available today!)
- HP Insight Control for VMware vCenter server web page and download information
- HP Insight Control for VMware vCenter server documentation (includes latest docs for this release)
Enjoy folks, this is really cool stuff – I can’t wait to run it up in my home lab!
VMware has released ESXi 5.1 U2 to the general public.
The HP custom ISO for this version is available at the following URL: https://my.vmware.com/web/vmware/details?downloadGroup=HP-ESXI-5.1.0U2-GA-JAN2014&productId=285 meaning that HP now support this Update across our server line.
The updated recipe document has been published and is available on vibsdepot at http://vibsdepot.hp.com/hpq/recipes/HP-VMware-Recipe.pdf
What is it?
The HP StoreFront Analytics for VMware vCenter Operations Manager, an adapter for vCenter Operations Manager from VMware, relates virtual machines and datastores to volumes from HP 3PAR StoreServ storage systems. Administrators using HP StoreFront Analytics are able to visualize and analyze data from their HP 3PAR StoreServ storage array directly within vCenter Operations Manager.
What does it do?
HP StoreFront Analytics reports detailed health, capacity, and performance metrics for HP 3PAR StoreServ components including volumes, CPGs, storage systems, controllers, drive cages, disk drives, ports, and fans. It is distributed as a management pack that installs directly in VMware vCenter Operations.
Why is it full of goodness?
StoreFront Analytics includes preconfigured dashboards enabling the vSphere administrator to quickly get started monitoring, troubleshooting, and optimizing their HP 3PAR StoreServ storage systems from within vCenter Operations Manager. These dashboards clearly show relationships between virtual machines, datastores, and the underlying HP storage. This visibility increases the VMware administrator’s productivity and ability to deliver improved quality of service.
Ahh, licensing – I always forget to write about licensing considerations and end up getting lots of emails about it. HP StoreFront Analytics includes a 60-day instant-on evaluation license. After 60 days the product will provide basic array health information at no charge, but requires a license for capacity and performance metrics.
Where can I get it?
Download from HP Software Depot
For more information
HP StoreFront Analytics for VMware vCenter Operations Manager web page
When using a 3PAR VV set, the 3PAR InForm OS does not allow the removal of a single active VLUN if the VLUN template was originally created with a VV set. For those who aren’t familiar, a VV set is generally used in cluster configurations where a set of hosts all need to see the same VLUNs.
Using the IMC or CLI, removing a VLUN from a VLUN template created with a VV set will result in all active VLUNs created by the template being removed, which is not a desired result.
But alas, there is a workaround, as explained here.
Remove a single VV from the VV set. This will leave all the VVs exported through the VV set’s VLUN template, except the VV that was removed from the VV set.
Let’s jump into the CLI:
CLI% removevvset vvsetname volume_name
To find the set name, run the showvvset command.
You can use the 3PAR InForm Management Console (IMC) interface for this as well. Please make sure you are using the latest 3PAR GUI (IMC) which you can download @ http://www.hp.com/go/hpsoftwareupdatesupport
The High-level GUI steps are:
1) Click on provision
2) Click on vvset to edit
3) Right click on the vvset
HP 3PAR InForm OS 3.1.2 Command Line Interface:
HP 3PAR VMware ESX Implementation Guide:
Now this is cool – think Inception: hypervisor on hypervisor!
VMware have updated their Guest Compatibility Guide to include Microsoft Windows Server 2012 R2 as a supported guest operating system with vSphere 5.0 U2 or above.
For more information please go to
Recently I wrote a summary discussion of an executive brief by Frost & Sullivan (F&S) titled “Beyond Overhead: How Your Backup and Recovery Architecture Can Contribute to Strategic Business Success”. Rather than duplicating the article here, the discussion post was published on the worldwide HP Transforming IT blog site:
Would love to know your thoughts on this brief. I found myself mostly agreeing with the points F&S make, but of course I have my own ideas as to what matters and what doesn’t in this case. Do you? If so, feel free to comment!
So after successfully updating my vCenter and ESXi host to version 5.5, I proceeded to add back the NFS mount that had previously been mounted only to be met by the following error message:
The “Create NAS datastore” operation failed for the entity with the following error message: An error occurred during host configuration. Operation failed, diagnostics report: Unable to resolve hostname ‘nas.local’
At this stage, my DNS server was not up (and had not been for a while, due to me being forgetful over the last few months) and I was mounting by IP address, so I couldn't understand why it kept referring back to the hostname. The other issue is that I have become perhaps too comfortable with the desktop version of the vSphere Client, so I am still learning my way around the Web GUI. Things have changed; just ask @nickmarshall9, who mentioned recently in a presentation that he is still adapting to the new graphical world of the vSphere Web Client.
For some reason, I was unable to see and unmount any dead/stale datastore mounts within the GUI, or the desktop client for that matter. If all else fails, go CLI!
1) Enable SSH access (off by default); refer to my previous post on how to do this here
2) SSH into your box using an SSH client. I used Terminal within Mac OS X, so naturally had to specify the root@ username
3) Run the command esxcfg-nas -l to list the volumes the host thinks it has currently mounted (why these results do not filter up to the GUI astounds me)
4) Add a temporary entry to the hosts file: vi /etc/hosts and add <ip address of NFS Server> <hostname>
5) Save and exit vi
6) Run esxcfg-nas -d <stale_NFS_mount>
7) Delete the temporary entry that you added to the /etc/hosts file in Step 4 using vi
8) You should now be able to add the NFS volume through the vSphere client
9) Turn off SSH access
A simple solution, but to a problem that points at something more or less broken in the software stack.
This came up in an internal email and I thought it might be of use to some customers. It depicts the maximum latency that HP supports for each flavour of HP 3PAR Remote Copy replication (remember to think about RTO). Note: this is a latency limit, not a hard distance limit. It all comes down to the link, but as a rule of thumb fibre links add about 0.005 ms of latency per km. May the force be with you!
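To put that rule of thumb to work, here is a quick back-of-the-envelope sketch (my own arithmetic, assuming the ~0.005 ms per km figure is one-way propagation and ignoring any switch or protocol overhead):

```python
# Rough fibre-latency estimate using the ~0.005 ms/km rule of thumb
# mentioned above. Real links add equipment and protocol overhead,
# so treat this as a lower bound, not a guarantee.

MS_PER_KM = 0.005  # one-way propagation latency per km of fibre

def one_way_latency_ms(distance_km):
    """One-way propagation latency for a fibre link of the given length."""
    return distance_km * MS_PER_KM

def round_trip_latency_ms(distance_km):
    """Round trip: a synchronous write must travel out and be acknowledged back."""
    return 2 * one_way_latency_ms(distance_km)

# Example: a 100 km metro link
print(one_way_latency_ms(100))     # 0.5 (ms, one way)
print(round_trip_latency_ms(100))  # 1.0 (ms, round trip)
```

Remember it is the round-trip figure that matters for synchronous replication, since the write is not complete until the acknowledgement comes back.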
HP Insight Control Storage and Server Module for vCenter is a plug-in for vCenter Server that allows HP storage arrays to be managed from a single pane of glass integrated into the vSphere client.
Available at no cost from HP, the plug-in can be installed once for the entire vCenter environment using a single, all-inclusive installer.
Let’s take a look at the storage component of this plugin, some of the key features as part of this component are:
- Monitoring the status and health of HP storage arrays
- Automated provisioning of datastores and VMs: adding, removing, and expanding datastores on the fly
- Creating VM from templates on new datastores
- Using array-based snapshot and snap clone technologies to clone VMs, tapping into VAAI primitives essentially which speeds up operational tasks
- Common provisioning and removal tasks.
- Datastore mapping information
And yes, it's supported with VMware vSphere 5.1 and, from a storage standpoint, the following HP arrays:
- HP 3PAR StoreServ Storage
- HP MSA Storage
- HP StoreVirtual Storage including the Virtual SAN Appliance
- HP EVA Storage
- HP XP Storage
- HP StoreOnce Backup appliance
So what it gives you is true and fruitful information on your server farm. I was going to add some screenshots of the GUI here, but my lab has temporarily taken a holiday so I'm unable to run it up right now.
The good thing is that these two modules (server and storage) are bundled in one installation package and are very easy to install for customers running HP BladeSystem and any of our major storage arrays (3PAR, StoreVirtual VSA/appliance, MSA, EVA and XP).
VMware has just released version 5.5 of its ever popular hypervisor. To coincide with this release, HP has also provided a customised image prepared for HP ProLiant servers running ESXi environments. Put simply, this image provides customised drivers and features that ProLiant servers can leverage. I run this on my home lab, comprising two HP ProLiant servers.
The HP custom ISO is available at the following URL:
The vCloud Suite is a complex combination of vSphere, vCloud Networking and Security (vShield), vCenter Operations with vCloud Director automating the show with all products now aligned at version 5.1.
VMware has also released an updated vCloud Architecture Toolkit (vCAT) for vCloud Director. The vCAT provides modular components so that you can design a vCloud reference architecture that supports your cloud use case. It includes design considerations and design patterns to support architects, operators, and consumers of a cloud computing solution based on VMware technologies. Attached here are some useful vCloud documentation links.
Installation & Upgrade
Earlier this year (I actually wrote this post to coincide with the announcement, but it has sat in my draft box ever since; my bad), we announced the HP 3PAR StoreServ 7450, a purpose-built flash-optimised 3PAR array designed for the environments where things can get a little... crazy. Crazy in the sense that the application demands over and above the “usual” performance requirements architects typically see with other workloads. File serving, VMware (which is mixed and sometimes unpredictable), and even SQL and Exchange environments are part of the “usual” suspects when it comes to architecting for IOPS. We also announced a number of other enhancements throughout this launch, including QoS and Recovery Manager for Hyper-V environments.
The need for lower latency and faster response.
Not all applications, and subsequently not all IOs, are created equal. Flash is purpose-built for applications that require sub-millisecond latency and high-end IOPS in performance-intensive environments. The new HP 3PAR StoreServ 7450 clocks in at approximately 550,000 IOPS with latency under 0.7 ms, and it uses an 8-core Intel Xeon Sandy Bridge processor running at 2.3 GHz, compared to the previous 6-core 1.8 GHz model.
Flash Virtual Environments “Flirtual” 🙂
Virtual powers cloud, cloud powers services, and providing services is all about meeting SLAs. Delivering a service is one thing; delivering a service well and quickly, with a guaranteed satisfaction level, is another.
Cloud-hosting services may benefit from the 3PAR 7450 for their customers who require the latency times and IOPS I mentioned earlier in this post. However, I must stress that the HP 3PAR StoreServ 7450 is NOT one size fits all; consider the following graphic on where it sits within the 3PAR family. Additionally, check out my post on the other 3PAR 7000 members here.
The purpose of making this emphasis is that there is one family, and subsequently one operating system, so a multi-3PAR environment means you can shift workloads between the different arrays using HP 3PAR Peer Motion, or as we call it, Storage Federation. Check out this brief on Peer Motion covering the requirements and supported O/S. Note: tiering at the cache and hard-drive levels only occurs within the one array, meaning Dynamic Optimisation/Adaptive Optimisation can't (yet) identify a suitable tier on a separate 3PAR to place a particular volume.
But back to the 3PAR 7450 specifically.
All cached up – Cache Handling and Cache Offload on 3PAR
The software on the HP 3PAR 7450 uses a page size of 16K for cache, which means it can handle up to 16K of read and/or write IO when serving data from cache. Without flash, if we had a read operation of 8KB, we check to see if we can serve it out of cache; if it is not there, we retrieve the data from back-end disk to serve the request and then store it in cache for future use. This can result in slightly higher latency if spinning disk is used.
But we are talking flash here! Let's take a look at how an all-flash array handles this same request. Same example: an 8K read request comes through and it is not in cache, so again the 3PAR software looks to the back-end storage to serve the request, but there are no spinning drives or disk heads that need to align. From flash to cache, we call this Adaptive Reads: 8K of data is read from the flash drives back into cache at super speed. More granular requests such as 4K result in even less data being read (but still the right amount); only 4K of data gets transported, enabling even higher back-end throughput and making it super efficient.
The same goodness extends to write operations, altered slightly to allow fragmented writes. For IOs smaller than 16K, such as 8K, we only write 8K to the flash drives; we don't flush the whole 16K. If we did, the flash drives would get hit more than required. Flash has a limited lifetime, so writing as little data as possible is best.
But wait, there's more. Just as HP 3PAR Dynamic and Adaptive Optimisation provide policy-driven “autonomic” movement of data blocks based on utilisation levels at the CPG level, we extend these ideas into the cache and flash tiers on the 7450.
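To make the fragmented-write behaviour concrete, here is a tiny illustrative sketch (my own simplification, not actual 3PAR code): with a 16K cache page, a naive flush would always write whole pages, while a fragmented flush writes only the data that changed.

```python
# Illustrative only: contrasts a naive full-page cache flush with the
# fragmented-write behaviour described above. The 16K page size comes
# from the text; the functions are a made-up simplification.

PAGE_KB = 16  # cache page size

def naive_flush_kb(io_kb):
    """A naive flush writes whole pages, even for a small IO."""
    pages = -(-io_kb // PAGE_KB)  # ceiling division
    return pages * PAGE_KB

def fragmented_flush_kb(io_kb):
    """A fragmented write pushes only the data that actually changed."""
    return io_kb

for io in (4, 8, 16):
    print(f"{io}K IO: naive flush writes {naive_flush_kb(io)}K, "
          f"fragmented write sends {fragmented_flush_kb(io)}K")
```

Less data written per IO means less wear on the flash drives, which is exactly why this matters for media with a limited write lifetime.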
To be flash is to be expensive
But flash storage is traditionally expensive and comes in different flavours, SLC and MLC, one more reliable and expensive than the other (I'll save this discussion for a separate post). Why would a CIO splash out thousands of dollars without proper justification as to why a certain response time is required? In fact, I still see situations where infrastructure architects are speccing flash with capacity in mind. Flash is NOT about capacity given the price points, so it should not be treated as such. IOPS per GB is the wrong way to look at it, given the tiering solutions out there these days that assist with storage efficiency. Sizing for IOPS first, then meeting capacity requirements, is the suggested approach.
For more information on HP 3PAR StoreServ 7450, please visit http://www8.hp.com/us/en/products/disk-storage/product-detail.html?oid=5386547#!tab=features
Uh oh! A bug in the HP Insight Control 7.2.1 and 7.2.2 storage plugins with 3PAR environments.
Confirmed with both IC4VC 7.2.1 and 7.2.2 storage plugins with 3PAR 3.1.2 MU2.
If you use the plugin to perform a “New Datastore” action at the cluster level, where the volume is presented to all hosts in the cluster in one go, the InServ will take the WWNs of the other hosts in the cluster, strip them from their host entries, and apply them to the first host entry in the cluster.
This results in an immediate halt of any VMs running on the subsequent members of the cluster, and the ESXi hypervisor itself (assuming boot-from-SAN) has its rug pulled out from under it, PSODing.
This will be fixed in 7.2.3 of the plugin, due out very soon.
7.2.3 available now!
Finally back from an epic time in the US of A, and VMworld is next week. I was intending to do a post on the leadup to VMworld, including a timetable of events I will be attending. For those who are going, get in touch; it would be great to catch up with/meet some of you. HP are a platinum sponsor and will be at booth #1405, ready to show you our newest developments such as:
- HP Virtual System that lets you transform your IT environments and maximize the advantages of virtualisation and cloud.
- Cloud Management solutions for heterogeneous clouds.
- Mobility solutions that make it easy for your users to work on any device anywhere.
- Networking technology that lets you offload the networking tasks from the host, while also monitoring and securing VM-VM traffic.
- Advances in storage that ensure fast, secure retrieval of the data you need.
HP also have a space devoted to the Software-Defined Zone in Booth #2235.
And don’t miss the opportunity to learn best practices from the experts in the following HP sessions: (US Time)
- Monday, 2:30 – 3:30 p.m.
Capacity Jail Break: vSphere 5 Space Reclamation Nuts and Bolts (STO4907)
- Tuesday, 11:30 a.m – 12:30 p.m.
OpenStack for the Enterprise (VSVC6656)
- Tuesday, 5:00 – 6:00 p.m.
The Top 10 Things You MUST Know About Storage for vSphere (STO5545)
- Wednesday, 11:30 a.m – 12:30 p.m.
Implementing a Scalable and Highly Available Desktop and Application Architecture with a VMware AlwaysOn Solution (EUC5672)
- Wednesday, 2:30 – 3:30 p.m.
Storage – The Next Frontier of Virtualization – How VMware Technologies Can Enable and Accelerate Software Defined Storage (STO5787)
Be sure to stop by to see our exciting in-booth theater sessions providing detailed information on cloud, virtualization, mobility, end-user computing, and more. Also an HP Slate 7 Tablet will be given away at each in-booth theater session.
For the social bunnies, please check out this list of all the gatherings/events/parties/tweet-meets that are happening: http://www.vmworld.com/community/gatherings#!
For now, Jet-lag is killing me, and I have a bunch of work to do. Such is life, great to be back.
Quick note: the latest revision of this technical white paper has been released. Although it is a minor release, if you are running VMware vSphere 5 on a 3PAR StoreServ array I still recommend you do yourself a favour and take a read! And if you have any feedback, please contact me so I can look at including it in the next release.
You can download this whitepaper at the following link http://h20195.www2.hp.com/V2/GetPDF.aspx/4AA4-3286ENW.pdf
About the Technical White Paper
When deployed together, VMware vSphere and HP 3PAR StoreServ Storage deliver a compelling virtual data center solution that increases overall resource utilization, provisioning agility, application availability, administrative efficiency, and reduces both capital and operating costs. This white paper outlines best practices on how to set up HP 3PAR StoreServ Storage with VMware vSphere 5.1 as well as how to take advantage of HP 3PAR StoreServ’s unique features such as vSphere integration, HP 3PAR Thin Provisioning technologies, Dynamic and Adaptive Optimization, and Recovery Manager for VMware to create a world class virtualized IT and application infrastructure.
VMware released ESXi 5.1 U1 in April earlier this year, and HP also released a custom image of this release that incorporates customised HP drivers and management software, saving you the time of downloading and installing the software yourself. A good point to make is that using the customised HP ProLiant ESXi image is a step towards keeping the software/driver levels consistent in your VMware environment. This is best practice in my opinion.
The custom image is available on the VMware site.
VMware from HP ProLiant Server VMware Support Matrix
Not all servers are created equal! For compatible HP ProLiants, always check compatibility on the following website:
Now here's where it gets cool: you can build your own HP custom ESXi image! Just head over to http://vibsdepot.hp.com/, which lets you add compatible HP bundles via VMware Image Builder, Update Manager, ESXCLI, etc.
For those who have upgraded to 5.1 U1 (Custom and Non-custom image) please be aware of the following:
http://kb.vmware.com/kb/2050941 – Cannot log in to vCenter Server using the domain username/password credentials via the vSphere Web Client/vSphere Client after upgrading to vCenter Server 5.1 Update 1 (2050941)
About a month ago, VMware announced two new “VMware Ready” devices for their VMware Horizon Mobile software. Although Apple iPhones are yet to make an appearance on this list, this is still something I am very excited about: firstly, because it's an area where I am still technically upskilling, and secondly because I see mobility as an area on the verge of exploding.
Switch to VMware with VMware Switch
Put simply, VMware Switch (not to be confused with a VMware vSwitch 🙂) allows multiple personalities, or personas, to exist on the same mobile hardware device.
Now think back to my post on the trends in storage and cloud computing for 2013, I noted at the time that VMware Horizon was opening up new doors to the mobile user, EUC as we know it is changing tremendously.
The possibilities could be endless.
Think segregation, think defined work schedules, think BYOD. As an avid BYOD user myself (Apple fanboy here), I truly am excited to see VMware extend their virtualization technology to mobile devices. Technology is fuelled by innovation, and to me this is innovation. Almost every IT person/technologist/enthusiast has at least one mobile device. I sat on the tram the other day and was astonished (but not surprised) to see at least 90% of my fellow passengers glued to their mobile devices: Facebooking, tweeting, reading news, email, etc. All thirsty for information and social interaction.
Our mobiles no longer just serve the purpose of being able to phone and speak to someone, we make decisions on buying these mobile devices on other capabilities such as SMS, web browsing, email capabilities and social media access. Exciting times!!!
Yesterday, VMware announced this year's awardees of the prestigious vExpert 2013 title, an award that has been around since 2009 to recognise individuals for their contributions to the global virtualization and cloud community. The list is put together by VMware, in particular John Troyer (@JohnTroyer) and the VMware Social Media & Community Team. No easy task in my mind, as there are a lot of great VMware practitioners and evangelists out there.
Announcement link (just in case you don't believe me 🙂)
For me, this is my very first year being recognised in this category, and I am very humbled and honoured to be recognised among some of the great evangelists in this field – some who are personal friends of mine which makes it even more special.
Does it mean you’re now an expert?
No, not necessarily. The title is not based on what you know or how much you know. A great extract from the announcement page shows what it takes (and doesn't take) to be recognised:
“A vExpert [is] not a technical certification or even a general measure of VMware expertise. The judges selected people who were particularly engaged with their community and who had developed a substantial personal platform of influence in those communities. There were a lot of very smart, very accomplished people, even VCDXs, that weren’t named as vExpert this year” (retrieved from http://blogs.vmware.com/vmtn/2013/05/vexpert-2013-awardees-announced.html)
Lastly, I would also like to extend my congratulations out to all of the other vExperts for 2013. Looking forward to meeting some of you over the next year.
Time for a technical deep dive – As you may or may not know, HP 3PAR boasts a sexy array-based software feature set. You’ve probably heard me rave on about it in other posts and podcasts I have done. I’ve worn this fan-boy cap for a while now and spoken about our leading thin suite.
Some 3PAR software features perhaps don’t get as much limelight as our thinning capability or wide striping but they are extremely supportive and tell a great benefit story, especially in the area of High Availability (HA) or Business Continuity (BC). These features are array based and are called peer persistence, peer motion, persistent ports and persistent cache.
The names appear similar but they mean different solutions so let’s take a look at some of these concepts and how they work particularly with the vSphere platform—starting with peer persistence.
Spotlight on HP 3PAR Peer Persistence
Another way to look at peer persistence is to tie it back to something that has been around for a while in the virtual world – VMware vMotion technology.
So what vMotion does for local site workloads, peer persistence does for geographically disparate data centres: it offers transparent site switchover without the application knowing it has happened. It provides an HA configuration between these sites in a metro-cluster configuration.
Peer persistence allows you to extend this coverage across two sites, enabling use of your environment and load balancing of your virtual machines (VMs) across disparate arrays. Truly a new way to think of a federated data centre in this context, IMO. Note: this implies an active/active configuration at the array level but not at the volume level, which is active/passive; hosts hold paths to both the primary volume and the secondary volume.
Transparent switchover and Zero RPO/RTO
What does it mean to offer “transparent site failover”? In the context of 3PAR peer persistence, it simply means that workloads residing in a VMware cluster can shift from site 1 to site 2 without downtime. This makes aggressive RPOs/RTOs an achievable reality.
This is a particularly good fit for mission-critical applications or services that may have an RTO/RPO of 0.
Note: I mention VMware for this as this particular technology only currently supports VMware hosts (vSphere 5.0+) and not vMSC.
How does it do it?
Peer persistence leverages HP 3PAR Remote Copy synchronous to manage the transfer of writes from the local site to the remote site and return acknowledgement to the host operating system (vSphere in this instance). Today the switchover is a manual process executed via the 3PAR CLI (an automated process is coming later this year).
Working with VMware vSphere, this allows your ESXi cluster to virtually span data centres. So in the above figure, some VMs are being serviced by HP StoreServ Storage A and other VMs are being serviced by HP StoreServ Storage B, but all exist in the same VMware data centre. Moving VMs between sites would typically need a reset, but peer persistence removes this limitation by continuously copying, or shadowing, the VM's volume to the remote array via Remote Copy and switching over the volumes logically.
I'll describe the high-level process with the above figure 1 in mind, without any pretty pictures:
- The host can logically see both HP StoreServ Storage arrays by means of a stretched fabric.
- The VM resides in Site 1, on Volume A. Volume A's partner (Volume B) is being presented to Host B in Site 2 and can be considered active but not primary.
- The primary and secondary volumes are exported using different target port groups, supported by persona 11 (VMware).
This presentation is possible via Asymmetric Logical Unit Access (ALUA), allowing a SCSI device (Volume A in this instance) to be masked with the same traits (source WWN).
Provided this configuration exists, the process is:
- Controlled switchover is initiated by user manually via CLI on primary array. Using ‘setrcopygroup switchover <groupname>’
- IO from the host to the primary array is blocked and in-flight IO is allowed to drain.
- The remote copy group is stopped and snapshots are taken on the primary array.
- The primary array target port group is changed to transition state.
- The primary array sends a remote failover request to the secondary array.
- The secondary array target port group is changed to transition state.
- The secondary array takes a recovery point snapshot.
- The secondary array remote copy group changes state to become primary-reversed (pri-rev). At this point the secondary volume will become read/write.
- The secondary target port group is changed to active state.
- The secondary array returns a failover complete message to the primary array.
- From here, the primary array target port group is changed to standby state and any blocked IO is returned to the host with the following sense error: “NOT READY, LOGICAL UNIT NOT ACCESSIBLE, TARGET PORT IN STANDBY STATE”
- The host will perform SCSI inquiry requests to detect what target port groups have changed and which paths are now active. Getting your host multipathing configuration is very important here!
- Volume B is now marked primary and hosts continues to access the volume via the same WWN as before. Host IO will now be serviced on the active path to the secondary array without the host application even knowing what happened!
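The steps above can be sketched as a toy state machine. This is purely my own illustrative pseudologic (the state names mirror the ALUA target port group transitions in the list, not any real 3PAR API), and it omits the snapshots, IO draining, and error paths:

```python
# Toy simulation of the controlled-switchover sequence described above.
# Purely illustrative; not 3PAR code. Snapshots, IO draining and error
# handling from the real sequence are omitted.

def controlled_switchover(primary, secondary):
    """Walk both arrays through the target port group state changes."""
    primary["tpg"] = "transition"            # primary TPG enters transition
    secondary["tpg"] = "transition"          # failover request received
    secondary["role"] = "primary-reversed"   # secondary volume becomes read/write
    secondary["tpg"] = "active"              # secondary now services host IO
    primary["tpg"] = "standby"               # blocked IO returns a standby sense error

site_a = {"role": "primary", "tpg": "active"}
site_b = {"role": "secondary", "tpg": "standby"}
controlled_switchover(site_a, site_b)
print(site_a)  # {'role': 'primary', 'tpg': 'standby'}
print(site_b)  # {'role': 'primary-reversed', 'tpg': 'active'}
```

The key observation is the ordering: the secondary side goes read/write and active before the primary drops to standby, which is what lets host IO fail over to the surviving paths without the application noticing.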
What you need
Here’s your list:
- A WAN link between your two data centers should not have more than 2.6ms latency. This is important as remote copy synchronous needs to be able to send the write and wait for an acknowledgement within a given time frame.
- One thing to note: previously, vMotion was supported only on networks with round-trip (RTT) latencies of up to 5 ms, but VMware vSphere 5 introduced a new latency-aware Metro vMotion feature that increases the round-trip latency limit for vMotion networks from 5 ms to 10 ms. The 5 ms requirement seems to be a thing of the past, allowing cool things like spanned virtual data centres in this regard.
- The ESXi hosts must be configured using 3PAR host Persona 11.
- Host timeouts need to be less than 30 seconds. This time does not include the subsequent recover and reverse operations so leave some headroom.
- The WWNs of the volumes being replicated have to be the same; therefore, ALUA is also a requirement.
- From a 3PAR licensing point of view, you need 3PAR Remote Copy synchronous, and of course Peer Persistence, which is licensed on a per-array basis as well.
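Combining that 2.6 ms limit with the ~0.005 ms per km fibre rule of thumb gives a rough feel for maximum site separation. This is my own back-of-the-envelope estimate (it assumes the 2.6 ms figure is a round-trip budget and ignores equipment and protocol overhead), so always validate against HP's official support documentation:

```python
# Back-of-the-envelope distance estimate for synchronous Remote Copy.
# Assumptions (mine, not HP's): the 2.6 ms limit is round-trip, fibre
# adds ~0.005 ms/km each way, and equipment overhead is ignored.

MS_PER_KM_ONE_WAY = 0.005
RTT_BUDGET_MS = 2.6

max_km = RTT_BUDGET_MS / (2 * MS_PER_KM_ONE_WAY)
print(f"Rough maximum site separation: {max_km:.0f} km")
```

In practice the usable distance will be shorter once switches, DWDM gear and protocol overhead eat into the budget.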
Peer persistence supportability
For now, HP 3PAR Peer persistence is only available for VMware clusters: vSphere 5.x and up. More platforms to be supported in the future.
So peer persistence is an HA solution that removes barriers traditionally found in physical data centres. It is not the single entity defining a virtual data centre, but simply one of the pillars supporting it, allowing virtualization and storage to no longer be constrained by physical elements as in the past. To this end, achieving a more aggressive BC plan is becoming more realistic.
For more on Peer persistence, please check out this service brief: HP 3PAR Peer Persistence—Achieve high availability in your multisite federated environment
Every major initiative for optimizing data center performance, decreasing TCO, increasing ROI, or maximizing productivity – including consolidation, virtualization, clouds, server upgrades, tiered storage, data analytics and BI tools – involves storage data migration.
Data has an incalculable value, and its loss can have significant impact. As Frost & Sullivan says in a recent Executive Brief, “one would expect that storage data migrations should be approached with the same attention a museum lavishes on a traveling Rembrandt exhibit.” To expand on this, in 2012 an estimated $8 billion was spent worldwide on data migration services.
A research white paper published in December 2011 entitled “Data Migration – 2011” by Philip Howard from Bloor Research shows the average cost of a data migration project is $875,000, so extrapolating the value and criticality of these types of projects should be fairly straightforward. Overrunning the project budget, or rolling back a failed migration due to lack of planning, are normal occurrences; in fact, the same study proposes that the average cost of a project overrunning its budget is $268,000, approximately 30% of the average cost of a data migration project.
Between 1999 and 2007, 84% of data migrations went over budget and over time. This is astronomical and costly, and it can get very tricky to pinpoint just why a data migration project went over budget and over time. More often than not, it comes down to lack of experience and planning (and I do believe that experience and planning should come in the same sentence).
And there are potentially serious risks involved. Recent studies show that migration projects nearly always have unwanted surprises: 34% of migrations have missed or lost data, and a further 38% have some form of data corruption.
And probably the biggest risk associated with migrations is that 64% of migration projects have unexpected outage/downtime. Now, tie this back to a research paper put forward by Vision Solutions in 2011, which shows that the typical cost of downtime can reach nearly 6.5 million dollars per hour for some in the Brokerage service industry, and up to 2.8 million dollars per hour for those in the Energy Industry. To really understand this and put it in context, let’s have a look at some of the reasons why we migrate.
Why do we migrate data?
The migration of data isn't typically something an IT manager or CIO does for fun; at the end of the day it costs money and time. Ageing infrastructure, or the need for a particular technology feature that's not available on the current infrastructure, are just a couple of the reasons why people migrate. In my experience, it's all of the above. CIOs are constantly (or should be) looking at new and innovative ways to reduce footprint and drive down environmental costs, such as data centre space and power, as well as to expose newer and greater technological advancements within a given product set. Newer infrastructure product releases seldom take a step back when it comes to form factor and power draw.
So do customers who perform migrations achieve their overall goals? Not exactly. As I mentioned above, those undertaking DIY migrations typically hit surprises, which results in a heavier investment in staff to try to remediate those surprises, subsequently resulting in a project budget that is exceeded. Yes, 54% of the time a project budget is overrun due to these challenges, but I'm not here to throw stats at you. I'm here to raise awareness that if not properly planned and executed, your data migration project (as big or small as it may be) will run into at least one of these surprises.
HP Data Migration Services can help you address those challenges and risks. Each data migration project has a storage and data migration project manager assigned to make sure everything goes smoothly. We understand that storage infrastructures are typically multivendor, which is why our service is vendor-agnostic. We work to keep costs down and help you avoid the common pitfalls and risks of data migration.
To learn more about the new HP Data Migration Service, check out this online presentation. You’ll learn about the typical project flow and your migration technology options. Data migration is not usually just a simple copy-and-paste exercise.
Read more about HP Storage Migration Consulting.
You can learn more about ways to ease the pain of data migration at HP Discover 2013.
Quick update – This new patch release boasts new features such as SQL Server 2012 support as well as updated guest operating system support.
The official VMware release notes on this patch are available at:
That's all. VMware PEX is this coming week in Sydney; should be interesting.
This year VMware Partner Exchange will be held at the Australian Technology Park South Sydney between 1st May – 2nd May. This event takes place every year in different regions all around the world giving VMware partners and associates a glimpse on next-generation VMware products and programs so they are better prepared to talk to their customers.
Who should attend?
Whether you’re an executive, technology buff, or sales beagle, there is usually something for everyone – I usually try to mix it up a bit with attending a balance of business themed sessions, with technical deep dives and some labs on the side. This year, will be lab focused for me.
Technology Alliance Partners, System Integrators, OEM partners and VMware Service Providers are all eligible to attend, and I highly recommend it. A highlight this year will be Carl Eschenbach (VMware COO), who is flying out for the event, which is quite exciting, as well as Raghu Raghuram, Executive Vice President of Cloud Infrastructure and Management.
There will be a networking session on the Thursday evening, as well as unofficial social-media-themed events organised by the virtual community.
See you there!
Last year, Calvin Zito did a good overview on VMware Virtual Volumes or vVols.
This is game changing in my opinion. It essentially means virtual machines will be able to leverage the 3PAR for ANY operation; throw the ASIC into the mix along with our wide striping and you have a solution that will simply leave the competition for dead. Cloud computing, and the time it takes to provision or complete day-to-day tasks, will become unreal.
At the moment, this is still in development mode so I am limited to what I am allowed to share, but stay tuned folks – seriously this is going to rock.
NOTE: This is a technology preview and doesn't represent a commitment from VMware or HP to deliver anything shown in this video.
Recently, Calvin Zito, Craig Waters and I jumped on a call to discuss a range of topics around VMware, more specifically Craig's involvement in Melbourne VMUG and what the recent VMUG meeting in Melbourne had in store. We also discussed the upcoming VMware PEX, which Calvin will be going along to.
Unfortunately, there is also a part in which I am forced to discuss the All Blacks losing to the English. I was hoping Calvin would cut that part from the final reel! 🙂
Head over to Calvin’s site and have a listen!