My first write up for Oracle is live, head over to https://blogs.oracle.com/infrastructure/no-downtime-for-the-enterprise to read!
It is not very often that I get to talk or write about VMware-related technologies within Oracle, beyond customers purchasing our Oracle x86-based servers for their VMware farms or some workshops I have run in the past for our ZFS customers (VM troubleshooting with DTrace is excellent!).
Even more rarely does that crossover extend into a cloud-based discussion too. That is not to say there aren't joint solutions out in the field (there are thousands of deployments globally where Oracle and VMware jointly provide a solid storage/compute/virtualisation solution for our customers). But today is that day!
If you are a virtualization nut, you might have heard of Ravello from way back. I remember first using it a few years ago under the vExpert NFR program, and fittingly, this blog post centres around the vExpert account that I continue to use.
What is it?
Ravello is a lovely picturesque resort town set 365 metres above the Tyrrhenian Sea on Italy's Amalfi Coast... Kidding! Ravello Systems was founded in 2011 by Benny Schnaider and Rami Tamir, who had previously created the KVM hypervisor at Qumranet (now part of Red Hat), and it raised $10 million in its first venture capital round. Their objective was not to recreate lightning in a bottle by adding yet another layer of abstraction between hardware and software, the way the KVM hypervisor virtualizes and partitions hardware for multiple stacks to share (cloud?); rather, they set out to act as a facilitator of communication between separate computers.
Along came Oracle, which at the beginning of 2016 acquired Ravello Systems with the intent of running VMware/KVM development, test and demo environments in the cloud without migration. The business case for this is huge: why waste precious resources (labour, infrastructure, software) on something you only need every now and then and that is not business critical?
Let's be fair: these sorts of environments are no doubt virtualised and running on VMware vSphere, so the target became "Running VMware workloads on public clouds – without any changes", forming part of our Lift and Shift cloud journey at Oracle.
The Ravello import tool gives users the ability to clone their entire VMware environment – whether it is running on Amazon Web Services, Azure or Google Cloud Platform – and shift it to the Oracle Public Cloud. It has had cloud written on it since the beginning!
What does it look like?
I am glad you asked. The console itself is built around "applications", which are similar to virtual machines but are better thought of as "service offerings". Below you see an application I published in our Sydney DC. An application can be a group of virtual machines; I labelled mine VMware ESXi 6.5 as it was a cluster of ESXi hosts published together!
This is the default view and provides a list of all of your published cloud applications, which you can drill into to find which VMs are running to support each application.
Below is the VM library, from which I can create multiple applications, using these as templates of sorts. You can see from the owner column down the right-hand side that Ravello has pre-populated some templates (with predefined configuration options) and that I have also uploaded a few in order to build out my application stacks. Sharing is caring, so you can even share your VMs or disk images on the Ravello Repo to collaborate with others; however, you must have a public profile before you can share or use library items on Ravello Repo. If you make changes to a shared VM or disk image in your library, the version visible to other users is automatically updated.
This is probably where the platform sells itself: the various canvas views provide an excellent at-a-glance picture of your Ravello environment from high up. You can see which ports are exposed to the public (internet) and which are internal, as well as resource configurations on the right-hand side.
On the Network Canvas (shown below), you can quickly and easily connect your applications to various network-based services (DHCP, proxy, DNS, etc.) by clicking and dragging connection points between each other.
Users only pay per hour of use and there are no upfront commitments so you can cancel when you need to.
So easy! 1 CPU, 1GB of storage and 1GB of RAM will set you back about 43 cents an hour (AUD), or approximately $315 Australian dollars a month. A lot cheaper than space, power and tin in a data centre.
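As a quick sanity check on that monthly figure (assuming the quoted hourly rate and roughly 730 hours in an average month – both just back-of-envelope assumptions, not official pricing):

```python
# Back-of-envelope check of the quoted Ravello shape pricing
# (1 CPU / 1 GB RAM / 1 GB storage; hourly rate as quoted above).
hourly_rate_aud = 0.43
hours_per_month = 730  # ~365 days * 24 hours / 12 months

monthly_cost_aud = hourly_rate_aud * hours_per_month
print(round(monthly_cost_aud, 2))  # 313.9
```

Which lands right around the ~$315/month mentioned above.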
Keen to learn more? Head over to https://cloud.oracle.com/en_US/tryit to try it for yourself for free, or wait for my next post on creating and publishing applications in the Ravello Cloud!
Happy New Year to you! I trust that if you are reading this, you survived the Christmas and New Year break and hopefully took some time off to spend with family and friends.
Last year brought a ramp-up in the adoption of cloud services among many organisations – probably not as much as a lot of folks thought would warrant the term 'shift', but certainly a maturing sense of what we perceive cloud computing to be (and what it is not) led more and more firms to ask themselves: how can we do this better and more cost-effectively?
This is largely due to the massive number of vendors in the ICT universe now marketing their offerings as "cloud-ready" – Microsoft, Oracle, Amazon and HPE are some of the notable larger players aligning to this new phenomenon, along with smaller vendors now claiming to offer a path to adopting cloud computing.
One of the pitfalls of heading down the cloud path is that companies are finding that some legacy services simply cannot transform. More often than not, this is because the hardware platforms of newer public cloud providers do not resemble their on-premises infrastructure and architecture.
The fallout from this difference is that their most critical applications no longer perform as expected or as proven, or are no longer even supported, leaving the business at risk. So a lot of companies want cloud but simply cannot (or perhaps will not) transform to adopt it because of this risk.
Let me pose a question to you – if you are considering cloud but have held off until now due to the unknown risks associated with the change: what is the fundamental reason for the reluctance?
Perhaps cost is an inhibitor, or perhaps it is the unpredictability of the transformation – how things are going to look and perform on the other side – that has kept your business from travelling down the cloud road.
Whatever it is, it is not always possible to lift and shift all services to the cloud – on a conservative view, for every single CPU associated with an Oracle Database, there are at least three CPUs of other workloads tied to, and reliant on, that database. Put in this context, the task of transforming your business to be cloud-enabled seems enormous!
My next post will cover what Oracle is doing in regards to this transformation and managing change and risk by providing predictability and like-for-like performance for your Oracle database environment as well as other workloads you may have.
I will further discuss Oracle’s view on the journey to the cloud and how it can help you:
- Streamline Enterprise IT on-premises to be cloud ready should you ever need it
- Expand your Private Cloud
- Deploy a Hybrid Cloud
- Bring Public Cloud onto your Premises
- Lift and Shift to Public Cloud
This year, Veeam is holding its first annual conference, VeeamON, focused on the evolution of virtualization, the shift to the cloud and, in particular, data protection – and why, now more than ever, data availability should be on the focus list of every single CIO and CDO across the globe. Why is data protection important? Let's take a look at the workloads of 20 years ago compared to now.
In the beginning….
Historically, backup revolved around tape – usually a single tape drive connected to a computer that would back up days' and weeks' worth of precious company data to a single tape. Backups could naturally take hours to complete, as did restores, which were rarely tested and had the potential to take hours themselves: locating the right tape, loading it into the tape unit and restoring in the event of data loss. Some may argue that backing up and restoring back then was fairly straightforward – purpose-built devices with one job – but the efficiencies in doing so were absent.
In hindsight, there were (and still are) pitfalls in traditional backup methods. Modern features such as deduplication, compression and incremental block changes, which have become the norm in storage and backup administrators' vocabulary, were not available to the business back then, so things took time to complete. Eventually these features came, but usually through purchasing a new set of physical hardware or an external device to perform the tasks, which at times proved costly. Virtualization was not the norm back then, so servers were not portable containers, and bare-metal restores were 100% part of daily life for the various IT teams. One thing that has not changed since legacy backup times is the reason backups had to happen: to protect a company's intellectual property in the form of data and to ensure that data was online and accessible after a disaster – availability.
The need for data availability
During my tenure at HP, I wrote a two-part article entitled "Would you like a side of Disaster Recovery with that", aimed at the sometimes misunderstood (and unappreciated) area of disaster recovery – what it means and why it is important. I touched on the point that not having data available and online can be (and is) a very costly exercise, depending on which industry vertical your business operates in. You can read the two-part post at the following links:
The important message I wanted to convey through these writings was that availability has a cost associated with it when things go wrong and SLAs are no longer met due to hardware and/or software failures – not to mention other unforeseen incidents such as human error (perhaps someone accidentally pulled the wrong cable out – it happens!). Most storage and backup advocates know this: a lot of our day jobs revolve around making sure SLAs are met, whether you are a company providing a service or a vendor selling technology that helps meet those SLAs. They all tie back to availability and how quickly it happens.
The landscape is evolving.
Take a look at the forecast released by Cisco entitled 'Cisco Global Cloud Index: Forecast and Methodology, 2012–2017' – in particular Figure 5, Workload Distribution, and its accompanying Table 3 – and observe how massive the shift from traditional workloads to virtual workloads is becoming. Of course this varies from country to country, but as a general trend you can see the IT landscape heading more and more towards dealing with virtual environments as opposed to physical. In fact, this study verifies that these days virtual workloads are the norm, and that location, location, location also matters – whether they are based on-premises or off-site, perhaps in a cloud provider's environment, these workloads still need to be protected on a local (site) basis as well as an off-site (replica) basis. With the newest version of its flagship product, Veeam Backup & Replication, Veeam is positioned perfectly to manage virtual workloads no matter where they reside.
Now, hopefully that sets the context for why VeeamON is important to IT leaders – CIOs, CDOs, CSOs and CTOs alike. So here are the details:
What is it?
VeeamON is a three-day event focused on modern data centre availability – not just backup, not just DR, but all-encompassing – and will feature well-known industry speakers and analysts from companies such as Gartner and ESG. It is being held October 6th-8th at the Cosmopolitan hotel in Las Vegas. The event will host a series of quality sessions on modern availability solutions and trends in the modern data centre that help form the "Always-On" story – think back to the briefs I wrote about why it is so important to remain always on! Whether you are business focused or technical, there will be valuable content catering for both business and partner streams, so you can choose from a great variety of content.
Where is it?
The Cosmopolitan in Las Vegas, USA
When is it?
The event is happening from October 6th to October 9th, 2014
But wait… there's more!
Veeam is giving away five free regular passes and one free VIP pass to VeeamON. All you have to do is go here -> http://go.veeam.com/veeamon-free-pass and enter your details.
It's that easy!
Cisco Global Cloud Index: Forecast and Methodology, 2012–2017. Retrieved 30th July 2014 from http://www.cisco.com/c/en/us/solutions/collateral/service-provider/global-cloud-index-gci/Cloud_Index_White_Paper.html
So I have spent the best part of a day configuring Microsoft Office 365 to use my Cloud-land domain name.
The hardest part (and not actually hard) was creating DNS records within the domain portal to verify that I did indeed own the domain name.
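For anyone curious what that looks like, domain verification typically means adding a TXT record at the domain apex. The record below is a purely hypothetical sketch – the `MS=msXXXXXXXX` token is a placeholder, and the real value comes from the Office 365 admin portal during setup:

```
; hypothetical zone-file entry for Office 365 domain verification
; (the MS= token below is made up -- the admin portal supplies the real one)
example.org.   3600   IN   TXT   "MS=ms12345678"
```

Once the record propagates, the portal can confirm you control the domain.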
One of the best parts is that I can now use Microsoft Lync with my own domain name. I have yet to explore the Microsoft Azure path; either way, I bet Ben Diqual (@bendiq) would be proud!
Try it, Add me on Lync -> firstname.lastname@example.org
The vCloud Suite is a complex combination of vSphere, vCloud Networking and Security (vShield) and vCenter Operations, with vCloud Director automating the show and all products now aligned at version 5.1.
VMware has also released an updated vCloud Architecture Toolkit (vCAT) for vCloud Director. The vCAT provides modular components so that you can design a vCloud reference architecture that supports your cloud use case. It includes design considerations and design patterns to support architects, operators, and consumers of a cloud computing solution based on VMware technologies. Attached here are some useful vCloud Documentation links..
Installation & Upgrade
Earlier this year (I actually wrote this post to coincide with the announcement, but it has sat in my drafts box ever since – my bad…), we announced the HP 3PAR StoreServ 7450 – a purpose-built, flash-optimised 3PAR array designed for environments where things can get a little… crazy. Crazy in the sense that the application demands performance up and above the "usual" requirements architects typically see with other workloads – file serving, VMware (which is mixed and sometimes unpredictable), even SQL and Exchange environments are among the "usual" suspects when it comes to architecting for IOPS. We also announced a number of other enhancements with this launch, including QoS and Recovery Manager for Hyper-V environments.
The need for lower latency and faster response.
Not all applications, and subsequently not all I/Os, are created equal. Flash is purpose-designed for applications that require sub-millisecond latency and high-end IOPS in performance-intensive environments. The new HP 3PAR StoreServ 7450 clocks in at approximately 550,000 IOPS with latency under 0.7ms, and uses an 8-core Intel Xeon Sandy Bridge processor running at 2.3GHz, compared to the previous 6-core 1.8GHz model.
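As a back-of-envelope way to relate those two numbers, Little's Law (average concurrency = throughput × latency) tells you roughly how many I/Os the array would keep in flight at that rate – purely an illustrative calculation from the approximate figures above:

```python
# Little's Law: average I/Os in flight = throughput * latency.
# Figures are the approximate ones quoted above for the 7450.
iops = 550_000        # I/O operations per second
latency_s = 0.0007    # 0.7 ms expressed in seconds

in_flight = iops * latency_s
print(round(in_flight))  # 385
```

In other words, sub-millisecond latency at that throughput implies a few hundred outstanding I/Os – the kind of concurrency only flash (and a deep controller pipeline) sustains comfortably.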
Flash Virtual Environments “Flirtual” 🙂
Virtual powers cloud, cloud powers services, and providing services is all about meeting SLAs. Delivering a service is one thing; delivering it well and quickly, with a guaranteed satisfaction level, is another.
Cloud-hosting services may benefit from the 3PAR 7450 for customers who require the latency and IOPS I mentioned earlier in this post. However, I must stress that the HP 3PAR StoreServ 7450 is NOT one size fits all; consider the following graphic on where it sits within the 3PAR family. Additionally, check out my post on the other 3PAR 7000 members here.
The purpose of this emphasis is that there is one family, and consequently one operating system, so a multi-3PAR environment means you can shift workloads between the different arrays using HP 3PAR Peer Motion – or, as we call it, Storage Federation – check out this brief on Peer Motion covering the requirements and supported operating systems. Note: tiering at the cache and hard-drive levels occurs only within one array, meaning Dynamic Optimisation/Adaptive Optimisation can't (yet) identify a suitable tier on a separate 3PAR to place a particular volume on.
But back to the 3PAR 7450 specifically.
All cached up – Cache Handling and Cache Offload on 3PAR
The software on the HP 3PAR 7450 uses a 16K page size for cache, which means it can handle up to 16K of read and/or write I/O when serving data from cache. Without flash, if we had an 8KB read operation, we check whether we can serve it out of cache; if it is not there, we retrieve the data from back-end disk to serve the request and then store it in cache for future use. This can result in slightly higher latency when spinning disk is used.
But we are talking flash here! Let's look at how an all-flash array handles the same request. Same example: an 8K read request comes through and is not in cache, so again the 3PAR software looks to the back-end storage to serve the request – but there are no spinning drives and no concept of disk heads aligning. From flash to cache, we call this Adaptive Reads – 8K of data is read from the flash drives back into cache at super speed. More granular requests, such as 4K, result in even less data being read (but still the right amount): only 4K of data gets transported, enabling even higher back-end throughput and making it super efficient.
The same goodness extends to write operations, altered slightly to allow fragmented writes. For I/Os smaller than 16K, such as 8K, we write only 8K to the flash drives rather than flushing the whole 16K. If we did flush it all, the flash drives would be hit more than required. Flash has a limited lifetime, so writing as little data as possible is best.
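A toy model makes the read-path idea concrete. This is purely illustrative – the class, names and behaviour are my assumptions from the description above, not 3PAR code – but it shows why "adaptive reads" move less data than full-page fills:

```python
# Toy model of a 16K-page cache with "adaptive reads": on a cache
# miss, only the requested bytes are fetched from flash, not the
# whole 16K page (purely illustrative, not actual 3PAR behaviour).
PAGE_SIZE = 16 * 1024

class ToyCache:
    def __init__(self):
        self.pages = {}           # page number -> cached data marker
        self.flash_bytes_read = 0 # back-end traffic counter

    def read(self, offset, length):
        page = offset // PAGE_SIZE
        if page not in self.pages:
            # Miss: fetch only what was asked for from flash.
            self.flash_bytes_read += length
            self.pages[page] = length
        return length

cache = ToyCache()
cache.read(0, 8 * 1024)           # 8K miss -> only 8K read from flash
cache.read(PAGE_SIZE, 4 * 1024)   # 4K miss -> only 4K read from flash
print(cache.flash_bytes_read)     # 12288 (8K + 4K, not 2 x 16K)
```

A full-page design would have pulled 32K from the back end for those two misses; reading (and, symmetrically, writing) only the requested bytes is what keeps throughput high and flash wear low.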
But wait, there's more… Just as HP 3PAR Dynamic and Adaptive Optimisation provide policy-driven, "autonomic" movement of data blocks based on utilisation levels at the CPG level, we extend these ideas into the cache and flash tiers on the 7450.
To be flash is to be expensive
But flash storage is traditionally expensive and comes in different flavours – SLC and MLC, one more reliable and expensive than the other (I'll save that discussion for a separate post). Why would a CIO splash out thousands of dollars without proper justification for why a certain response time is required? In fact, I still see situations where infrastructure architects are speccing flash with capacity in mind. Flash is NOT about capacity, given its price points, and should not be treated as such; IOPS per GB is the wrong way to look at it, given the tiering solutions out there these days that assist with storage efficiency. Architecting for IOPS first, then meeting capacity requirements, is the suggested approach.
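To make that concrete, here is a hypothetical sizing exercise – every per-drive figure below is an illustrative assumption, not a vendor specification. If you size an all-flash tier by raw capacity, the drive count (and the bill) is driven by capacity rather than the performance you actually need, which is exactly why the capacity-heavy data belongs on a cheaper tier:

```python
import math

# Hypothetical workload and per-drive figures (illustrative only).
required_iops = 200_000
required_tb = 20.0

ssd_iops_per_drive = 40_000  # assumed per-SSD performance
ssd_tb_per_drive = 0.4       # assumed per-SSD usable capacity

# Size for performance first, then check what capacity would demand.
drives_for_iops = math.ceil(required_iops / ssd_iops_per_drive)
drives_for_capacity = math.ceil(required_tb / ssd_tb_per_drive)
print(drives_for_iops, drives_for_capacity)  # 5 50
```

Five SSDs meet the performance target, but sizing the same pool for capacity would demand ten times as many drives – hence: buy flash for IOPS, and let tiering handle the capacity.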
For more information on HP 3PAR StoreServ 7450, please visit http://www8.hp.com/us/en/products/disk-storage/product-detail.html?oid=5386547#!tab=features
Finally back from an epic time in the US of A – and VMworld is next week. I was intending to do a post in the lead-up to VMworld including a timetable of the events I will be attending. For those who are going, get in touch – it would be great to catch up with/meet some of you. HP is a platinum sponsor and will be at booth #1405, ready to show you our newest developments such as:
- HP Virtual System, which lets you transform your IT environments and maximise the advantages of virtualisation and cloud.
- Cloud Management solutions for heterogeneous clouds.
- Mobility solutions that make it easy for your users to work on any device, anywhere.
- Networking technology that lets you offload the networking tasks from the host, while also monitoring and securing VM-VM traffic.
- Advances in storage that ensure fast, secure retrieval of the data you need.
HP also have a space devoted to the Software-Defined Zone in Booth #2235.
And don’t miss the opportunity to learn best practices from the experts in the following HP sessions: (US Time)
- Monday, 2:30 – 3:30 p.m.
Capacity Jail Break: vSphere 5 Space Reclamation Nuts and Bolts (STO4907)
- Tuesday, 11:30 a.m – 12:30 p.m.
OpenStack for the Enterprise (VSVC6656)
- Tuesday, 5:00 – 6:00 p.m.
The Top 10 Things You MUST Know About Storage for vSphere (STO5545)
- Wednesday, 11:30 a.m – 12:30 p.m.
Implementing a Scalable and Highly Available Desktop and Application Architecture with a VMware AlwaysOn Solution (EUC5672)
- Wednesday, 2:30 – 3:30 p.m.
Storage – The Next Frontier of Virtualization – How VMware Technologies Can Enable and Accelerate Software Defined Storage (STO5787)
Be sure to stop by to see our exciting in-booth theater sessions providing detailed information on cloud, virtualization, mobility, end-user computing and more. Also, an HP Slate 7 tablet will be given away at each in-booth theater session.
For the social bunnies, please check out this list of all the gatherings/events/parties/tweet-meets happening: http://www.vmworld.com/community/gatherings#!
For now, jet lag is killing me and I have a bunch of work to do. Such is life – great to be back.
Yesterday, VMware announced this year's awardees of the prestigious vExpert 2013 title, an award that has been around since 2009 to recognise individuals for their contributions to the global virtualization and cloud community. The list is put together by VMware – in particular John Troyer (@JohnTroyer) and the VMware Social Media & Community Team – no easy task in my mind, as there are a lot of great VMware practitioners and evangelists out there.
Announcement link (just in case you don't believe me 🙂)
For me, this is my very first year being recognised in this category, and I am very humbled and honoured to be recognised among some of the great evangelists in this field – some who are personal friends of mine which makes it even more special.
Does it mean you’re now an expert?
No, not necessarily. The title is not based on what you know or how much you know. A great extract from the announcement page shows what it takes (and doesn't take) to be recognised:
“A vExpert is not a technical certification or even a general measure of VMware expertise. The judges selected people who were particularly engaged with their community and who had developed a substantial personal platform of influence in those communities. There were a lot of very smart, very accomplished people, even VCDXs, that weren’t named as vExpert this year” (Retrieved from http://blogs.vmware.com/vmtn/2013/05/vexpert-2013-awardees-announced.html)
Lastly, I would also like to extend my congratulations out to all of the other vExperts for 2013. Looking forward to meeting some of you over the next year.
This year, VMware Partner Exchange will be held at the Australian Technology Park in South Sydney between 1st May and 2nd May. This event takes place every year in different regions all around the world, giving VMware partners and associates a glimpse of next-generation VMware products and programs so they are better prepared to talk to their customers.
Who should attend?
Whether you're an executive, technology buff or sales beagle, there is usually something for everyone – I usually try to mix it up a bit, attending a balance of business-themed sessions, technical deep dives and some labs on the side. This year will be lab-focused for me.
So Technology Alliance Partners, system integrators, OEM partners and VMware service providers are all eligible to attend, and I highly recommend it. A highlight this year will be Carl Eschenbach (VMware COO), who is flying out for the event – quite exciting – as well as Raghu Raghuram, Executive Vice President of Cloud Infrastructure and Management.
There will be a networking session on the Thursday evening, as well as unofficial social-media-themed events organised by the virtual community.
See you there!
Recently, Calvin Zito, Craig Waters and I jumped on a call to discuss a range of topics around VMware – more specifically, Craig's involvement in the Melbourne VMUG and what the recent VMUG meeting in Melbourne had in store. We also discussed the upcoming VMware PEX, which Calvin will be going along to.
Unfortunately, there is also a part in which I am forced to discuss the All Blacks losing to the English – I was hoping Calvin would cut that part from the final reel! 🙂
Head over to Calvin’s site and have a listen!
Which 5 IT areas look to prosper in 2013?
Check out my latest trending article on our HPSD site, thanks to @HPStorageGuy for hosting it.
Click Which 5 IT areas look to prosper in 2013? to read!
Where is storage going this century? How does it integrate with cloud? What should CIOs be aware of in this new age? Want to find out?
My blog post on this is now live on the HP site, you can have a read at http://h30507.www3.hp.com/t5/Transforming-IT-Blog/There-and-back-again-Storage-in-the-21st-Century/ba-p/130519
Today, at HP Discover in Frankfurt, we announced a new line of HP 3PAR storage systems boasting smarter architecture, higher storage capacity and greater performance for customers.
Built for converged infrastructure, cloud and virtual environments
Without being biased, I believe the HP 3PAR architecture holds a cutting-edge story like no other. With a full-mesh architecture and the ability to scale as your virtual environment grows, 3PAR simply rocks. OK, that does make me sound like I have drunk the Kool-Aid a tad, but a lot of customers like converged infrastructure models – that's why 3PAR is in sync with our server stack and VMware technologies through management plugins.
Take a read of why I think the 3PAR architecture is a great platform for VMware environments, and also read about our VirtualSystem offering in this space. Also learn about the other HP converged storage announcements at HP Discover in Frankfurt this week.
So without further ado, let's introduce these new kids:
HP 3PAR StoreServ 7200
The new HP 3PAR StoreServ 7200 is the entry-level model in the 7000 series. It offers a 2-node architecture and up to 144 drives (SSD, SAS, NL) with a maximum raw capacity of 250TB. Perfect for small to medium-sized businesses.
On the connectivity side of things, the HP 3PAR StoreServ 7200 offers up to twelve 8Gb FC ports and up to four 10Gb iSCSI host ports, with an additional two built-in ports dedicated to HP 3PAR Remote Copy replication.
HP 3PAR StoreServ 7400
Need more grunt than the 7200? Enter the HP 3PAR StoreServ 7400. This model offers up to a 4-node architecture with up to 480 drives (SSD, SAS, NL) for simply awesome tiering capability through our Dynamic Optimisation and Adaptive Optimisation automated tiering functionality. With a maximum raw capacity of 864TB, this puppy won't see you running out of space anytime soon.
For connectivity, the HP 3PAR StoreServ 7400 offers up to twenty-four 8Gb FC ports and up to eight 10Gb iSCSI host ports, with an additional four built-in ports dedicated to HP 3PAR Remote Copy replication.
Stay tuned for SPC-1 benchmark results on this model.
What is the difference between the two 7000 series models?
Technically speaking, the number of drives supported, the maximum capacity and the port counts are the major differences, with the 7400 offering more than the 7200 on all counts.
Both support RAID levels 0, 1, 5 and 6, giving you the flexibility to choose rather than locking you into just one.
Take a look at the back of one.
The good thing to remember is that both models support our HP 3PAR software packages, which means you can get the most out of your environment.
Still the same cool architecture and guarantees you’ve known from before…
System-wide striping for greater performance – everything touches everything: full-mesh performance, write-cache mirroring, and smart I/O processing for mixed workloads.
Gen4 ASIC for hardware-based thinning – good for those eager-zeroed thick VMDKs and for removing the excess "fat" from thick LUNs.
But wait, there's more: these bad boys also support HP 3PAR Dynamic and Adaptive Optimisation – policy-based, autonomic LUN and sub-LUN tiering, meaning they will put your virtual machines on the right storage tier at the right time, resulting in a cost-effective way to store your data.
Double VM density – guaranteed!
With the unique system-wide striping and support for mixed workloads (virtual or not), we can guarantee you twice the density of VMs over legacy arrays. We have such tight support for VAAI primitives such as Atomic Test and Set that we can offer hardware-assisted locking on VMFS volumes, as well as superior performance through Adaptive Optimisation.
Learn more about HP 3PAR Get Virtual Guarantee Program
50% less capacity – guaranteed
The key is in the magic sauce – the Gen4 ASIC offers silicon-level integration with VMware VAAI primitives such as Write Same for thin conversion/detection, plus space reclamation to keep virtual environments thin!
Support for VMware stuff?
You bet! These new models have Remote Copy integration with VMware Site Recovery Manager (SRM) and Peer Persistence integration for VMware vSphere Metro Storage Cluster configurations.
HP 3PAR Management Plugin for vCenter
Given the recent announcements and developments in vSphere 5.1 (read my post on them here), the plugin will be available via the new Insight Control Storage Module for vCenter. This will be the one-stop management plug-in for all HP storage and HP servers. Calvin Zito did a really good overview of this plugin and its capabilities – be sure to take a read.
The HP Insight Control Storage Module for vCenter v7.1 is available free from the HP Software Depot – download it here.
The HP 3PAR StoreServ models are supported on the following hosts:
- Citrix XenServer
- IBM AIX
- Microsoft Windows Server, including Microsoft Hyper-V
- Oracle Linux (UEK and RHEL compatible kernels)
- Oracle Solaris
- Red Hat Enterprise Linux
- Red Hat Enterprise Virtualization
- SUSE Linux Enterprise
- VMware vSphere
Need a hand? Sure!
HP Technology Services can assist in getting you up and running, as well as with any migration services you may require. Get in touch with me or check out these pre-canned services.
HP 3PAR StoreServ 7000 Storage Installation and Startup Service:
This Care Pack service can help you plan and deploy the new HP 3PAR StoreServ 7000 into operation, with up to two hosts connected to the virtual volumes.
A final orientation session ensures you are comfortable with the new platform.
HP 3PAR Best practices whitepaper for VMware vSphere 5.x
In closing, HP 3PAR is optimally built for virtual environments. If you have a VMware farm and are looking to get more out of your environment, take a look at the official best practices document.
I am now blogging for HP.com worldwide. I will endeavour to update this blog as much as I can – running two blogs is time consuming!
Check out my first post on HP.com here – "Taming Big Data". Please comment – I like reading and responding to suggestions and feedback.
Thank you for visiting
This will no doubt be tweaked over the years to come, but I thought I would share my own personal definition of what the concept of cloud computing is today.
“Cloud computing refers to a virtual, shared IT infrastructure where resources are provisioned as required from a shared pool of compute, storage and network on a pay-per-use basis via the Internet or a WAN.”
Alternatively, Wikipedia offers the following definition (retrieved 5th August 2011 from http://en.wikipedia.org/wiki/Cloud_computing): “Cloud computing refers to the logical computational resources (data, software) accessible via a computer network (through WAN or Internet etc.), rather than from a local computer.”
My employer HP defines it as “a delivery model for technology-enabled services that provides on-demand access to an elastic pool of shared computing assets” (from Finding the right cloud solutions for your organization).
Gartner defines cloud computing as a style of computing where scalable and elastic IT-related capabilities are provided as a service to customers using Internet technologies (Retrieved August 8th 2011 from http://www.gartner.com/technology/research/cloud-computing/).
So let's dissect these and dig out the common denominator across the different definitions. The common theme boils down to a few words:
“Shared, instant, scalable and accessible”.
Behind all these definitions, you’ll see a lot of supporting detail that give a broader understanding of what the cloud really is. I think the common themes really boil down to:
- Instantly available – services that can be made immediately available
- Scalability & elasticity – both are enablers of resources becoming instantly available. Without the cloud’s scalability, the whole speed aspect of the cloud goes away.
- Shared resources – services that run within a set of shared resources – infrastructure or applications – that gain the benefit of multi-tenancy.
- Accessibility – services that are processed over the Internet to the end user.
Private Cloud – A private cloud refers to a cloud computing environment that offers services within a single enterprise organisation and its firewall, but may be hosted internally or externally to the organisation.
Public Cloud – A public cloud refers to a cloud computing environment made available to the general public over the Internet, external to the firewall of the organisation that owns the environment.
Hybrid Cloud – A hybrid cloud refers to a computing environment that combines both private and public cloud computing environments.
Agree? Please comment and share your own definitions – I'd love to read them! There are more exciting posts to follow this one, particularly around hybrid cloud and the many cloud offerings HP has in this space. But for now, we'll save that for another time.