Best Practices Data Management Storage Thought-Leadership

Data Quality and Governance – why should you care?

Working in data storage in my earlier years, there was always a discussion about what type of data would be stored on the systems we architected, so that we could get a feel for the performance and capacity required to hold it. That discussion would naturally lead on to data management, the data's lifecycle, and what needs to happen to it once it expires.

In a bid to expand the conversation (and sometimes the sale), someone in the room would always ask about data quality, and what software we have to help ensure that high data quality exists throughout the complete lifecycle of the data, from the moment it is created and stored.

The outputs are only as good as the inputs, and if you feed an analytics engine or database dirty data then you can expect to fall down a rabbit hole – especially if your organisation relies heavily on this information to make key business decisions.

Take a look at the following statistics that I came across a year or so ago (I’m glad I saved this):

All of these factors can have huge implications for achieving business results, and not surprisingly over half of the errors are attributed to having no data at all! So how does one prevent making bad decisions with bad data?

It all starts with how you architect data collection points and their relevant data types – take a web form, for example. If you are reading this, then no doubt you will have had to fill in a web form multiple times in your life, whether to create an account on a new social media platform or to fill in a survey. You might have noticed that some of the fields and drop-down boxes only allow certain syntaxes and formats in a bid to ensure accurate data – for example, an email address typed without the @ sign should not be accepted.

If these measures are not in place, then the end result is a bunch of inaccurate, useless data that is likely to cause your organisation headaches (you can't email a prospect with no @ in their email address). This leads to lost sales, mistakes, and a lot of administration time spent trying to clean the data.
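To make this concrete, here is a minimal sketch in Python of the kind of check a collection point might apply before accepting a record. The field names and the deliberately simple email pattern are illustrative assumptions on my part, not tied to any particular form framework:

import re

# Deliberately simple illustrative pattern: something@something.something
EMAIL_PATTERN = re.compile(r"^[^@\s]+@[^@\s]+\.[^@\s]+$")

def validate_record(record):
    """Return a list of data quality problems found in a submitted record."""
    problems = []
    email = record.get("email", "").strip()
    if not email:
        problems.append("email is missing")            # no data at all
    elif not EMAIL_PATTERN.match(email):
        problems.append("email format is invalid")     # e.g. no @ sign
    if not record.get("country", "").strip():
        problems.append("country is missing")
    return problems

# This record would be rejected before it ever reaches the database
print(validate_record({"email": "jane.doe.example.com", "country": "Australia"}))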

My next post in this area will dive a little deeper, and provide some fundamentals on how your organisation can architect a robust data quality and management strategy.

 

Best Practices BURA Vembu vExpert Virtualization

Vembu version 3.9.0 release announcement (Sponsored Post)

Some exciting news from our friends over at Vembu with the announcement of version 3.9 of their backup and recovery software. For those who aren't aware of Vembu, their BDR Suite product provides data protection for a range of workloads, including physical, virtual, and cloud-based workloads.

It is packed with some new features which you may read about below.

 

New Features in v3.9.0:

 

Tape Backup Support:

Vembu extends its support to native tape backup through the Vembu BDR console. You can utilise the well-known 3-2-1 backup strategy of having 3 copies of backup data on 2 different media with 1 copy off-site for an efficient DR plan. This secondary backup approach will help you to archive image-based backups in both virtual and physical environments such as VMware, Hyper-V and Windows Servers, and recover them in multiple formats such as VHD, VMDK, VHDX, VMDK-Flat and RAW.

Auto Authorization at Off-site DR:

Enabling the auto authorization feature allows only authorized Vembu BDR servers to connect to Vembu OffsiteDR servers (replication) using the registration key.

Quick VM Recovery on ESXi for Hyper-V and Disk Image Backups:

Instant recovery of backed-up data on VMware ESXi is now available for both Hyper-V VM and Windows Server backup jobs for effective disaster recovery. Recovery of the virtual machines happens in a matter of seconds by booting the backed-up machine directly from the backup storage repository, from where all files, folders and applications can be accessed.

Windows Event Viewer Integration:

All events from the Vembu BDR server, OffsiteDR server and agents – including information, warning, critical and major events – are written to the Windows Event Viewer when this option is enabled, for better management.

Advanced Backup level encryption for all agents:

Users will be able to configure AES 256-bit encryption for all their VMware, Hyper-V and Disk Image backup jobs using customized passwords, with corresponding password hints as well.

OffsiteDR Server retentions using Vembu Network Backup:

OffsiteDR retention policies are now available for the Network Backup plugins too, just as they already are for the image-based VMware, Hyper-V and Disk Image backups.

Listing of files & Folders in aciTree structure:

The listing of files and folders while configuring backups on a Network Backup plugin now uses the aciTree structure for quick and easy navigation.

Pre/Post backup scripts for all Network & Image Backup clients:

This feature provides the ability to configure custom actions that run before and after the execution of a backup schedule. The custom actions may include running an application using script files (see the sketch below).
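As a purely hypothetical illustration of what such a custom action might look like (this is a Python sketch of my own, not Vembu's tooling – the marker file and the idea of quiescing an application are assumptions), a pre-backup script could be as simple as:

import datetime

def pre_backup(marker_path="backup_marker.log"):
    """Hypothetical pre-backup action: record that a backup window has opened."""
    with open(marker_path, "a") as marker:
        marker.write("backup window opened at %s\n" % datetime.datetime.now().isoformat())
    # A real script might also flush or quiesce an application here
    # so the image backup captures it in a consistent state.

if __name__ == "__main__":
    pre_backup()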

AngularJS conversion of UI for Vembu OnlineBackups:

The overall UI of Vembu OnlineBackup has been converted to AngularJS for better performance, in line with the current Vembu BDR and OffsiteDR UIs.

Update of consumed space on the Vembu Portal:

The Vembu Portal now displays the space consumed by the Vembu OnlineBackup and SaaSBackup plugins, so customers can allocate and purchase additional cloud storage after their initial purchase or upgrade.

 

Vembu 3.9 will reach general availability in the next week or so.

Best Practices Veeam vExpert Virtualization VMWare

Veeam Backup & Replication v8 for VMware: General Overview Poster

Veeam Backup and Replication v8 Poster

We have just updated and released the “Veeam Backup & Replication v8 for VMware: General Overview” poster, which covers deployment methods, the components of Veeam Backup & Replication v8 and, probably most useful, the requirements of all the different moving parts that make up our software.

 

Use it as a quick reference – hang it in the office, any shared space, the home lab, the bedroom, the toilet, whatever…

Download the poster by clicking here.

 

3PAR Best Practices Virtualization VMWare

New setting in HP 3PAR and VMware vSphere 5 Best Practices document.

A small change in the updated HP whitepaper for VMware vSphere 5 when using the Round Robin multipathing policy (which is the recommended best practice).

The old recommendation was to use IOPS=100, which has now changed to IOPS=1.

Make sure host persona 11 is selected!

Issue the following command on your ESXi host(s) that are being served storage from the 3PAR.

esxcli storage nmp satp rule add -s "VMW_SATP_ALUA" -P "VMW_PSP_RR" -O iops=1 -c "tpgs_on" -V "3PARdata" -M "VV" -e "HP3PAR Custom iSCSI/FC/FCoE ALUA Rule"

3PAR Best Practices DR

HP 3PAR Remote Copy topologies and Maximum Latencies

This came up in an internal email and I thought it might be of use to some customers. The image below depicts the maximum latency that HP supports for each flavour of HP 3PAR Remote Copy replication (remember to think about RTO). Note: this is latency, not a hard distance limit – it all comes down to the link – but as a rule of thumb fibre adds roughly 0.005 ms of latency per km. May the force be with you!

[Image: HP 3PAR Remote Copy topologies and maximum supported latencies]
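As a back-of-the-envelope illustration of that rule of thumb (a sketch only – the supported maximums in the image above are what actually matter), this short Python snippet converts a fibre distance into an approximate round-trip latency:

# Rough rule of thumb from the post: ~0.005 ms of one-way latency per km of fibre.
# Real links vary, so treat this as an estimate, not a support statement.
MS_PER_KM_ONE_WAY = 0.005

def estimated_rtt_ms(distance_km):
    """Approximate round-trip latency for a fibre link of the given length."""
    return 2 * distance_km * MS_PER_KM_ONE_WAY

# Example: a 500 km inter-site link adds roughly 5 ms of round-trip latency
print(estimated_rtt_ms(500))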

3PAR Best Practices HP Performance Storage Virtualization VMWare

Technical White Paper – HP 3PAR StoreServ Storage and VMware vSphere 5 best practices – June 2013, Rev.3 released


Quick note – the latest revision of this technical white paper has been released. Although it is a minor release, if you are running VMware vSphere 5 on a 3PAR StoreServ array, do yourself a favour and take a read! And please, if you have any feedback, contact me so I can look at including it in the next release.

You can download this whitepaper at the following link http://h20195.www2.hp.com/V2/GetPDF.aspx/4AA4-3286ENW.pdf

 

About the Technical White Paper 

When deployed together, VMware vSphere and HP 3PAR StoreServ Storage deliver a compelling virtual data center solution that increases overall resource utilization, provisioning agility, application availability and administrative efficiency, and reduces both capital and operating costs. This white paper outlines best practices on how to set up HP 3PAR StoreServ Storage with VMware vSphere 5.1, as well as how to take advantage of HP 3PAR StoreServ’s unique features such as vSphere integration, HP 3PAR Thin Provisioning technologies, Dynamic and Adaptive Optimization, and Recovery Manager for VMware to create a world-class virtualized IT and application infrastructure.

 

Best Practices HP Servers Virtualization VMWare

VMware ESXi 5.1 U1 Release and HP Custom ESXi Image

VMware released ESXi 5.1 U1 in April this year, and HP also released a custom image of this release that incorporates customised HP drivers and management software, saving you the time of downloading and installing them yourself. A good point to make is that using the customised HP ProLiant ESXi image is a step towards keeping software and driver levels consistent across your VMware environment – best practice in my opinion.

The custom image is available on the VMware site.

https://my.vmware.com/web/vmware/details?downloadGroup=HP-ESXI-5.1.0U1-GA-25APR2013&productId=285

 

HP ProLiant Server VMware Support Matrix

Not all servers are created equal! For compatible HP ProLiants, always check compatibility on the following website:

http://h71028.www7.hp.com/enterprise/cache/505363-0-0-0-121.html

Now here’s where it gets cool – you can build your own custom HP ESXi image! Just head over to http://vibsdepot.hp.com/, which allows you to add compatible HP bundles via VMware Image Builder, Update Manager, ESXCLI, etc.

 

UPDATE

For those who have upgraded to 5.1 U1 (Custom and Non-custom image) please be aware of the following:

http://kb.vmware.com/kb/2050941 – Cannot log in to vCenter Server using the domain username/password credentials via the vSphere Web Client/vSphere Client after upgrading to vCenter Server 5.1 Update 1 (2050941)

 

 

 

Best Practices Data Migration HP Storage

Sometimes Data migration is inevitable. High costs and increased business risks don’t have to be.

Every major initiative for optimizing data center performance, decreasing TCO, increasing ROI, or maximizing productivity – including consolidation, virtualization, clouds, server upgrades, tiered storage, data analytics and BI tools – involves storage data migration.

 

Data has an incalculable value, and its loss can have a significant impact. As Frost & Sullivan says in a recent Executive Brief, “one would expect that storage data migrations should be approached with the same attention a museum lavishes on a traveling Rembrandt exhibit.” To expand on this, in 2012 an estimated $8 billion was spent worldwide on data migration services.

 

A research white paper published in December 2011 entitled “Data Migration – 2011” by Philip Howard of Bloor Research shows the average cost of a data migration project is $875,000, so extrapolating the value and criticality of these types of projects should be fairly straightforward. Overrunning the project budget, or rolling back a failed migration due to lack of planning, are normal occurrences – in fact the same study proposes that the average cost of a project overrunning its budget is $268,000, approximately 30% of the average cost of a data migration project.

 

Between 1999 and 2007, 84% of data migrations went over budget and over time; this is astronomical and costly – and it can get very tricky trying to pinpoint just why a data migration project went over budget and over time. More often than not, it comes down to a lack of experience and planning (and I do believe that experience and planning should come in the same sentence).

 

And there are potentially serious risks involved. Recent studies show that migration projects nearly always have unwanted surprises: 34% of migrations have data missed or lost, and a further 38% have some form of data corruption.

 

And probably the biggest risk associated with migrations is that 64% of migration projects experience unexpected outages or downtime. Now, tie this back to a research paper put forward by Vision Solutions in 2011, which shows that the typical cost of downtime can reach nearly $6.5 million per hour for some in the brokerage services industry, and up to $2.8 million per hour for those in the energy industry. To really understand this and put it in context, let’s have a look at some of the reasons why we migrate.

 

Why do we migrate data?

The migration of data isn’t typically something an IT manager or CIO does for fun; at the end of the day it will cost money and time. Ageing infrastructure, or the need for a particular technology feature that isn’t available on the current infrastructure, are just a couple of the reasons why people migrate. In my experience, it’s all of the above. CIOs are constantly (or should be) looking at new and innovative ways to reduce footprint and drive down environmental costs, such as data centre space and power, as well as to take advantage of newer and greater technological advancements within a given product set. Newer infrastructure product releases seldom take a step back when it comes to form factor and power draw.

 

So do customers who perform migrations achieve their overall goals? Not exactly… As I mentioned above, those undertaking DIY migrations typically run into surprises, which result in a heavier investment in staff to try and remediate those surprises, and subsequently a project budget that is exceeded. Yes, 54% of the time a project budget is overrun due to these challenges, but I’m not here to throw stats at you – I’m here to raise awareness that, if not properly planned and executed, your data migration project (as big or small as it may be) will run into at least one of these surprises.

 

HP Data Migration Services can help you address those challenges and risks. Each data migration project has a storage and data migration project manager assigned to make sure everything goes smoothly. We understand that storage infrastructures are typically multivendor, which is why our service is vendor-agnostic. We work to keep costs down and help you avoid the common pitfalls and risks of data migration.

To learn more about the new HP Data Migration Service, check out this online presentation. You’ll learn about the typical project flow and your migration technology options. Data migration is not usually just a simple copy-and-paste exercise.

 

Read more about HP Storage Migration Consulting.

 

You can learn more about ways to ease the pain of data migration at HP Discover 2013.

3PAR Best Practices HP Storage VMWare

HP 3PAR StoreServ Storage and VMware vSphere 5 best practices whitepaper

 

When supported with the correct underlying storage platform, server virtualization delivers greater consolidation, administrative efficiency, business continuity and cost savings. As a result, server virtualization is not only transforming the data center, but also the businesses that those data centers fuel. However, these transformative results depend on enterprise class storage to deliver the performance, availability, and flexibility to keep up with the dynamic and consolidated nature of virtualized server environments.

HP 3PAR StoreServ Storage is the next generation of federated Tier 1 storage and was built from the ground up to exceed the economic and operational requirements of virtual data centers and cloud computing environments by providing the SAN performance, scalability, availability and simplified management that clients need. It does this through an innovative system architecture that offers storage federation, secure multi-tenancy, built-in thin processing capabilities, and autonomic management and storage tiering features that are unique in the industry.

When deployed together, VMware vSphere and HP 3PAR StoreServ Storage deliver a compelling virtual data center solution that increases overall resource utilization, provisioning agility, application availability, administrative efficiency, and reduces both capital and operating costs.

 

Download here

 

Feedback welcome