Commvault Attends AWS re:Invent and Takes Out AWS Global Storage Partner of the Year

Commvault: AWS Global Storage Partner of the Year!

Coming in hot off the back of their own SHIFT event, Commvault has been named AWS Global Storage Partner of the Year at AWS re:Invent 2025, a recognition that puts them at the top of the field among AWS Storage Competency partners delivering large-scale data and storage solutions, on the strength of their cyber resilience and data protection solution.

The award, announced at the AWS Partner Awards Gala during AWS re:Invent 2025, highlights alliance partners that have shown strong specialisation, innovation, and close collaboration with AWS over the past year.

So why did they get it? Well, Commvault stood out for the depth (and breadth) of its capabilities across AWS environments. Its solutions span backup and restore operations to, from, and within AWS; primary storage services using file, block, and object protocols; active and long-term data archiving; and business continuity and disaster recovery. This breadth of recoverability makes it easy for organisations running all sorts of workloads across these protocols to recover quickly, from one interface that can also stretch to on-premises. But it's a bit more than just backup and recovery. As Commvault continues to reposition itself in the cyber resilience and business continuity space, awards like this are a great testament not just to their vision, but a pat on the back to the product engineering and marketing teams working behind the scenes.

One of the other cool (no pun intended) capabilities Commvault provides for AWS environments is rapid Iceberg recovery.

Rapid Iceberg Recovery – What is it?

Commvault (via Clumio) provides backup and recovery for Apache Iceberg tables that live on Amazon S3 and are managed through AWS Glue. That’s it. No magic. No vapourware.

But that matters a lot, because Iceberg on AWS ships with basically zero native protection out of the box.

Sure, Iceberg on AWS is great for analytics and AI pipelines (there's a quick sketch of what that looks like just after this list):

  • Open table format

  • Versioned metadata

  • Works well with Spark, Trino, Athena, etc.
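
To make that concrete, here's a rough sketch of what Iceberg on AWS typically looks like. This isn't Commvault code; the catalog, bucket and table names are placeholders, and it assumes the Iceberg Spark runtime and AWS bundle jars are on the classpath.

```python
from pyspark.sql import SparkSession

# A minimal sketch of Iceberg on AWS: the Glue Data Catalog acting as the
# Iceberg catalog, with data and versioned metadata files stored in S3.
# Catalog, bucket and table names are placeholders for illustration only.
spark = (
    SparkSession.builder
    .appName("iceberg-on-aws-sketch")
    .config("spark.sql.catalog.glue", "org.apache.iceberg.spark.SparkCatalog")
    .config("spark.sql.catalog.glue.catalog-impl", "org.apache.iceberg.aws.glue.GlueCatalog")
    .config("spark.sql.catalog.glue.warehouse", "s3://example-analytics-bucket/warehouse/")
    .config("spark.sql.catalog.glue.io-impl", "org.apache.iceberg.aws.s3.S3FileIO")
    .getOrCreate()
)

# Every write commits a new snapshot of versioned metadata alongside the data files.
spark.sql("CREATE NAMESPACE IF NOT EXISTS glue.sales")
spark.sql("CREATE TABLE IF NOT EXISTS glue.sales.orders (id BIGINT, amount DOUBLE) USING iceberg")
spark.sql("INSERT INTO glue.sales.orders VALUES (1, 99.95)")
spark.sql("SELECT * FROM glue.sales.orders").show()
```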

But here’s the reality:

  • Iceberg metadata can be corrupted

  • Tables can be deleted or overwritten

  • Ransomware doesn’t care that your table format is “modern”

  • Glue catalog entries get nuked more often than people admit

And when that happens? You're manually stitching metadata, replaying logs, or restoring entire buckets and hoping you didn't just break downstream pipelines, which could be catastrophic.

At the end of the day, that's not recovery. That's gambling.

What Commvault Iceberg Recovery adds

Commvault Iceberg recovery gives you:

  • Point-in-time recovery for Iceberg tables

  • Protection of both table metadata and underlying data

  • Recovery from:

    • Ransomware (Always a hot topic)

    • Accidental deletes

    • Bad writes

  • Coverage integrated with S3 + AWS Glue, not bolted on

In short…

You can roll an Iceberg table back to a known-good state without rebuilding the entire data lake. That's pretty powerful, and something that may save you a lot of time and money.
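
For context, Iceberg does have time travel and snapshot rollback built in, and the hedged sketch below (standard Iceberg Spark procedures, not Commvault's API, with a placeholder snapshot ID) shows what that looks like. The catch: native rollback only works while the old metadata and data files still exist in S3. When they've been deleted or encrypted, that's where an external point-in-time copy earns its keep.

```python
# Native Iceberg rollback, for comparison (illustrative only, not Commvault's API).
# This relies on the old snapshot's metadata and data files still being in S3;
# if they've been deleted or encrypted, you need a backup copy to fall back on.

# Inspect the table's snapshot history...
spark.sql("SELECT snapshot_id, committed_at FROM glue.sales.orders.snapshots").show()

# ...and roll the table back to a known-good snapshot (placeholder ID).
spark.sql("CALL glue.system.rollback_to_snapshot('sales.orders', 1234567890123456789)")
```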

But who is this for?

This is for:

  • Teams running Iceberg in production

  • Analytics platforms feeding AI/ML pipelines

  • Enterprises that care about RPO/RTO, not just storage cost

  • Anyone who’s already been burned once and doesn’t want a repeat

This is not for:

  • Sandbox experiments

  • “We’ll rebuild it if it breaks” cowboy teams

  • People who think backup is optional because “cloud” will fix it

Bottom line and in closing

Commvault AWS Iceberg recovery turns Iceberg from a science project into an enterprise-grade platform.

If Iceberg data matters to the business, this capability stops it from becoming tomorrow’s postmortem.

Related Posts

Vendor lock-in and how to avoid the pitfalls when protecting multi-cloud scenarios


According to a 2019 Gartner survey, 81% of organisations are using two or more public cloud providers, and whilst this survey is three years old now, it still rings true with what we are seeing in the market today.

With long-standing leader AWS losing ground to Microsoft's Azure in the market share space (according to IDC), along with the rise of the Google Cloud Platform (GCP) and Oracle's own cloud, it is a no-brainer as to why organisations are using two or more providers: because they have choice.

And with choice comes the ability to adopt a multi-cloud strategy, whether out of a desire to avoid vendor lock-in, to take advantage of best-of-breed solutions, or to chase a particular price point, which is always top of mind when controlling IT spend.

But a multi-cloud adoption strategy introduces a new challenge for anyone offering and building their own cloud services across multiple hyperscalers: how do I protect my workloads if they exist in multiple (and non-integrated) clouds?

Whilst it is apparent that cloud companies want to (and do) build stickiness with customers using their technologies, that stickiness brings both advantages and disadvantages.

Vendor lock-in and more vendor lock-in.

The more workloads you run in one cloud, the more you are buying into the stickiness that cloud companies desire. And when some data protection companies only offer protection for, or compatibility with, one or two of the major providers, many customers are left strapped for choice once they commit to a particular cloud.

Doing some background research on the portability of backups is necessary when selecting a vendor in this space. Even third-party protection vendors can end in vendor lock-in too, as most store their backup images in their own proprietary format, but at least you retain the option to shift your backups around, and to choose where you recover to.

One word of caution with recovery actions: depending on where you recover to, you may incur some unforeseen egress charges from your provider, so think twice before running any unnecessary recovery tasks.
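
To put a rough number on it, here's a quick back-of-the-envelope calculation. The rate below is illustrative only, so check your provider's current pricing before relying on it.

```python
# Back-of-the-envelope egress cost for a cross-cloud restore.
# The rate is an indicative internet egress price only; actual pricing
# varies by provider, region and volume tier.
restore_size_gb = 5 * 1024            # restoring roughly 5 TB out of the source cloud
egress_rate_per_gb = 0.09             # indicative rate in USD per GB
print(f"Estimated egress: ${restore_size_gb * egress_rate_per_gb:,.0f}")  # roughly $461
```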

Different cloud? Different rules!

Each of the major cloud providers has its own strengths and weaknesses, that much is clear, but each also has different requirements, restrictions or rules for customers operating workloads on its platform. These may be physical constraints (Microsoft Azure data centres have quite literally been running out of capacity) or logical restrictions (remember, these are shared resources).

Either way, these restrictions place different constraints on how you can protect one workload versus another in your multi-cloud data protection strategy. This may mean you cannot offer the same RPO across similarly crucial workloads.

So how do I navigate this and make things easier?

The important thing here is to find protection solutions that can work in a hybrid model (on-premises and cloud), and that deliver the same user-level experience, service and product functionality consistently regardless of which cloud they attach to.

Tying this all into one GUI is another must-have: the more data protection solutions you have in your environment, the more taxing it is for staff to stay skilled, the more capability deviation you get (resulting in different RPO/RTO for your workloads), and the more formats you have to manage should you ever want to consolidate.

Multi-cloud DP with BDRSuite

One great player in this space that provides a unified platform to manage multi-cloud VM deployments is BDRSuite by Vembu, and you've probably read some of my earlier posts on their data protection offering. BDRSuite has both on-premises and in-the-cloud deployment options, creating a true hybrid cloud management plane for your data protection environment.

And for the 81% of organisations using two or more cloud providers, I can't emphasise enough the importance of simplicity when protecting VMs across them. BDRSuite's simplification of cloud disaster recovery gives customers the same user experience, and near-identical granular recoverability, in each cloud.

As of the time of this post, BDRSuite offers backup for AWS EC2 (virtual machines), GCP (GCE) and Azure, with the ability to use S3 storage as a secondary copy if you're following the 3-2-1 backup rule (this rule is actually a good reason to use multiple clouds in your data protection strategy). And being able to restore an entire VM to other public cloud providers gives you the flexibility to bring your environment back up after a failure wherever you choose.
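
As a hedged illustration of the "extra copy" idea behind 3-2-1 (plain boto3 here, not BDRSuite's own mechanism, and the bucket and file names are made up), pushing a backup image to an S3 bucket in a separate account or region gives you that independent secondary copy:

```python
import boto3

# A minimal sketch of the 3-2-1 "extra copy": upload a backup image to an S3
# bucket in another account or region as the secondary copy. BDRSuite handles
# this inside the product; the names below are placeholders for illustration.
s3 = boto3.client("s3")
s3.upload_file(
    Filename="/backups/hr-server-20231113.bak",      # hypothetical local backup image
    Bucket="example-secondary-backup-bucket",
    Key="bdrsuite/hr-server/2023-11-13.bak",
    ExtraArgs={"StorageClass": "STANDARD_IA"},       # cheaper tier suits secondary copies
)
```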

Rounding it out

In closing, organisations that have invested in a multi-cloud data protection strategy are reaping the benefits of reduced vendor lock-in, best-of-breed solutions, and leveraging price points to achieve further cost reductions in their operations.

It all ties back to simplicity and capability in my view, and BDRSuite offers a solid one-stop platform to manage your VM backups across the three major cloud providers. Of course, there are always restrictions imposed by the cloud providers themselves, but from a capability perspective, BDRSuite makes things easy to manage and helps you get the most out of each.

Part Three: Embracing Cloud-native Data Protection with BDRSuite (Sponsored Post)

Part 3 of our series delves into the groundbreaking features of BDRSuite and spotlights its remarkable capabilities in delivering robust data protection while seamlessly integrating with major cloud platforms like Azure, AWS, and Google Cloud.

BDRSuite’s cloud-native data protection empowers customers to effortlessly scale their storage and computing resources in tandem with evolving business requirements. Whether confronted with expanding data volumes or a growing number of virtual machines, the integration with cloud platforms ensures seamless scalability and flexibility.

By leveraging cloud services such as Azure Blob Storage, Amazon S3, and Google Cloud Storage, BDRSuite provides cost-effective storage solutions. This approach optimises costs by harnessing scalable and efficient cloud storage, eliminating the need for substantial upfront investments in physical infrastructure.

Disaster Recovery Readiness across the Globe:

Cloud integration enhances disaster recovery preparedness by leveraging the infrastructure of Azure, AWS, or Google Cloud. This enables customers to establish geographically dispersed backup locations, ensuring data redundancy and swift recovery in the face of a disaster, thereby minimising downtime and data loss.

BDRSuite ensures that customers stay at the forefront of innovation by enabling them to leverage the latest technologies offered by cloud platforms. This includes advanced features for data management, security, and compliance, providing businesses with a competitive edge and future-proofing their data protection strategies.

Cloud-based solutions enable global accessibility to data, allowing BDRSuite users to securely access and manage backup data from anywhere in the world. This feature is particularly valuable for organisations with distributed teams or those operating in a global business environment.

As you may have read in an earlier post of mine, BDRSuite's scheduling feature plays a big part here too.

The integration with cloud platforms streamlines backup and recovery processes, allowing for automated workflows within BDRSuite. This, combined with the cloud’s infrastructure, enables customers to schedule regular backups, monitor data integrity, and automate recovery operations, reducing manual intervention and ensuring a reliable data protection strategy.

Compliance and Security Assurance:

Utilising cloud services for data protection with BDRSuite means inheriting the robust security and compliance features that Azure, AWS, or Google Cloud has to offer. This includes encryption, access controls, and regular security audits, providing assurance regarding data integrity and regulatory adherence.

Cloud Integration with Azure:

BDRSuite's compatibility with Microsoft Azure transforms data protection processes for organisations leveraging Azure's cloud services. The integration ensures a smooth and efficient backup and recovery process, allowing businesses to safeguard their critical data seamlessly. Leveraging Azure Blob Storage affords organisations a cost-effective and scalable solution for storing backup data, and ensures businesses can scale their storage needs according to requirements, optimising costs without compromising data integrity.

For organisations already running virtual machines in Azure, BDRSuite extends its capabilities to these Azure VMs, enabling users to perform backup and recovery operations directly within the Azure ecosystem and ensuring streamlined protection for virtualised workloads on the Azure platform.

Cloud Integration with AWS:

For organisations invested in AWS, BDRSuite Backup & Recovery for Amazon Web Services seamlessly integrates, enhancing data protection capabilities within the AWS environment.  BDRSuite utilises Amazon S3 for cloud storage, ensuring durability, availability, and scalable performance. This integration allows businesses to harness the benefits of AWS storage infrastructure while ensuring data remains accessible and protected.

BDRSuite extends its support to AWS Elastic Compute Cloud (or EC2 as most people know it), enabling efficient backup and recovery of EC2 instances and providing comprehensive protection for virtual servers running on AWS.

Google Cloud Integration:

BDRSuite seamlessly integrates with Google Cloud, offering a powerful data protection solution within the Google Cloud Platform. It integrates with Google Cloud Storage, providing a reliable and scalable option for storing backup data and allowing organisations to leverage Google Cloud's storage infrastructure for efficient data protection.

BDRSuite also extends support to Google Cloud Compute Engine, enabling users to perform backup and recovery operations for virtual machines within the GCP environment, enhancing overall data protection capabilities.

In closing:

BDRSuite's robust integration with Azure, AWS, and Google Cloud establishes it as a leading solution for cloud-native data protection. Businesses can confidently embrace the cloud, assured that their critical data is secure, accessible, and efficiently backed up with BDRSuite's feature-rich platform.

The seamless collaboration between BDRSuite and leading cloud providers ensures not only the security and accessibility of critical data but also a pathway to future-proofing data protection strategies. 

Backup naming conventions made easy with BDRSuite (Sponsored Post)

One thing I am big on is neat, tidily formatted naming conventions, and that applies to everything from CRM opportunities to backup jobs. This discipline dates back to my days as a storage architect, where I was responsible for solutioning the storage, server and backup infrastructure architecture of many enterprise environments. These days, tools like BDRSuite make it really easy to employ best practices right across the board; some are built in, but some aren't, and naming conventions are typically left up to the administrator. In that context, drawing from my experience, I'll share three best practice tips to help you create purposeful and orderly names for your backup jobs.

Tip #1 – Be Descriptive and Consistent:

Consider incorporating key information such as the source system, type of data, and frequency of the backup into your job names. For instance, a backup job named “HR_Server_Daily” provides a clear indication of the data source (HR Server) and the frequency (Daily). Consistency across all your backup job names enhances predictability and ensures that your entire backup landscape is easily understandable.

Tip #2 – Include Timestamps for Versioning:

Consider adopting a format like "JobName_YYYYMMDD_HHMM" (e.g., "FinancialData_20231113_1200"). This format provides a clear snapshot of the backup, facilitating precise identification of the version needed during the restoration process. Embracing timestamping as part of your naming convention adds a layer of precision to your backup strategy and fosters a robust versioning system. While many well-thought-out backup software interfaces such as BDRSuite embed timestamps for you anyway, think about the poor backup administrators who live in the command line.
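
And if you do live in the command line, a tiny helper makes the convention painless to stick to. This is just a quick illustration of the pattern above, not a BDRSuite feature:

```python
from datetime import datetime

def backup_job_name(source: str, frequency: str) -> str:
    """Build a name like 'HR_Server_Daily_20231113_1200' from the data source,
    backup frequency and a timestamp for versioning."""
    stamp = datetime.now().strftime("%Y%m%d_%H%M")
    return f"{source}_{frequency}_{stamp}"

print(backup_job_name("HR_Server", "Daily"))   # e.g. HR_Server_Daily_20231113_1200
```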

Tip #3 – Account for Scalability and Flexibility:

Anticipating scalability and embracing flexibility in your naming conventions is crucial for accommodating future changes seamlessly. Design your backup job names with a forward-thinking mindset, considering potential additions or modifications to your IT environment.

One effective approach is to structure your names hierarchically, reflecting the organisational structure of your IT systems. For example, “Department_Server_Daily” provides room for expansion by accommodating various departments within the organisation. Additionally, incorporating generic terms that encompass multiple systems or data types ensures that your naming convention remains adaptable as your IT infrastructure evolves.

In closing:

I have spent many years navigating the intricate terrain of IT backups, and these three best practices have proven instrumental in ensuring the resilience and reliability of our data protection strategies. Embrace these principles, and you'll find yourself better equipped to safeguard your organisation's digital assets. All of these tips (and there are many more I could share) can be implemented extremely easily with BDRSuite, and they make life a heck of a lot easier for anyone who deals with backup jobs on a daily basis.