We were promised that the cloud would simplify our lives when it came to handling files. But if you’re an IT person dealing with more than one provider – say AWS, Azure, and Google Cloud – you have probably felt the pain. After years in the space, I can honestly say multi-cloud file management feels broken. Instead of a single well-thought-out system, we have to juggle a hodgepodge of tools and interfaces for each platform.
In this article, we’ll delve into why cloud file management is in such a disaggregated state and how we can reclaim a bit of sanity (with help from the right tools).
The Fragmentation of Cloud Storage
Each major cloud has its own way of doing things: AWS has S3 buckets, Azure has Blob Storage containers, and GCP has Cloud Storage buckets. On paper, these services are equivalent – they all let you store objects in the cloud. In reality, each has its own web console, API, and quirky features.
For an IT pro, that means learning three different interfaces and rule sets – and in the end, there’s no single “dashboard” where you can see all your cloud files in one place.
Managing files in AWS S3, for example, involves a totally different interface and process from the other providers, with no uniformity between them. And if you want to copy files from one cloud to another? Good luck: there’s no built-in, one-click way to do that.
You’ll likely end up popping into AWS to download files, just to upload them to Azure. Cloud companies aren’t exactly incentivized to make it easy to move data off their platforms. The consequence is that multi-cloud architectures often become data silos, with one silo for each cloud provider.
Eventually, this leads to a fractured file structure, duplicate data, and a ton of human error as we manually shuffle files around. Even beyond the interfaces, basic concepts differ: access permissions, retention policies, and naming conventions all behave differently on each platform.
The AWS S3 console presents objects with folder-like prefixes, while the Azure portal displays a directory tree for blobs. These subtle distinctions catch even experienced engineers out. I’ve watched teammates assume a file was gone for good in Azure because they didn’t realize “soft delete” was enabled – which, by the way, works nothing like AWS versioning. Multiply such idiosyncrasies across clouds, and it’s a fantastic way to create confusion.
War Story: S3-to-Azure Migration Gone Wrong
Allow me to share a real-life cloud migration nightmare that demonstrated just how broken multi-cloud file management can be. The challenge: we had a big dataset – tens of terabytes – in an Amazon S3 bucket, and we wanted to move it to Azure Blob Storage. On paper, the strategy seemed simple enough: copy the data over. In truth, it was more like a comedy of errors (though there was nothing funny about it at the time).
At first, we attempted the naive approach: we downloaded files from S3 onto a local server using AWS’s CLI and uploaded them to Blob Storage using Azure’s CLI. This was agonizingly slow and error-prone. Halfway through the transfer (after days of copying), a network hiccup killed it. We hadn’t built in a good restart mechanism, so we were effectively back to square one, with half the data still stranded on AWS and a deadline approaching.
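For the curious, that two-hop approach looks roughly like the sketch below. The bucket, container, and account names are hypothetical; the commands themselves are standard AWS and Azure CLI. Note that `aws s3 sync` is at least idempotent – re-running it after a failure only fetches what’s missing, which is the restart mechanism we were missing at the time.

```shell
# Hop 1: pull from S3 to a local staging directory.
# Re-running `sync` resumes where it left off instead of starting over.
aws s3 sync s3://source-bucket ./staging

# Hop 2: push the staged files into an Azure Blob container.
# (For resumable uploads, `azcopy sync` is the better fit on the Azure side.)
az storage blob upload-batch \
  --account-name mystorageacct \
  --destination mycontainer \
  --source ./staging
```

Even done carefully, this still routes every byte through your own server and network, which is exactly the bottleneck we ran into.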
Then we tried to optimize by using an external graphical tool – CloudBerry Explorer (now MSP360 Explorer) was one of them. It’s a file manager that can connect to both AWS and Azure accounts, meaning that, in theory, we could drag-and-drop between the two in the same interface. CloudBerry certainly made it easier to kick off a big copy job from S3 to Azure – we could view both accounts side by side.
But “easier” didn’t mean “easy” – the transfer still choked on large files, and our network bottleneck didn’t go away. We found out the hard way that copying 10+ TB over the internet isn’t exactly straightforward, GUI or not.
Ultimately, that migration project ran over time and budget, and we ended up pivoting completely, bringing in a more robust solution (AWS DataSync with an Azure private link) just to get it done. But the damage was done – the false starts cost us time, money, and a bit of pride that was hard to get back. The moral was plain: without proper cross-cloud file management tools at their disposal, even battle-hardened IT teams can get their cloud feet tangled.
We had to chain together tools and scripts to achieve something the cloud platforms should have made simple. It was like trying to patch a busted pipe with duct tape.
Why Cloud File Management Sucks
What's so broken about multi-cloud file management, anyway? In my experience, it comes down to a few things:
- Siloed Ecosystems: Every provider optimizes only for its own ecosystem. AWS offers S3 tools, Azure has its Storage Explorer – but there’s scant consideration for working across the boundaries.
- Lack of Standardization: No common protocol exists across cloud file systems. Metadata, access control models, and even terminology (is it “download” or “egress”?) vary by platform.
- Bandwidth and Costs: Transferring massive files to and from clouds requires internet bandwidth and comes with large egress fees. If a transfer gets interrupted halfway, you may end up paying twice to (re-)transfer the data.
- Few Transfer Tools: Cloud providers do offer migration utilities (AWS DataSync, Azure Storage Mover, etc.), but they can be heavyweight, complex to configure, and not always a good fit for one-off file transfers. The average sysadmin ends up downloading files manually or writing scripts.
- Human Error: With so many logins and UIs, it is easy to upload to the wrong bucket or overwrite the wrong data. Often there is no trash bin or undo button – a mistake in one cloud stays made and can have catastrophic effects.
Add them all up and you have a perfect storm of frustration. It’s not that cloud storage itself is broken – it’s that using it across competing providers is like playing with puzzle pieces that don’t fit together. It reminds me of the early days of networking, before everything ran on TCP/IP, when you had to be a guru to get anything to talk to anything else. Multi-cloud file management is in its Wild West phase right now.
Multi-Cloud File Management Tools and Tips
It’s not all doom and gloom. The cloud platforms themselves don’t make it simple to manage files across AWS, Azure, GCP, and others, but the IT community has found workarounds. Here are a few strategies and tools I’ve learned – mostly through trial and error – that can help tame the chaos:
- Use a Cloud-Agnostic File Manager: Tools like CloudBerry Explorer let you interact with multiple cloud storage services through one interface. For instance, you can open an AWS S3 bucket on one side, an Azure Blob container on the other, and transfer files back and forth. This doesn’t miraculously cure limited bandwidth, but it does save you from juggling three web consoles. (Other versatile tools include Cyberduck and Mountain Duck for macOS, or web-based offerings like MultCloud – though with the latter, be mindful of the security implications of handing a third party access to your storage.)
- Harness Command-Line Power: If you don’t shy away from the CLI, utilities such as rclone are a blessing. Rclone is a free command-line program for syncing files and directories to and from dozens of cloud services. With a single command, you can copy or move data between any two storage providers, without manually downloading and re-uploading it yourself. It supports retries, bandwidth throttling, and encryption. The learning curve is a bit steep, but it’s scriptable and very powerful for big projects.
- Built-In Migration Services: For large one-time migrations, consider the cloud vendors’ own offerings. (AWS DataSync, Azure Storage Mover, and Google’s Storage Transfer Service are built to move data at scale.) In our S3-to-Azure story, it was DataSync that ended up saving the day, carrying the transfer from start to finish with a resiliency and speed our manual approaches couldn’t match. The catch is that these services can be tricky to set up (and potentially costly), so they’re best for planned projects, not everyday copying.
- Be Mindful of Egress Costs & Limitations: Don't forget, pulling data out of one cloud (egress) usually isn't free. Do the math before transferring a multi-terabyte dataset – sometimes it's cheaper to use an offline transfer appliance (such as AWS Snowball) if that's an option for you. Also, make sure your network can support the transfer. The bottleneck isn’t always the clouds – it might be your office internet.
- Consistent Naming and an Index: This one is more of a process tip: keep track of where files live across clouds. Using the same bucket/container names or folder prefixes on each platform helps. Some teams maintain a simple index or spreadsheet of data locations across AWS/Azure/GCP so nothing gets lost. It’s not sexy high tech, but it sure simplifies matters when you have to track down “Project X files” a year down the road and can’t recall which cloud you dumped them in.
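On the “do the math” point, the arithmetic is simple enough to script. This is a back-of-the-envelope sketch; the $0.09/GB rate is purely an assumption for illustration – check your provider’s current pricing and volume tiers before trusting any number it produces.

```shell
# Rough egress cost estimate for moving a dataset out of one cloud.
SIZE_TB=10
RATE_CENTS_PER_GB=9                        # assumed rate: ~$0.09 per GB
SIZE_GB=$((SIZE_TB * 1024))                # 10 TB = 10240 GB
COST_USD=$((SIZE_GB * RATE_CENTS_PER_GB / 100))
echo "Estimated egress: \$${COST_USD} for ${SIZE_TB} TB"
```

Roughly $900 just to read your own data back out – which is why appliances like Snowball, or doing the transfer once and correctly, matter at this scale.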
Multi-cloud file managers like these let IT pros handle files across different cloud services without ever leaving one screen, so you can stop jumping between the AWS, Azure, and other consoles whenever you need to move data. No single tool or tip is a silver bullet for the multi-cloud file management problem, but combining these suggestions can make life bearable.
For day-to-day chores, I frequently keep CloudBerry Explorer running on my desktop – it’s the Windows Explorer of the cloud. And when you do need automation, a good rclone script will always be a more reliable option than a drag-and-drop. The trick is to minimize context-switching and let the tools take on the real work wherever feasible.
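When you do reach for rclone, a minimal cross-cloud copy looks something like this. The remote names (`s3`, `azureblob`) and the bucket/container names are assumptions – you’d define your own remotes first with `rclone config`.

```shell
# Copy straight from an S3 bucket to an Azure Blob container.
# Data streams through the machine running rclone without staging to disk.
rclone copy s3:source-bucket azureblob:dest-container \
  --transfers 16 --checkers 32 --retries 5 --bwlimit 50M --progress
# Re-running the same command resumes the job: files already present
# at the destination (matching size/modtime) are simply skipped.
```

That resumability is exactly what our naive download-then-upload approach lacked, and it is why a scripted rclone job beats drag-and-drop for anything big.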
Toward a Unified Cloud Experience
Until the day all cloud providers magically decide to adopt a common storage interface (I’m not holding my breath), we must be pragmatic. The state of multi-cloud file management may feel broken, but we can work around the cracks with a good, old-fashioned toolkit of solutions.
We’ve gained an amazing amount of flexibility in the cloud era – the power to pick the right service for each job – but we’ve inherited the headache of too many isolated environments, too. As an IT pro, the better you know the tools available to you and the more you plan for managing your data, the better off you’ll be.
If you anticipate working in more than one cloud, take the time to put a decent file-management system in place early. Maybe that means evaluating a third-party file manager, learning a bit of rclone syntax, or writing a few sanity checks around your file-move scripts. Cloud file management doesn’t have to be a horror show every time.
With a little forethought and the right helpers, the mess becomes a manageable workflow – you end up building the seamless experience the platforms themselves don’t provide. And who knows – perhaps in a few years we’ll look back on these days the way we look back on dial-up internet, wondering how we ever tolerated something so slow and clunky (and yet so quaint) for moving files from here to there.
Until then, keep those scripts at the ready and your backups close at hand – because the multi-cloud is an imperfect world, and it’s the one we live in.