Storage is an interesting beast. It is often treated as a problem that can be solved by simply adding capacity, as if more terabytes will make every issue disappear. But capacity is rarely the real problem, and throwing money and hardware at it will never resolve the deeper underlying issues. In most cases, the workflow is the root cause, not where the files are stored.

Simply put, end users should not be focused on storage itself; they should be focused on solving problems within their workflows, where storage is one important component.

For instance, one of the greatest challenges media professionals face is file duplication. When you are dealing with terabytes of data at a time, the idea that a storage arms race will somehow fix workflow issues gets no one anywhere other than a bigger server room.
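To make the duplication problem concrete, here is a minimal sketch of how duplicate media files can be detected by content rather than by name. It assumes a hypothetical project directory and uses chunked SHA-256 hashing so large media files are never loaded into memory; it is an illustration of the problem, not a substitute for fixing the workflow that creates the copies.

```python
import hashlib
from collections import defaultdict
from pathlib import Path


def sha256_of(path: Path, chunk_size: int = 1 << 20) -> str:
    """Hash a file in 1 MiB chunks so multi-gigabyte media files stay out of RAM."""
    h = hashlib.sha256()
    with path.open("rb") as f:
        for chunk in iter(lambda: f.read(chunk_size), b""):
            h.update(chunk)
    return h.hexdigest()


def find_duplicates(root: Path) -> dict[str, list[Path]]:
    """Group every file under `root` by content hash.

    Any group with more than one path is a set of byte-identical duplicates,
    regardless of what the files are named or where they live in the tree.
    """
    groups: defaultdict[str, list[Path]] = defaultdict(list)
    for p in root.rglob("*"):
        if p.is_file():
            groups[sha256_of(p)].append(p)
    return {digest: paths for digest, paths in groups.items() if len(paths) > 1}
```

Run against a shared project folder, this surfaces every set of identical copies that editors have scattered across the tree; what you do about them is a workflow decision, not a storage one.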

It’s here that the workflow must be addressed. We all know that shared storage is essential for every workgroup. Throw in the events of early 2020 and that need is now mission critical. But the question becomes: how do you provide file accessibility across an entire network ecosystem? More to the point, how do you build that ecosystem to enable simultaneous access by multiple users without duplicating files or losing version control?

The right approach is to look at file storage in a very different way. It’s not just about file size; it’s about enabling simultaneous access by multiple users without forcing them to duplicate files to their own computers. Fix that, and the storage issue fixes itself.
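One way shared-storage systems avoid the copy-to-edit pattern is with advisory locking: many editors can read the same source file in place at once, while writes are serialized. The sketch below assumes a POSIX system and uses `fcntl.flock`; it is a simplified illustration of the idea, not how any particular shared-storage product implements it.

```python
import fcntl
from contextlib import contextmanager


@contextmanager
def open_shared(path, write=False):
    """Open a file on shared storage with an advisory lock.

    Readers take a shared lock (LOCK_SH), so any number of them can work
    from the same source file simultaneously. A writer takes an exclusive
    lock (LOCK_EX) and blocks until all readers release theirs.
    """
    f = open(path, "r+b" if write else "rb")
    try:
        fcntl.flock(f, fcntl.LOCK_EX if write else fcntl.LOCK_SH)
        yield f
    finally:
        fcntl.flock(f, fcntl.LOCK_UN)
        f.close()
```

The point of the sketch is the access model: nobody copies the file to a local drive to work on it, so there is nothing to duplicate and nothing to reconcile later.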

Okay, so now that we have identified the solution, how do we get there? The first thing to tackle is the way the infrastructure is built and configured: deploying it so that it fosters a collaborative post-production environment that goes beyond shared storage. That is what ultimately lets large groups of editors work from the same source materials in nonlinear editing environments.

But there is more to the solution than just the right-here-right-now. It’s one thing to create an environment that fosters better workflow; it’s another to ensure that same environment will still work further down the road. And though I know the words “future-proof” get thrown around a lot, perhaps that is for good reason.

All too often I see people throwing money at problems for immediate fixes, never calculating the risks that fix carries in future scenarios. It’s this exact mindset that gets people into precarious positions when it comes to their media infrastructure.

For instance, with growing file sizes, increased video resolution, and high-quality visual effects demanding more storage capacity and greater network and server performance, post-production environments need a high degree of parallel access between workgroups, as well as larger storage capacities to handle higher volumes of recorded media.

These are the situations that lead to challenges ranging from uptime and availability, to planning for infrastructure growth and capacity, to adapting to ever-higher file resolutions, to data protection and archiving, and of course to migrating data onto new technologies.

So, what’s the real course of action here? If the problem isn’t storage but rather the architecture and design of the infrastructure, which must be purpose-built to address all of these factors, the question becomes: who actually solves the problem?

It’s one thing to bolt in a new storage device; it’s another thing entirely to turn yourself into a media workflow architect while trying to do your day job. The best advice here is to speak with professionals, like us, who can solve the issue once and for all. Media file sizes are not shrinking, nor is the impact of new technology and digital transformation initiatives. Preparing now for constant, unyielding evolution is the only defense. That, or build a datacenter the size of a football field, though even that will eventually be outgrown.