
VDURA, a software-defined storage company that develops data infrastructure for AI and HPC (high-performance computing), has made a major catch on the executive front. The company has hired Garth Gibson, who co-invented RAID (Redundant Array of Independent Disks), pioneered parallel file systems, and co-founded Panasas, the HPC company that became VDURA. Most recently, Gibson was President and CEO of the Vector Institute. He now returns to VDURA on a unique mission – to reinvent the storage stack for AI.
Gibson’s four-decade career defined the foundations of modern data infrastructure. As a co-inventor of RAID, he set the standard for reliability at scale. Deeply interested in parallelism, he did pioneering work on parallel file systems at Carnegie Mellon University’s Parallel Data Lab and as co-founder of Panasas, enabling the scale-out architectures that power today’s HPC and AI pipelines. At the Vector Institute, Gibson helped shape the future of generative AI, advising on infrastructure that spans from research clusters to billion-dollar-scale AI facilities.
Garth Gibson is the first Chief Technology and AI Officer (CTAIO) at VDURA. So why was Gibson’s position created in the first place? Basically, it’s because AI is reshaping infrastructure needs faster than anyone expected.
“Storage can no longer just be about capacity and basic connectivity. It has to fuel GPUs for training and support inference at scale,” Gibson said. “It has to let AI truly run as an efficient factory. That requires vision at the highest level.”
The company emphasized that Gibson isn’t just any technologist: he co-invented RAID, which underpins every storage system and cloud in the world today; he is widely regarded as the father of parallel file systems, now the foundation for AI storage; and, of course, he co-founded Panasas, the company that has since evolved into VDURA. After spending the last several years leading the Vector Institute and advancing global AI infrastructure, his return is both a homecoming and a catalyst. Across four decades, he has influenced how data is stored, protected, moved, scaled, and used for HPC and now AI. VDURA said it is excited to have his legacy and leadership back at the company to help reinvent the storage stack for the AI era.
“My past work gave the world the foundations of high-performance storage,” Gibson stated. “As CTAIO, the focus now is on reinventing those foundations for the AI era. AI is creating demands that no one has fully solved yet, from feeding GPUs at scale to collapsing time to results and making inference both fast and economical. There are significant opportunities for gains in areas where others have not even tried, and that is where I will concentrate my energy.”
Some of this work builds naturally on his legacy of RAID and parallel file systems, but much of it is entirely new. This is an evolution that goes far beyond what the industry has seen before. It is about moving storage from being a passive layer to becoming an active enabler of AI at every stage of the lifecycle. While VDURA cannot share every detail today, stay tuned. The work ahead will be very different from what exists now.
“Today at VDURA I see a once in a generation opportunity: to reinvent the storage stack for AI,” Gibson stressed.
Jensen Huang of NVIDIA recently challenged the storage ecosystem to significantly evolve, stating at GTC: “Storage has to be completely reinvented. Rather than a retrieval-based storage system, it’s going to be a semantics-based retrieval system. For the very first time, your storage system will be GPU-accelerated.” With NVIDIA’s 12-month refresh cadence, Gibson believes that focused, purpose-driven innovation is needed in how we store data for AI.
When Gibson talks about reinventing the storage stack, he means evolving it for the efficiency of the AI factory.
“Training is an investment, but inference is where revenue is generated,” he said. “Both depend on keeping GPUs fully fed with data, and if storage stalls, the entire AI pipeline stalls with it.”
Gibson’s vision is to make storage operate as part of the AI supercomputer itself.
“That means massively parallel throughput that scales with compute, deterministic latency that sustains GPU utilization, failure recovery that is fast and online, and yearly adaptability that keeps pace with each new generation of accelerators and networks,” he noted. “This is not about retrofitting yesterday’s systems. It is about creating storage that accelerates time to insights and accelerates time to revenue.”
Another part of Gibson’s vision is that storage has to be adaptive.
“It can’t just hold data; it has to fuel AI in real time,” he said. “That means architectures that scale linearly, performance that doesn’t bottleneck GPUs, and durability that gives customers confidence their AI work won’t be lost. The vision is about turning storage from a constraint into an accelerator.”
Literally dozens of agentic AI platforms are now coming to the market, so how will this be different?
“We are building the data storage infrastructure that helps AI models run efficiently,” Gibson stated. “Our platform enables the compute infrastructure that powers the very models agentic AI platforms depend on. While others focus on the agent layer, we ensure the foundation is strong enough for them to perform, scale, and deliver in production.”
So how will all of this impact the VDURA channel?
“It’s a big opportunity,” Gibson said. “Customers want to work with trusted partners to help them navigate a level of change in how their IT infrastructure works that we have never seen before. There is a critical role here for the channel: to act as that trusted advisor, giving customers access to the best-of-breed innovators that will enable them to meet the challenge of today and adapt as that challenge continues to evolve. AI is moving fast, and partners want to bring real solutions to customers that drive innovation and deliver quicker time to value. We’ll give them a platform story that’s differentiated, defensible, and in high demand.”
VDURA thinks this means adding to and expanding the channel. The ecosystem around AI is massive, and storage is a critical piece. Gibson’s role is focused on making sure the technology vision aligns with operational AI factories and on driving value to enterprises.
“The channel is central at VDURA,” he emphasized. “It is how we scale our impact globally, and driving integration across technology partnerships is how we do it.”
So what does Gibson think this will all look like in six months, or a year?
“We’re already seeing momentum with key AI and HPC deployments that prove the value of our platform in real-world environments,” he indicated. “Over the next six months, that momentum will build into broader proof points on a massive scale.”
