Dell releases AI data platform advancements

Varun Chabra, Senior Vice President of ISG Product Marketing, Dell Technologies

Today, Dell Technologies is announcing Dell AI Data Platform advancements designed to help enterprises turn distributed, siloed data into faster, more reliable AI outcomes. The announcements include the deep embedding of Elasticsearch technology inside the Dell AI Data Platform, intended to power enterprise-scale, unstructured data discovery; a turnkey integration of NVIDIA cuVS, through which customers get pre-tested, validated and pre-integrated NVIDIA hardware; and the next evolution of the AI Data Platform, the introduction of an agentic layer on top of the Data Analytics Engine. Other Dell AI Data Platform advancements help customers break down data silos to unlock deeper business insights and accelerate AI outcomes. Dell PowerScale and Dell ObjectScale, the Dell AI Data Platform’s storage engines, deliver enhanced performance and scalability for demanding AI workloads. Finally, deepened collaborations with NVIDIA, Elastic and Starburst expand the capabilities of Dell’s data engines, enabling faster, real-time insights from structured and unstructured data.

“One of the most common things I hear when I talk to customers, enterprises that are in the process of moving from AI POCs to AI in production, is that data is a very critical part of making sure there’s success for AI,” said Varun Chabra, Dell Technologies’ Senior Vice President of ISG Product Marketing. “You can see this in the recent Innovation Catalyst study that we did, where 82% of respondents cited data as a differentiator for their business. But turning this data into a competitive differentiator, realizing the business value of the data through AI, is not trivial. This is what customers are finding out. Customers are telling us that their efforts to deliver ROI from their AI efforts are hampered by bottlenecks in getting access to the right data and managing rapid data growth.”

“At this point, I think it’s fair to say that almost every vendor, especially in the storage world, has announced an AI data platform strategy or approach,” said Vershank Jain, Dell Technologies’ Director of AI Data Platform Product. “But if you take a step back and look at them together, you realize that most of them are rebranding their storage platform into a data platform. Their approach starts with the storage layer, and they’re layering on technologies such as a unique file format and a proprietary set of engines for metadata extraction and enrichment, vector search, SQL queries and caching for advanced data handling. One of the biggest assumptions made in this approach is that the data can be centralized into a single location. We have realized from our discussions with a wide variety of customers that that is simply never going to be the case. If your storage platform is your data platform, it becomes the slowest part of your AI pipeline.”

The Dell AI Data Platform, a critical component of the Dell AI Factory, delivers an open, modular foundation to create value from scattered data silos. By decoupling data storage from processing, it eliminates bottlenecks and provides the flexibility needed for AI workloads like training, fine-tuning, retrieval-augmented generation (RAG) or inferencing.

“Everything that we do on the AI front really starts with the Dell AI Factory,” Chabra said. “It’s the core of our approach towards delivering AI value for enterprises. The AI Factory is our turnkey offering. And the amount of momentum the Dell AI Factory has had over the last couple of years since we announced it has been nothing short of staggering. Today, we are the world’s number one provider of AI infrastructure.”

The platform, integrated with the NVIDIA AI Data Platform reference design, is powered by four core building blocks:

  • Storage engines for smart data placement and seamless data movement
  • Data engines to turn data into actionable insights
  • Built-in cyber resiliency
  • Data management services

Together, they create a scalable, flexible foundation for customers to realize AI’s full potential.

Dell PowerScale and Dell ObjectScale, the Dell AI Data Platform’s storage engines, offer the performance, security and multi-protocol access essential for AI data. Dell PowerScale delivers NAS (network-attached storage) simplicity and parallel performance for AI workloads like training, fine-tuning, inferencing and retrieval-augmented generation (RAG) pipelines. With new integration of NVIDIA GB200 NVL72 and GB300 NVL72 and ongoing software updates, Dell PowerScale delivers reliable performance, simplified management at scale and seamless compatibility with applications and solution stacks.

Other PowerScale advancements include the PowerScale F710, which has achieved NVIDIA Cloud Partner (NCP) certification for high-performance storage and delivers 16K+ GPU scale with up to 5X less rack space, 88% fewer network switches and up to 72% lower power consumption compared to competitors. Dell ObjectScale, the industry’s highest-performing object platform, provides highly performant, scalable S3-native object storage for massive AI workloads. ObjectScale is available as an appliance or through a new software-defined option on Dell PowerEdge servers that is up to 8 times faster than previous-generation all-flash object storage. New advancements improve ObjectScale’s speed, scalability and efficiency, offering up to 230% higher throughput, 80% lower latency and 98% lower CPU usage compared to traditional S3 performance.

Dell is also expanding its data engines, the specialized tools in the Dell AI Data Platform that organize, query and activate AI data. Dell’s data engines are built in collaboration with trusted AI leaders like NVIDIA, Elastic and Starburst.

The new Data Search Engine, developed in collaboration with Elastic, speeds decision-making by allowing customers to interact with data as naturally as asking a question.

The Data Analytics Engine, developed in collaboration with Starburst, enables seamless data querying across spreadsheets, databases, cloud warehouses and lakehouses. The new Data Analytics Engine Agentic Layer transforms raw data into business-ready products in seconds, using LLMs to automate documentation, glean insights and embed AI into SQL workflows. It also unifies access to vector stores, enabling RAG and search tasks across Iceberg, Dell’s Data Search Engine, PostgreSQL + PGVector and more.
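At its core, unifying vector stores for RAG comes down to ranking stored embeddings by similarity to a query embedding, whichever backend holds them. The sketch below is purely illustrative and not Dell’s or Starburst’s implementation; the tiny in-memory store, document names and two-dimensional vectors are invented for the example.

```python
import math

def cosine(a, b):
    """Cosine similarity between two equal-length vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(y * y for y in b))
    return dot / (na * nb)

def retrieve(query_vec, store, top_k=2):
    """Return the top_k document IDs whose embeddings best match the query."""
    ranked = sorted(store, key=lambda doc: cosine(query_vec, store[doc]), reverse=True)
    return ranked[:top_k]

# Hypothetical store: document ID -> embedding (real embeddings have hundreds of dims).
store = {"invoice": [0.9, 0.1], "resume": [0.1, 0.9], "contract": [0.8, 0.3]}
print(retrieve([1.0, 0.0], store))  # → ['invoice', 'contract']
```

A production layer would swap the dictionary for a backend such as PGVector or a search engine index, but the retrieval contract stays the same: query embedding in, ranked document IDs out.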

The new MCP Server for Data Analytics Engine enables multi-agent and AI application development.

Dell AI Data Platform integration with NVIDIA cuVS delivers the next major leap in vector search performance and turnkey deployment for enterprise AI environments. The integration brings GPU-accelerated hybrid (keyword + vector) search to Data Search Engine, delivering faster, more efficient insights with full on-prem control. Powered by NVIDIA cuVS and Dell’s secure infrastructure, IT teams can enjoy a fully integrated, turnkey solution to deploy and scale GPU-powered search out of the box.
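Hybrid search means running a keyword (e.g. BM25) ranking and a vector-similarity ranking for the same query, then fusing the two result lists. One common fusion method, which Elasticsearch supports for hybrid retrieval, is reciprocal rank fusion (RRF); the sketch below is a minimal illustration of the idea, not the Data Search Engine’s actual code, and the document IDs are invented.

```python
def rrf_fuse(keyword_ranked, vector_ranked, k=60):
    """Merge two best-first ranked lists with Reciprocal Rank Fusion.

    A document's fused score is the sum of 1 / (k + rank) over every
    list it appears in, so items ranked well by both retrievers rise.
    """
    scores = {}
    for ranked in (keyword_ranked, vector_ranked):
        for rank, doc_id in enumerate(ranked, start=1):
            scores[doc_id] = scores.get(doc_id, 0.0) + 1.0 / (k + rank)
    return sorted(scores, key=scores.get, reverse=True)

keyword_hits = ["doc_a", "doc_b", "doc_c"]  # hypothetical BM25 keyword ranking
vector_hits = ["doc_b", "doc_d", "doc_a"]   # hypothetical embedding-similarity ranking
print(rrf_fuse(keyword_hits, vector_hits))  # → ['doc_b', 'doc_a', 'doc_d', 'doc_c']
```

GPU acceleration such as cuVS changes how fast the vector ranking is produced, not the fusion step itself, which is why hybrid pipelines can adopt it without changing their scoring logic.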

“Data holds the key to incredible breakthroughs, and our collaboration with Dell Technologies makes it easier than ever to unlock that potential. By fully integrating the Elasticsearch context engineering platform into the Dell AI Data Platform, we are providing a powerful engine for search and discovery,” said Ajay Nair, GM of Platform Engineering, Elastic. “This collaboration empowers organizations to accelerate everything from semantic search to complex generative AI pipelines, turning large amounts of unstructured data into critical insight.”

“What does this mean in practice?” Chabra asked. “What are we actually doing here? We’ve worked very closely with NVIDIA to deliver validated performance for Blackwell GPUs, deliver higher density per rack and simplify scale-out operations as organizations look to scale their AI workloads. So organizations can get more throughput with less overhead: fewer switches and fewer racks to manage.

“And on the software side, we have worked closely with NVIDIA so that PowerScale software integrates very neatly with NVIDIA’s AI stack, whether it’s NeMo, Triton or RAPIDS, so that data can flow efficiently from end to end without customers having to do retooling or custom plumbing for their NVIDIA infrastructure.

“Operationally, these reference designs give customers proven, pre-validated and pre-tested deployment guidance, so that organizations can stand up their clusters faster, scale predictably and maintain performance as their datasets and models grow.

“Our designs for PowerScale are really meant to help customers start small with their AI workloads and scale all the way to the top end of NVIDIA’s large-scale benchmarks,” Chabra stated. “We are able to support performance of tens of thousands, even hundreds of thousands, of GPUs if organizations need it. And the testing and the validation we’ve done is a big part of this, so that they can plan for growth without having to guess, and with the confidence that Dell and NVIDIA have worked together to streamline the use of PowerScale for NVIDIA environments.


“Our testing reveals a few interesting things. We’re actually able to support clusters of up to 16,000 GPUs. When you look at the validation results we have, we’re able to deliver the performance and scale that customers are looking for in NVIDIA environments with 80% less rack space and up to 72% lower energy use compared to competitors like Pure and VAST. Let’s go a little bit deeper into this: VAST requires almost 2X more rack space, and Pure Storage requires almost 5X more rack space, to achieve the same benchmarks compared to PowerScale. And power is obviously a really big concern for organizations as they look to adopt AI at scale; it’s increasingly becoming an important bottleneck. PowerScale has massive advantages there as well: 41% less power than VAST Data and 72% less power than Pure Storage, significantly lowering operational costs.

“Let’s move on from PowerScale to our S3-based object storage platform,” Chabra stated. “A lot of new applications for AI are being written not just with NFS or NAS access in mind, but increasingly for object-based storage or object-based data access. So our software-defined ObjectScale platform, which we announced at Dell Technologies World, is really aimed at helping customers with object-based applications or object-based data access for their workloads. The software-defined ObjectScale platform is available on the latest PowerEdge technology, and it leverages NVIDIA’s ConnectX-8 NICs as well as the latest Spectrum-4 Ethernet capabilities. ObjectScale is really built for all sizes of AI deployments, from the largest CSP AI deployments all the way down to the smaller footprints and lower costs that small enterprises or medium-sized businesses would need. We now deliver 40 gigabits per second of ingest per node, which is up to 8 times faster than our previous all-flash generation of object storage. The bottom line with the latest generation of ObjectScale is that it allows organizations to process more data, scale confidently and drive more accurate insights, while planning for a modern AI application future that is looking more and more for object-based, S3-based data access.

“Today, we’re really excited to announce a major milestone in the Dell AI Data Platform, and that is the deep embedding of the Elasticsearch technology inside the Dell AI Data Platform. This is intended to power enterprise-scale, unstructured data discovery,” Jain said.

“And as enterprises move from AI experimentation to production, they’re realizing that 80% of the data they really want to get their hands on is actually unstructured: files, documents, logs, media, spread across multiple storage systems and multiple data sources. Until now, this data has been very hard to find, hard to trust and very hard to operationalize for AI. With this integration, Dell customers can now search, explore and retrieve unstructured data using Elastic’s proven search technology directly inside the Dell AI Data Platform. It’s fully managed, secure and optimized for Dell infrastructure. At the heart of this announcement is the native Elasticsearch keyword search, as well as vector search technology, built directly into the platform. Elastic’s proven semantic technology and ranking models also deliver accurate and context-aware results out of the box. And when you pair this with Dell-optimized infrastructure and GPU-accelerated pipelines, it really means fast and intelligent data discovery from day one.

“We’re also integrating the Elasticsearch inside the Dell AI Data Platform with NVIDIA cuVS for GPU-accelerated vector search and analytics, built on top of the NVIDIA CUDA software stack. For businesses, this means faster decision-making, because it shortens the time it takes to ingest new data and keep your vector database updated. This is where Dell stands apart. With this turnkey integration of cuVS, customers get pre-tested, validated and pre-integrated NVIDIA hardware and a Dell and Elastic software stack. They get end-to-end security and support from Dell, and updates and upgrades as these technologies continue to evolve and improve over time.

“So for faster similarity search, faster index builds, higher accuracy and more of a turnkey experience, this starts to become really the best of everything: the proven search relevance that we get from Elastic and the GPU power that we get from NVIDIA, all in something that’s simple to deploy and simple to use, with Dell engineering backing the entire integration.

“The next announcement is the next evolution of the AI Data Platform, which is the introduction of an agentic layer on top of the Data Analytics Engine. Now, this is going a little bit back into the structured world. Structured data continues to hold a significant amount of value for enterprises, because a lot of high-quality, trusted insights reside inside structured data. Being able to make that data come to life more easily using AI is really what this agentic layer is built to do. This layer moves beyond search and connects data, SQL and AI tasks end to end, so that customers can go from what was traditionally a very SQL-oriented experience to a much more natural-language-oriented experience, to something that’s more governed and auditable.

“Now, at the core of this layer is the new MCP server for the Dell Data Analytics Engine. This is both an MCP server as well as an agent API.

“And this multi-agent runtime supports a model-agnostic architecture, which means customers can choose which LLM to use, and they can even switch models around very easily. It also automates one of the most tedious tasks in analytics: documentation and curation. Agents are now able to automatically generate context and usage metadata, improving the information you can provide to your end users through better-curated data products.
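A model-agnostic design usually means the agent depends only on a narrow "prompt in, text out" interface, so any LLM backend can be swapped in. The sketch below is a hypothetical illustration of that pattern applied to the auto-documentation task described above; the function and field names are invented, not the MCP server’s actual API.

```python
from typing import Callable, List

# Model-agnostic backend: any callable mapping a prompt to generated text.
# A real deployment would wrap a hosted or local LLM behind this signature.
LLMBackend = Callable[[str], str]

def document_table(table_name: str, columns: List[str], llm: LLMBackend) -> dict:
    """Auto-generate curation metadata for a data product using the given LLM."""
    prompt = (f"Write a one-line description of table '{table_name}' "
              f"with columns: {', '.join(columns)}.")
    return {"table": table_name, "columns": columns, "description": llm(prompt)}

# Swapping models is just passing a different callable; here, a stub for demo.
stub_llm: LLMBackend = lambda prompt: f"[stub summary of: {prompt[:40]}...]"
print(document_table("orders", ["order_id", "customer_id", "total"], stub_llm))
```

Because the agent never imports a specific model SDK, switching providers (or routing different tasks to different models) requires no change to the agent logic itself.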

“The other really interesting thing here is the ability to control and observe. With Elastic and cuVS, we’ve made it easy to find and accelerate a lot of unstructured data. With the agentic layer inside the analytics engine, we’ve now made it really easy for customers to act on structured data as well.”

Availability

  • Dell PowerScale NVIDIA GB200 and GB300 NVL72 integration with NCP validation is available now.

  • Dell ObjectScale S3 over RDMA will be available in Tech Preview in December 2025.
  • Dell ObjectScale software updates will be available in December 2025.
  • First release of Dell Data Analytics Engine Agentic Layer will be available in February 2026.
  • MCP Server for Dell Data Analytics Engine will be available in February 2026.
  • Data Search Engine in the Dell AI Data Platform will be available in 1H 2026.
  • NVIDIA cuVS integration in the Dell AI Data Platform will be available in 1H 2026.