Pure announces the FlashArray//C for Tier 2 storage, and DirectMemory Cache to cut latency and accelerate applications on their FlashArray//X.
AUSTIN – Pure Storage is expanding the capabilities of their FlashArray portfolio. On one end of the spectrum, they are reaching down into QLC media and Tier 2 storage with their new FlashArray//C. At the other end, they have announced DirectMemory Cache, which accelerates their FlashArray//X models. Both sets of announcements were made Tuesday at their Accelerate customer event here.
“When Pure Storage first launched, our marketing slogan was ‘a 10x improvement,’” said Matt Kixmoeller, Pure Storage’s VP of Strategy. “That was from flash, and while the 10x in simplicity was as important as the performance, or more so, it was the performance that let us break into the market.
“Now we are on the cusp of another 10x improvement, and part of that is the proliferation of solid state memory,” Kixmoeller continued. “At the top of the market, you have Storage Class Memory from Intel with DRAM-like performance, and at the bottom you have QLC flash emerging to go after new workloads. We are excited about both of these evolutions.”
Pure’s FlashArray//C, a brand new offering, is aimed at the lower end of the market, at Tier 2 data rather than Tier 1, mission-critical data. It is optimized for next-generation QLC media.
“While the X in FlashArray//X stands for really frigging fast, the C in FlashArray//C stands for Capacity-optimized, for Consolidation, and for finally Crushing the SATA disk array,” Kixmoeller told his keynote audience. “There are some performance tradeoffs – but it’s still all-flash fast, especially compared to the SATA arrays it is designed to replace.”
While Pure’s attention has been focused on the challenges faced by customers around their mission-critical Tier 1 storage, Tier 2 storage has significant challenges of its own.
“Tier 2 data tends to be even more complex,” Kixmoeller said. “It is often at the departmental level, it is not as well managed, and it is variable in performance. So Tier 2 data is a significant source of pain.”
FlashArray//C comes in three sizes, all of them large: 366 TB raw, 878 TB raw, and 1.4 PB raw, or 4.2 PB of effective capacity.
The same Purity software powers both the //C and the higher-end //X models, enabling customers to link their Tier 1 and Tier 2 arrays so workloads can easily be moved between them.
“We expect to see use cases where customers leverage FlashArray//X and FlashArray//C together,” Kixmoeller said. “One is tiering of VMs. Many customers run large VM farms, so they can do policy-based tiering and let VMware auto-migrate them. Another use case would be Disaster Recovery for Tier 2 apps. They couldn’t typically fit economically on the premium array, but using FlashArray//C as a DR target allows that to happen. Snapshot consolidation onto FlashArray//C for long-term retention would be another use case.”
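For illustration, the snapshot-consolidation workflow Kixmoeller describes could be scripted against the Purity REST API. The sketch below assumes Pure’s purestorage Python client; the hostname, API token, volume name and snapshot suffix are placeholders for this example, not anything Pure has published.

    # Minimal sketch: snapshot a Tier 1 volume on a FlashArray//X so the snapshot
    # can be consolidated onto a FlashArray//C retention or DR target.
    # Assumes the purestorage REST client (pip install purestorage); all names,
    # hostnames and tokens below are illustrative placeholders.
    import purestorage

    # Connect to the Tier 1 (//X) array.
    tier1 = purestorage.FlashArray("flasharray-x.example.com", api_token="API-TOKEN")

    # Take a point-in-time snapshot of a production volume; replicating that
    # snapshot to the //C array would be handled by an asynchronous protection
    # group configured in Purity.
    snap = tier1.create_snapshot("prod-db-vol", suffix="nightly")
    print("Created snapshot:", snap["name"])

    tier1.invalidate_cookie()  # close the REST session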
While customers often use a less expensive system to do things that should really be handled by the more expensive one, Kixmoeller doesn’t think that’s likely to happen here.
“There’s a 10x latency difference compared to FlashArray//X, so we don’t see the use cases overlapping, with FlashArray//C cannibalizing FlashArray//X for mission-critical applications,” he said.
The channel opportunity with FlashArray//C is enormous, Kixmoeller stated.
“This is very much a mainstream product,” he said. “This will allow partners to architect a complete data centre with all-flash. It’s ideal for partners who like to focus on new product innovation.”
Pure’s announcement for the high end is DirectMemory Cache. Pure Storage DirectMemory Modules plug directly into the FlashArray//X70 and //X90 and, powered by DirectMemory Cache software and Intel Optane storage class memory, speed up OLTP and OLAP results instantly.
“DirectMemory Cache allows up to a 2x acceleration of applications, and we deliver it in an Evergreen way,” Kixmoeller said. “We analyzed our installed base globally, and found that 80 per cent of arrays can achieve a 20 per cent improvement in latency. That pays for itself. But 40 per cent of arrays can actually get 30-50 per cent lower latency. That’s very compelling for high-end users.” It also delivers up to 25 per cent lower CPU utilization.
Another advantage of DirectMemory is that it makes the server tier more efficient.
“You need fewer servers, less DRAM in them, and possibly fewer licenses as well,” Kixmoeller said. “With SAP HANA, we can achieve up to 90 per cent of HANA in memory at 65 per cent lower cost. That’s a dramatic savings, at a cost of 10 per cent in performance, by sharing memory across all nodes.”
Kixmoeller also stressed that DirectMemory Cache is faithful to Pure’s core tenet of easy upgrading.
“DirectMemory brings cost-effective read caching to FlashArray//X – and we will deliver this in typical Evergreen fashion,” he said. “It just slides right in next to your existing X70s and X90s. You just plug it in and go – faster.”
All of these products are now in general availability.