ZeroStack plans to roll out more services in the coming year, and has indicated some likely candidates, as part of its strategy of giving MSP partners more services they can sell on the scale-out private clouds it enables customers to create.
Mountain View, Calif.-based ZeroStack, which makes a self-driving cloud platform that simplifies the installation and operation of a scale-out private cloud, continues to deepen its strategy of providing additional services that its partners can sell. Earlier in the fall, the company introduced Big Data-as-a-Service, GPU-as-a-Service, DevOps-as-a-Service and Inter-Cloud VPN-as-a-Service. Now it is announcing its latest: AI-as-a-Service.
The new services are the product of ZeroStack’s new channel program, introduced this year, which emphasizes creating more service opportunities for partners.
“I see our services strategy as very similar to a vending machine,” said Steve Garrison, ZeroStack’s vice president of marketing and business development. “Not every one of these new services will be attractive to every one of our MSP partners. However, we want to be the service template vending machine for MSPs. They can pick and choose from among the services that fit their model and that they believe will appeal to their customer base. Partners whose customers see a need for these kinds of services should be able to compete effectively against AWS, because of their ability to provide custom capabilities and other white-glove services that a large provider can’t deliver.”
The new AI service provides access to a set of deep learning frameworks, including TensorFlow, Caffe, PyTorch, and MXNet.
“These are all just different AI algorithms, with the right one for each specific use case defined by what it is that you want to do,” Garrison said. “For the user, the challenge has been how to manage their operational use, because they are complex and can be difficult to work with.” These challenges include deploying, configuring, and running the tools, as well as managing their interdependencies, versioning, and compatibility with servers and GPUs.
“This service dumbs down their use for the customer, allowing them to use the apps while getting the ease of use of the public cloud experience with single-click deployment,” Garrison added. This includes taking care of all the OS and CUDA library dependencies, so that users can focus on AI development. Users can also enable GPU acceleration with dedicated access to multiple GPU resources, for an order-of-magnitude reduction in inference latency and faster user responsiveness. GPUs within hosts can also be shared across users in a multi-tenant manner. Cloud admins can automatically detect GPUs and make them available for users to run their AI applications. They can then configure and scale GPU resources, and apply fine-grained access control for end users.
“Many regional MSPs can also benefit by having access to such a toolkit – so we effectively automate it for them as well,” Garrison said. “Our AI-as-a-Service automates the provisioning of AI tool sets and GPU resources for DevOps organizations – with either the MSP performing this service for customers, or simply handing over a tenanted service so the customer can play with it on their own. It really depends on whether or not the MSP wants and is able to offer professional services. We don’t really have a religion here, and make it possible for them to take either approach.”
Garrison said that many of these services are still in the early adopter phase across the industry, but that it still makes sense for ZeroStack to continue to enable partners to offer them.
“We definitely have a slew of new ones coming out,” he said. “We have been validated by Red Hat to run OpenShift, and don’t be surprised to see us offer IoT-as-a-Service as well. We are all hearing the same things from our customers.”