ServiceNow, NVIDIA announce major partnership around generative AI

ServiceNow will use NVIDIA AI Foundations cloud services and the NVIDIA AI Enterprise software platform to develop new workflows powered by custom large language models, which are trained on a customer's own data and are therefore much more accurate than models that rely only on public-domain data.

At the ServiceNow Knowledge 2023 event in Las Vegas, ServiceNow and NVIDIA jointly announced a partnership to build generative AI capabilities across enterprise IT. The partnership will see ServiceNow use NVIDIA software, services and accelerated infrastructure to develop custom large language models trained on data specifically for its ServiceNow Platform.

“We are very excited that NVIDIA and ServiceNow are announcing a partnership to develop generative AI models,” said Rama Akkiraju, VP AI/ML for IT at NVIDIA. “ServiceNow will be building customized fine-tuned AI models using NVIDIA.”

Akkiraju stressed that custom large language models are the key to enterprise generative AI, rather than the use of public domain data as is the case with ChatGPT.

“Bringing generative AI to enterprise data requires customizing models to teach them the language of the specific enterprise, so that they will be able to answer questions accurately,” she said. “Accuracy requires answering not from public-domain knowledge, but from data on the intranet of the specific company involved. That is the purpose of building customized generative AI models.”
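The idea of answering from a company's own data rather than public-domain knowledge can be sketched with a toy retrieval step. This is an illustrative example only, not ServiceNow's or NVIDIA's implementation: a simple keyword-overlap retriever stands in for the embedding-based retrieval a production pipeline would use, and the document set and question are invented.

```python
import re

def retrieve(query: str, documents: list[str]) -> str:
    """Return the intranet document that best overlaps with the query.

    A real system would use vector embeddings; keyword overlap is enough
    to show the grounding idea.
    """
    query_terms = set(re.findall(r"\w+", query.lower()))

    def overlap(doc: str) -> int:
        return len(query_terms & set(re.findall(r"\w+", doc.lower())))

    return max(documents, key=overlap)

# Hypothetical intranet snippets standing in for enterprise data.
intranet_docs = [
    "VPN access requires the corporate certificate installed by IT.",
    "Expense reports are due on the fifth business day of each month.",
    "Password resets are handled through the self-service portal.",
]

question = "How do I reset my password?"
context = retrieve(question, intranet_docs)
# The retrieved passage is then supplied to the LLM as grounding context,
# constraining it to answer from company data rather than general knowledge.
prompt = f"Answer using only this context:\n{context}\n\nQuestion: {question}"
```

The key design point is that the model's answer is constrained by the retrieved company document, which is why accuracy improves over answering from general public-domain training data.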

Akkiraju said that the two companies will start with the IT domain, using NVIDIA foundation models. For this work, ServiceNow will use NVIDIA AI Foundations cloud services and the NVIDIA AI Enterprise software platform, which includes the NVIDIA NeMo framework.

“We will bring our two platforms together to build customized models, starting with the IT domain, so these problems can be more effectively addressed using generative AI,” she said. The resulting models will be delivered to customers through ServiceNow’s Now Platform, extending the automation of enterprise workflows that ServiceNow provides.

ServiceNow is also helping NVIDIA streamline its own IT operations with these generative AI tools, using NVIDIA data to customize NVIDIA NeMo foundation models running on hybrid-cloud infrastructure that spans NVIDIA DGX Cloud and on-premises NVIDIA DGX SuperPOD AI supercomputers.

ServiceNow and NVIDIA are exploring a number of generative AI use cases to simplify and improve productivity across the enterprise. This includes developing intelligent virtual assistants and agents to help quickly resolve a broad range of user questions and support requests with purpose-built AI chatbots that use large language models and focus on defined IT tasks.

“We are starting to share our own IT data with ServiceNow in this way, as we are a customer of theirs,” Akkiraju noted. “We will automate it and apply it to IT ticket summarization, which we estimate will save each agent up to 78 minutes.”

To simplify the user experience, enterprises can customize chatbots with proprietary data to create a central generative AI resource that stays on topic while resolving many different requests. Customer service agents will be able to prioritize cases with greater accuracy, saving time and improving outcomes. They will also be able to use generative AI for automatic issue resolution, knowledge-base article generation based on customer case summaries, and chat summarization for faster hand-off, resolution and wrap-up.

“NVIDIA NeMo provides the fastest path to custom LLMs,” Akkiraju said. NeMo includes prompt-tuning, supervised fine-tuning and knowledge-retrieval tools to help developers build, customize and deploy language models for enterprise use cases. Also included is NeMo Guardrails software, which lets developers add topical, safety and security controls to AI chatbots.
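The "topical" rail idea behind NeMo Guardrails can be illustrated with a minimal sketch. This is not the NeMo Guardrails API; it is a toy keyword heuristic showing the concept of screening user input against a chatbot's defined scope before the LLM ever sees it. The topic list and messages are invented for illustration.

```python
# Hypothetical scope definition for an IT-support chatbot; a real
# deployment would express policies in NeMo Guardrails configuration
# rather than a keyword set.
IT_TOPICS = {"password", "vpn", "laptop", "ticket", "email", "printer"}

def topical_rail(user_message: str) -> str:
    """Allow only messages that touch a defined IT task.

    In-scope messages would be handed off to the LLM-backed chatbot;
    anything else gets a polite refusal, keeping the bot on topic.
    """
    words = set(user_message.lower().split())
    if words & IT_TOPICS:
        return "IN_SCOPE"
    return "Sorry, I can only help with IT support questions."

topical_rail("My vpn connection keeps dropping")   # -> "IN_SCOPE"
topical_rail("What's a good pizza place nearby?")  # -> refusal message
```

Screening input before the model responds is what keeps a purpose-built chatbot "on topic while resolving many different requests," as described above.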