SAN DIEGO — At this week’s Cisco Live event here, Cisco showed off the latest developments in its collaboration product line, most notably a new machine learning initiative that aims to put more information about meeting attendees into users’ hands while a session is still underway.
Dubbed “cognitive collaboration,” the idea is to automatically surface information about the people in a meeting, such as the companies they work for, the user’s past interactions with other attendees, and the contacts they have in common, presented on-screen alongside the meeting itself.
The technology is based on the work of Accompany, the company from which Amy Chang, senior vice president and general manager of collaboration at Cisco, came to the networking giant, and Cisco says it has the connections in place for some 300 million people worldwide today. The core of the system is a data ingestion engine that continuously crawls publicly available information about people and companies, looking for context and connections. Chang says that, at this point, it is about 97 percent accurate at attaching contextual information to the right person, which she described as the minimum level required for users to have faith in the technology.
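Cisco hasn’t published how that matching works under the hood. As a rough, hypothetical illustration of the general idea, the sketch below scores whether a crawled public record refers to a known meeting attendee using simple string similarity and a confidence threshold; the field names, weights, and threshold are assumptions, not Cisco’s implementation.

```python
from difflib import SequenceMatcher

def similarity(a: str, b: str) -> float:
    """Rough string similarity in [0, 1]."""
    return SequenceMatcher(None, a.lower(), b.lower()).ratio()

def match_score(attendee: dict, crawled: dict) -> float:
    """Weighted guess at whether a crawled public record is this attendee."""
    return (
        0.5 * similarity(attendee["name"], crawled.get("name", ""))
        + 0.3 * similarity(attendee["company"], crawled.get("company", ""))
        + 0.2 * similarity(attendee.get("title", ""), crawled.get("title", ""))
    )

# Hypothetical records: a meeting-roster entry and a crawled public profile.
attendee = {"name": "Jane Doe", "company": "Example Corp", "title": "VP of Sales"}
crawled = {"name": "Jane A. Doe", "company": "Example Corporation", "title": "VP of Sales"}

# Only attach context when the match clears a confidence bar, mirroring the
# idea that accuracy has to be high before anything is shown to users.
if match_score(attendee, crawled) >= 0.8:
    print("Attach crawled context to the attendee's card")
else:
    print("Hold back: match not confident enough")
```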
“It’s all about context and intelligence slipstreamed into the user experience without the user having to lift a finger,” Chang said in a keynote presentation.
Sri Srinivasan, senior vice president and general manager of team collaboration at Cisco, called the ability to layer on functionality like cognitive collaboration “a moment in history that’s laced with opportunity” for the company’s collaboration business.
“For far too long, we’ve had technology that impedes how people work. By bringing these capabilities into interactions, what we’re doing is allowing technology to fade into the background and helping people do what they’re good at – connecting and interacting,” Srinivasan said.
The contextual data joins other AI and ML capabilities the company is building into its collaboration tools, including facial recognition that displays “subtitles” with meeting attendees’ names under their on-screen images, making it easier to identify everyone in larger meetings.
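Cisco hasn’t detailed the recognition pipeline behind those name subtitles. As a generic illustration of the technique, the open-source face_recognition Python library can compare faces found in a video frame against enrolled photos of known attendees; the file paths, names, and threshold below are hypothetical and are not Cisco’s system.

```python
import face_recognition  # open-source library, used here purely for illustration

# Hypothetical enrollment: one reference photo per known attendee.
known_people = {
    "Alice Example": "photos/alice.jpg",
    "Bob Example": "photos/bob.jpg",
}
known_names = list(known_people)
known_encodings = [
    face_recognition.face_encodings(face_recognition.load_image_file(path))[0]
    for path in known_people.values()
]

# A frame grabbed from the meeting video (also hypothetical).
frame = face_recognition.load_image_file("frames/meeting_frame.jpg")

# Label each detected face with the closest enrolled name, or mark it unknown.
for encoding in face_recognition.face_encodings(frame):
    distances = face_recognition.face_distance(known_encodings, encoding)
    best = distances.argmin()
    name = known_names[best] if distances[best] < 0.6 else "Unknown attendee"
    print(f"Subtitle: {name}")
```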
Chang and her team also pledged a renewed focus on interoperability with other collaboration tools, showing off integrations with Microsoft Teams and Outlook, with Google Cloud, and with Slack. “And it’s just the beginning,” Chang said.
Srinivasan described interoperability as an expectation today — not something that’s “nice to have” but a requirement.
“Knowledge workers have a canvas they work in, and that canvas needs to be rich,” he said.
Chang said the company knows its users work with other pieces of software, and that they want those tools to work together to streamline their work as much as possible.
“We get it, we’re part of an ecosystem,” she said. “Why wouldn’t we want to work well with Microsoft? And Google. And Apple. And Slack. And and and.”
The company said that by August 1, it would open more APIs and training to the partner ecosystem to help partners build integrations and other functionality on top of the Cisco collaboration stack. That effort appears to be aimed primarily at ISVs, but it could quickly create interesting opportunities for both existing Cisco partners and those new to the ecosystem. For example, Chang outlined plans to work with healthcare organizations to build and optimize features for healthcare collaboration as more regions add support for telehealth.
“We’re putting the APIs and developer tools out there so our partners can put together that experience on a vertical-by-vertical basis for their customers,” Chang said.
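Cisco didn’t specify which APIs the August 1 release will include. As an example of the kind of integration partners already build on the existing public Webex REST API, the sketch below looks up a person by email and sends a message; the access token, email address, and message text are placeholders, and the healthcare wording is only meant to echo the vertical example above.

```python
import requests

# Placeholders: supply a real Webex access token and reachable addresses.
WEBEX_API = "https://webexapis.com/v1"
HEADERS = {"Authorization": "Bearer YOUR_WEBEX_ACCESS_TOKEN"}

# Look up a person by email via the public People API.
resp = requests.get(
    f"{WEBEX_API}/people",
    headers=HEADERS,
    params={"email": "jane.doe@example.com"},
    timeout=10,
)
resp.raise_for_status()
people = resp.json().get("items", [])
print("Matched people:", [p.get("displayName") for p in people])

# Send a direct message via the public Messages API.
resp = requests.post(
    f"{WEBEX_API}/messages",
    headers=HEADERS,
    json={
        "toPersonEmail": "jane.doe@example.com",
        "text": "Your pre-visit telehealth checklist is ready.",
    },
    timeout=10,
)
resp.raise_for_status()
print("Message sent, id:", resp.json().get("id"))
```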