Exabeam extends insider threat detection to AI Agents with Google Cloud

Steve Wilson, Chief AI and Product Officer at Exabeam

Today, at Google Cloud’s Security Innovation Forum, Exabeam, a global vendor of intelligence and automation that powers security operations, is announcing the integration of Google Agentspace and Google Cloud’s Model Armor telemetry into the New-Scale Security Operations Platform. This integration gives security teams the ability to monitor, detect, and respond to threats from AI agents acting as digital insiders, and provides insight into the behavior of autonomous agents to reveal intent, spot drift, and quickly identify compromises.

“This is a natural evolution of our leadership in insider threat detection and behavioral analytics,” said Steve Wilson, Chief AI and Product Officer at Exabeam. “Exabeam solutions are inherently designed to deliver behavioral analytics at scale. Security operations teams don’t need another tool — they need deeper insight into both human and AI agent behavior, delivered through a platform they already trust. We’re giving security teams the clarity, context, and control they need to secure the new class of insider threats.”

Recent findings in a new study from Exabeam, “From Human to Hybrid: How AI and the Analytics Gap are Fueling Insider Risk,” show that a vast majority (93%) of organizations worldwide have either experienced or anticipate a rise in insider threats driven by AI, and 64% rank insiders as a higher concern than external threat actors. As AI agents perform tasks on behalf of users, access sensitive data, and make independent decisions, they introduce a new class of insider risk: digital actors operating beyond the scope of traditional monitoring. Just as insider threats have traditionally been classified as malicious, negligent, and compromised, AI agents now bring their own risks: malfunctioning, misaligned, or outright subverted.


“The nature of insider threats is evolving,” said Kevin Kirkwood, Chief Information Security Officer at Exabeam. “Security leaders understand autonomous AI agents are increasingly present in enterprise environments, transforming the way organizations must think about identity, access, and risk. What may be less clear is how rapidly these agents are advancing, how seamlessly they integrate into workflows, and how subtly they can shift from productive contributors to potential liabilities.

“This is more than an evolution of insider threats. It’s the emergence of a new threat class: AI-powered insiders. These are not rogue employees or compromised accounts; they are synthetic identities with operational privileges, autonomy, and, in many cases, little to no oversight. The security models in place today were not built to account for their presence. We believe it’s time to recognize and formalize this new class of insider threat and build the frameworks necessary to govern it.”

The critical concern is that these agents are granted full access without corresponding layers of oversight or governance. Unlike human employees, they do not pause for approval, and they operate with an efficiency and persistence that can mask subtle boundary violations. This lack of friction creates new exposures, particularly when agents begin to operate across multiple systems or initiate actions based on inferred goals rather than explicit instructions.

“Security leaders cannot afford to take an observational or passive stance,” Kirkwood said. “The introduction of autonomous AI agents into enterprise environments is already reshaping the threat landscape. If you’re deploying or testing AI agents in your environment, you need to treat them as distinct identities. That means monitoring AI agent activity independently of their associated users and applying behavior-based analytics to detect unusual access patterns or privilege escalations. It means creating policies to govern where and how agents can operate, who owns them, and how ownership responsibilities are enforced. And it means preventing agent-to-agent communication unless it is explicitly required and auditable, and logging all agent interactions and mapping them to specific tasks and user requests.”
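The practices Kirkwood describes, distinct agent identities, actions mapped to originating user requests, and auditable agent-to-agent links, can be sketched as a simple policy check. The following is a loose illustration only, not Exabeam's implementation; all identifiers (event fields, the approved-link allowlist, the agent names) are hypothetical.

```python
from dataclasses import dataclass
from typing import List, Optional, Set, Tuple

@dataclass
class AgentEvent:
    agent_id: str            # the agent's own identity, distinct from its owner's
    owner: str               # human accountable for the agent
    action: str              # what the agent did
    target: str              # system or agent the action touched
    task_id: Optional[str]   # user request this action traces back to, if any

# Agent-to-agent links that were explicitly approved and are auditable (hypothetical)
APPROVED_AGENT_LINKS: Set[Tuple[str, str]] = {("report-bot", "data-bot")}

def flag_violations(events: List[AgentEvent], known_agents: Set[str]) -> List[str]:
    """Return policy findings for a batch of agent events."""
    findings = []
    for e in events:
        # Every agent action should map back to a specific task or user request
        if e.task_id is None:
            findings.append(f"{e.agent_id}: action '{e.action}' has no originating task")
        # Agent-to-agent communication is flagged unless explicitly approved
        if e.target in known_agents and (e.agent_id, e.target) not in APPROVED_AGENT_LINKS:
            findings.append(f"{e.agent_id}: unapproved agent-to-agent call to {e.target}")
    return findings
```

For example, an event with no `task_id` that also targets another agent outside the allowlist would produce two findings, while an approved, task-linked event would produce none.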

Kirkwood said that there’s no denying that these tools offer value.

“They generate foundational code, detect documentation inconsistencies, and analyze data sets at scale. However, we documented behaviors that present immediate risk.” These include seeking private and public repo access unprompted, traversing entire codebases to catalog internal assets, suggesting policy workarounds that may violate security controls, and connecting to third-party and competitor domains without permission.

“The core problem isn’t malicious design,” Kirkwood said. “It’s the autonomous execution of tasks without built-in ethical boundaries or accountability. These agents can inadvertently create vulnerabilities, misroute data, or facilitate lateral movement simply by following incomplete instructions.”

SIEM and XDR solutions that are unable to baseline and learn normal behavior lack the intelligence necessary to identify when agents go rogue. As a pioneer in machine learning and behavioral analytics, Exabeam addresses this critical gap by extending its proven capabilities to monitor both human and AI agent activity. By integrating telemetry from Google Agentspace and Google Cloud’s Model Armor into the New-Scale Platform, Exabeam is expanding the boundaries of behavioral analytics and setting a new standard for what modern security platforms must deliver.
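In its simplest form, baselining normal behavior means scoring an agent's current activity against its own history. The sketch below is a minimal illustration of that idea, not a description of Exabeam's analytics; the event counts and threshold are assumed for the example.

```python
import statistics

def anomaly_score(history: list, current: int) -> float:
    """Z-score of current activity volume against this agent's own baseline."""
    mean = statistics.mean(history)
    stdev = statistics.pstdev(history) or 1.0  # guard against flat baselines
    return (current - mean) / stdev

# Hypothetical baseline: an agent normally touches ~10 resources per hour...
baseline = [9, 11, 10, 8, 12, 10]

# ...then suddenly touches 60. A z-score above a chosen threshold (e.g., 3)
# would surface the drift for analyst review.
score = anomaly_score(baseline, 60)
```

A real platform would baseline many dimensions (resources accessed, privilege use, peer-group comparisons) rather than a single count, but the principle is the same: deviation from the agent's own learned normal is the signal.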

While some are just beginning to consider the implications of autonomous agents, Exabeam says it is already gathering empirical evidence and building practical detection strategies, and it invites the broader community to join the conversation.

The company’s latest innovation, Exabeam Nova, is central to this, serving as the intelligence layer that enables security teams to interpret and act on agent behavior with confidence. Exabeam Nova delivers explainable, prioritized threat insights by analyzing the intent and execution patterns of AI agents in real time. This capability allows analysts to move beyond surface-level alerts and understand the context behind agent actions, whether they represent legitimate automation or potential misuse. By operationalizing telemetry from Google Agentspace and Google Cloud’s Model Armor in the New-Scale Platform, Exabeam Nova equips security teams to defend against the next generation of insider threats with clarity and precision.

“AI agents are quickly changing how business gets done, and that means security must evolve at the same rate,” said Chris O’Malley, CEO at Exabeam. “This is a pivotal moment for the cybersecurity industry. By extending our behavioral analytics to AI agents, Exabeam is once again leading the way in insider threat detection. We’re giving security teams the visibility and control they need to protect the integrity of their operations in an AI-driven world.”

“As businesses integrate AI into their core operations, they face a new set of security challenges,” said Vineet Bhan, Director of Security and Identity Partnerships at Google Cloud. “Our partnership with Exabeam is important to addressing this, giving customers the advanced tools needed to protect their data, maintain control, and innovate confidently in the era of AI.”