Identity vendor Nametag introduces Nametag Signa to ensure AI actions are authorized by a human, and expands Okta partnership

Aaron Painter, CEO of Nametag

Identity verification startup Nametag made two announcements last week at Oktane. First, it unveiled Nametag Signa, which requires that actions performed by AI agents be authorized by a verified human. Second, it announced an expanded partnership with identity goliath Okta that integrates Okta’s policy engine for AI with Nametag’s Deepfake Defense identity verification technology. The combination ensures that a Verified Human Signature is behind AI sessions and actions.

Nametag Signa looks to address a growing challenge in artificial intelligence security – how to ensure that actions performed by AI agents are authorized by a verified human. As AI agents are empowered with increasing autonomy and access to sensitive resources, enterprise IT and security teams need to know exactly who is behind particular AI actions, such as authorizing access to confidential information, placing a high-value order, or changing payment instructions. That’s why, with Signa, Nametag is introducing the Verified Human Signature as a new tier of enterprise assurance against agentic AI and deepfake identity threats. It provides auditable proof that an AI action was approved by an authorized person whose identity was confirmed through cryptographically attested identity verification.

Enterprise adoption of agentic AI is accelerating, but security practices lag behind. A global Okta survey in August 2025 found that 91% of organizations already use AI agents, but only 10% have mature strategies to secure them, and fewer than a third extend human-level governance to agent identities.

“Effective AI security requires verification of both the human and non-human identities associated with AI,” said Todd Thiemann, Principal Analyst at Enterprise Strategy Group. “Enterprises are eager to deploy AI but typically lack adequate identity guardrails to protect themselves against its misuse. Solutions that embed human identity verification into AI workflows give enterprises a practical way to reduce AI risk without slowing AI adoption.”

Nametag Signa combines Okta’s policy engine for AI with Nametag’s Deepfake Defense identity verification technology to ensure that a Verified Human Signature is behind AI sessions and actions. Using Okta, IT and security teams configure and assign authentication, sign-in and access request policies that require users to authenticate through Nametag in defined AI scenarios. The result is an auditable chain of trust linking AI actions back to the humans responsible for them.

The solution draws on the company’s expanded integration with Okta and aligns with emerging AI frameworks such as Model Context Protocol (MCP), Agent2Agent, and Agent Payments Protocol (AP2) to address key AI governance needs.

“‘AI Security is Identity Security.’ Todd McKinnon couldn’t have said it better at today’s Oktane CEO keynote,” said Aaron Painter, CEO of Nametag. “That’s exactly why we partnered with Okta to launch Nametag Signa yesterday.”

Nametag Signa gives enterprises a way to know that when an AI agent takes an action, there’s a verified human behind it. By embedding Nametag’s Verified Human Signature into AI workflows and combining it with Okta’s powerful policy engine, organizations can finally strike the right balance: enable agentic AI without opening the door to deepfakes, scams, or unauthorized activity.

“Agentic AI is a powerful business enabler, but security teams need to know who is behind AI actions,” Painter stated. “Nametag’s introduction of Signa and the Verified Human Signature marks a turning point in the conversation about agentic AI security. CISOs and CIOs no longer have to choose between enabling AI agent efficiencies and preventing the next breach – they can do both.”

Painter also stressed the importance of the alliance with Okta.

“Okta is a fantastic policy and risk engine for AI; Nametag is the best way to verify which human is behind an AI session or action,” added Painter. “Together, Nametag and Okta are enabling the secure adoption of AI across the enterprise.”

Painter explained how the whole process works.

“AI can research, but it can’t be trusted to act without human approval,” he said. “AI agents are showing up in critical enterprise systems. They’re placing orders, modifying configs, accessing sensitive data. But one big question still isn’t being asked: Who actually approved the action?

“With Signa, every sensitive agent action carries a clear, auditable record of the human who authorized it. Here’s how it works:

1 – When a human signs into an AI app, Nametag acts as the MFA factor. It verifies the person and creates a cryptographic record tied to that session.
2 – When an AI agent tries to take a high-risk action, Signa steps in. The human must re-verify, and their approval is high-assurance.
3 – We built Signa first to sit alongside Okta. Okta defines the policy. Nametag makes sure the proof is real, and that it actually came from a verified person.

“We didn’t build this to replace your IAM stack,” Painter concluded. “We built it because the current stack doesn’t cover AI. It protects logins, not the agent behind the API call. And that’s exactly where the risk lives now.”

Oktane was a great success for Nametag.

“Enterprises want to embrace AI; security and IAM teams need to protect it,” the company said. “We’ve heard so much excitement around what AI agents can do, and we’ve heard so many extremely valid concerns about how to safely enable them. Tellingly, in our eyes, identity was at the centre of every conversation – the identity of employees logging in to AI apps, the identity of AI agents acting on an employee’s behalf, and the identity of the people granting AIs access to resources. In a world of AI, trust will come from knowing which human is behind that AI. At Oktane, Okta and the community showed us all the future of AI security. Now it’s up to all of us – working together – to make it happen.”