The AI Trust Label is designed to provide customers with clear information about how AI functions across Sage products, and to address customer concerns about trusting it.

Aaron Harris, Chief Technology Officer at Sage
“Trust is at the heart of everything we do,” said Aaron Harris, Chief Technology Officer at Sage.
This was the foundational principle laid out by Harris as he introduced the AI Trust Label, a new proof of concept designed to close the trust gap and help businesses adopt AI with confidence. Sage, the SMB-focused accounting, financial, HR and payroll technology provider for small and mid-sized businesses (SMBs), today announced the development of the label to bring greater clarity and accountability to how AI is developed and used in business software.
The AI Trust Label is designed to provide customers with clear, accessible information about the way AI functions across Sage products. It focuses on key trust indicators such as compliance with privacy and data regulations, how customer data is used, the presence of safeguards to prevent bias and harm, and the systems in place to monitor accuracy and ethical performance. This initiative allows SMBs to understand how AI impacts them – without needing a technical background.
“It is grounded in trust, transparency and customer empowerment,” Harris stated. “AI adoption should never come down to blind trust. Businesses deserve to know how the technology works, how their data is used, and what safeguards are in place. The AI Trust Label is a direct response to that need – for transparency, not assumptions.”
Sage’s research shows a direct correlation between trust and adoption. While 94% of SMBs already using AI report seeing benefits, the majority, 70%, have yet to fully adopt the technology. The difference is trust.
Among those who trust AI, 85% say they actively use it in their business. That drops to just 48% among those who don’t. Additionally, 43% of SMBs say they have low trust in the companies building AI tools for business.
Later this year, Sage will begin rolling out the AI Trust Label across selected AI-powered products in the UK and US. Customers will see the label within the product experience and have access to additional details through Sage’s Trust & Security Hub. The label was designed based on direct feedback from SMBs and reflects the signals they said they need to build confidence in using AI tools.
This announcement follows a series of steps Sage has taken to ensure it develops technology responsibly. In 2023, the company published its AI and data ethics principles. It has also adopted the US NIST AI Risk Management Framework globally to guide the responsible design and use of AI, signed the Pledge for Trustworthy AI in the World of Work to support fairness and inclusion, and implemented emerging standards like the UK Government’s AI Cyber Security Code of Practice. Sage is now calling for collaboration between industry and government to create a transparent, certified AI labelling system that encourages wider adoption of the technology. The company is also exploring opportunities to share its own framework more widely.
“We’re not just building a label for Sage,” said Harris. “We’re building a model for how AI can earn trust across the business software sector. If we want AI to truly empower SMBs, this kind of transparency isn’t optional, it’s essential.”
Global Counsel Insight surveyed 1,500 SMB decision makers online between May 3rd and 19th across the US, UK, France and Spain, with each of the four countries weighted equally in the overall results. Respondents were screened in for having decision-making responsibilities and for working at least 20 hours a week at the firm. Within each country, results reflect the proportion of sole proprietors in the wider SMB universe and are broadly representative by sector. In line with official definitions, SMBs were defined as having up to 500 employees in the US and 250 employees in other markets.