The cornerstone of successful GenAI applications for employees
GenAI – whether a corporately sanctioned solution or unauthorized "shadow AI" – is a technology that employees are determined to use. In fact, McKinsey found that three times as many employees were using AI for a third or more of their work as their employers realized. And a separate BCG report found that 54% of employees would use AI tools even if they were not authorized by their company.
The case for providing employees with powerful GenAI tools therefore involves both a carrot and a stick. The carrot is that employees recognize the value these tools provide, so offering official solutions enables that value to be both amplified and quantified. The stick is that employees will use such tools no matter what, introducing the very real security and data-leakage risks so strongly associated with shadow AI.
The inconvenient truth: value remains elusive
However, amid all the hype, there is one (very) inconvenient truth about GenAI: most organizations that have invested in the technology have yet to realize any value from it. A recent MIT study found that, despite $30–40 billion in enterprise investment, 95% of organizations remain stuck with no measurable P&L impact1. Many theories have been proposed for this lack of value, but the most obvious and plausible explanation is simply that people aren’t using the tools provided to them.
One of the most striking findings from Deloitte’s State of Generative AI report is that just 11% of organizations that have introduced AI tools see these technologies truly integrated into daily workflows (defined as more than 60% of employees using GenAI each day). For companies where daily usage lags below 20%, nearly half experience disappointing returns from their GenAI projects. In sharp contrast, organizations that achieve widespread adoption are reaping rewards—86% of them report meeting or surpassing their ROI goals.
Trust: the essential ingredient for adoption
“So why aren’t employees at some organizations embracing GenAI?” asked the Harvard Business Review when this study was released4. The answer, they concluded, “boils down to a lack of trust” – a perspective shared by Orange Business. As McKinsey noted recently, “Trust… is the foundation for adoption of AI-powered products and services. After all, if customers or employees lack trust in the outputs of AI systems, they won’t use them.”5
When employees lack trust in AI, usage rates plummet, and expected returns on investment fall short. Common barriers include concerns about reliability, data privacy, bias, and job displacement. Conversely, organizations that embed trust-building principles into their AI strategy see higher engagement, faster adoption, and greater business value.
Defining trust in AI: principles and global perspectives
The question, “What do we mean by trust?” has drawn attention worldwide. The EU was among the first to seek an answer, appointing an independent High-Level Expert Group (HLEG) in 2019. The AI HLEG developed seven non-binding principles for ensuring AI is trustworthy and ethically sound: human agency and oversight; technical robustness and safety; privacy and data governance; transparency; diversity, non-discrimination, and fairness; societal and environmental well-being; and accountability.
These principles are now embedded in the EU AI Act – a groundbreaking law that addresses risks and vulnerabilities beyond traditional data protection. While the EU leads with comprehensive AI legislation, it is not alone: at least 69 countries, including China and the USA, have proposed over 1,000 AI-related policy initiatives and legal frameworks to address public concerns around AI safety and governance.
Trust by Design: building trust into every stage
Organizations face challenges in adopting AI responsibly. Transparency, accountability, and ethical governance must be prioritized to address risks such as bias, security vulnerabilities, and potential loss of user control. Building trust requires a holistic approach that incorporates regulatory compliance, robust data protection measures, and a steadfast commitment to ethical practices.
“Trust by design” means embedding compliance and trust into every stage of GenAI development and deployment—not as an afterthought, but as a foundational principle. AI must be built on trusted foundations, emphasizing high-quality training data. This ensures the AI tools employees use are transparent, reliable, and ethically aligned. When employees trust the systems they work with, they are more likely to engage fully, unlocking GenAI’s true promise for business.
Tales from the frontline: Hélène (Facilitator, Managerial Development, Human Resources - Orange France)
“My message to my colleagues is that AI is not an enemy but an ally. It is essential for everyone, regardless of age or profession, to explore technology, as this will enhance their career prospects. But it’s also necessary for organizations to provide the right support – to demystify GenAI, explain its purpose, and provide people with the technical help they need to use it properly.”
Education: empowering employees for responsible AI usage
Educational initiatives are critical to fostering trust in AI. Businesses should clarify AI fundamentals for their teams, helping employees understand the key dimensions of trustworthy AI—especially within regional regulatory and cultural contexts, such as Europe.
The Harvard Business Review’s trust survey tool maps core statements to four key characteristics: humanity (meeting individual needs and helping people do their best work), transparency, capability, and reliability8. It encourages organizations to build trust checks into their training by asking employees questions like, “Is the tool communicating with you in straightforward, plain language? Do you feel its outputs are accurate and unbiased?” If users rate an AI solution poorly on these trust parameters, organizations should investigate and address the causes.
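To make this concrete, here is a minimal, hypothetical sketch in Python of how such survey ratings might be aggregated. It assumes employees score a tool from 1 to 5 on each of the four characteristics and flags any dimension whose average falls below a chosen threshold; the data structure, scale, and threshold are illustrative assumptions, not part of HBR’s actual instrument.

from statistics import mean

# The four trust characteristics from HBR's survey tool
DIMENSIONS = ("humanity", "transparency", "capability", "reliability")

# Hypothetical 1-5 ratings collected from three employees for one AI tool
responses = [
    {"humanity": 4, "transparency": 2, "capability": 5, "reliability": 4},
    {"humanity": 3, "transparency": 2, "capability": 4, "reliability": 3},
    {"humanity": 5, "transparency": 3, "capability": 4, "reliability": 4},
]

THRESHOLD = 3.0  # illustrative cut-off below which a dimension needs attention

def trust_report(responses):
    """Average each trust dimension and flag any that score below threshold."""
    for dim in DIMENSIONS:
        score = mean(r[dim] for r in responses)
        status = "investigate" if score < THRESHOLD else "ok"
        print(f"{dim:>12}: {score:.2f} ({status})")

trust_report(responses)
# In this sample, transparency averages 2.33 and is flagged for investigation

A flagged dimension – transparency, in this example – tells an organization where to focus its trust-building effort before low scores turn into low adoption.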
By equipping employees with the knowledge and confidence they need, organizations can create a culture where AI is used responsibly and to its fullest potential.
Orange Business: Trust by Design in action
At Orange Business, "trust by design" is more than a concept—it's an end-to-end framework that weaves security, compliance, ethics, and transparency into every stage of developing GenAI solutions. Our approach tackles GenAI’s unique risks, such as data leaks and hallucinations, to earn the confidence of both customers and users. Our Data & AI Ethics Charter and By Design Governance framework provide a robust, comprehensive approach for managing data and AI responsibly. Additionally, our Responsible AI Design Authority rigorously assesses risk, ensuring AI solutions are both ethical and safe for business use.
Our Trust by Design framework sets a high bar for trustworthy AI by focusing on robust data governance, transparent operations, and ethical practices. Services like "GPU as a service" and Live Intelligence ensure customer data remains secure, compliant, and under user control. Safeguards against data leaks and strict alignment with the EU AI Act demonstrate a strong commitment to protecting sensitive information while offering businesses access to powerful AI tools. We also prioritize transparency through explainable AI, offering customers a choice of language models to avoid vendor lock-in and enhance adaptability.
This approach reflects our mission to ensure digital services are well thought-out, available, and used in an inclusive and sustainable way. Importantly, customers rolling out Live Intelligence across their businesses can reassure employees of its ethical foundations. We can also provide sovereign GenAI solutions through a combination of secure data hosting, trusted infrastructure, and strategic partnerships, all managed through our Live Intelligence platform. By embedding trust from the outset, rather than reacting to issues after launch, Orange Business ensures every AI project is built on a solid foundation of integrity and accountability.
Designing for trust, delivering value
The data summarized earlier suggests that any return on investment in GenAI depends on adoption, which in turn depends on trust. Put the other way around, it is no exaggeration to say that trust creates the adoption that delivers value. In this context, trust cannot be an add-on or an afterthought, something that "we can worry about later". As Orange Business has found, trust needs to be designed in from the start. By addressing these foundational challenges and committing to ongoing education, businesses can maintain trust at every step and generate the adoption that will finally provide a return on their GenAI investments.
Mathieu Ducrot
Mathieu Ducrot, based in Paris, is Director of AI Products at Orange Business, with experience across Orange Business and Orange. He completed the 2017–2018 IMM Leadership Program and is skilled in project and program management and team leadership, with PMP certification.
Recommended for you
How democratizing GenAI tools delivers value from these investments
Organizations see GenAI as a powerful genie promising productivity, revenue, and engagement. Yet caution persists over data and security risks. The challenge is balancing potential and protection—letting the genie out without creating chaos.