Balancing structure and innovation in GenAI
Recent analyst research has pointed out something that our team at Orange Business has understood from the start of our GenAI rollout: that there is a profound connection between trust and value. McKinsey recently concluded that "Trust… is the foundation for adoption of AI-powered products and services. After all, if customers or employees lack trust in the outputs of AI systems, they won’t use them." And Deloitte found that organizations with high levels of daily GenAI usage are seeing clear benefits—86% of these report they have met or exceeded their expected return on investment. So, without trust, there is no adoption; and without adoption, there is no value generation.
A Charter for Good
A robust governance framework is essential for ensuring trust in the implementation of GenAI, and while we were careful to ensure that our GenAI implementation was fully compliant with all relevant legislation, we were clear that governance was not just a policy issue. More than that, we considered it a foundational requirement that distinguishes a thriving, innovative environment from the Wild West. Much like the difference between democracy and anarchy, robust governance provides the structure necessary to harness the immense potential of GenAI while managing its unique risks.
We therefore made it a priority to be transparent about our approach to data within Live Intelligence. To begin with, we established clear technical and legal safeguards to ensure that any data sent to a large language model (LLM) is neither retained nor reused – it is processed solely to generate a response. Moreover, we introduced a Responsible AI charter that set out our overarching principles for AI use. This charter not only highlighted the potential risks and benefits of the technology but also detailed our governance framework, outlined measures for personal data protection, and clarified the terms and conditions guiding its responsible use.
As GenAI becomes more accessible, the importance of establishing – and ensuring compliance with – ethical frameworks increases exponentially. Companies should establish clear policies regarding data usage, privacy, and transparency, while also providing training to help employees comply with these standards. Ethical adoption is not merely a compliance exercise, though – it is a commitment to responsible AI that builds long-term trust with employees, customers, and partners.
Bottom-Up Innovation
We believe that innovation flourishes when access to technology is democratized. The employees on the frontlines of business processes are the ones best placed to understand where GenAI could add the most value. We therefore opened up Live Intelligence not to a select group but to the entire company. This was a giant experiment in the democratization of access to a new technology, but it was conducted with very clear rules in place.
Governance establishes the guardrails that enable responsible innovation – defining procedures, clarifying accountability, and promoting transparency. Without clear structure and oversight, organizations risk data leakage, reputational damage, and exposure to regulatory penalties. Ultimately, it reassures stakeholders that GenAI is being used safely, ethically, and in alignment with the company’s core values.
Tales from the frontline: Denis (AI Integration Manager – Orange France)
“I love Live Intelligence for two main reasons: it’s much less expensive compared to personal use of third-party GenAI tools; and I know that I can use the most interesting data from around the company without worrying about security. Also, depending on the complexity of the task I am asking Live Intelligence to work on, I can choose the right LLM to limit costs and the CO₂ footprint.”
By empowering a broad spectrum of employees – not just a few experts – to leverage GenAI tools, organizations can unlock new ideas, streamline workflows, and drive meaningful value. This bottom-up approach fosters a culture of creativity and continuous improvement. However, to maximize its benefits, this access must be paired with clear guidelines and support, ensuring that innovation does not come at the cost of quality or security.
The Challenge of Shadow AI
One of the most pressing governance issues today is the proliferation of “shadow AI”—the unauthorized use of AI tools by employees eager to harness GenAI’s benefits, often outside established protocols. While this grassroots enthusiasm indicates a hunger for innovation, unsanctioned tools operate outside any corporate standards, and their use introduces significant risks, including inadvertent data breaches, inconsistent quality, and compliance gaps.
There is also, by definition, a complete lack of visibility over employees’ use of shadow AI (one survey found organizations have zero visibility into 89% of AI usage, despite security policies being in place). Quite apart from the dangers this entails, it also leads to a whole host of missed opportunities:
- Employers have no visibility of the use cases to which shadow AI tools are being put or how much value is being gained from them. While our Live Intelligence holds no personally identifiable information, the dashboard provides visibility of all activity taking place on the platform. As a result, we can see which AI assistants are the most popular or most effective, and these are industrialized by a central team and promoted to the wider company. This is a key way of delivering value from GenAI investments.
- Employees using shadow AI are unlikely to share the successes they are creating with this unauthorized technology, so these cannot be passed on to colleagues. Live Intelligence actively encourages collaboration between colleagues and departments, so productivity hacks can be freely shared around the company.
- Sustainability: the costs – both financial and environmental – of GenAI use are significant. In our implementation of Live Intelligence, multiple LLMs are plugged into the platform. These can be easily selected with the click of a mouse, and each has different performance, cost, and carbon profiles, which are clearly published on the tool. This encourages employees to take financial and sustainability costs into account when choosing their LLM and enables a more cost-effective and sustainable outcome for their GenAI use. No such options are available with shadow AI.
So, it’s clear that there are many carrots – and sticks – that make the provision of powerful, trusted, and corporately sanctioned GenAI alternatives to shadow AI the logical course of action.
Striking the Balance
By implementing robust yet flexible frameworks, business leaders can manage risks, ensure compliance, and build trust—while also unlocking the full creative potential of their workforce. Strategies such as cross-functional AI governance committees, open innovation platforms, and regular training sessions can help integrate these approaches. By blending structure with flexibility, companies can foster an environment where responsible innovation thrives.
The challenge is not to choose between control and innovation, but to blend them in a way that drives sustainable value for the organization and society at large. At Orange, we are confident that we have found that balance: Live Intelligence typically has between 12,000 and 15,000 daily users and between 44,000 and 48,000 monthly users. And if those adoption rates weren’t sufficient to convince us that our employees are happy with the tool, it also has a very high satisfaction rate of 8.3/10. So, now is the time to act: establish the right guardrails, empower your teams, and build a future where GenAI is both ethical and transformative.