Balancing structure and innovation in GenAI

Recent analyst research has pointed out something that our team at Orange Business has understood from the start of our GenAI rollout: there is a profound connection between trust and value. McKinsey recently concluded that “Trust… is the foundation for adoption of AI-powered products and services. After all, if customers or employees lack trust in the outputs of AI systems, they won’t use them.” And Deloitte found that organizations with high levels of daily GenAI usage are seeing clear benefits: 86% of these report they have met or exceeded their expected return on investment. So, without trust, there is no adoption; and without adoption, there is no value generation.

Responsible and Ethical Governance by Design

A robust governance framework is essential for building trust in GenAI. While we were careful to ensure that our GenAI implementation was fully compliant with all relevant legislation, we were clear from the outset that governance is not just a policy issue.

At Orange Business, responsibility is more than a principle: it is the architecture behind every AI initiative. Our company has built a robust Responsible & Ethical AI governance model designed to meet the demands of the EU AI Act, while positioning digital trust as a strategic asset rather than a constraint.

The AI Act has become one of the world’s most influential regulatory frameworks for artificial intelligence. Orange Business not only fully complies with it; we see it as a strategic market differentiator. Using it, we adapt our offerings, adjust internal processes and strengthen traceability to ensure compliance across all AI solutions, from technical foundations through to deployment.

Our approach began with a governance structure that places ethics at the heart of decision‑making. A dedicated Responsible AI Committee, supported by AI Ethics Officers and experts in legal, cybersecurity, data privacy and CSR, examines each AI project before it ever reaches customers. Their mandate is clear: ensure transparency, mitigate risk and uphold accountability across the entire lifecycle of AI systems.

Every use case, from experimental GenAI features to large‑scale customer deployments, must pass through a mandatory risk assessment. This process determines the level of oversight, identifies potential harms and maps out the required mitigation steps. Projects deemed sensitive or high‑risk benefit from deeper review and tailored guidance, while lighter, low‑risk initiatives can move faster thanks to a simplified evaluation path.
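
To illustrate the idea of tiered oversight, here is a minimal, purely hypothetical sketch of how such a triage rule might be encoded. The criteria, tier labels and function name are assumptions for illustration only; they are not Orange Business’s actual assessment logic.

```python
from dataclasses import dataclass

@dataclass
class UseCase:
    # Hypothetical assessment criteria; a real framework covers far more dimensions.
    handles_personal_data: bool
    is_customer_facing: bool
    makes_automated_decisions: bool

def oversight_path(case: UseCase) -> str:
    """Route a GenAI use case to an illustrative oversight path."""
    high_risk = (
        case.handles_personal_data
        or case.is_customer_facing
        or case.makes_automated_decisions
    )
    # Sensitive or high-risk projects get deeper review and tailored guidance;
    # low-risk initiatives move faster through a simplified evaluation path.
    return "deeper review" if high_risk else "simplified path"

# Example: an internal drafting assistant with no personal data can move faster.
print(oversight_path(UseCase(False, False, False)))  # -> simplified path
```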

This agile governance model is underpinned by the principles of “Responsible AI by Design.” Human oversight, explainability, bias control, privacy‑by‑design and cyber‑resilience are not optional – they are engineered into the company’s products from the earliest concept phases. Orange Business also applies frugal, energy‑conscious AI methodologies, reflecting a wider commitment to sustainability in digital innovation.

As we prepare for ISO 42001, the global standard for AI management systems, we are also reinforcing our long‑term commitment to trustworthy automation.

Building a responsible AI culture also means investing in people. Thousands of employees across Orange Business complete regular training on ethics, ensuring that awareness grows in parallel with technological adoption. We also require our suppliers and technology partners to meet the same ethical and regulatory standards.

A Charter for Good

Alignment is built on the Data & AI Ethics Charter, which establishes clear internal rules: transparency, fairness, rigorous governance and human responsibility in all circumstances. More than that, we consider it a foundational requirement that distinguishes a thriving, innovative environment from the Wild West. Much like the difference between democracy and anarchy, robust governance provides the structure necessary to harness the immense potential of GenAI while managing its unique risks.

The charter clearly highlights the potential risks and benefits of the technology, details our governance framework, outlines measures for personal data protection, and clarifies the terms and conditions guiding responsible use.

As GenAI becomes more accessible, the importance of establishing – and ensuring compliance with – ethical frameworks increases exponentially. Companies should establish clear policies regarding data usage, privacy, and transparency, while also providing training to help employees comply with these standards. Ethical adoption is not merely a compliance exercise, though; it needs to be a commitment to responsible AI that builds long-term trust with employees, customers, and partners.

Responsibility also means sustainability. Orange Business is committed to reducing the environmental footprint of its AI systems by prioritizing eco‑designed models, evaluating energy consumption and raising awareness among teams about more sustainable digital practices.

Bottom-Up Innovation

We believe that innovation flourishes when access to technology is democratized. The employees on the frontlines of business processes are the ones best placed to understand where GenAI could add the most value. We therefore opened up Live Intelligence not to a select group but to the entire company. This was a giant experiment in the democratization of access to this new technology, but it was done with very clear rules in place.

Governance establishes the guardrails that enable responsible innovation – defining procedures, clarifying accountability, and promoting transparency. Without clear structure and oversight, organizations risk data leakage, reputational damage, and exposure to regulatory penalties. Ultimately, it reassures stakeholders that GenAI is being used safely, ethically, and in alignment with the company’s core values.

Tales from the frontline: Denis (AI Integration Manager – Orange France)

“I love Live Intelligence for two main reasons: it’s much less expensive compared to personal use of third-party GenAI tools; and I know that I can use the most interesting data from around the company without worrying about security. Also, depending on the complexity of the task I am asking Live Intelligence to work on, I can choose the right LLM to limit costs and the CO₂ footprint.”

By empowering a broad spectrum of employees – not just a few experts – to leverage GenAI tools, organizations can unlock new ideas, streamline workflows, and drive meaningful value. This bottom-up approach fosters a culture of creativity and continuous improvement. However, to maximize its benefits, this access must be paired with clear guidelines and support, ensuring that innovation does not come at the cost of quality or security.

The Challenge of Shadow AI

One of the most pressing governance issues today is the proliferation of “shadow AI”: the unauthorized use of AI tools by employees eager to harness GenAI’s benefits, often outside established protocols. While this grassroots enthusiasm indicates a hunger for innovation, employees cannot align with corporate standards that do not exist, and shadow AI introduces significant risks, including inadvertent data breaches, inconsistent quality, and compliance gaps.

There is also, by definition, a complete lack of visibility over employees’ use of shadow AI (one survey found organizations have zero visibility into 89% of AI usage, despite security policies being in place). Quite apart from the dangers this entails, it also creates a whole host of missed opportunities:

  • Employers have no visibility of the use cases these tools are being put to or how much value is being gained from them. By contrast, while our Live Intelligence platform holds no personally identifiable information, its dashboard provides visibility of all activity taking place on the platform. As a result, we can see which AI assistants are the most popular or most effective, and these are industrialized by a central team and promoted to the wider company. This is a key way of delivering value from GenAI investments.
  • Employees using shadow AI are unlikely to share the successes they are creating with this unauthorized technology, so these cannot be passed on to colleagues. Live Intelligence actively encourages collaboration between colleagues and departments, so productivity hacks can be freely shared around the company.
  • Sustainability: the costs – both financial and environmental – of GenAI use are significant. In our implementation of Live Intelligence, multiple Large Language Models (LLMs) are plugged into the platform. These can be selected at the click of a mouse, and each has different performance, cost, and carbon profiles, which are clearly published on the tool. This encourages employees to take financial and sustainability costs into account when choosing their LLM, and enables a more cost-effective and sustainable outcome for their GenAI use (a simplified sketch of this selection logic follows this list). No such options are available with shadow AI.
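
To make the model-selection idea above concrete, here is a minimal, hypothetical sketch of a catalogue of LLM profiles with published cost and carbon figures, together with a helper that picks the cheapest, lowest-carbon model that is good enough for a task. The model names, figures and function are illustrative assumptions, not the actual Live Intelligence catalogue or API.

```python
from dataclasses import dataclass

@dataclass
class ModelProfile:
    # Hypothetical published profile; real figures vary by model and deployment.
    name: str
    quality_score: int           # relative capability, 1 (basic) to 10 (frontier)
    cost_per_1k_tokens: float    # EUR, illustrative
    gco2_per_1k_tokens: float    # grams CO2-equivalent, illustrative

CATALOGUE = [
    ModelProfile("small-model",    quality_score=5, cost_per_1k_tokens=0.0002, gco2_per_1k_tokens=0.2),
    ModelProfile("mid-model",      quality_score=7, cost_per_1k_tokens=0.0020, gco2_per_1k_tokens=1.5),
    ModelProfile("frontier-model", quality_score=9, cost_per_1k_tokens=0.0200, gco2_per_1k_tokens=8.0),
]

def pick_model(required_quality: int) -> ModelProfile:
    """Choose the lowest-cost, lowest-carbon model that meets the task's quality needs."""
    candidates = [m for m in CATALOGUE if m.quality_score >= required_quality]
    if not candidates:
        raise ValueError("No model in the catalogue meets the required quality")
    return min(candidates, key=lambda m: (m.cost_per_1k_tokens, m.gco2_per_1k_tokens))

# A simple summarization task does not need a frontier model.
print(pick_model(required_quality=5).name)  # -> small-model
```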

So, it’s clear that there are many carrots – and sticks – that make the provision of powerful, trusted, and corporately-sanctioned GenAI alternatives to shadow AI the logical course of action.

Striking the Balance

By implementing robust yet flexible frameworks, business leaders can manage risks, ensure compliance, and build trust—while also unlocking the full creative potential of their workforce. Strategies such as cross-functional AI governance committees, open innovation platforms, and regular training sessions can help integrate these approaches. By blending structure with flexibility, companies can foster an environment where responsible innovation thrives.

The challenge is not to choose between control and innovation, but to blend them in a way that drives sustainable value for the organization and society at large. At Orange, we are confident that we have found that balance: Live Intelligence typically has between 12,000 and 15,000 daily users and between 44,000 and 48,000 monthly users. And if those adoption rates weren’t sufficient to convince us that our employees are happy with the tool, it also has a very high satisfaction rate of 8.3/10. So, now is the time to act: establish the right guardrails, empower your teams, and build a future where GenAI is both ethical and transformative.

Our experience proves that AI is not only a technological evolution for the company; it requires a cultural shift. Employee training, risk awareness, understanding regulations and adopting best practices all help turn regulatory constraints into a competitive advantage, fueling meaningful, well‑governed innovation.

Miguel Alvarez

Chief Data and AI Officer at Orange Business
