Orange Business’s journey with its in-house GenAI platform suggests that GenAI value creation depends upon embedding ethical principles, transparency, and robust data protection into every aspect of AI-enabled initiatives. Ultimately, the path from trust to value in CX starts with a clear commitment to both customer and employee needs and requires the large-scale adoption that can only be achieved by aligning technology with responsible practices.

When Orange Business rolled out Live Intelligence (our GenAI platform), our priority was to embed trust at its heart. This was not only the ethical thing to do – trust is a core part of our corporate purpose – but, we instinctively knew, a pragmatic necessity as well. We believe trust is a core reason for the success of this project: only two years after launch, Live Intelligence has gained between 12,000 and 15,000 daily users and between 44,000 and 48,000 monthly users.

The importance of trust

Orange Business’s experience demonstrates that generating value from GenAI requires adoption (you can’t get a return on a solution no one is using) and adoption, in turn, relies on trust. This belief is echoed by McKinsey, which noted, “Trust… is the foundation for adoption of AI-powered products and services.” 

This statement is particularly true for the use of AI in CX. A separate McKinsey report said that "The trust issue is often ignored or underestimated and can stealthily derail even the best-intended AI strategies." In a CX environment, trust is required from both customers and employees.

Trust among customers

McKinsey’s 2026 study found that “customer [un]willingness to use new AI-enabled service solutions continues to be a barrier for a majority of companies”. However, the data suggests that customers have a somewhat contradictory attitude to the use of AI in contact centers.

A clear majority (as many as 75% according to some research) prefer dealing with humans rather than AI-powered chatbots. The same survey found that nearly half of consumers (48%) say they do not trust information provided by AI-powered customer service bots, and over half (56%) are often frustrated by AI customer-service chatbots. To balance that out, a Zendesk report found that 51% of consumers prefer interacting with AI bots over humans when they want immediate service, and 64% said that they trust bots more when they sound ‘human-like’. So, it may well be that these objections will decline as the quality of the bots improves.

AI Learns to Sound More Human

Researchers from Carnegie Mellon University in Pittsburgh trained AI on natural human voices from the real world, using almost 900 hours of YouTube and podcast footage. They were then able to create a model that produced more natural-sounding speech, complete with ums and hesitations. According to David Beavan of the Alan Turing Institute in London, "They clearly haven't quite got to the point where it is totally human-sounding, but they're absolutely going in the right direction."

Experience is everything. Get it right.

The same McKinsey report found “Trust from customers is often won – or lost – within the first moments of an interaction, and once it is lost, even highly capable AI struggles to recover it.” So, the old adage that you never get a second chance to make a first impression applies strongly to the use of AI in contact centers. If you don’t get it right the first time, then customers will default to speaking to an agent, increasing waiting times and negating the point of those AI investments – and as many as a third of customers will abandon a brand after one bad interaction.

Trust among employees

McKinsey notes that, “[Organizations] need employees to trust the technology and embrace it as part of their day-to-day work.” One of the key CX use cases for GenAI is as an Agent Assist tool: this gathers relevant and context-aware information from around the organization in real time and provides it to the agent. If employees don’t have confidence in the data AI is providing to them, they will default to (slower) tried-and-tested workarounds to get the same information. This not only undermines AI investments but also damages the quality of the customer interaction.

With Live Intelligence, we went the extra mile to ensure that employees could trust the platform, creating a charter that set out the general principles governing AI use within the company. This legally binding document defined the governance tools we had put in place, stated how we were protecting personal data, and described the terms and conditions for its use.

By clearly setting out how our employees should and should not use GenAI, we created a framework that benefitted both our company and our people: we could be confident that employees were using GenAI in a secure, compliant, and responsible manner, and employees could have confidence that the integrity of their personal data was maintained. 

Getting it right

There is no part of the customer journey that doesn’t rely on data for its successful completion. So, putting in place the right data foundations is a prerequisite for any AI in CX project. If customers and employees don’t trust that AI is returning the right information at the right time, then your GenAI investments are unlikely to return any value. 

However, a survey of CX leaders commissioned by Orange Business and conducted by GlobalData found that data (cited by 65% of respondents) was the top challenge for deploying AI, and that siloed data (68%) and customer journey tracking (65%) were the top two roadblocks to improving CX. This strongly suggests that organizations are struggling to put these foundations in place.

The legal complexities surrounding the use of data in AI are also vast and are rapidly evolving. Much of this arises from the inherent contradictions between AI's need for massive data ingestion and established laws concerning privacy, intellectual property (IP), and accountability. As of 2026, the legal landscape is transitioning from theoretical discussions to aggressive enforcement: legislation such as the EU’s AI Act now requires strict, high-risk AI documentation, transparent training data sources, and respect for copyright opt-outs. As a result, some of our clients are now claiming (perhaps somewhat tongue-in-cheek) that their legal expertise has overtaken their technical skills!

If your data or legal expertise is being stretched by the demands of AI, then Orange Business can provide you with proven and trusted advice: we are currently supporting our clients across the entire Data and AI value chain, from breaking down information siloes to keeping pace with compliance regimes worldwide.

Getting it right for customers

Choosing where in the customer journey to implement GenAI is a crucial decision. As noted earlier, customers are more likely to engage with AI-driven bots for immediate (and more routine) tasks than for more complex interactions, and the McKinsey survey found that almost 70 percent of respondents agreed that ‘empathy and trust will always require human involvement’.

So, AI will inevitably be used at the start of the customer journey, for tasks such as authentication, gathering information about the enquiry, and ticket labeling. For instance, studies have shown that companies applying AI at the start of the CX process can realize a 37% decrease in first response times and a 52% improvement in ticket resolution speed. 

The design of the service is therefore critical to ensure AI can successfully segment individual queries: simple tasks can be undertaken by AI, and more complex tasks can be handed off, with all the appropriate context-related information, to the right agent. 
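The segmentation-and-handoff pattern described above can be sketched in a few lines. This is a minimal illustration only: the intent names, field names, and `triage` function below are hypothetical, and a production service would use a trained intent classifier with confidence thresholds rather than a fixed lookup.

```python
# Hypothetical sketch of query triage: simple queries stay with the bot,
# complex ones are handed to a human agent along with the context already
# gathered, so the customer never has to repeat themselves.

# Illustrative set of intents judged routine enough for self-service.
SIMPLE_INTENTS = {"order_status", "opening_hours", "password_reset"}

def triage(query: dict) -> dict:
    """Route a classified query either to the bot or to a human agent."""
    intent = query["intent"]
    # Context travels with the handoff regardless of the route taken.
    context = {
        "customer_id": query["customer_id"],
        "intent": intent,
        "transcript": query["transcript"],  # what the bot has gathered so far
    }
    if intent in SIMPLE_INTENTS:
        return {"route": "bot", "context": context}
    # Complex (or unrecognized) queries go to an agent with full context.
    return {"route": "agent", "context": context}

result = triage({
    "customer_id": "C-1042",
    "intent": "billing_dispute",
    "transcript": ["Hi, I was charged twice last month."],
})
```

The key design point is that the context dictionary is built before the routing decision, so the human agent receiving a handoff starts with everything the bot already knows.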

Overall, as McKinsey observed, “Many customers still see AI-led self-service options as solutions that drive efficiency and cost reduction benefits for companies, rather than as value-adding solutions for themselves.” They will therefore ‘see through’ and resist blatant efforts to deflect calls to chatbots.

CX organizations should therefore resist the temptation to evaluate AI use cases on their ability to cut costs, and instead assess them on their ability to streamline the customer’s journey: if those efforts succeed, customers will choose AI-driven services over their more expensive human-led equivalents, and the cost savings will accrue naturally.


Getting it right for employees

There are widespread reports of employee distrust around their company’s GenAI initiatives, with one recent survey finding that almost one third (31%) of employees are ‘sabotaging’ their company’s GenAI strategies[1].  Less dramatic, but equally concerning, is a different finding that 32% of contact center leaders cite agent distrust in AI as a major issue[2]. 

It’s a truism that ignorance breeds fear, and the introduction of AI into the contact center environment is no exception. A 2025 study showed that education significantly improved GenAI acceptance, particularly by increasing perceived ease of use and confidence. The study suggested that 30 hours of training can serve as a strong baseline for building beginner AI proficiency[3].

A Boston Consulting Group study also observed a huge difference in attitudes to AI agents between employees who had heard of AI agents but were unsure what they actually were and those who understood them well and could explain how they work. When asked whether AI agents were a valuable tool that could support and collaborate with human workers, only 25% of the unfamiliar group agreed, compared with 71% of their more informed colleagues. When asked whether these same tools posed a potential threat to certain jobs or responsibilities, 16% of the first group agreed, compared with only 2% of the second[4].

Yet, despite all this evidence of the value of education, 59% of organizations fail to provide ongoing coaching and support to help agents adapt to AI-driven workflows[5].

If you are encountering resistance to your GenAI initiative and have failed to invest in training, then that is the low-hanging fruit you should be reaching for. 


[1] https://www.cio.com/article/4022953/31-of-employees-are-sabotaging-your-gen-ai-strategy.html
[2] https://www.businesswire.com/news/home/20250325095245/en/98-of-Contact-Centers-Are-Using-AI-and-61-Are-Experiencing-More-Difficult-Conversations-According-to-New-Calabrio-Research
[3] https://www.sciencedirect.com/science/article/pii/S2666920X25000840#:~:text=Overall%2C%20the%20acceptance%20score%20demonstrated,after%20the%20professional%20development%20course.
[4] https://www.bcg.com/publications/2025/agents-accelerate-next-wave-of-ai-value-creation
[5] https://www.calabrio.com/blog/contact-center-ai/

Committing to Customer and Employee Needs

Ultimately, generating value from GenAI in CX depends on a clear, unwavering commitment to both customer and employee needs. By putting trust first – through consistent, reliable experiences and clear communication – businesses can unlock the full potential of GenAI, ensuring it becomes a tool for genuine improvement rather than a source of frustration.

However, embedding trust at the core of GenAI initiatives is not simply a matter of ethics, but a proven driver of real adoption and sustained value. Organizations that embrace ethical principles and manage technology responsibly will empower their workforce and their customers to use AI-driven solutions. The future belongs to those who recognize that trust is not just a catalyst for innovation but the foundation of long-term success.