Facing the reality of Generative AI with the lessons learned in 2023

2023 was a landmark year in which Generative AI (GenAI) emerged as a transformative technology in the fast-evolving field of Artificial Intelligence (AI), driven most visibly by ChatGPT's meteoric rise. Launched in November 2022, ChatGPT rapidly became a global phenomenon, with monthly visits soaring from 58 million in its first month to 1.8 billion. A new digital era emerged with interesting, inspiring, and at times dramatic moves within the tech sector, as many tech giants raced to claim the top innovator spot.

GenAI is a set of technologies that can generate content such as text, images, audio clips, videos, and even computer code. For instance, the latest announcements from OpenAI offer "multi-modal" capabilities, where a single AI model can work across language, audio, vision, video, and 3D, alongside complex reasoning. Just think of an assistant that can understand video, 3D, audio, and speech together – imagine how much productivity you could gain.

Simply put, GenAI is an assistant that augments our ability to do things more efficiently. We need to learn how to use this assistant effectively; the more we understand it, the better outcomes we achieve. If every doctor had a GenAI assistant to help with complex diagnoses by analyzing clinical notes, medical images, and patient history, how much healthier might we become?

In this article, I intend to debunk the myths around GenAI and focus on its practical reality, based on the first set of lessons learned at Orange Business in our engagements with customers. Here I summarize six reality checks on GenAI. Understanding them will help enterprises avoid common pitfalls and accelerate adoption.

1. Generative AI is only the tip of the iceberg

GenAI is a mathematical model based on advanced algorithms and training data. A good model relies on large volumes of good-quality data. Processing that amount of data requires vast compute, storage, and analytics capability – powered by the Cloud/Edge and enabled by high-quality networks. The output of GenAI is based on the data it has been trained on, and its 'creativity' is the result of processing and recombining existing information in novel ways. GenAI can demonstrate augmented intelligence, with the ability to predict patterns in natural language and use them dynamically to generate new outputs.

In our experience working with enterprise customers, when it comes to using GenAI in a professional context, off-the-shelf GenAI tools are likely to lack an understanding of specific vocabulary, such as acronyms, technical concepts, and job roles that are specific to the company. This is because most GenAI models are trained on public data sources. Therefore, most enterprises realize that a GenAI tool also needs to be customized (or fine-tuned) with their own specific data.
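To make this concrete, here is a minimal sketch in Python of one simple way to give a general-purpose model company-specific vocabulary: prepending an internal glossary to the prompt. The glossary entries, the example question, and the prompt wording are all illustrative placeholders, and the augmented prompt would be sent to whichever model the enterprise uses.

# Minimal sketch: injecting an enterprise glossary into a prompt so that a
# general-purpose model can interpret company-specific acronyms.
# The glossary content and the example question are illustrative placeholders.

GLOSSARY = {
    "QBR": "Quarterly Business Review, an internal account review meeting",
    "CPE": "Customer Premises Equipment, hardware installed at the customer site",
    "TTM": "Time To Market",
}

def augment_prompt(question: str) -> str:
    """Prepend definitions of any known acronyms found in the question."""
    relevant = {k: v for k, v in GLOSSARY.items() if k in question}
    context = "\n".join(f"- {k}: {v}" for k, v in relevant.items())
    return (
        "You are an assistant for our company. Use these internal definitions:\n"
        f"{context}\n\n"
        f"Question: {question}"
    )

print(augment_prompt("Draft the agenda for next week's QBR on CPE upgrades."))
# The augmented prompt is then sent to the chosen LLM; richer variants retrieve
# context from enterprise documents instead of a hard-coded glossary.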

This is where data quality emerges as a bigger challenge than finding the right GenAI use case. Enterprises need to invest in building strong data foundations with best-in-class Cloud, Connectivity and Cybersecurity infrastructure to unlock the full potential of GenAI. They also need to put in place solid data governance to develop and enforce policies covering data sensitivity, data lifecycle, and more. Especially in large organizations, this can be very challenging. Enterprises need both data quality and data governance to reduce the risk of GenAI producing inaccurate results or violating data privacy.

2. Generative AI is not one-size-fits-all

If you had built a text generation model ten years ago, you would have had to train it from scratch over months. One of the central developments in GenAI is that we now have pre-trained foundational models called Large Language Models (LLMs) that are good at many tasks and can then be prompt-augmented, fine-tuned, or further trained on a much smaller dataset to target the business outcome you want to achieve. These approaches also improve the economics and lower the barrier to entry, enabling enterprises to innovate on top of existing LLMs.

In addition to these new techniques for taking advantage of foundational models, models come in a range of sizes and capabilities, from large models that cater to the vast majority of needs to many small models, called Small Language Models (SLMs), targeted at specific tasks. SLMs are smaller versions of their LLM counterparts and are therefore more efficient at specific tasks and more cost-effective. They have significantly fewer parameters than LLMs, which can have hundreds of billions or even a trillion parameters, but the trade-off is that they are not nearly as good at wide-ranging 'generalized' questions and may only support one type of input, such as text or images, as opposed to the multi-modal models I mentioned earlier.
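A rough back-of-envelope calculation shows why parameter count matters so much for cost. The sketch below assumes 16-bit weights (about 2 bytes per parameter) and ignores serving overheads; the parameter counts are illustrative sizes, not specific products.

# Back-of-envelope sketch: why parameter count drives infrastructure cost.
# Assumes 16-bit weights (~2 bytes per parameter) and ignores activation and
# serving overheads; the model sizes are illustrative, not specific products.

BYTES_PER_PARAM = 2  # fp16 / bf16 weights

models = {
    "small language model (~3B parameters)": 3e9,
    "mid-size LLM (~70B parameters)": 70e9,
    "frontier LLM (~1T parameters)": 1e12,
}

for name, params in models.items():
    gigabytes = params * BYTES_PER_PARAM / 1e9
    print(f"{name}: roughly {gigabytes:,.0f} GB just to hold the weights")

# A ~3B-parameter SLM can fit on a single GPU or even a high-end device,
# while trillion-parameter models need clusters of accelerators.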

In our engagement with customers, we learned that it is critical to take an outcome-based approach to finding the right solution. Often the business need can be solved with an existing model, an SLM, or by chaining multiple small models together. In many cases we don't need GenAI at all, or only for a small part of the overall solution, as the sketch below illustrates.
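Here is a minimal, hypothetical sketch of that idea in Python: a cheap deterministic check handles structured requests, and only unstructured requests fall back to a small, task-specific model. The summarization function is a placeholder stub standing in for whichever SLM you would actually deploy.

# Illustrative routing sketch: a generative model is used only where it adds value.
import re

def extract_order_id(text: str) -> str | None:
    """Deterministic, non-GenAI path: a regex is enough for structured lookups."""
    match = re.search(r"\bORD-\d{6}\b", text)
    return match.group(0) if match else None

def summarize_with_slm(text: str) -> str:
    """Placeholder stub for a call to a small, task-specific language model."""
    return f"[SLM summary of a {len(text.split())}-word request]"

def handle_request(text: str) -> str:
    order_id = extract_order_id(text)
    if order_id:
        # No generative model needed: look the order up directly.
        return f"Looking up status for {order_id} in the order database."
    # Fall back to a generative model only for unstructured requests.
    return summarize_with_slm(text)

print(handle_request("Where is my order ORD-123456?"))
print(handle_request("Please summarize this customer's complaint about a delayed delivery."))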

3. From 'Bigger is Better' to 'Contextual LLMs'

In 2023 we saw the rise of LLMs and their dominance of the AI debate, with a seemingly magical ability to solve any and every problem. While data is the oxygen of AI, the focus should be on providing the AI with as much relevant and unbiased context as possible to achieve the right business outcomes. It's like finding a needle in a haystack: to find the needle more quickly, you can alter the color or shape of the needle (or the haystack), use a magnet, or coordinate multiple parallel searches.

This realization is driving the development of industry-specific LLMs augmented with business context and specialized to solve specific problems. For instance, in HR, Mercer is using LLMs to automate the recruitment process. In law, Harvey AI and CaseHOLD are transforming the management of legal tasks, performing contract analytics and legal compliance summaries. In finance, BloombergGPT analyzes financial data, and in healthcare, Google DeepMind's Med-PaLM processes clinical notes, lab results, and medical images. I foresee that contextual LLM marketplaces will soon emerge, where you can choose the type of model you need based on the business context and the problem you want to solve.

4. Are Networks ready for GenAI?

GenAI doesn't exist in a vacuum. As much as it needs a solid cloud and data foundation, it also requires a high-performing intelligent network that supports complex data mesh architectures from edge to cloud for prompt request/response, model fine-tuning, inference, and training. Omdia predicts that by 2030, nearly two-thirds of network traffic will involve AI, driven by AI-generated content such as video and images. This requires global networks to be ready to carry zettabytes of traffic (a zettabyte is 1,000 exabytes, or 1,000,000 petabytes) and to support existing applications shifting to AI, as well as entirely new AI applications.

Initially, LLMs were designed to run on powerful centralized cloud infrastructure. Last year, multiple LLM variants emerged that can run in the cloud, at the network edge, and on individual devices. For instance, Google Gemini introduced three variants – Ultra, Pro and Nano – and Gemini Nano can run natively on Android devices. We also expect that Apple and many other consumer electronics companies will embed LLMs into their products. Running these very small AI models on devices creates new use cases that make GenAI highly customized and personalized and reduce data privacy risks, although on-device models will not be nearly as sophisticated as those in the cloud. Such diverse and complex architectures for GenAI deployments put a spotlight on today's mobile and fixed networks and introduce new needs that require a rethink of network design.

5. Will GenAI augment jobs or take them over?

One of the most astonishing predictions for generative AI is its potential to automate, by 2030, up to 30% of the hours currently worked across the US economy. A study by MIT researchers on GenAI's impact on highly skilled workers found that it can improve a worker's performance by as much as 40% compared with workers who don't use it. The same study found larger GenAI gains for lower-skilled workers than for higher-skilled groups. A McKinsey study shows that software developers can complete coding tasks up to twice as fast with GenAI. Knowledge workers like lawyers and scientists could also substantially speed up their work by using AI to analyze mountains of data in an instant.

While the jury is still out on this topic, my conviction is that GenAI automates tasks, not jobs, and hence AI-enabled employees will ultimately replace those who are not AI-enabled. This conviction has proven true across the disruptive technology cycles of the last three decades – computers, the internet, mobile and more – and in the emergence of the Industrial Age before them.

6. Trust remains a key condition to drive large-scale adoption

With all the misconceptions about its practical realities, it's no surprise that GenAI currently sits at the very summit of what Gartner calls the 'Peak of Inflated Expectations'. AI presents its share of challenges: environmental impact, influence on productivity, return on investment, skills management, regulation, and ethics. I am convinced that there will be no mass adoption without trust.

Trust is all the more crucial at a time when only a third of the French population believes that AI presents more advantages than disadvantages. A global survey of 11,000 employees conducted by BCG found that the more people use it, the more they trust it – but also the more they fear it. A significant percentage of employees see AI as a threat, with the highest percentages among those who already use the technology. At Orange, we took part in the Microsoft Copilot for Microsoft 365 Early Access Program (EAP), and data compliance as well as a steep learning curve were significant barriers before meaningful productivity gains could be achieved.

Governments have rushed to take a position on how to govern AI use through regulation. The Biden administration issued rules for GenAI in October through an executive order that outlines eight goals, including safety standards and the responsible use of AI. In December, European Union lawmakers agreed on the first version of the AI Act to regulate the use of AI systems in the EU. Anticipating how the AI Act would apply in practice, researchers from Stanford University studied 10 popular Large Language Models (LLMs) and found that only four models received satisfactory scores, with the open-source model Bloom standing out with a score of 36 out of 48. A lack of transparency among providers, who do not disclose enough information about their models and the risks associated with their use, was highlighted as a key concern.

Copyright infringement and intellectual property are a growing concern – recently the New York Times sued OpenAI and Microsoft over the 'unlawful use' of its content to train GenAI and LLM systems, alleging that large portions of New York Times material were reproduced in ChatGPT output. GenAI raises significant copyright concerns, and we surely have a lot to learn in this space: these are entirely new technologies, and both the law and our understanding of their place in society will need to adapt.

To conclude: yes, there is a lot of hype around GenAI, but we can't deny the incredible potential of this new technology to transform our digital ways of life. As a technology veteran who has seen many hype cycles – internet, mobile, cloud, IoT, digital twin, blockchain – I have learned that hype is a great 'business mechanism' that enables us to make the transition from the art of the possible to the art of the practical. While the large-scale adoption of GenAI – when every product and every piece of software will have some kind of GenAI functionality – is still two to five years away, it's key to get started with small, simple experiments while avoiding the common pitfalls.

Usman Javaid

Chief Products and Marketing Officer at Orange Business, Usman Javaid, PhD, brings broad knowledge of the enterprise market from both the telco and digital worlds. Previously Managing Director of Professional Services Delivery at AWS, Usman has more than 20 years of experience centered on building innovative technology products and driving large-scale business transformation.