Ethics: a vital part of responsible AI deployment

Artificial intelligence (AI) is considered by many to be one of the great transformative technologies of all time. It's already present in virtual assistants, chatbots, risk assessments, network operations and more. The possibilities ahead are enormous, but they are also cause for concern. AI systems could one day drive us around, do our laundry, paint portraits, monitor cities for safety and security, and even fight wars. But should they?

AI describes processing capabilities that mirror the cognitive functions we associate with humans, typically learning, language comprehension, sensory perception, analysis, decision making and problem solving. Our reasons for pursuing AI include improving efficiency, driving down costs and accelerating research and development.

According to IDC, worldwide business spending on AI was expected to reach $50 billion in 2020 and to exceed $110 billion in 2024. IDC found that retail and financial services were the biggest investors in AI tools at the time, but expected media and the public sector to be the largest spenders by 2023. IDC posits that AI will be "the disrupting influence changing entire industries over the next decade."

As with so many digital tools of the past couple of decades, the technology has developed far faster than the legislation to regulate it. Governments exercise very little oversight of AI, leaving private companies free to use it for determinations about people's health and medicine, credit ratings, employment and even criminal justice with almost no state supervision.

In 2020, the Black Lives Matter protests in the U.S. prompted Microsoft, Amazon and IBM to announce that they would no longer give police departments access to their facial recognition technology. The companies cited concerns that the technology is still prone to errors when identifying people of color, and that those errors amount to unfair discrimination.

Concerns cover three core areas

There are three primary areas where ethical concerns about AI arise, according to political philosopher Michael Sandel, Professor of Government at Harvard University: privacy and surveillance, bias and discrimination, and the role of human judgment. "Debates about privacy safeguards and about how to overcome bias in algorithmic decision-making in sentencing, parole and employment practices are now familiar, but are certain elements of human judgment indispensable in deciding some of the most important things in life?" says Sandel.

What is an appropriate response to these challenges for technology companies? How do you ensure structural biases or discrimination are not built into these systems? The discussion needs to be much broader than the functional capabilities of AI alone: it must address the ethics of how and where AI is used from a human perspective.

It is important to acknowledge that regulation or oversight must encompass the whole value chain: you can't regulate companies and not governments, or some users and not others. To be ethical, coverage must extend to every instance and usage of AI, from the vendors developing the technology to the governments and enterprises deploying it to the consumers and citizens affected by it.

Take steps to create an ethical AI policy

The bigger the role AI takes in decision making, the more ethical concerns are likely to arise: it's a snowball effect. According to Benjamin Hertzberg of Gartner's Chief Data and Analytics Officer Research Team, "Public concerns about AI dangers are warranted. An external AI ethics board can help embed representation, transparency and accountability into AI development decisions."

If governments are struggling to keep pace with AI advances, then industry self-regulation could be a solution. Gartner has proposed that an external AI ethics board be built around three key principles: representation, transparency and accountability. AI ethics boards need to be entirely transparent and have all the information necessary to make informed recommendations on AI projects in development.

In addition, when the board makes recommendations, stakeholders need to respond to them promptly and publicly. Doing so demonstrates that your organization or initiative is committed and accountable when it comes to AI. The process must also be absolutely independent, whether it is setting standards, measuring compliance or potentially punishing transgressors.

Responsibility also comes from within

While external AI boards are a useful step, they are not all that can or should be done. AI must be developed ethically and responsibly, and stakeholders and project owners can commit to other approaches to achieve that. Honesty also counts: it is OK to admit that ethical AI is a hugely complicated task and that you might need help to ensure your organization gets it right.

In France, Orange Group works with Impact AI, an organization that brings together stakeholders from private companies, public entities, research institutes and educational partners. It addresses two core objectives: the ethical and social challenges of AI, and innovative projects for tomorrow's world. Orange also participates in several forums and think tanks working along similar lines, including the Digital Society Forum and the Global Network Initiative (GNI). Furthermore, Orange recently became the first company to be awarded the GEEIS-IA label, recognizing its commitment to non-discriminatory HR processes and to promoting diversity in AI professions.

AI can be the future but must be ethical

There is a significant trust factor inherent in the future of AI. Citizens are trusting governments to use AI responsibly, and consumers are trusting corporations to use AI ethically. If we do not start on a path that builds AI ethically, the problems will compound further along the journey: if data is biased to begin with, errors multiply down the road as algorithms go on learning from their own initially flawed outputs. Companies and governments must take an ethical approach to AI if the world is to reap its benefits fully.
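To see how such compounding can happen, consider a deliberately simplified Python sketch (the scenario and every number in it are invented for illustration, not drawn from any real deployment). Two districts have identical true incident rates, but the historical records start with a tiny skew. A system that always sends its single patrol wherever past records are highest, and only logs incidents where it patrols, turns that tiny skew into a runaway gap:

import random

random.seed(42)

TRUE_RATE = 0.5                 # both districts are truly identical
records = {"A": 100, "B": 101}  # a tiny initial bias in the data

for day in range(1000):
    # The "model": patrol wherever past data says incidents happen.
    target = max(records, key=records.get)
    # Incidents are only recorded where the patrol actually is.
    if random.random() < TRUE_RATE:
        records[target] += 1

share_b = records["B"] / sum(records.values())
print(f"District B now holds {share_b:.0%} of all records")
# B's share keeps climbing with every cycle, even though
# the two districts never differed in reality.

Because district B starts just one record ahead, it receives every patrol, and its share of the records climbs from roughly 50% toward 86% over 1,000 simulated days while the underlying reality never changes. The same dynamic threatens any system whose own decisions generate its next round of training data.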

Orange recently created a data and AI ethics council, comprising 11 independent experts, which will develop ethics guidelines for the responsible use of data and AI at Orange and monitor their implementation within the Group's entities. Orange is also a founder of an international charter for inclusive AI, through the Arborus Endowment Fund.

Steve Harris

I’ve been writing about technology for around 15 years and today focus mainly on all things telecoms: next-generation networks, mobile, cloud computing and plenty more. For Futurity Media I am based in the Asia-Pacific region and keep a close eye on all things tech happening in that exciting part of the world.