Abhishek Sharma
Guest Article

Ethical AI: Beyond transparency and fairness

The journey towards ethical AI is a shared responsibility, and everyone must play their part in shaping this technology’s future, says our guest author.

Artificial intelligence (AI) is a hot topic. It’s causing quite a stir and raising a few eyebrows. Some are calling for a halt to AI experiments, worried about existential threats. But there is another side to this story: AI’s incredible potential to transform industries and to help treat diseases like cancer.

Let’s clarify what AI is. It’s not about creating artificial brains in a lab. AI systems emulate how humans learn, reason, and act on real-world tasks. Sure, AI technologies like ChatGPT and Bard are dominating the headlines. But AI is everywhere, from our smartphones to our cars. And as AI permeates our lives, it brings challenges that include issues of transparency, fairness, and bias.

These challenges underline the importance of ethical and responsible development of AI systems. While we have made progress towards fair and explainable AI, it is critical to expand the discourse on AI ethics beyond these initial steps. Achieving explainability and reducing bias are crucial, yet they represent just one piece of the ethical AI puzzle. A comprehensive understanding of ethical AI practices requires a deeper approach, examining projects and implementations through several distinct lenses:

The people lens

Centralised solutions often stem from AI systems developed by a small group of individuals. Ensuring fair AI practices involves considering the legitimate moral interests of all stakeholders impacted, directly or indirectly. A mechanism for appeal is also called for in the widely accepted Santa Clara Principles on transparency and accountability in content moderation; these principles are just as relevant for AI solutions.

The design process should be democratised, and the circle of AI ethics expanded to a diverse array of impacted individuals. This requires fostering diverse teams to build and evaluate AI systems; a globally inclusive ethics board for solution assessment also helps. We need to ask ourselves: who is building these systems, who is using them, who owns the data, and how will the data be used?

The process lens

The AI design process needs to incorporate more than transparency and fairness. We need to put in place actionable remedies and ways for people to contest AI-based decisions, reducing the risks of AI. The process lens also requires us to assess the secondary and subsequent effects of the systems we build.

In addition, we need to build in guardrails to contain negative consequences, as well as procedures to swiftly make amends. We should also focus on human-machine synergies: augmenting human capabilities, not complete automation, should be the main goal of AI systems.

The technology lens

Effective creation of ethical AI systems relies heavily on the technology itself. Tech teams must engage with all stakeholders to comprehend the implications of the systems they design. They must grasp the full extent of transparency and potential biases in the datasets, and be prepared to manage feedback loops and other challenges. This often involves extensive documentation of datasets and models, detailing their intended use, potential pitfalls, and limitations, as sketched below.
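To make this concrete, here is a minimal sketch of such documentation as a structured record in Python. The ModelCard name, its fields, and the example model are all illustrative, loosely inspired by published “model cards” and “datasheets for datasets” proposals, not any standard API.

```python
from dataclasses import dataclass, field


@dataclass
class ModelCard:
    """Illustrative documentation record for a deployed model (hypothetical schema)."""
    name: str
    intended_use: str                                   # what the model is meant for
    out_of_scope_uses: list[str] = field(default_factory=list)
    training_data: str = ""                             # provenance of the dataset
    known_biases: list[str] = field(default_factory=list)
    limitations: list[str] = field(default_factory=list)


# Example: documenting a hypothetical resume-screening model.
card = ModelCard(
    name="resume-screener-v2",
    intended_use="Rank applications for human review, never auto-reject.",
    out_of_scope_uses=["Fully automated hiring decisions"],
    training_data="Historical applications, 2015-2022, one region only.",
    known_biases=["Under-represents career gaps common among caregivers"],
    limitations=["Not validated outside the original region"],
)
print(card.intended_use)
```

Even a lightweight record like this forces the team to state, up front, what the system should not be used for and where its data falls short.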

Teams must also understand the wider effects of AI, such as AI-amplified bias and misinformation. Feedback loops arise when tech teams fail to recognise the secondary effects of user behaviour consumed by a deployed system. They are commonly observed in recommendation systems that fill your social feed with more of what you click on, in effect reinforcing existing beliefs.
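A toy simulation illustrates how such a loop narrows what a user sees. The click model and numbers here are invented purely for illustration; real recommenders are far more complex, but the amplification dynamic is the same.

```python
import random

random.seed(0)

topics = ["politics", "sports", "cooking", "travel"]
# The user starts with only a mild preference for politics.
user_interest = {"politics": 0.4, "sports": 0.2, "cooking": 0.2, "travel": 0.2}
# The recommender starts with no preference at all.
feed_weights = {t: 1.0 for t in topics}

for step in range(1000):
    # Show a topic in proportion to past engagement.
    shown = random.choices(topics, weights=[feed_weights[t] for t in topics])[0]
    # The user clicks with probability given by their interest.
    if random.random() < user_interest[shown]:
        feed_weights[shown] += 1.0  # naive update: more of whatever was clicked

total = sum(feed_weights.values())
for t in topics:
    print(f"{t:10s} {feed_weights[t] / total:.0%} of the feed")
# A mild initial preference ends up dominating the feed:
# the loop amplifies the preference rather than merely reflecting it.
```

Running this, the politics share of the feed climbs well past the user’s actual 40% interest, which is the essence of an unexamined feedback loop.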

As we venture into a future increasingly influenced by AI, it’s crucial that developers and organisations remain cognizant of the effects these systems can have. This includes understanding how these systems change power dynamics and impact individuals’ lives. Often, the most affected individuals are the first to identify potential risks, which underscores the vital importance of participatory and democratic processes in AI design and implementation.

If AI is to be the ‘new electricity’, it must illuminate the world for everyone, not just a select few. If data is the ‘new oil’, it should generate shared prosperity, not conflict. As governments step in to regulate this rapidly evolving landscape, the industry must continue to foster dialogues and expand the conversation on ethical AI. The journey towards ethical AI is a shared responsibility, and everyone must play their part in shaping this technology’s future.

(Our guest author is Abhishek Sharma, data scientist, analytics senior manager at Merkle.)

Have news to share? Write to us at newsteam@afaqs.com