The Journey to Ethical Artificial Intelligence

Declan Foster • 2 September 2022

Somebody once quipped that ethics is like the company dishwasher: everyone is responsible for emptying it, and we all benefit from it, but the person who cares most about it will end up doing it! This is true of ethics in general, and perhaps even more so of the more specific domain of ethics in artificial intelligence.

Ethics is a system of moral principles that affects how people lead their lives and make decisions. Ethics can provide us with useful moral maps, or frameworks, that guide us through conversations on complex issues. Ethics is more than just 'being good'. It's about fairness, treating people and other entities with dignity and respect, and behaving in ways that don't harm others. The concern of AI ethics, according to the EU, "is to identify how AI can advance or raise concerns to the good life of individuals, whether in terms of quality of life, or human autonomy and freedom necessary for a democratic society".

I define AI as machines acting to mimic human cognition to solve problems. The most common components are Machine Learning (ML), Natural Language Processing (NLP), and Robotics. AI has made huge strides in recent years thanks to increased computing power, including GPUs, the exponential rise in the availability of big data, and the development of new algorithms. In 1997 the world was amazed when Deep Blue, an IBM supercomputer, defeated Garry Kasparov, then the world chess champion. Less than twenty years later, in 2016, AlphaGo, a program from Google DeepMind, defeated Lee Sedol, one of the world's strongest players of the game of Go. This game originated in China thousands of years ago and was considered a Holy Grail for AI because of the astronomical number of possible board positions. In recent years we have seen the phenomenon of large language models that can write articles, news reports, poetry and even computer code. The most prominent of these models are GPT-3, BERT and OPT-175B.

Parallel to the rise in AI's technical abilities has been society's concern with whether an AI will ever become sentient or self-aware. We have come a long way from ELIZA, the first chatbot, developed at MIT in the mid-1960s to emulate a psychotherapist, which fooled many people into believing they were talking to a human. MIT's Joseph Weizenbaum programmed ELIZA to respond to specific keywords in the typed input text. Earlier this year, a Google engineer, Blake Lemoine, claimed that Google's conversational AI tool LaMDA had become sentient.

All of the above leads to the natural conclusion that we need to focus more on AI's ethical implications in the coming years. In this article, I will consider AI ethics from four perspectives: the individual, the project or product, the tech industry, and society.

Individual

Whether you are a consumer using a product that relies on AI or someone developing these tools, we all have a role to play in ethical AI. For those working in technology, I suggest the trolley problem as a way to start the conversation about how they approach ethical dilemmas. A mainstay of philosophy lectures for decades, the trolley problem is a thought experiment used to explore ethical dilemmas. In the scenario, a runaway trolley, or tram, is heading toward five people. You can pull a lever to divert the trolley onto a separate track, saving the five people but killing one person on the second track. How would you respond? Is doing nothing, or refusing to make a decision, the same as taking action? There are obvious parallels to the ethical choices the AI industry will face when self-driving cars become commonplace.

Let's consider for a moment the example of a machine learning engineer who is gathering and building a data set as input to a machine learning model. They are trying to determine how to abstract from reality, and 'abstraction from reality is never neutral, and the abstraction itself is not reality; it is a representation'1. They need to consider whether the data set represents the community it is being built for. One simple starting point is sketched below.
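As a minimal illustration, the sketch below compares each group's share of a training set against its share of the community the model will serve. The column name, age bands, census figures and tolerance are all hypothetical, chosen only to show the idea.

```python
# A minimal representativeness check: compare group shares in the data set
# against the population the model will serve. All figures are hypothetical.
from collections import Counter

def group_shares(rows, key):
    """Return each group's share of the data set, e.g. {'18-34': 0.7, ...}."""
    counts = Counter(row[key] for row in rows)
    total = sum(counts.values())
    return {group: n / total for group, n in counts.items()}

def flag_underrepresented(dataset_shares, population_shares, tolerance=0.05):
    """Flag groups whose share of the data trails their share of the community."""
    return [
        group
        for group, expected in population_shares.items()
        if dataset_shares.get(group, 0.0) < expected - tolerance
    ]

# Hypothetical training rows and census shares, for illustration only.
training_rows = [{"age_band": "18-34"}] * 700 + [{"age_band": "65+"}] * 300
census_shares = {"18-34": 0.30, "65+": 0.40}

print(flag_underrepresented(group_shares(training_rows, "age_band"), census_shares))
# ['65+']
```

A check like this is only a first step; representativeness in one column says nothing about bias in the labels or in how the data was collected.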


If you are an individual working on AI products, you might consider using the Data Ethics Canvas from the Open Data Institute. This is a tool you can download and use for free. It provides considerations and questions for any data project under the headings of data, impact, engagement and process.

Project or Product

Those working in the tech sector also need to consider the ethics of the project they are working on or the product being built. One of my favourite books of recent years is Nir Eyal's Hooked2. In this book, he presents his Manipulation Matrix, which allows 'entrepreneurs, employees and investors to answer the question: should I hook my users on this product?' Although aimed at a general business audience, it has obvious applications for those working in AI.


The Manipulation Matrix is a two-by-two grid. On the X axis, we consider whether the maker would use the product themselves; on the Y axis, we consider whether the product improves users' lives. Creators of products then fall into one of four categories.


Facilitator (I think this is what most of us aspire to)

Something that you would use yourself and that improves users' lives, e.g. an online education tool, an app that helps you save money, a fitness tracker, or a device for measuring blood sugar or blood pressure.


Entertainer

If you use the product but can't claim it improves people's lives, it is probably just entertainment. And there certainly is a place for that; think Angry Birds or even Netflix.


Peddler

If you wouldn't use the product yourself but believe it improves users' lives, you are a peddler. Think advertising, or even some fitness apps, given how many are available.


Dealer

Interestingly, in the Netflix documentary The Social Dilemma, some Big Tech executives freely admitted that they don't let their own children use social media. If you wouldn't use your product, or wouldn't want your children to use it, and it doesn't improve people's lives, then you may be in this category. Think of casinos, big tobacco or online gambling. Do you think they fit here?


Those working in AI or Big Tech should consider where their project or product falls on the matrix (a simple code sketch follows below). Does it make them feel proud? Does it encourage positive or negative behaviours?
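As a rough illustration only (not anything from Eyal's book), the matrix can be read as a lookup over the two questions:

```python
# A minimal sketch of the Manipulation Matrix: two yes/no answers place a
# product in one of the four quadrants described above.
def manipulation_matrix(maker_would_use: bool, improves_lives: bool) -> str:
    """Classify a product by the two questions the matrix asks."""
    if maker_would_use and improves_lives:
        return "Facilitator"  # you would use it, and it improves users' lives
    if maker_would_use:
        return "Entertainer"  # you would use it, but it is 'just' entertainment
    if improves_lives:
        return "Peddler"      # it helps users, but you wouldn't use it yourself
    return "Dealer"           # you wouldn't use it, and it doesn't help users

print(manipulation_matrix(maker_would_use=True, improves_lives=True))    # Facilitator
print(manipulation_matrix(maker_would_use=False, improves_lives=False))  # Dealer
```

Writing it out this way shows how little information the matrix needs: two honest answers are enough to tell you which quadrant you occupy.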


The Tech Industry

Big Tech often takes the approach that it is easier to beg forgiveness than to ask permission. A great example of this was in 2010, when Facebook controversially changed the privacy settings for its 350 million users. Mark Zuckerberg once said, "we decided that these would be the social norms now, and we just went for it."


Can we afford to leave AI ethics up to Big Tech? I think not. We all have a duty to ensure that there are ethics in artificial intelligence; this includes citizens, legislators, organisations and employees working in Big Tech.


AI will have a massive impact on society in the coming years. It is up to all of us to ensure that it is a positive impact.

In Cathy O'Neil's Weapons of Math Destruction3, she discusses the impact of algorithms on society, e.g., those used for recruitment or loan applications. There is an inherent unfairness in how these algorithms are applied: "The privileged, we'll see time and again, are processed by people, the masses by machines". Consider the example of applying for an entry-level job at Walmart in the US, where an AI algorithm will most likely screen you. Compare this to the experience of a senior executive headhunted for a Wall Street position, who will most likely get the personal, human touch. O'Neil describes the three components of a weapon of math destruction (WMD):

• Opacity

• Scale

• Damage


Opacity

Even if a person knows they are being modelled, do they know how the model works or how it will be applied? Sometimes companies claim the model is their secret sauce or intellectual property, or they hide behind the 'black box' effect.


Scale

Does the WMD affect one use case or population, or does it have the potential to scale exponentially and impact all of society?


Damage

Compare the impact of an algorithm that suggests an item to buy (Amazon) or a program to watch (Netflix) with an algorithm that determines whether you get a job, qualify for a loan, or even the length of your prison sentence. A rough sketch of how these three traits combine follows below.
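To make the combination concrete, here is a hypothetical screening sketch (my own illustration, not a rubric from O'Neil's book); the example systems and their ratings are invented:

```python
# A hypothetical screen based on O'Neil's three traits; the systems and
# their ratings below are invented for illustration.
from dataclasses import dataclass

@dataclass
class AlgorithmProfile:
    name: str
    opaque: bool       # affected people cannot see how the model works
    large_scale: bool  # scores whole populations, not a single use case
    damaging: bool     # decides jobs, loans or sentences, not recommendations

    def looks_like_wmd(self) -> bool:
        """It is the combination of all three traits that marks out a WMD."""
        return self.opaque and self.large_scale and self.damaging

cv_screener = AlgorithmProfile("CV screener", opaque=True, large_scale=True, damaging=True)
watch_next = AlgorithmProfile("watch-next recommender", opaque=True, large_scale=True, damaging=False)
print(cv_screener.looks_like_wmd())  # True
print(watch_next.looks_like_wmd())   # False
```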


At a broader level, some argue that the business models of some of Big Tech's biggest players themselves need ethical scrutiny.

In her book The Age of Surveillance Capitalism4, Shoshana Zuboff discusses what she calls "behavioural futures markets", where surveillance capitalists sell certainty to their business customers, e.g. through Google AdWords. She describes surveillance capitalism as "a new economic order that claims human experience as free raw material for hidden commercial practices of extraction, prediction and sales". Zuboff sees this as a significant threat to modern society, one she compares to industrial capitalism's impact on the natural world throughout the 19th and 20th centuries.

How the World is Responding

In November 2021, the 193 member states of UNESCO adopted the first-ever global agreement on the ethics of artificial intelligence. The European Commission convened its High-Level Expert Group on AI, which published its Ethics Guidelines for Trustworthy AI in 2019. Many tech companies, facing both external pressure and internal pressure from employees, have begun a system of self-regulation and formed internal AI ethics initiatives. And some big players have drawn a line in the sand about what they will and will not do. For example, Google has said it will not 'sell facial recognition services to governments'.

Case Study

Before I wrap up this article, let's look at a brief case study.


Consider Clearview AI, a facial recognition company that built its database by trawling social media sites and obtaining pictures of people without their consent. Facial recognition systems in general have led to wrongful arrests. Yet recently, Ukraine has been using Clearview AI to vet people of interest at checkpoints and to identify the bodies of dead Russian soldiers so their families can be informed. So, overall, is the company behaving ethically? Does the context matter? Is it an ethical product?

The importance of ethics in AI will only increase in the coming years as AI becomes even more pervasive. I believe we all have a role to play in ensuring that AI is a force for good in society. What role will you play?


References
1 Coeckelbergh, Mark. AI Ethics. Cambridge, Massachusetts: The MIT Press, 2020.

2 Eyal, Nir, and Ryan Hoover. Hooked: How to Build Habit-Forming Products. Penguin, 2014.

3 O'Neil, Cathy. Weapons of Math Destruction: How Big Data Increases Inequality and Threatens Democracy. Broadway Books, 2017.

4 Zuboff, Shoshana. The Age of Surveillance Capitalism: The Fight for a Human Future at the New Frontier of Power. New York: PublicAffairs, 2019.
