By Edwin Colyer, Founder & Impact Lead
9 March 2023
AI tools like ChatGPT are just too useful. That’s why we need ethical practices at the heart of their development.
Within a month of launch, OpenAI’s new AI chatbot ChatGPT was firmly entrenched in popular culture worldwide. This friendly yet authoritative-sounding chat engine draws on a large language model to generate clear, coherent text in response to prompts. Through dialogue with the user, the engine can revise previous responses, challenge premises, even admit and apologise for its mistakes. It is engrossing and compelling.
The ChatGPT platform hit 1 million users in just five days and reached 100 million in two months. Forbes recently listed 14 future uses of ChatGPT in medicine and wellness, and Interesting Engineering provided a long list of ChatGPT hacks, from writing CVs to software coding, copywriting to text summarisation.
Be in no doubt: this perky little chatbot (more specifically, the AI system behind it) will seduce you. Its guile? It is just so incredibly useful, and the current mass-market testing is only scratching the surface of its utility. More innovative uses of ChatGPT and large language models are yet to come, perhaps in entirely unexpected ways. After all, the technology will only improve over time, and similar systems are set to follow (Google has its own competitor product, Bard, although its launch didn’t get off to a great start).
Ethics at the heart of AI
But alongside the hype, you’ll find swathes of articles expressing concern about the ethics, which boils down to this: we can develop and deploy this AI for many excellent purposes, but should we? As Reid Blackman makes clear in his must-read book Ethical Machines, ethics is ultimately about right and wrong. Opinions on right, wrong and morals may shift, but this doesn’t change the facts, Blackman says. Yes, entire societies and global economies can be unethical: slavery was just as wrong in 1723 as in 2023, irrespective of national policy or public opinion.
Ethics needs to be at the very heart of all AI development and deployment. With regard to deployment, it is essential to understand and anticipate the impact of an AI system – and to mitigate or prevent potential harms. How are individuals, communities, economies or the environment affected by these decisions? Are decisions informed by AI just and fair? Do they perpetuate bias and discrimination? What unintended consequences arise from the implementation of an AI system?
This last question highlights why the ethics legwork must really happen at the research and development stage. Ethics needs to be embedded within the systems and decisions that build the AI.
Just look at the A-Level grades fiasco in the UK and you’ll understand why. The grade biases and outliers generated by the examination prediction algorithm during the 2020 Covid crisis were certainly unintended, but the realisation came far too late. Politicians had to perform a humiliating U-turn and eventually sided with the students, awarding the higher of the algorithm-predicted and teacher-predicted grades.
Wired magazine ran an article in 2022 that doesn’t mince its words on where responsibility for AI ethics lies. Author Rachel Botsman writes: ‘“Unintended” suggests consequences we simply can’t imagine, no matter how hard we try… [It] distances entrepreneurs and investors from responsibility for harmful consequences they did not intend. I like the term “unconsidered consequences,” because it puts the responsibility for negative outcomes squarely in the hands of investors and entrepreneurs.’
Address the practice gap
Sadly, most AI developers – often fast-moving start-ups hoping to corner a market and attract investors – lack the specialist knowledge, skills and bandwidth to embed good ethics practices within their operations. This is known as the ethics practice gap and has been documented in academic research and more qualitative, localised studies such as this snapshot of AI company perspectives in Greater Manchester, UK.
There’s plenty of help and support out there for AI developers and users. A recent study analysed over 70 toolkits against 11 core principles of AI ethics. Some toolkits focus on specific areas, others cover almost all of the principles, but each of them helps people in the AI sector make a good start.
Is the real problem a lack of desire? Or a lack of business incentive? The sector is entirely self-regulated. Reid Blackman makes it patently clear that you ignore ethics at your peril; ethical practice is the key to managing the huge risk of harms and reputational damage that comes with AI. Even Mira Murati, the OpenAI CTO behind ChatGPT, acknowledges that AI should be regulated (her interview with Time magazine is essential reading). But whilst we await legislation such as the EU AI Act and national regulation, the gloves are off in the fierce competition to win hearts, minds and AI market share. The Silicon Valley philosophy of ‘move fast and break things’ lives on!
Critical thinking
Unfortunately, this puts most of the onus on users, ethics champions and investigative journalists to poke around, analyse the facts and form opinions. Well-researched stories are welcome, such as the shocking piece by Time on the outsourced Kenyan workers who were paid less than $2 per hour to support ChatGPT development. These workers were subjected to all kinds of toxic text – hate speech, sexual abuse, violence – as they labelled data to train an AI-powered safety system so ChatGPT would keep its users safe.
On the surface, this looks like excellent ethics in action, rigorously addressing the unintended consequence of unwittingly exposing users to harmful content. Yet the article highlights several areas for ethical concern.
First is the issue of financial exploitation. The article suggests that many of the junior members of the data labelling team might have earned wages close to the minimum wage of a Nairobi receptionist. Was OpenAI exploiting people’s hardship or lack of privilege to get them to do things they’d not normally choose to do for such limited financial reward?
Second, this outsourcing by OpenAI (and many other AI companies) has a distinct whiff of neo-colonialism. It makes business sense to outsource many tasks to suppliers in countries where labour rates are low. Yet in January 2023, OpenAI received an additional $10 billion investment from Microsoft. Even with the rosiest view of trickle-down economics, will any of that reach Kenya in a meaningful way that supports economic and skills development of its citizens, including those workers who suffered during that data labelling process? Just for reference: $10 billion is approximately one-tenth of Kenya’s GDP. According to an MIT Technology Review series last year, the AI industry is repeating colonial history.
Finally, there’s the serious issue around inclusivity and diversity. Many documents on high-level principles for ethical AI emphasise the importance of transparency and diverse involvement in AI development. However, the aforementioned analysis of ethical AI toolkits revealed that diversity was the least-considered practice.
Good ethical practice requires diversity within development teams, but also inclusive involvement from across a business and from outside. It is important that development and deployment processes are scrutinised by a wide range of people with different lived and learned experiences. We need people ready to probe, critique and hold others to account. This is how you best ensure that no consequences are unconsidered.
We need citizens and users of AI to start asking difficult questions, and we need AI companies to be transparent and forthcoming in their answers. For example, to what extent does diversity play a part in OpenAI? Is the large language model adequately informed and scrutinised by diverse communities? How well is ChatGPT informed by seldom-heard and vulnerable communities, who may not create much content, but who may be disproportionately impacted by decisions informed by chatbot output?
These are difficult questions to answer, even for experts; it is unfair that end users have little to go on except a company’s word, the occasional journalistic investigation, and the court of public opinion on social media.
Until ethics is embraced, embedded and exposed for all to see, utility will win and we will turn a blind eye to current levels of ignorance in the AI tech community. Before we know it, we will transform into an AI-dependent society. We won’t know the extent of our dubious, ignorant and harmful practices until it is far too late.