7 Things to consider when it comes to AI and ethics

We live in a time of great change, fuelled by rapid growth and unprecedented access to technology. Even Artificial Intelligence (AI), which seemed like science fiction just a couple of decades ago, is now accessible in the palm of your hand through applications like ChatGPT for content creation, or Midjourney for image generation.

With all these new capabilities made possible by generative AI come ethical issues and responsibilities we should consider when using these tools. More specifically, AI is fuelled by data – and as its use expands across multiple industries, the question of ethical AI becomes more and more prominent. Here are seven things to think about when it comes to ethics in the AI era.

1. AI can have inherent bias within it

From creating new marketing initiatives, products, and services through to driving IoT decision-making, there’s no limit to how we use AI. While it’s intended to create customised value for individuals, it’s important to realise that AI can have built-in biases. This bias starts with how data is selected when building an analytical model, the factors used in decision engineering, and any other form of advanced analytics. These inherent biases can have a significant impact on individuals – for example, they can influence an individual’s credit profile and affordability assessment, which then affects their buying power.

2. Legislation should be leveraged to define and govern AI

South Africa’s Protection of Personal Information Act (POPIA) is a first key step in protecting personal information and how it is used. However, much more needs to be done to define and govern AI. The EU has taken a strong stand in exactly this area with the EU AI Act, recently approved by the European Parliament. The act aims to protect European consumers from potentially dangerous applications of AI by requiring the analysis and classification of AI systems according to the risk they pose to users. Its passing means there’s now formal recognition that AI can be used to sway decisions, which can lead to outcomes like discrimination.

3. Embedding AI ethics within organisations is challenging

While most companies intend to use AI ethically, putting this into practice can be a challenge. One example is the 2021 Google case involving the termination of Timnit Gebru from her role as co-lead of the Ethical AI team. Gebru, along with co-lead Margaret Mitchell, was working on a paper about the dangers of large language models when a department at Google asked that the paper be retracted. When Gebru pushed back, she was let go. This example shows that while organisations may have good intentions when it comes to the use of AI, the practical steps involved can be very challenging.

4. Diversity is key to AI ethics

Varsha Ramesar, Head of Data Management and Commercialisation at Tesserai, a local business intelligence and analytics company, says that the more diverse the members of a team are, the more likely it is that AI bias will be reduced. “We need more rigour in the creation of our training data sets,” she says. “We need to ask the hard questions, such as: is gender being used as a predictor or a bias? Is the dataset diverse enough? The more we strive towards a culture of asking the hard questions, the closer we move to achieving ethical AI.”
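The “hard questions” above can be made concrete with a quick check on a training dataset. The sketch below, using entirely hypothetical loan-application records, shows two simple diagnostics a team might run: how well each group is represented in the data, and whether outcome rates differ sharply between groups – a gap that would warrant further investigation before the data is used to train a model.

```python
# A minimal sketch with hypothetical data: checking group representation
# and per-group outcome rates in a training dataset.
from collections import Counter

# Hypothetical training records: (gender, loan_approved)
records = [
    ("F", True), ("F", False), ("F", False), ("F", False),
    ("M", True), ("M", True), ("M", True), ("M", False),
    ("M", True), ("M", False), ("M", True), ("M", True),
]

# Representation: is the dataset diverse enough?
counts = Counter(gender for gender, _ in records)
print("Representation:", dict(counts))

# Outcome rates per group: a large gap is a signal to investigate bias.
for group in counts:
    outcomes = [approved for gender, approved in records if gender == group]
    rate = sum(outcomes) / len(outcomes)
    print(f"Approval rate for {group}: {rate:.0%}")
```

On this toy dataset, one group is both under-represented and approved far less often – exactly the kind of imbalance that diverse teams, rigorous dataset reviews, and community forums can catch early.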

5. Education and socialisation are important

Premlin Pillay, Group Executive of Strategy, Data and Analytics at Mettus, says that education and socialisation are key to advancing the idea of AI ethics within an organisation. “It is important that companies working with analytics educate consumers to know their rights when it comes to their personal data,” he says. “We’re an ethical company, and believe it is our duty to educate South Africans about their data and how it is used.”

6. Ethics should be a skill within teams

Building analytical models is a team sport – and the team should not only have technical skills in the form of data scientists, but also other skills, including a good grasp of ethics. Given that these ethics skills are scarce globally, this role may need to be taken on by professors and PhD students from academic institutions.

7. AI ethics apply to small enterprises too

While we may think AI is only used in large multinational companies, it’s now becoming accessible to a far wider range of businesses. For a few thousand rand a month, any business can sign up and gain access to a range of powerful AI tools. However, these tools are only as powerful and ethical as the people who designed them. “Smaller teams may not have teams that are large enough to test the diversity of data – but this is where online forums and community organisations can help to steer the direction,” says Varsha.

Advancements in AI aren’t slowing down, and we need to be deliberate about embracing them responsibly. More specifically, we need to think about how we apply ethics to the AI space, and what guardrails are needed to do so. “While AI may seem like a technical solution, issues of ethical AI are actually more about people and culture than they are about technology,” says Varsha. “Above all, it’s about creating a culture of ethical AI and doing the right thing, even when no one is watching.”