Ethics and AI: Where are we when it comes to bias and responsibility?


Companies and consumers alike are enjoying the advent of artificial intelligence (AI), relishing the growing convenience it brings, from streamlined workflow processes and voice assistants to, in some markets, self-driving cars. AI's rising popularity, however, has put the ethical use of the technology in the spotlight, an area of growing concern among consumers.

Quoting the European Commission, a new study by the Capgemini Research Institute (CRI) – “The ethical use of AI is becoming fundamental to winning people’s trust” – defined ethical AI as being ethical in its purpose, design, development and use. It has to be transparent, explainable, interpretable, fair and auditable.

About 62% of respondents in the study said they would place a higher trust in a company whose AI interactions they perceived as ethical. Meanwhile, 61% said they would share positive experiences with friends and family, while 59% said they would have higher loyalty to the company.

On the other hand, when consumers’ AI interactions lead to ethical issues, they threaten companies’ reputations and bottom lines. About 41% of respondents said they would complain in the event an AI interaction resulted in ethical issues, while 36% would demand an explanation and 34% would stop interacting with the company.

In total, the study surveyed 1,580 executives from large organisations across 10 countries, and more than 4,400 consumers across six countries, including China, India, the US and the UK.

Deploying AI without anchoring it to core principles and robust requirements may expose a business to significant risks spanning data privacy, employment and HR, and health and safety issues, Joon Seong Lee, managing director – applied intelligence, ASEAN, Accenture, told Marketing. The potential fines, sanctions and loss of brand reputation and trust can be detrimental to businesses.

These issues underscore the importance of what Accenture has termed "Responsible AI", which goes beyond the ethical use of the new technology. Responsible AI, in Accenture's terms, is the practice of using AI with good intentions to empower employees and businesses, and fairly impact customers and society, allowing companies to engender trust and scale AI with confidence.

“AI has the potential to drive significant business, economic and social transformation. But while unlocking value, AI can introduce new risks and challenges. Unintended, but harmful consequences can lead to a consumer backlash and legal problems,” Lee explained.

Industry players take action

Globally, technology giants such as Microsoft, Google and Facebook have taken a serious look at the ethical use of AI in their systems. Microsoft has launched several internal working groups focused on AI ethics, including fairness, accountability, transparency and ethics in AI. It also has an advisory board known as "AI ethics and effects in engineering and research".

As for Google, it rolled out an ethical charter, its AI Principles, in June 2018 to guide the responsible development and use of AI in its research and products. It also formed the Advanced Technology External Advisory Council (ATEAC) on 26 March 2019 to oversee some of Google's most complex challenges arising under its AI principles.

However, ATEAC was shut down on 4 April 2019, less than two weeks after its launch, following a spat with the LGBTQ community. Google faced backlash for appointing Kay Coles James, president of think-tank The Heritage Foundation, to the council because of her anti-LGBTQ and anti-immigrant comments.

Even so, Kent Walker, senior VP of global affairs, said in a blog post that Google will continue to be responsible in its work on the important issues that AI raises and will find different ways of getting outside opinions on those topics.

Meanwhile, Facebook adheres to a set of ethical principles and operational guidelines when conducting research, evaluating data uses, building its products and engaging in business practices. These include Facebook's research review principles and its privacy cross-functional review process.

The company also participates in various multistakeholder consortia working on issues of algorithmic fairness, transparency and accountability, including the Partnership on AI and the "AI 4 People" initiative. It also uses an internal tool known as "Fairness Flow" to measure its algorithms' fairness across a growing number of parameters, helping to identify potential biases in AI for areas such as its job recommendation algorithms.

Speaking to Marketing, Facebook’s spokesperson said that there are myths that AI is self-sufficient and can run entirely on its own. But in actuality, many times, human oversight is necessary as “AI today is about as smart as a cat or a dog”, the spokesperson explained.

The spokesperson added that companies still have a long way to go when it comes to AI and they are just scratching the surface. As the company continues to leverage AI to help consumers perform daily tasks and make decisions, Facebook’s spokesperson said it is essential that the company prevents bias and discrimination from becoming embedded.

“Biases can be hidden in the data, and if the systems are built on that data, that can lead to actions and recommendations that replicate those biases,” she added. In order to be fair and ethical, it is therefore important to have various people involved to help define the principles because AI learns from individuals who are working to develop it.
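Facebook has not published the internals of Fairness Flow, but the underlying idea of checking a model's outputs across demographic groups can be illustrated with a generic sketch. The minimal Python example below (with invented data, assumed group labels and an assumed 0.1 tolerance; it is not Facebook's tool) computes selection rates per group and flags a large gap for review.

```python
# Illustrative sketch only: a minimal demographic-parity check, not Fairness Flow.
# The group labels, example data and 0.1 tolerance are assumptions for illustration.
from collections import defaultdict

def selection_rates(predictions, groups):
    """Return the share of positive predictions for each demographic group."""
    totals, positives = defaultdict(int), defaultdict(int)
    for pred, group in zip(predictions, groups):
        totals[group] += 1
        positives[group] += int(pred == 1)
    return {g: positives[g] / totals[g] for g in totals}

def demographic_parity_gap(predictions, groups):
    """Largest difference in selection rate between any two groups."""
    rates = selection_rates(predictions, groups)
    return max(rates.values()) - min(rates.values())

# Example: a hypothetical job-recommendation model that surfaces one group far more often
preds  = [1, 1, 1, 0, 1, 0, 0, 0, 1, 0]
groups = ["m", "m", "m", "m", "m", "f", "f", "f", "f", "f"]

gap = demographic_parity_gap(preds, groups)
print(f"Selection-rate gap: {gap:.2f}")  # 0.60 for this toy data
if gap > 0.1:  # assumed tolerance
    print("Potential bias: review the training data and features.")
```

A gap this large would prompt exactly the kind of review the spokesperson describes: tracing whether the disparity stems from biases hidden in the underlying data.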

Preventing algorithmic bias

While there is no single solution to solving algorithmic bias, a great starting point is diversity, said Emil Asmussen, creative director at VIRTUE, VICE Media’s creative agency.

According to him, a diverse team will be much more likely to detect any racial, ageist or sexist biases and change them before they spread.

In March this year, he and his team developed the world’s first genderless AI voice assistant named Q.

According to VICE, technology companies have concluded that users prefer a female voice in certain scenarios and a male voice in others, believing this makes people more comfortable adopting and using the technology. The female voice is used in more service-oriented roles such as Alexa or Siri, while the male voice is used in more authoritative roles such as banking and insurance apps.

Asmussen said before the launch of Q, VIRTUE was very aware the team working on it had to be diverse in multiple ways. As such, VIRTUE made it a point to involve individuals of all types of gender, various professions and areas of research to ensure it tackled the issue of ethical AI from multiple angles, he explained.

“The fact that all major voice assistants are still female by default says a lot about the current state of gender roles in AI. The fact that when people are asked to name a woman in tech, the majority answer ‘Alexa’, says a lot about the current gender issues in tech,” he added.

In addition to gender issues, he also identified ageism and racism as potential pitfalls for the future of voice assistants. He explained that as voice becomes more widespread, more individuals will start using them and it is crucial for developers to ensure all types of individuals are recognised and included.

Although diversity has a part to play in reducing the possibility of algorithmic bias, it is ultimately the moral beliefs of the developer that shape what biases end up in an algorithm.

As such, the company that employs the developer becomes responsible, Michel de Rijk, S4 Capital’s APAC CEO, told Marketing.

“In recent times, there has been a movement to create a programmer’s oath, similar to a Hippocratic oath. The concept of creating a developer’s equivalent of ‘do no harm’ is a great attempt to uphold ethical standards in software development and AI,” he explained.

He said this was an interesting method to decrease bias and “deserves more attention”.

Ultimately, consumers are responsible for entrusting these developers with a moral compass, he emphasised, adding that developers are well aware of this and are increasingly discussing ethics. If anything, AI makes the topic more relevant, pushing its importance to the fore once more, he explained.

While there are countless ways to use AI unethically, he said “deepfake” is currently the best example of the unethical use of AI. Deepfake is an AI-based technology used to produce or edit video content to present something that did not occur in reality. The term reportedly first came up when a Reddit user used deep learning technology to superimpose celebrities’ faces onto individuals in pornographic video clips. In late 2017, he launched the deepfakes subforum on Reddit to showcase the videos.

However, “deepfakes will be recognised as fake by other AI programmes”, he said, adding that AI is equally capable of battling the tools that cause harm.

Who should take responsibility?

A voice assistant being unable to recognise the accents of non-native English speakers is considered an ethical issue. However, a bigger issue concerning ethical AI that has been in the spotlight over recent years is driverless cars getting into automotive accidents.

Recently, Tesla was embroiled in another car accident when a woman driving a Tesla in San Francisco struck and killed a tourist and left his wife critically injured. According to the Associated Press, authorities are investigating whether Tesla’s autopilot, a partially self-driving system, was activated when the collision occurred.

Last March, the company’s autopilot feature also resulted in its Model X SUV slamming into a concrete highway lane divider and bursting into flames. Uber has not been spared either, pausing its self-driving operations in four US cities last year after a fatal crash involving one of its self-driving vehicles in Tempe, Arizona.

When it comes to who should take responsibility for such missteps in AI, de Rijk said the onus falls on companies. According to him, companies should, at the very least, show an interest in and a willingness to fix the mistakes caused by the software they helped to create. Where exactly the blame lies, however, is a complicated question, he said, and could differ case by case.

Using the Boeing 737 MAX crash in Ethiopia in March as an example, he said the plane’s sensor provided a false reading, causing it to nosedive despite the pilots following procedures to rectify the issue. In this case, Boeing accepted responsibility. Tesla’s and Uber’s self-driving functions also rely on sensors, just as Boeing’s aircraft do.

However, the difference between the self-driving collision cases and the Boeing crash is that the individual controlling the car is required to pay attention and must step in to correct a dangerous situation. The pilots intervened but were overruled by Boeing’s system; the Tesla and Uber drivers would not have been overruled, yet they still did not step in to take control of the situation, de Rijk explained.

Nonetheless, companies should set clear accountabilities and liabilities as part of the greater governance structure for AI implementation. The governance structure should address who bears the risk should the AI make mistakes that cause harm, in accordance with local regulatory frameworks and guidelines, Accenture’s Lee said. “It is not enough to do the technical work to ensure algorithms are fair. Organisations need to ensure the manner and purpose for which they’re being implemented has undergone the same scrutiny and rigour,” he said.

A key element of governance begins with qualitative questions such as “Did we consult the right groups when setting out to build this tool?” and “Are we using the right data for this solution?” Accenture’s Lee added that creating human-centric systems of governance, such as advisory boards and dissent channels, is essential to ensuring the right questions are raised early in the process, and that there are avenues to address issues that may arise later in the life cycle of these deployments.

Ethical accountability at work

While it is a “statistical impossibility” to have absolute “unbiased data” or an “unbiased algorithm”, Lee said companies can take meaningful steps to make AI systems more explainable and transparent, so as to mitigate any unintended bias. Companies should establish principles that address how AI is designed and deployed, and for what purpose, and help ensure the methodology behind each instance of this technology is thoughtful and transparent, Lee said.

For VIRTUE’s Asmussen, ensuring ethical AI accountability in companies is an ongoing process, and unforeseen issues will undoubtedly appear as the popularity of voice and AI spreads. While making sure the AI individuals are using is built on a diverse and inclusive data set marks a great start, the best companies can do is to stay highly aware of issues and ensure that the teams working on the products include all types of individuals. Meanwhile, S4 Capital’s de Rijk said having an ethics committee is a good start, and AI projects, or any other projects for that matter, can only get better with its approval.

(Read the full article in the August 2019 issue of Marketing magazine Singapore)
