Singapore is on the cusp of a national transformation. Recently named the top digital society in the world, it has seen strong public-private efforts to nurture innovation and build capabilities in frontier technologies, especially artificial intelligence (AI), with a recent SG$500 million investment and plans to roll out AI solutions to every business by 2020.
Yet, synonymous with Singapore’s identity, apart from its digital-first and Smart Nation pursuits, is its multicultural heritage and diversity. With recent news of discriminatory algorithms making headlines, Singapore needs to double down on ensuring the harmony and cultural fabric it has built up over the years are not compromised. Incidents such as facial recognition software used in law enforcement that is accurate only for a particular race or gender must be safeguarded against. As strides are made toward harnessing AI across business and society, it is imperative that the technology and its effects are managed responsibly, by ensuring it is used ethically.
Prioritising ethical AI
At the March launch of Salesforce Research Asia in Singapore, Salesforce’s latest AI hub focused on producing cutting-edge AI research, Minister S Iswaran raised a pertinent point: the management of ethics and the public interest is a crucial dimension of how AI is governed and applied. AI is forcing conversations around corporate trust and ethics because it holds up a mirror to human behaviour: it amplifies the preconceptions and biases in the data sets that organisations collect and use to train their AI algorithms, and this can be harmful. For example, if a bank has historically given fewer loans to women or to people from certain minorities, those patterns will be present in its data set – making it easy for an AI algorithm to conclude that women or people from minority groups are greater credit risks and should therefore be denied loans.
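The loan example above can be made concrete. The sketch below is purely illustrative – the records, group names and the 0.8 warning threshold (the "four-fifths rule" commonly used in fairness audits) are assumptions, not anything from a real bank – but it shows how a skewed approval history in a training set can be surfaced before a model is ever trained on it:

```python
# Hypothetical loan-decision records: (demographic group, approved?)
records = [
    ("group_a", True), ("group_a", True), ("group_a", False), ("group_a", True),
    ("group_b", True), ("group_b", False), ("group_b", False), ("group_b", False),
]

def approval_rates(records):
    """Compute the approval rate for each demographic group."""
    totals, approved = {}, {}
    for group, ok in records:
        totals[group] = totals.get(group, 0) + 1
        approved[group] = approved.get(group, 0) + (1 if ok else 0)
    return {g: approved[g] / totals[g] for g in totals}

def disparate_impact(rates, protected, reference):
    """Ratio of the protected group's approval rate to the reference
    group's. Values below 0.8 are commonly treated as a warning sign
    (the 'four-fifths rule')."""
    return rates[protected] / rates[reference]

rates = approval_rates(records)
ratio = disparate_impact(rates, "group_b", "group_a")
print(rates)            # group_a approved at 0.75, group_b at 0.25
print(round(ratio, 2))  # 0.33 – well below 0.8, so the data would be flagged
```

A model trained naively on such records would simply reproduce the disparity, which is why this kind of check belongs at the data-collection stage, not after deployment.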
Managing AI responsibly starts with aligning and incorporating AI within corporate ethics, prompting employees to create structures that promote responsible and transparent AI use. For conversations around AI ethics to be impactful, they should be reinforced throughout the organisation as well as among partners and community members. An important place to start is documenting data sources and assessing how people could be affected by AI applications at every stage of development. Some impacts of AI are bigger than others, of course: machine learning models that determine outcomes – such as which job applicants are chosen for interviews – deserve greater scrutiny than those that help Alexa answer questions about the weather.
For example, at a recent internal hackathon, our data scientists and engineers planned to train AI models to detect and analyse emotions. They discovered, however, that the training data was based on stock images of human faces that did not represent an accurate cross-section of ethnic diversity. This exemplifies the value of questioning AI projects at every stage to uncover problematic uses long before the technologies are released into the corporate world or to the public. Even simple errors in using data can create algorithms that generate sexist or racist outcomes. To begin addressing these biases, ethical frameworks should include examining results for values-based bias.
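The hackathon discovery above is essentially a representation audit. As a minimal sketch – the labels, target shares and 5% tolerance are all hypothetical, not the article's actual data – one way to flag under- or over-represented groups in a training set before training begins:

```python
from collections import Counter

# Hypothetical demographic labels attached to a face-image training set:
# 70% group_a, 20% group_b, 10% group_c.
labels = ["group_a"] * 70 + ["group_b"] * 20 + ["group_c"] * 10

def representation_gaps(labels, target_shares, tolerance=0.05):
    """Flag groups whose share of the data set deviates from the
    target share by more than the tolerance. Returns a dict of
    group -> (actual share, target share) for flagged groups."""
    counts = Counter(labels)
    total = sum(counts.values())
    gaps = {}
    for group, target in target_shares.items():
        share = counts.get(group, 0) / total
        if abs(share - target) > tolerance:
            gaps[group] = (share, target)
    return gaps

# Hypothetical target: roughly equal representation across three groups.
targets = {"group_a": 1 / 3, "group_b": 1 / 3, "group_c": 1 / 3}
print(representation_gaps(labels, targets))  # all three groups are flagged
```

A check like this is cheap to run on data-set metadata and would have caught the stock-image skew the hackathon team found, long before any model was trained.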
All in all, the interplay of AI, trust and culture must be recognised. AI needs to respect and account for racial, religious and linguistic diversity if the full benefits of the technology are to be reaped responsibly. The discussion around ethical AI ultimately needs to involve everyone – government, businesses and consumers. Only then can public trust be built, and the full potential of AI be realised as a competitive differentiator for businesses and a driver of further economic growth in Singapore.
The writer is Rob Newell, vice president, solution engineering APAC, Salesforce. The article first appeared in Marketing’s August print edition.