Twitter has unveiled a "Responsible Machine Learning" initiative to assess "unintentional harms" in the algorithms it uses. In an official blog post, the social media giant said it will conduct in-depth analyses and studies to assess potential harms in its algorithms. In the coming months, it will provide a gender and racial bias analysis of its image cropping algorithm, a fairness assessment of its Home timeline recommendations across racial subgroups, as well as an analysis of content recommendations for different political ideologies across seven countries.
The initiative will consist of four main pillars: Twitter taking responsibility for its algorithmic decisions, equity and fairness of outcomes, transparency about its decisions and how the team arrived at them, as well as enabling agency and algorithmic choice. As part of the initiative, Twitter said it will build explainable machine learning solutions so users can better understand its algorithms, what informs them, and how they shape what users see on Twitter. Similarly, algorithmic choice will allow people to have more input and control in shaping what they want Twitter to be for them, the blog post added. Twitter said it is currently in the early stages of exploring this feature.
To carry out these inspections, Twitter will be forming a machine-learning, ethics, transparency and accountability (META) team. The team will comprise engineers, researchers, and data scientists collaborating across the company. It aims to identify the harms and help Twitter prioritise which issues to tackle first.
In the name of being transparent, Twitter will share its learnings and best practices after its various assessments. This is to improve the industry's collective understanding of machine-learning solutions, help the platform improve its approach, and hold it accountable. This sharing may come in the form of peer-reviewed research, data insights, high-level descriptions of Twitter's findings or approaches, and even some of its unsuccessful attempts to address these emerging challenges. "We'll continue to work closely with third party academic researchers to identify ways we can improve our work and encourage their feedback," Twitter said in its blog post.
Twitter's initiative comes shortly after a Twitter user pointed out that its image cropping algorithm tended to favour White people over Black people. In a tweet, the netizen placed pictures of Barack Obama and Mitch McConnell, a White politician. In the tweet preview, Twitter featured McConnell instead of Obama, no matter how the two pictures were positioned. When the netizen then inverted the colours of the pictures, Obama appeared first in the image preview.
Twitter is not the only company looking to be more inclusive with its practices. McDonald’s has also recently unveiled a new set of global brand standards, which is aimed to further a culture of physical and psychological safety for employees and customers through the prevention of violence, harassment and discrimination. The new brand standards prioritise actions in four areas: harassment, discrimination and retaliation prevention; workplace violence prevention; restaurant employee feedback; as well as health and safety. These standards, which will apply to 39,000 restaurants in over 100 countries, were informed by a cross-functional global team, reviews of global market practices and perspectives from across the McDonald’s system.