A new analysis published by Twitter has confirmed that the company’s automatic photo-cropping algorithm discriminates by ethnicity and gender. Twitter’s researchers found that when an image featuring a Black man and a white woman is uploaded, the algorithm chooses to show the woman 64% of the time and the man 36%.
In comparisons of men and women overall, there was an 8% difference in favor of women, and the algorithm showed a 4% overall preference for displaying images of white people over Black people. In response, the social network said it would remove the feature and replace it with new tools that let users see a “real preview” of images added to tweets, free of any racial or gender bias.
Discrimination based on gender and skin color
The company was quick to acknowledge the problem. “One of our conclusions is that not everything that is uploaded to Twitter is easily identifiable for an algorithm, and in this case, how to crop an image is a decision that is better for people to make,” wrote Rumman Chowdhury, Twitter’s director of software engineering, in a blog post when the controversy broke out.
The researchers also tested for a ‘male gaze’ effect, checking whether the algorithm tended to focus on particular parts of women’s bodies, and found no evidence of such bias. When applied to technologies such as facial recognition, however, the consequences of algorithmic bias can go far beyond an unfairly cropped photo.
Experts note that algorithms have been shown to depict people with darker skin tones as more violent and closer to animals, echoing old racist conventions. This is very likely to have a direct effect on racialized people when such systems are used to detect supposedly abnormal or dangerous situations, as is already the case in many parts of Europe.
How does the Twitter algorithm work?
Until recently, images posted to Twitter were automatically cropped using an algorithm programmed to focus on “salience,” a measure of how likely the human eye is to be drawn to a particular part of an image.
High-saliency areas of an image typically include people, text, numbers, objects, and high-contrast backgrounds. However, a machine learning algorithm like the one used by Twitter is only as unbiased as the data it is trained and evaluated on.
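To make the idea concrete, here is a minimal sketch of saliency-based cropping. This is not Twitter’s actual model (which was a trained neural network); it is an illustrative toy that approximates saliency by local contrast and slides a fixed-size crop window to the most salient region. The function names and the contrast heuristic are assumptions for illustration only.

```python
# Toy illustration of saliency-based cropping (NOT Twitter's real model).
# Saliency is approximated here as local contrast: the absolute difference
# between each pixel and its right/bottom neighbours.

def saliency_map(img):
    """Return a per-pixel saliency score for a grayscale image (list of rows)."""
    h, w = len(img), len(img[0])
    sal = [[0.0] * w for _ in range(h)]
    for y in range(h):
        for x in range(w):
            s = 0.0
            if x + 1 < w:
                s += abs(img[y][x] - img[y][x + 1])  # horizontal contrast
            if y + 1 < h:
                s += abs(img[y][x] - img[y + 1][x])  # vertical contrast
            sal[y][x] = s
    return sal

def crop_most_salient(img, crop_h, crop_w):
    """Slide a crop_h x crop_w window over the image and return the
    top-left (y, x) corner of the window with the highest total saliency."""
    sal = saliency_map(img)
    h, w = len(img), len(img[0])
    best, best_pos = -1.0, (0, 0)
    for y in range(h - crop_h + 1):
        for x in range(w - crop_w + 1):
            total = sum(sal[yy][xx]
                        for yy in range(y, y + crop_h)
                        for xx in range(x, x + crop_w))
            if total > best:
                best, best_pos = total, (y, x)
    return best_pos

# Example: a mostly uniform image with one high-contrast spot.
image = [[0] * 5 for _ in range(5)]
image[3][3] = 255  # a bright, high-contrast pixel
top, left = crop_most_salient(image, 3, 3)
# The chosen 3x3 crop window contains the high-contrast spot.
```

The bias question then reduces to what the saliency scorer rewards: a learned model trained on human eye-tracking data can pick up the skews of that data, which is how a crop that systematically favors certain faces can emerge.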
The problem with the algorithm that automatically crops Twitter photos was the subject of various debates over the past year, after Canadian PhD student Colin Madland noticed that the preview always chose to show him rather than his colleague, a Black man.
Madland’s tweet about this discovery went viral, prompting other users to post pictures featuring multiple people to see which one the algorithm would choose to display.