Instagram is working on a way to protect users from receiving unsolicited nude photos in their direct messages. Instagram parent company Meta confirmed to The Verge that the feature was in development after an app researcher posted an initial image of the tool.
Meta says that optional user controls, which are still in the early stages of development, will help protect people from nude photos and other spam.
The tech giant likened these controls to its “Hidden Words” feature, which allows users to automatically filter direct message requests that contain offensive content.
“We’re working closely with experts to ensure these new features preserve people’s privacy, while giving them control over the messages they receive,” said Liz Fernandez, a spokeswoman for Meta.
Meta says it will share more details about the new feature in the coming weeks as testing gets closer.
Have Meta's tools been effective in this situation?
A study by the Center for Countering Digital Hate, a British nonprofit organization, found that Instagram's tools failed to act on 90 percent of abusive image-based direct messages sent to high-profile women. Many received sexual images from men, and not even the “Hidden Words” feature was able to fully filter out slurs like “b*tch.”
Meanwhile, last year, The Pew Research Center found that 33% of women under the age of 35 had been sexually harassed online.
The work on Instagram’s new feature comes as cyber flashing, the practice of sending unsolicited sexual images to strangers, often women, online, could soon become a criminal offense in the UK if Parliament approves the Online Safety Bill.