Mark, a father in the United States, says that Google’s artificial intelligence flagged photos of his sick son as child sexual abuse material (CSAM), after which the company closed his accounts and filed a report with the National Center for Missing & Exploited Children (NCMEC), leading to a police investigation against him.
The case came to light through a recent New York Times report, but it occurred in February 2021, when doctors’ offices were still closed due to covid-19. The father noticed swelling in his son’s genital area and, at a nurse’s request, sent images of the problem ahead of a video consultation.
Although the doctor prescribed an antibiotic to treat the infection, a couple of days later the father received a notification from Google that his account had been blocked for “harmful content” that was “a severe violation of Google’s policies and might be illegal.”
Google announced the launch of its content safety AI toolkit in 2018, with the goal of “proactively identifying never-before-seen child sexual abuse material that can be reviewed and, if confirmed as such, removed and reported as quickly as possible.”
The company has explained that it relies on specialists and “state-of-the-art technology” to match new images against a repository of known material in its system, so that when it finds CSAM it notifies the NCMEC and works with police agencies around the world.
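Google has not published implementation details, but the matching step it describes is, at its core, a hash lookup against a database of previously identified material. The sketch below is only an illustration of that general idea under stated assumptions: the names (KNOWN_HASHES, scan_upload) are hypothetical, and an exact SHA-256 digest stands in for the perceptual hashes that production systems use to tolerate resizing and re-encoding.

```python
import hashlib
from pathlib import Path

# Hypothetical repository of digests of already-identified images.
# (Placeholder value; real systems hold millions of perceptual hashes.)
KNOWN_HASHES: set[str] = {
    "3a7bd3e2360a3d29eea436fcfb7e44c735d117c42d1c1835420b6b9942dd4f1b",
}

def file_digest(path: Path) -> str:
    """Return the SHA-256 hex digest of a file's raw bytes."""
    h = hashlib.sha256()
    with path.open("rb") as f:
        for chunk in iter(lambda: f.read(8192), b""):
            h.update(chunk)
    return h.hexdigest()

def scan_upload(path: Path) -> bool:
    """Flag an uploaded image if its digest matches a known-bad hash."""
    return file_digest(path) in KNOWN_HASHES

if __name__ == "__main__":
    sample = Path("upload.jpg")
    if sample.exists():
        print("flagged" if scan_upload(sample) else "no match")
```

Never-before-seen images, such as the medical photos in this case, cannot match any stored hash; the 2018 toolkit described above instead relies on a classifier to evaluate new material, which is where a misjudgment like this one can arise.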
A Google spokesperson told the Times that the company only scans users’ images when they take “affirmative action” to do so, which apparently can include backing up their photos to Google Photos.
During 2021, Google sent 287,368 reports of possible child sexual abuse to NCMEC, according to its transparency report, while the organization alerted the authorities to about 4,260 possible victims, including Mark’s son.
The father tried to appeal Google’s decision, but the company denied his request, and he ended up losing access to his emails, contacts, photos and even his phone number, since he used Google Fi mobile service.
Separately, the San Francisco Police Department opened an investigation into Mark last December and determined that the incident “did not meet the elements of a crime,” according to the Times.
Google spokeswoman Christa Muldoon told The Verge in a statement that the company is committed to preventing this type of content from spreading on its platform. “Our team of child safety experts reviews flagged content for accuracy and consults with pediatricians to help ensure we can identify instances where users may be seeking medical advice,” she said.
And while protecting minors is a critical issue, privacy advocates have criticized this kind of photo scanning, calling it an unreasonable invasion of privacy.
Jon Callas, technology projects director at the Electronic Frontier Foundation (EFF), told the NYT that these kinds of practices are “precisely the nightmare that worries us all. They’re going to scan my family album and then I’m going to get in trouble.”
It is worth recalling that last year the EFF criticized Apple’s plan to include a photo scanner to detect images of child sexual abuse, arguing that it opened a back door into users’ private lives and represented “a decrease in privacy for all iCloud Photos users.”
Although Apple put that feature on hold, version 15.2 of its operating system included an optional tool for child accounts: if parents opt in, the Messages app “scans image attachments and determines if a photo contains nudity, while maintaining the end-to-end encryption of the messages.”
If it detects such an image, it blurs the content and displays a warning to the child, along with resources intended to improve their online safety.