Instagram will test features that blur messages containing nudity to safeguard teens and prevent potential scammers from reaching them, its parent Meta said on Thursday as it tries to allay concerns over harmful content on its apps.
The tech giant is under mounting pressure in the United States and Europe over allegations that its apps are addictive and have fueled mental health issues among young people.
Meta said the protection feature for Instagram’s direct messages would use on-device machine learning to analyze whether an image sent through the service contains nudity.
The feature will be turned on by default for users under 18 and Meta will notify adults to encourage them to turn it on.
“Because the images are analyzed on the device itself, nudity protection will also work in end-to-end encrypted chats, where Meta won’t have access to these images – unless someone chooses to report them to us,” the company said.
Unlike Meta’s Messenger and WhatsApp apps, direct messages on Instagram are not encrypted but the company has said it plans to roll out encryption for the service.
Meta also said it was developing technology to help identify accounts that might be engaging in sextortion scams, and that it was testing new pop-up messages for users who might have interacted with such accounts.
In January, the social media giant said it would hide more content from teens on Facebook and Instagram, adding that this would make it more difficult for them to come across sensitive content such as suicide, self-harm and eating disorders.
Attorneys general of 33 US states, including California and New York, sued the company in October, saying it repeatedly misled the public about the dangers of its platforms.
In Europe, the European Commission has sought information on how Meta protects children from illegal and harmful content.
© Thomson Reuters 2024