Instagram today unveiled two new AI tools designed to protect young users on the platform.
The first feature prevents adults from messaging people under 18 who don’t follow them. It then sends the adult user a notification saying they can’t DM the account.
Instagram has provided little detail on how the system works, but parent company Facebook said in a blog post that it uses AI to infer users’ ages:
“This feature relies on our work to predict people’s ages using machine learning technology, and the age people give us when they sign up. As we move to end-to-end encryption, we’re investing in features that protect privacy and keep people safe without accessing the content of DMs.”
The second feature sends teenage users prompts encouraging them to be careful when interacting with adults to whom they’re already connected.
The system first detects potentially suspicious behavior, such as an adult sending a large number of friend or message requests to children. It then inserts a safety notice into the recipient’s DMs and gives them the option to immediately end the conversation, or to block, report, or restrict the adult.
Instagram said the system will launch in some countries this month and roll out globally “soon.”
The company will also encourage teens with public profiles to make their accounts private by sending notifications “highlighting the benefits of a private account and reminding them to check their settings.”
The new features aim to make young people safer on Instagram, which research suggests was the platform most used for child grooming crimes during the first lockdown in England and Wales.
Instagram said it’s now assessing further safety measures, including additional privacy settings, and will share more details in the coming months.
Published March 16, 2021 — 17:22 UTC