Enhancing Safety Features for Teens and Children in Accounts


In a bid to enhance the online safety of young users, Meta, the parent company of Instagram, has introduced several new updates aimed at safeguarding teenagers from potential threats. These improvements are part of Meta’s ongoing commitment to protecting young people from both direct and indirect harm. The company has focused on creating safe online environments, particularly through initiatives like Teen Accounts, which provide age-appropriate experiences and minimize unwanted interactions. As part of these efforts, Meta is also leveraging advanced technology to identify and remove exploitative content.

### Strengthening Teen Safety on Instagram

In a recent announcement, Meta revealed a series of updates designed to further bolster the safety of teenage users on Instagram. One significant enhancement includes new safety features in Direct Messages (DMs) within Teen Accounts. These features provide teenagers with more context about the accounts they are engaging with, thereby enabling them to identify potential scammers more easily. Now, when teens initiate a conversation with a new account, they will see options to view safety tips, block the account, and access information about when the account was created, all displayed prominently at the top of the chat interface.

Furthermore, a new ‘block and report’ feature has been introduced in DMs. This integrated option allows users to simultaneously block and report an account, simplifying the process for individuals who wish to take action against potentially harmful interactions. Previously, users were encouraged to both block and report problematic accounts, but the new combined feature streamlines this process, ensuring that accounts violating community guidelines are promptly flagged for review.
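The combined action described above can be pictured as a single operation that both blocks an account locally and flags it for review. The sketch below is purely illustrative — Instagram's actual implementation is not public, and the names `Account` and `ModerationQueue` are invented for this example.

```python
from dataclasses import dataclass, field

@dataclass
class ModerationQueue:
    """Collects reports so rule-breaking accounts can be reviewed (hypothetical)."""
    reports: list = field(default_factory=list)

    def submit(self, reporter: str, reported: str, reason: str) -> None:
        self.reports.append((reporter, reported, reason))

@dataclass
class Account:
    username: str
    blocked: set = field(default_factory=set)

    def block_and_report(self, other: "Account", reason: str,
                         queue: ModerationQueue) -> None:
        # One tap performs both steps: the account is blocked for this user
        # and simultaneously flagged for moderation review.
        self.blocked.add(other.username)
        queue.submit(self.username, other.username, reason)

queue = ModerationQueue()
teen = Account("teen_user")
scammer = Account("new_suspicious_account")
teen.block_and_report(scammer, "suspected scam", queue)
print(scammer.username in teen.blocked)  # → True
print(len(queue.reports))                # → 1
```

Bundling the two steps means a report is always filed whenever a block happens, rather than relying on users to remember both actions.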

These newly introduced features complement existing Safety Notices, which remind users to remain vigilant while conversing in private messages and to block or report any interactions that make them uncomfortable. The response from teenagers has been encouraging; in June alone, they blocked accounts one million times and reported another million after receiving a Safety Notice.

### Introducing Location Notices and Nudity Protection

Meta has also rolled out a Location Notice on Instagram, which was viewed one million times by teens and young adults in June. This feature alerts users when they are communicating with someone who may be in a different country, a safeguard against sextortion scams in which individuals misrepresent their location. Roughly one in ten users who saw the notice tapped through to learn how to protect themselves from such scams.

In addition, Meta’s nudity protection feature, introduced globally, has been overwhelmingly retained by users, including teenagers, with 99% keeping it activated. This feature automatically blurs images suspected of containing nudity when received in DMs, significantly reducing exposure to unwanted explicit content. In June, over 40% of blurred images remained unviewed, demonstrating the feature’s effectiveness in shielding users from inappropriate content. This protection is enabled by default for teen accounts, encouraging users to reconsider before forwarding suspected nude images. In May, users opted not to forward such images 45% of the time after seeing a warning.
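The nudity-protection flow described above has two stages: suspected images arrive blurred when the (default-on) setting is active, and a warning prompts senders to reconsider before forwarding. The sketch below illustrates only that control flow — the image classification itself and all names here (`Image`, `receive_image`, `forward_image`) are invented for illustration.

```python
from dataclasses import dataclass

@dataclass
class Image:
    suspected_nudity: bool  # would come from a classifier in practice
    blurred: bool = False

def receive_image(img: Image, protection_enabled: bool = True) -> Image:
    # Protection is enabled by default for teen accounts; suspected
    # images are delivered blurred so the recipient must opt in to view.
    if protection_enabled and img.suspected_nudity:
        img.blurred = True
    return img

def forward_image(img: Image, confirm_after_warning) -> bool:
    # Before a suspected nude image is forwarded, a warning is shown and
    # the sender can reconsider; other images forward without friction.
    if img.suspected_nudity:
        return confirm_after_warning()
    return True

flagged = receive_image(Image(suspected_nudity=True))
print(flagged.blurred)  # → True
```

Making the setting default-on rather than opt-in is what drives the 99% retention figure: users must actively disable protection instead of actively enabling it.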

### Extending Protections to Adult-Managed Accounts Featuring Children

Recognizing the need for broader protection, Meta is extending some of its Teen Account protections to adult-managed accounts that primarily feature children. This includes accounts run by parents or talent managers representing children under 13. While Instagram’s policy requires users to be at least 13 years old, adults are permitted to manage accounts for younger children, provided it is clearly stated in the account’s bio. If it is discovered that a child is managing their own account, Instagram will remove it.

Despite the majority of these accounts being used appropriately, there are instances where individuals attempt to exploit them, often leaving sexualized comments or requesting inappropriate images. To combat this, Meta is implementing several protective measures. Accounts primarily featuring children will automatically be placed into the strictest messaging settings to prevent unwanted messages and will have the Hidden Words feature enabled to filter offensive comments. Notifications will be sent to these accounts, prompting them to review their privacy settings.
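A Hidden Words-style filter can be thought of as matching incoming comments against a list of blocked terms. This is a minimal sketch under that assumption; Meta's real filter and its term lists are not public, and the `HIDDEN_WORDS` set here is a placeholder.

```python
# Placeholder filter list; in the real feature, offensive terms are
# maintained by the platform and optionally extended by the account owner.
HIDDEN_WORDS = {"badword", "offensiveterm"}

def is_hidden(comment: str, filters: set[str] = HIDDEN_WORDS) -> bool:
    """Return True if the comment contains any filtered word
    (case-insensitive, ignoring trailing punctuation)."""
    words = {w.strip(".,!?").lower() for w in comment.split()}
    return bool(words & filters)

print(is_hidden("what a BADWORD!"))   # → True
print(is_hidden("lovely photo"))      # → False
```

Hidden comments are suppressed from view rather than deleted, so the account owner can still review them if they choose.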

Additionally, Meta aims to prevent potentially suspicious adults from locating these accounts by not recommending them to such users and making it more difficult for them to find each other through search functions. Comments from suspicious adults on these accounts will be hidden, building on last year’s policy update that stopped allowing accounts primarily featuring children to offer subscriptions or receive gifts.

### Taking Action on Harmful Accounts

Beyond these preventive measures, Meta is actively taking action against accounts that violate its community standards. Earlier this year, specialized teams removed nearly 135,000 Instagram accounts for inappropriate interactions with adult-managed accounts featuring children under 13. Further, an additional 500,000 Facebook and Instagram accounts, linked to the original offenders, were also removed. Users were notified when an account interacting inappropriately with their content was removed, encouraging them to remain cautious and to utilize the block and report functions.

Understanding that child exploiters often operate across multiple platforms, Meta has shared information about these accounts with other technology companies through the Tech Coalition’s Lantern program. This collaborative initiative aims to enhance child safety online by sharing insights and resources among industry leaders.

Meta’s latest updates underscore its commitment to creating a safer digital environment for young users. By continuously evolving its safety features and taking decisive action against rule-breaking accounts, Meta is working proactively to safeguard the well-being of teenagers on its platforms. As these measures roll out over the coming months, they are expected to significantly enhance the protective framework for young users and adult-managed accounts featuring children, fostering a safer and more secure online experience.

Neil S
Neil is a highly qualified Technical Writer with an M.Sc(IT) degree and an impressive range of IT and Support certifications including MCSE, CCNA, ACA(Adobe Certified Associates), and PG Dip (IT). With over 10 years of hands-on experience as an IT support engineer across Windows, Mac, iOS, and Linux Server platforms, Neil possesses the expertise to create comprehensive and user-friendly documentation that simplifies complex technical concepts for a wide audience.