Friday, 24 April 2026, 11:55
Meta launches a new tool that allows parents to monitor their children's conversations with artificial intelligence

Khaberni - Meta, the parent company of Facebook and Instagram, has announced a new tool that lets parents see the topics their children discuss with its AI chatbots.

Parents already received alerts when their children raised serious topics such as suicide or self-harm, but the new tool provides a more comprehensive, detailed view of those conversations.

The service launched on April 23: parents who use the supervision tools on Facebook, Messenger, and Instagram can now access a new tab named Insights, which contains an option titled "Their Interactions with AI." This option displays a list of topics their children have discussed with Meta's chatbots over the past seven days.

The topics cover broad main categories such as school, travel, writing, entertainment, lifestyle, health, and wellness, along with subtopics under each category. For example, the wellness subtopics include mental and physical health, while lifestyle includes topics such as fashion and food.

However, the feature requires that the children be using the Teen Accounts available on Meta's platforms, according to PC Mag. The tool will initially be available to parents in the United States, the United Kingdom, Australia, Canada, and Brazil, with a global rollout planned in the coming weeks.

The launch comes shortly after Meta was ordered to pay $375 million in a lawsuit over its failure to prevent the exploitation of children on its apps.

To enhance teenagers' safety, Meta also announced the formation of an AI Wellness Expert Board, a group of specialists who will provide ongoing feedback on teenagers' AI experiences to ensure they remain safe and age-appropriate. Meta employees working on artificial intelligence projects are expected to meet regularly with the board to discuss feature updates and gather input on products.

Child safety on social media has become a prominent issue in recent months. Last March, a California court awarded a woman $6 million after a jury found that Meta's Facebook and Google's YouTube apps had caused her depression and anxiety; she argued that the products were designed to be addictive and had kept her hooked since childhood. The ruling marks the first time social media companies have been held liable for the harmful impact of their products on individuals, particularly children and teenagers, with the jury finding that the apps lacked adequate measures to protect younger users.
