Meta launches parent-managed WhatsApp accounts for children under 13

Meta has introduced parent-managed accounts on WhatsApp, allowing children under the age of 13 to use the messaging app more safely under supervision. These accounts include controls to limit contacts and restrict certain features. The rollout will begin gradually in the coming months.

Meta announced the introduction of parent-managed accounts on WhatsApp, aimed at enabling safer use for young people under 13. Parents or guardians can link their phone to the child's device by placing them next to each other, granting control over who can contact the child and which groups they can join.

The accounts are limited to messaging and calling, excluding features such as Channels, location sharing, and Meta AI. By default, only saved contacts can message the managed account. Message requests from unknown contacts are routed to the parent first, who must approve any group joins or invitations from strangers. The accounts are protected by a PIN, and privacy settings can be changed only by the managing adult from their own device.

All conversations on these accounts remain protected by WhatsApp's end-to-end encryption. Meta did not specify a minimum age for the accounts but plans a gradual rollout over the coming months, with step-by-step setup instructions available for users.

This move builds on Meta's recent efforts to expand parental controls across its platforms. In September, the company launched teen accounts for users aged 13 to 15 on Facebook and Messenger, allowing parents to review requests and apply stricter privacy options. A year earlier, teen accounts with similar safeguards became mandatory for under-16 users on Instagram. Earlier in 2026, Meta temporarily halted teen interactions with its AI chatbot characters after reports of inappropriate conversations with minors.

These developments reflect Meta's ongoing focus on child safety amid growing scrutiny of social media's impact on young users.

Related Articles


Discord defaults all users to teen settings with age verification

Reported by AI

Discord announced it will default all accounts to a teen-appropriate experience starting in early March, requiring age verification to access adult content and restricted servers. The move aims to enhance child safety but has sparked backlash over privacy concerns following a recent data breach. Verification options include on-device facial estimation or submitting government IDs.

Meta, the parent company of Facebook, Instagram, and WhatsApp, has announced plans to trial premium subscription services. Free access will remain, with added features for paying subscribers.


Meta Platforms' Japanese arm has announced that Instagram will add a new feature in Japan this year, notifying parents if children aged 13-17 repeatedly search for suicide or self-harm content on the app. This requires parents to link their accounts to their child's. Additionally, it will soon introduce restrictions on access to posts about drugs and dangerous behavior.

Several countries have implemented or debated measures to limit children's and teenagers' access to social media, citing impacts on mental health and privacy. In Argentina, experts emphasize the need for digital education and structural regulations beyond simple bans. The issue involves not only child protection but also the platforms' data-based business model.


Under a new agreement with the Department of Information and Communications Technology, Meta has pledged to enhance its mechanisms for detecting, reporting, and removing disinformation and inappropriate content on Facebook. This includes faster flagging of child exploitation material, immediate reporting to local authorities, and its removal from the platform. The deal also targets scams such as fake investment schemes using deepfakes of officials, business leaders, and celebrities.

The SPD has proposed a ban on social media platforms for children under 14 in a position paper. The plan includes age verification via the EU's EUDI-Wallet app and tiered rules by age group, drawing inspiration from Australia's recent model.

