
The company says the goal is to make age-appropriate defaults the baseline for everyone, then allow adults to unlock additional capabilities by confirming their age group.
Discord previously launched age assurance and teen-focused defaults in the UK and Australia late last year, partly to meet UK regulatory requirements, and now plans to expand those protections worldwide.
## What’s changing in March
Starting next month, users will only be prompted to verify their age group when attempting to access specific age-restricted content, settings, or features.
Once age assurance is completed, Discord says the user typically won’t be asked again for future age-restricted actions. The age checks are positioned as “progressive gating,” meaning the platform remains accessible for general chat and community participation without forcing universal verification.
Age assurance prompts may appear when users try to:
- Unblur media flagged by Sensitive Content Filters or switch filter settings to “Show” sensitive content
- Turn off Message Requests (enabled by default to screen unknown DMs)
- Access age-restricted channels and servers
- Speak in a Stage channel
- Enable age-restricted app commands
Discord’s “teen-by-default” approach sets robust safety settings across accounts unless an account is confirmed as an adult. In practice, this creates a security baseline designed to reduce exposure to sensitive media, unsolicited direct messages, and higher-risk interaction surfaces (like public speaking on Stages). Verified adults can opt to change these defaults; unverified users cannot.
The defaults include content blurring and tighter controls over sensitive content settings, restrictions on access to age-gated spaces (including age-restricted commands), Message Requests routed to a separate inbox by default, warning prompts on friend requests from people users may not know, and Stage speaking limited to age-assured adults.
From a cybersecurity and trust-and-safety perspective, these settings act as guardrails against common abuse patterns: grooming attempts via unsolicited DMs, social engineering through friend-request funnels, and exposure to explicit or otherwise sensitive content that may be used for coercion or harassment.
Discord says it will use a toolkit rather than a single verification method. A notable addition is “age inference,” which aims to predict whether a user is an adult with high confidence based on account and behavioral signals.
If successful, inference could reduce verification prompts for many adults, though any inference system raises questions about false positives/negatives, explainability, and what signals are considered permissible.
For cases where more certainty is required, Discord will continue offering explicit methods via partners, including facial age estimation and ID document scanning.
The company emphasizes privacy protections: on-device processing for video selfies used in facial age estimation, rapid deletion of identity documents by vendors (often immediately after confirmation), and verification status visible only to the user.
Discord also cautions that age assurance prompts appear only within the app, when a user attempts a restricted action, and that it will never email or text age assurance results. That anti-phishing detail matters: it signals the company expects attackers to imitate verification workflows, and it gives users a simple rule for spotting fake prompts.
Alongside the technical rollout, Discord is launching a Teen Council (recruiting ages 13–17 in the US through May 2026) to incorporate teen feedback into future safety and wellbeing features, acknowledging that safety controls must balance risk reduction with usability and user agency.
The post Discord to Age-Restrict User Access to Key Features Starting Next Month appeared first on Cyber Security News.
