PlayWise MENA Roblox Access & Protection Brief
Updated Feb 2026

Child protection is the goal.
The question is what works.

Roblox has faced access restrictions in several MENA countries, with regulators often citing harassment and child-safety risks. This page summarizes the main cases and highlights how restrictions were later eased through concrete safety measures.

Context: This is a readable brief with sources. Read the full PlayWise research on Flonga →

Quick Takeaways

  • Restrictions are often followed by attempts to reduce risk surfaces (for example, disabling chat features) and to strengthen moderation.
  • Where access was restored, it was typically conditional.
  • PlayWise Focus: A cooperation path that lifts restrictions responsibly via measurable safety goals.

MENA Overview: Restrictions & Outcomes

Dates reflect public reporting at the time.

Blocked / Restricted

  • Egypt: Blocked, Feb 4, 2026. SCMR announced a block citing child-safety risks; Roblox indicated openness to talks.
  • Iraq: Blocked, Oct 2025. Banned over exploitation and cyber-extortion risks.
  • Qatar: Blocked, Aug 2025. Inaccessible via web and app; reporting centered on safety.
  • Oman: Blocked, June 2025. Regulator action after reports of inappropriate content.
  • Algeria: Blocked, Sep 2025. Regulator cited insufficient tools to protect children from scams.

Restored / Mitigated

  • Jordan: Conditional restoration, Dec 2025. Restored with constraints (disabled chat, hidden content).
  • Kuwait: Restored, Nov 2025. Restoration followed safeguards: chat disabled, term removal, and monitoring.
  • UAE: Cooperation, Nov 2025. Focused on a "work with the platform" approach and Arabic moderation.
  • Regional feature limits: Sep 2025. Temporary suspension of chat features while engaging regulators.
  • Turkey: Unban possible, ongoing. Public statements indicate access could return if specific content requirements are met.

Cooperation Checklist

How restrictions can be lifted responsibly.

A. Safety Controls

  • Communication: Safer defaults, limited discovery, clear minor/adult boundaries.
  • Parental Controls: Simple setup, Arabic guidance, protection-first defaults.
  • Fraud/Scam: Better detection and anti-scam education prompts.
  • Reporting: Fast response targets for child-harm escalation.

B. Credibility & Verification

  • Milestones: Written "fix list" with dates.
  • Verification: Regulatory review or third-party audits.
  • Transparency: Periodic reporting on removals and response times.
  • Phased Re-access: Restore by age group or feature set.

Workable Start: A time-limited pilot with tightened communication features for minors and verified response-time commitments.

If access returns, what should the conditions be?

When restrictions are eased, outcomes tend to be better when the conditions are specific, measurable, and time-bound. This section is a short “what to include” list that can make any reopening safer and easier to verify.

Recommended conditions

  • Safer defaults for minors: Under-13 and teen accounts start with the safest communication and discovery settings by default.
  • Communication safeguards: Stronger filters for Arabic, limited contact discovery, and clear boundaries between minors and adults.
  • Faster child-safety reporting: A dedicated escalation route for child-harm reports with response-time targets.
  • Fraud/scam enforcement: Quicker takedowns for impersonation, extortion patterns, and scam networks.
  • Review date: A public check-in date where outcomes are assessed and adjustments are made.

Suggested pilot model

  • Phase 1 (pilot): Limited re-access with tighter communication for minors + upgraded moderation capacity.
  • Phase 2 (verify): Independent review or regulator verification that response targets and enforcement are actually happening.
  • Phase 3 (expand): Restore broader access only if safety indicators improve and remain stable.
  • Rollback rule: If severe harm patterns spike and are not addressed, features or access can be tightened again.

Why this matters: Clear conditions make it easier for parents, platforms, and regulators to understand what changed and what success looks like.

How to measure safety improvements

Safety commitments become meaningful when they can be tracked over time. These are example indicators that fit the main concerns cited in MENA cases; a brief illustrative calculation follows the first list.

Operational indicators (platform side)

  • Child-safety report response time: median/average time to act on high-risk reports.
  • Repeat offender action rate: how often repeat abusers are escalated to stronger actions (restrictions, bans).
  • Scam takedown time: speed of removing impersonation/scam accounts and associated content.
  • Arabic moderation coverage: staffing/coverage commitments and evidence of enforcement in Arabic contexts.
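
To make the first two platform-side indicators concrete, here is a minimal sketch of how they could be computed from a report log. This is an illustration only: the log structure, field names, and sample values are assumptions for the example, not a description of Roblox's actual systems.

```python
from datetime import datetime
from statistics import median

# Hypothetical report log (field names and values are assumed for illustration):
# when a high-risk report was filed, when action was taken, and whether the
# reported account was a repeat offender that received a stronger action.
reports = [
    {"reported_at": "2026-01-10T09:00", "actioned_at": "2026-01-10T11:30",
     "repeat_offender": True,  "strong_action": True},
    {"reported_at": "2026-01-11T14:00", "actioned_at": "2026-01-11T14:45",
     "repeat_offender": False, "strong_action": False},
    {"reported_at": "2026-01-12T08:15", "actioned_at": "2026-01-12T20:00",
     "repeat_offender": True,  "strong_action": False},
]

def hours_to_action(report):
    """Elapsed hours between the report being filed and action being taken."""
    fmt = "%Y-%m-%dT%H:%M"
    delta = (datetime.strptime(report["actioned_at"], fmt)
             - datetime.strptime(report["reported_at"], fmt))
    return delta.total_seconds() / 3600

# Indicator 1: median response time for high-risk child-safety reports (hours).
median_response_hours = median(hours_to_action(r) for r in reports)

# Indicator 2: share of repeat offenders escalated to stronger actions.
repeat = [r for r in reports if r["repeat_offender"]]
repeat_action_rate = (
    sum(r["strong_action"] for r in repeat) / len(repeat) if repeat else 0.0
)

print(f"Median response time: {median_response_hours:.1f} hours")
print(f"Repeat offender action rate: {repeat_action_rate:.0%}")
```

In practice, the same calculations would run over full reporting periods and could feed the transparency updates described below.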

User-facing indicators (family side)

  • Parental controls adoption: percentage of minor accounts with protection settings enabled by default or through setup prompts.
  • Risk education prompts: in-app reminders about scams, reporting, and safer communication.
  • Transparency updates: periodic summaries of top harm categories and enforcement outcomes shared with regulators/public.
  • Complaint trend direction: whether reports of harassment/scams are trending down after changes.

Common Questions

Why do bans focus on communication features?

Many concerns involve unsafe interactions (grooming, extortion, scams). Restricting communication is a direct way to reduce risk surfaces while broader moderation improvements are implemented.

What does “conditional restoration” look like?

Patterns include disabling chat, tightening filters, and improving monitoring. Access is restored only when specific safety expectations are verified.

Where will the full research live?

On Flonga.org. This page is a summary brief. The full article will include deeper citations and structured proposals.

Sources & Verification

Public reports used for this brief.
