
App Store vs Google Play: Content Moderation Requirements Compared


If your app lets users post content and you're shipping to both iOS and Android, you need to satisfy two different sets of moderation requirements. Both Apple and Google will reject apps that lack moderation, but they check for different things in different ways.

This guide compares the requirements side by side and shows how to meet both with a single integration.

Apple: Guideline 1.2

Apple's approach is prescriptive. Guideline 1.2 lists specific sub-requirements and reviewers check each one manually:

1.2.1 — Content Filtering

A mechanism for filtering objectionable material before it's posted. Apple reviewers will try posting objectionable content and check whether it appears.

1.2.2 — User Reporting

A way for users to flag objectionable content. Must be present on every piece of UGC — posts, comments, profiles.

1.2.3 — User Blocking

The ability to block abusive users. Must be functional — blocked users' content must disappear from feeds and messages.
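To make "functional" concrete: a blocked user's content must be excluded from every surface the viewer sees. A minimal sketch of that filtering step (the `Post` shape and `filterBlocked` helper are illustrative, not part of any SDK):

```typescript
// Illustrative types -- not from any SDK.
type Post = { id: string; authorId: string; text: string };

// Exclude content from every user the viewer has blocked. Apply this on
// every surface -- feeds, comments, search, messages -- since Apple
// reviewers verify that blocked users' content actually disappears.
function filterBlocked(posts: Post[], blockedIds: Set<string>): Post[] {
  return posts.filter((post) => !blockedIds.has(post.authorId));
}

const feed: Post[] = [
  { id: '1', authorId: 'alice', text: 'hello' },
  { id: '2', authorId: 'troll', text: 'spam' },
];
console.log(filterBlocked(feed, new Set(['troll'])).length); // -> 1
```

Storing the block list server-side and applying it in every query keeps the behavior consistent across surfaces, which is exactly what the reviewer checks.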

Published Contact Info

A way for users to reach you about moderation issues — email, support page, or in-app contact form.

Apple reviews are manual. A human reviewer creates an account, uses your app, and checks each requirement. They also read your App Review Notes, so clear documentation helps.

For a detailed walkthrough, see How to Pass Apple's App Store Guideline 1.2.

Google Play: Developer Program Policy

Google's approach is more outcome-focused. The User Generated Content policy doesn't enumerate sub-requirements the way Apple does. Instead, it requires:

Content Moderation System

Apps must moderate UGC and remove content that violates their terms. Google doesn't prescribe how, but expects it to be effective.
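Since Google judges the outcome rather than the mechanism, it helps to map every moderation verdict to an explicit, auditable content state. A sketch with hypothetical names (Google doesn't mandate any particular shape):

```typescript
type Action = 'allow' | 'review' | 'block';
type ContentState = 'visible' | 'pending_review' | 'removed';

// The outcome Google checks for: violating content is actually removed,
// and borderline content reaches a human review queue instead of going live.
function applyDecision(action: Action): ContentState {
  switch (action) {
    case 'allow':
      return 'visible';
    case 'review':
      return 'pending_review';
    case 'block':
      return 'removed';
  }
}
```

An exhaustive mapping like this means no verdict can fall through to "visible" by accident, which is the failure mode post-launch enforcement tends to catch.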

In-App Reporting

Users must be able to report objectionable content and users from within the app.

Child Safety (CSAM)

Explicit requirement to detect and report child sexual abuse material (CSAM). Google enforces this category more strictly than any other.

Published Terms of Service

Your app must have terms of service that prohibit objectionable content, accessible from within the app.

Google uses a mix of automated scanning and manual review. They're more likely to flag your app after launch based on user reports or automated analysis, rather than catching everything during initial review.

For Google-specific guidance, see Google Play Content Moderation.

Side-by-Side Comparison

| Requirement | Apple | Google |
| --- | --- | --- |
| Content filtering | Required (1.2.1) — tested manually | Required — outcome-focused |
| User reporting | Required (1.2.2) — on all content types | Required — in-app mechanism |
| User blocking | Required (1.2.3) — must be functional | Expected but not explicitly enumerated |
| CSAM detection | Covered by general filtering | Explicitly required — zero tolerance |
| Review process | Human reviewer before launch | Automated + human, before and after launch |
| Review notes | Read by reviewers — important | Data safety section and policy declaration |
| Contact info | Required | Required (developer profile) |
| Terms of service | Recommended | Required — must be accessible in-app |

One Integration for Both Platforms

The requirements overlap enough that a single moderation integration covers both stores. Here's the minimum viable setup:

moderation.ts (Node.js)

```typescript
import { Vettly } from '@vettly/sdk';

const vettly = new Vettly(process.env.VETTLY_API_KEY);

// 1. Content filtering (Apple 1.2.1 + Google moderation requirement)
const result = await vettly.check({
  content: userPost.text,
  imageUrl: userPost.imageUrl,
  policy: 'community-safe', // Includes CSAM detection for Google
});

if (result.action === 'block') {
  return res.status(422).json({ error: 'Content violates guidelines' });
}

// 2. User reporting (Apple 1.2.2 + Google reporting requirement)
await vettly.reports.create({
  contentId: post.id,
  reason: selectedReason,
  reportedBy: currentUser.id,
});

// 3. User blocking (Apple 1.2.3 + Google best practice)
await vettly.blocks.create({
  userId: blockedUser.id,
  blockedBy: currentUser.id,
});
```

This single integration satisfies:

  • Apple's Guideline 1.2.1 (filtering), 1.2.2 (reporting), and 1.2.3 (blocking)
  • Google's moderation, reporting, and CSAM requirements
  • Both platforms' contact and terms requirements (handled in your app UI)

Platform-Specific Review Notes

For Apple, add clear notes in App Store Connect:

Apple App Review Notes
Content Moderation: This app uses Vettly (https://vettly.dev) for server-side content moderation.
- Filtering (1.2.1): All UGC is screened via API before display
- Reporting (1.2.2): Flag icon on all content types (posts, comments, profiles)
- Blocking (1.2.3): Block option on user profiles, filters content from all surfaces
- Contact: [email protected]

For Google Play, the key touchpoints are the Data Safety section (declare that you process user content for moderation) and the Content Rating questionnaire (answer "yes" to user-generated content).

Key Differences to Watch

Apple is stricter at review time. They manually test your moderation before your app goes live. Get it right before submitting, or expect a rejection cycle. See how to fix an App Store rejection if you've already been rejected.

Google enforces more after launch. Your app might pass initial review but get flagged later based on user reports or automated scanning. Ongoing moderation quality matters more on Google Play.

Google cares more about CSAM. If your app handles images, make sure your moderation policy includes child safety categories with zero-tolerance thresholds.
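One way to express zero tolerance is a per-category policy where child-safety categories block on any positive signal while other categories use tunable review thresholds. The field names below are hypothetical, not a documented Vettly policy format:

```typescript
// Hypothetical policy shape for illustration only. The point: csam gets
// a zero-tolerance threshold (any positive signal blocks and triggers a
// report), while other categories can be tuned for review.
const moderationPolicy = {
  categories: {
    csam: { threshold: 0.0, action: 'block', report: true },
    violence: { threshold: 0.8, action: 'review', report: false },
    harassment: { threshold: 0.7, action: 'review', report: false },
  },
} as const;
```

Whatever the real config looks like, the invariant to preserve is the same: child-safety detections are never downgraded to a review queue.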

Apple reads your notes. Invest time in clear App Review Notes. Google relies more on automated signals and policy declarations.

For implementation details, the App Store compliance docs cover the full technical setup for both platforms.

Ship to both stores with one integration

Vettly covers Apple Guideline 1.2 and Google Play's UGC policy with a single API. Free tier included.