Comparison
Beyond OpenAI Moderation: When You Need More Than a Score
OpenAI's moderation endpoint is the quickest way to check text for harmful content. It's free, fast, and returns category scores in milliseconds. If you're already using the OpenAI SDK, adding a moderation check is a few lines of code.
But there's a ceiling. When your app grows beyond simple text checks — when you need images, policies, user management, or compliance documentation — the free endpoint stops being enough. This post covers where that ceiling is and what to do when you hit it.
What OpenAI Moderation Does Well
- ✓ Free: no per-request cost, included with any OpenAI API key
- ✓ Fast: sub-100ms for most text inputs
- ✓ Simple: one endpoint, clear category scores
- ✓ Good text classification: covers hate, harassment, self-harm, sexual, violence, and more
For an MVP or internal tool where you just need to catch obviously harmful text, it works.
Where It Stops
Text only
No image moderation, no video moderation. If users upload photos or video, you need another service entirely.
No policy customization
You get OpenAI's categories and thresholds. You can't define your own policies, adjust sensitivity per context, or use different rules for DMs vs public posts.
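The usual workaround is to layer your own thresholds on top of OpenAI's `category_scores` and branch on context yourself. A minimal sketch of what that looks like — the threshold values here are made up for illustration, not recommendations:

```typescript
// OpenAI returns category_scores in the 0..1 range; picking the
// cutoff per context is entirely on you.
type Context = 'dm' | 'public_post';

const harassmentThresholds: Record<Context, number> = {
  dm: 0.9,          // more permissive in private messages
  public_post: 0.5, // stricter in the public feed
};

function isAllowed(context: Context, harassmentScore: number): boolean {
  return harassmentScore < harassmentThresholds[context];
}
```

This works until the score distribution shifts under you (see the next point) or until the number of contexts and categories outgrows a hand-maintained table.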
Scores change between model versions
OpenAI updates the underlying model periodically. The same input can return different scores after an update. If your thresholds are finely tuned, updates can cause unexpected behavior.
No user management
No reporting, no blocking, no appeals. These are your responsibility — and they're required for App Store and Play Store compliance.
No audit trail or decision history
OpenAI doesn't store your moderation requests. If a user asks "why was my post removed?", you need your own logging.
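If you need that history, you have to capture it at call time. A minimal sketch of a record you might persist — the record shape is our own design, not anything OpenAI provides:

```typescript
// The fields we read from OpenAI's moderation response.
interface ModerationResult {
  flagged: boolean;
  categories: Record<string, boolean>;
}

// The record we store ourselves -- OpenAI keeps nothing.
interface ModerationRecord {
  contentId: string;
  flagged: boolean;
  flaggedCategories: string[];
  checkedAt: string; // ISO 8601 timestamp
}

function toModerationRecord(
  contentId: string,
  result: ModerationResult,
): ModerationRecord {
  return {
    contentId,
    flagged: result.flagged,
    flaggedCategories: Object.entries(result.categories)
      .filter(([, hit]) => hit)
      .map(([name]) => name),
    checkedAt: new Date().toISOString(),
  };
}
```

Write a record like this to your own database on every check; answering "why was my post removed?" then becomes a lookup instead of a shrug.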
No webhooks or async workflows
Every check is synchronous. No way to get callbacks, trigger downstream workflows, or integrate with your moderation queue.
When to Graduate
You've outgrown OpenAI moderation when any of these are true:
- Your app accepts images or video
- You need different moderation rules for different parts of your product
- You're submitting to the App Store and need Guideline 1.2 compliance
- A regulator or legal team needs audit trails of moderation decisions
- Users need to report content, block other users, or appeal decisions
- You want a dashboard to monitor moderation trends without building one
Side-by-Side Code
```typescript
import OpenAI from 'openai';

const openai = new OpenAI();

const moderation = await openai.moderations.create({
  input: userPost.text,
});

const result = moderation.results[0];

if (result.flagged) {
  // Which category? Check result.categories manually
  // What action to take? Your code decides
  // Audit trail? You log it yourself
  // Images? Not supported
}
```
```typescript
import { Vettly } from '@vettly/sdk';

const vettly = new Vettly(process.env.VETTLY_API_KEY);

const result = await vettly.check({
  content: userPost.text,
  imageUrl: userPost.imageUrl, // Text + image in one call
  policy: 'community-safe',    // Your rules, not a black box
});

// result.action: 'allow' | 'flag' | 'block'
// result.decisionId: stored, searchable, exportable
// Webhooks fire automatically if configured
```
The API call is similar in complexity. The difference is what you get back and what happens next.
What You Gain
Switching from OpenAI moderation to a production API like Vettly adds:
Policies as code. Define moderation rules in YAML, version them in Git, and deploy through CI/CD. Different policies for public feeds, DMs, profile photos, and chatbot output. See the Policies documentation for the full schema.
Multi-modal coverage. Text, image, and video moderation through the same endpoint. No stitching together multiple services.
Built-in workflows. User reporting, blocking, and appeals are API endpoints — not features you build from scratch.
Decision history. Every check is stored with the full context: input, policy version, scores, action, and timestamp. Searchable from the dashboard or the API.
Webhooks. Get notified in real time when content is flagged or blocked. Route events to Slack, your moderation queue, or any HTTP endpoint.
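On the receiving side, routing usually reduces to a small dispatch on the event's action. A sketch of that dispatch — the event payload shape here is an assumption for illustration, not Vettly's documented schema:

```typescript
// Hypothetical webhook event shape; check the webhook docs for the
// real schema before relying on any of these fields.
interface ModerationEvent {
  decisionId: string;
  action: 'allow' | 'flag' | 'block';
  contentId: string;
}

// Blocks go to the moderation queue, flags go to Slack for a human
// look, allows need no action.
function routeEvent(event: ModerationEvent): 'queue' | 'slack' | 'ignore' {
  switch (event.action) {
    case 'block':
      return 'queue';
    case 'flag':
      return 'slack';
    default:
      return 'ignore';
  }
}
```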
Migration Path
Since both APIs are synchronous, you can run them in parallel during migration:
- Keep your OpenAI moderation call
- Add a Vettly call alongside it
- Compare results for a week
- Once confident, remove the OpenAI call
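The comparison in step 3 can be as simple as logging disagreements. A minimal sketch, assuming all you care about during migration is whether the two services land on the same side of the allow/not-allow line:

```typescript
type VettlyAction = 'allow' | 'flag' | 'block';

// Treat both 'flag' and 'block' as the moderated side of the split,
// since OpenAI's result only gives a single flagged boolean.
function resultsAgree(openaiFlagged: boolean, vettlyAction: VettlyAction): boolean {
  const vettlyFlagged = vettlyAction !== 'allow';
  return openaiFlagged === vettlyFlagged;
}
```

Log every case where this returns false, review them after a week, and tune your policy before removing the OpenAI call.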
For a detailed feature comparison, see Vettly vs OpenAI Moderation.
Outgrowing OpenAI moderation?
Vettly adds images, policies, webhooks, and audit trails on top of what you're already doing. Free tier included — no credit card to start.