CONTENT MODERATION API
2,847,291 checks
142,847 blocked

Moderation that works
while you sleep.

Your users stay safe. You stay focused on your product.

See how 3 AI providers analyze the same content in real-time

INPUT
I know where you live. I'm coming for you tonight.
BLOCKED
PROVIDER COMPARISON (same content, 3 providers)
OpenAI (187ms)
  harassment  89%
  violence    94%
  hate        12%
  sexual       1%
  Flagged: violence (94%)

Perspective (142ms)
  harassment  91%
  violence    87%
  hate         8%
  sexual       2%
  Flagged: harassment (91%)

Hive AI (203ms)
  harassment  85%
  violence    92%
  hate        15%
  sexual       1%
  Flagged: violence (92%)

Consensus across all providers. This content would be blocked before reaching your users. Different providers, same conclusion.

Text + Images + Video
OpenAI, Perspective, Hive, Azure
<300ms response
Dashboard + Appeals included

THE BUILD VS BUY CALCULATION

What you're actually signing up for

Raw AI models give you scores. You still need everything else.

3-6 weeks

to build moderation infrastructure in-house

Dashboard, appeals, webhooks, audit trails, rate limiting, provider abstraction...

4+ providers

to cover text, image, and video

OpenAI for text. Hive for images. Azure for CSAM. Perspective for toxicity. Each with different APIs.

Ongoing maintenance

when providers change or deprecate APIs

Provider goes down? Threshold needs tuning? New category added? That's your problem now.

Build it yourself vs. one API call with Vettly

WHAT MAKES VETTLY DIFFERENT

The hard problems, solved

These are the things that take weeks to build yourself.

Multi-provider text analysis

OpenAI Moderation + Google Perspective in one call. Get toxicity, hate speech, spam, and harassment scores without juggling APIs.

await vettly.check({
  text: comment.body,
  policy: "community-safe"
})

Video frame extraction

Upload a video, get frame-by-frame analysis. We handle FFmpeg, thumbnail extraction, and parallel processing. You get flagged timestamps.

await vettly.check({
  video_url: upload.url,
  frames_per_second: 1
})
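Once the analysis returns, flagged timestamps can be filtered in a few lines. A minimal sketch in TypeScript, assuming a hypothetical response shape of `{ frames: [{ timestamp, scores }] }` (the actual field names may differ from the real API):

```typescript
// Hypothetical response shape; the real Vettly fields may differ.
interface FrameResult {
  timestamp: number;               // seconds into the video
  scores: Record<string, number>;  // category -> confidence (0..1)
}

// Return timestamps where any category meets or exceeds the threshold.
function flaggedTimestamps(frames: FrameResult[], threshold = 0.8): number[] {
  return frames
    .filter((f) => Object.values(f.scores).some((s) => s >= threshold))
    .map((f) => f.timestamp);
}
```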

Policy-as-code

YAML policies version-controlled with your app. Different thresholds per category. Swap providers without touching code.

rules:
  - category: hate_speech
    threshold: 0.3
    action: block

See protection in action

Test real content against our AI moderation. See what gets blocked before your users ever see it.


SYSTEM MODULES

Active Detection Engines

System Status
ALL SYSTEMS NOMINAL
POLICY ENGINE
ONLINE
policy:
  name: "community-safe"
  rules:
    - category: hate_speech
      threshold: 0.3
      action: block
    - category: harassment
      threshold: 0.5
      action: flag
    - category: spam
      threshold: 0.7
      action: flag
    - category: violence
      threshold: 0.4
      action: block

Define custom thresholds per category. Block, flag, or warn based on your community standards.

COMPUTER VISION
ONLINE

Scan images for NSFW, violence, and harmful content with Hive and Azure providers.

TEXT INTELLIGENCE
ONLINE

Detect toxicity, spam, and hate speech with OpenAI and Perspective providers.

VIDEO
ONLINE
[Timeline preview: 2 flagged frames between 00:00 and 04:00]

Frame-by-frame analysis for NSFW, violence, weapons, and more in uploaded videos.

POLICY ENGINE

Your Rules, Your Thresholds

Define custom moderation policies in YAML. Set per-category thresholds, choose actions, and version control your safety rules alongside your code.

Per-category thresholds

Set different sensitivity levels: hate_speech: 0.3, spam: 0.7, etc.

Configurable actions

block, flag, or warn based on severity and your community standards.

Provider flexibility

Choose OpenAI, Hive, or Perspective per category for optimal results.

policy.yaml
name: "community-safe"
description: "Balanced moderation for user-generated content"
rules:
  - category: hate_speech
    threshold: 0.3
    action: block
    provider: openai
  - category: harassment
    threshold: 0.5
    action: flag
    provider: perspective
  - category: spam
    threshold: 0.7
    action: warn
    provider: openai
  - category: violence
    threshold: 0.4
    action: block
    provider: hive
Active thresholds:
  hate_speech  block  0.3
  harassment   flag   0.5
  spam         warn   0.7
  violence     block  0.4
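The threshold table above amounts to a small decision function: for each category, compare the provider's score against your threshold and take the most severe action that fires. A sketch of that logic in TypeScript (an illustration, not Vettly's actual engine):

```typescript
type Action = 'allow' | 'warn' | 'flag' | 'block';

interface Rule {
  category: string;
  threshold: number;
  action: Exclude<Action, 'allow'>;
}

// Rules mirroring the community-safe policy above.
const rules: Rule[] = [
  { category: 'hate_speech', threshold: 0.3, action: 'block' },
  { category: 'harassment',  threshold: 0.5, action: 'flag'  },
  { category: 'spam',        threshold: 0.7, action: 'warn'  },
  { category: 'violence',    threshold: 0.4, action: 'block' },
];

// Ordered least to most severe, so we can compare actions by index.
const severity: Action[] = ['allow', 'warn', 'flag', 'block'];

// Return the most severe action among all rules whose threshold is met.
function evaluate(scores: Record<string, number>): Action {
  let result: Action = 'allow';
  for (const rule of rules) {
    const score = scores[rule.category] ?? 0;
    if (score >= rule.threshold &&
        severity.indexOf(rule.action) > severity.indexOf(result)) {
      result = rule.action;
    }
  }
  return result;
}
```

Keeping this logic in a version-controlled policy file, rather than hard-coded, is what lets thresholds change without a redeploy.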
DEVELOPER API

Drop-in Security Layer

Integrate enterprise-grade content safety with a few lines of code. Policies decide actions, SDKs keep auth simple, and webhooks + retries keep you in the loop.

npm install @nextauralabs/vettly-sdk

Type-safe TypeScript SDK for Node.js & Edge runtimes.

pip install vettly

Python SDK with async support for Django, FastAPI & Flask.

REST API

Works with any language via simple HTTP requests.
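A raw HTTP call can be sketched in a few lines of TypeScript. The endpoint URL and body fields below are assumptions for illustration, not the documented contract:

```typescript
// Build a moderation check request. The endpoint path and body shape
// are hypothetical; consult the API reference for the real contract.
function buildCheckRequest(apiKey: string, content: string) {
  return {
    url: 'https://api.vettly.dev/v1/check', // hypothetical endpoint
    init: {
      method: 'POST',
      headers: {
        'Authorization': `Bearer ${apiKey}`,
        'Content-Type': 'application/json',
      },
      body: JSON.stringify({ content, policyId: 'moderate' }),
    },
  };
}

// Usage:
//   const { url, init } = buildCheckRequest(key, text);
//   const result = await fetch(url, init).then((r) => r.json());
```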

integration.ts
import { ModerationClient } from '@nextauralabs/vettly-sdk';

const client = new ModerationClient({
  apiKey: process.env.VETTLY_API_KEY
});

// Moderate user content
const result = await client.check({
  content: userMessage,
  policyId: 'moderate' // or 'strict', 'permissive'
});

if (result.action === 'block') {
  return res.status(403).json({ error: 'Content blocked' });
}

// Safe to proceed
await saveComment(userMessage);
SDK Connected • Latency: 42ms
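For the webhooks mentioned above, deliveries should be verified before they are trusted. A sketch of HMAC signature verification in Node.js; the hex-encoded HMAC-SHA256 scheme here is an assumption, so check the webhook docs for the actual header name and signing method:

```typescript
import { createHmac, timingSafeEqual } from 'node:crypto';

// Recompute the signature over the raw request body and compare it
// in constant time. Scheme (HMAC-SHA256, hex) is assumed for illustration.
function verifyWebhook(rawBody: string, signature: string, secret: string): boolean {
  const expected = createHmac('sha256', secret).update(rawBody).digest('hex');
  const a = Buffer.from(expected);
  const b = Buffer.from(signature);
  return a.length === b.length && timingSafeEqual(a, b);
}
```

Verifying against the raw body (not a re-serialized parse of it) matters, since any whitespace or key-order change breaks the signature.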

Simple pricing, no surprises

Start free. Pay for volume when you need it. All plans include the dashboard, webhooks, and audit trails.

Developer

Test the API. No credit card.

$0/mo
  • 10,000 text checks/mo
  • 250 image + 100 video checks/mo
  • OpenAI provider
  • Pre-built policies
  • Dashboard + logs (24h)
  • No credit card
START FREE

Starter

Add images and video moderation.

$29/mo
  • Unlimited text
  • 5K images + 2K videos
  • OpenAI + Hive providers
  • Webhooks + Appeals
  • 7-day audit logs

Pro

POPULAR

Custom policies + more providers.

$79/mo
  • 20K images + 10K videos
  • + Perspective + Azure
  • Custom YAML policies
  • Priority support
  • 30-day audit logs

Enterprise

Unlimited scale + SLA guarantees.

$499/mo
  • Unlimited everything
  • All 4 AI providers
  • 99.9% uptime SLA
  • Dedicated support
  • 90-day audit logs