AI Safety
How to Moderate AI-Generated Images
Block unsafe generations before users see them.
CSAM signals, NSFW, deepfakes, and policy violations - all in one image check.
What it detects
- CSAM signals
- NSFW & nudity
- Violence & gore
- Deepfake patterns
- Hate symbols
- Custom rules
Why developers choose Vettly
- Same API for upload and generation flows
- CSAM signal detection included
- Sub-500ms image decisions
- Audit trails for compliance
Example request
curl -X POST https://api.vettly.dev/v1/check \
  -H "Authorization: Bearer YOUR_KEY" \
  -H "Content-Type: application/json" \
  -d '{"content": "https://example.com/image.jpg", "contentType": "image"}'

Example response
{
"flagged": true,
"action": "review",
"categories": {
"sexual": 0.88,
"violence": 0.04
},
"policy": "marketplace-safe",
"latency_ms": 318
}

Compared to model-side safety filters
Model-side safety filters are routinely evaded. Vettly runs as an independent check after generation and before display.
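To make that flow concrete, here is a minimal TypeScript sketch that calls the /v1/check endpoint shown above after an image is generated and gates display on the returned action. The generateImage helper, status values, and error handling are placeholders for your own generation and product logic, not part of the Vettly API; the request and response fields match the example above.

// Sketch: moderate an AI-generated image before showing it to the user.
type CheckResponse = {
  flagged: boolean;
  action: "allow" | "review" | "block";
  categories: Record<string, number>;
  policy: string;
  latency_ms: number;
};

// Placeholder for your own model call (not part of Vettly).
declare function generateImage(prompt: string): Promise<string>;

async function moderateGeneratedImage(imageUrl: string): Promise<CheckResponse> {
  const res = await fetch("https://api.vettly.dev/v1/check", {
    method: "POST",
    headers: {
      Authorization: `Bearer ${process.env.VETTLY_API_KEY}`,
      "Content-Type": "application/json",
    },
    body: JSON.stringify({ content: imageUrl, contentType: "image" }),
  });
  if (!res.ok) throw new Error(`Vettly check failed: ${res.status}`);
  return res.json();
}

async function handleGeneration(prompt: string) {
  const imageUrl = await generateImage(prompt);             // 1. generate
  const decision = await moderateGeneratedImage(imageUrl);  // 2. independent check
  if (decision.action === "block") return { status: "blocked" };                 // never shown
  if (decision.action === "review") return { status: "pending_review", imageUrl }; // hold for human review
  return { status: "ok", imageUrl };                        // 3. safe to display
}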
Keep exploring
Content Moderation API
One endpoint for text, image, and video moderation.
Image Moderation API
Policy-driven image checks with clear allow, review, and block actions.
Video Moderation API
Async video moderation without stitching together multiple vendors.
Content Moderation in Next.js
Add content moderation to a Next.js App Router project in minutes. Server-side API routes, React Server Components, and edge runtime examples.
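As a rough sketch of that pattern, the route handler below forwards a generated image URL to /v1/check from a Next.js App Router API route. The route path, request shape, and trimmed response are illustrative assumptions; the Vettly endpoint and fields are the ones shown in the example above.

// app/api/moderate/route.ts — server-side check in a Next.js App Router route handler
import { NextResponse } from "next/server";

export async function POST(request: Request) {
  const { imageUrl } = await request.json();

  const check = await fetch("https://api.vettly.dev/v1/check", {
    method: "POST",
    headers: {
      Authorization: `Bearer ${process.env.VETTLY_API_KEY}`,
      "Content-Type": "application/json",
    },
    body: JSON.stringify({ content: imageUrl, contentType: "image" }),
  });

  const decision = await check.json();
  // Return only what the client needs to decide whether to render the image.
  return NextResponse.json({ flagged: decision.flagged, action: decision.action });
}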
Get an API key
Start making decisions in minutes with a Developer plan and clear upgrade paths.