NSFW Content Detection API
Detect NSFW content across every media type with one API.
Tunable thresholds, clear actions, and review queues.
What it detects
- Sexual content & nudity
- Suggestive imagery
- Explicit text
- Sexual self-harm signals
- CSAM signals
- Custom rules
Why developers choose Vettly
- Same NSFW policy across text, image, and video
- Tunable thresholds per category
- Soft-queue borderline content with the review action
- CSAM signal detection included
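Per-category thresholds can be pictured as a small policy table mapping scores to actions. The sketch below is illustrative only: the category names come from this page, but the threshold values, schema, and decision logic are assumptions, not Vettly's documented configuration.

```python
# Hypothetical per-category policy. Threshold values and this schema are
# illustrative assumptions, not Vettly's documented format.
POLICY = {
    "sexual": {"review": 0.40, "block": 0.90},
    "suggestive": {"review": 0.60, "block": 0.95},
}

def decide(categories: dict) -> str:
    """Map per-category scores to an allow/review/block action."""
    action = "allow"
    for name, score in categories.items():
        thresholds = POLICY.get(name)
        if thresholds is None:
            continue  # category not covered by this policy
        if score >= thresholds["block"]:
            return "block"
        if score >= thresholds["review"]:
            action = "review"  # soft-queue; keep scanning for a harder block
    return action

# Scores from the example response below resolve to a review action.
print(decide({"sexual": 0.88, "violence": 0.04}))
```

Raising or lowering a single category's thresholds changes where content lands without touching the rest of the policy.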
Example request
```bash
curl -X POST https://api.vettly.dev/v1/check \
  -H "Authorization: Bearer YOUR_KEY" \
  -H "Content-Type: application/json" \
  -d '{"content": "https://example.com/image.jpg", "contentType": "image"}'
```

Example response
```json
{
  "flagged": true,
  "action": "review",
  "categories": {
    "sexual": 0.88,
    "violence": 0.04
  },
  "policy": "marketplace-safe",
  "latency_ms": 318
}
```

Compared to image-only detectors
Vettly classifies NSFW across every media type and returns actions, not just probability scores.
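Acting on the returned `action` field rather than raw scores can be sketched as follows. Only the endpoint, headers, and response fields shown in the examples above come from this page; the client code itself is a hypothetical sketch, not an official SDK, and the fallback behavior is an assumption.

```python
import json
import urllib.request

API_URL = "https://api.vettly.dev/v1/check"  # endpoint from the example above

def check_content(content_url: str, api_key: str) -> dict:
    """POST a media URL to the check endpoint and return the parsed response.

    Illustrative sketch: request/response shapes follow the examples on this
    page; error handling and retries are left out.
    """
    payload = json.dumps(
        {"content": content_url, "contentType": "image"}
    ).encode()
    req = urllib.request.Request(
        API_URL,
        data=payload,
        headers={
            "Authorization": f"Bearer {api_key}",
            "Content-Type": "application/json",
        },
    )
    with urllib.request.urlopen(req) as resp:
        return json.load(resp)

def route(result: dict) -> str:
    """Branch on the action field instead of re-deriving it from scores."""
    action = result.get("action", "review")  # assumed fail-safe: unknown -> review
    if action == "allow":
        return "publish"
    if action == "block":
        return "reject"
    return "queue_for_review"

# Routing the example response shown above sends it to the review queue.
print(route({"flagged": True, "action": "review", "categories": {"sexual": 0.88}}))
```

Branching on `action` keeps threshold tuning on the API side, so policy changes don't require client redeploys.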
Keep exploring
Content Moderation API
One endpoint for text, image, and video moderation.
Image Moderation API
Policy-driven image checks with clear allow, review, and block actions.
Video Moderation API
Async video moderation without stitching together multiple vendors.
AI Chatbot Moderation API
Moderate inputs and LLM outputs in real time. Block prompt injection, NSFW content, and policy violations before users see them.
Get an API key
Start making decisions in minutes with a Developer plan and clear upgrade paths.