Image Upload Moderation API
Block unsafe images before they reach your storage bucket.
Synchronous checks at upload time with category-aware decisions.
What it detects
- Sexual content & nudity
- Violence & gore
- Self-harm
- CSAM signals
- Hate symbols
- Custom rules
Why developers choose Vettly
- Sub-500ms image decisions
- Block at upload to keep storage clean
- Category-aware policies tuned per surface
- Evidence + audit trail for every decision
Example request
```bash
curl -X POST https://api.vettly.dev/v1/check \
  -H "Authorization: Bearer YOUR_KEY" \
  -H "Content-Type: application/json" \
  -d '{"content": "https://example.com/image.jpg", "contentType": "image"}'
```
Example response
```json
{
  "flagged": true,
  "action": "review",
  "categories": {
    "sexual": 0.88,
    "violence": 0.04
  },
  "policy": "marketplace-safe",
  "latency_ms": 318
}
```
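In practice, the `action` field is what gates the upload. Here is a minimal TypeScript sketch, assuming the request/response shape shown above; the `VettlyDecision` type and the `moderateImage` and `handleUpload` helpers are illustrative, not part of an official SDK:

```typescript
// Minimal sketch: check an image URL with Vettly before persisting it.
// Assumes the /v1/check request/response shape shown above; type and
// helper names here are illustrative, not an official SDK.
type VettlyDecision = {
  flagged: boolean;
  action: "allow" | "review" | "block";
  categories: Record<string, number>;
  policy: string;
  latency_ms: number;
};

async function moderateImage(imageUrl: string): Promise<VettlyDecision> {
  const res = await fetch("https://api.vettly.dev/v1/check", {
    method: "POST",
    headers: {
      Authorization: `Bearer ${process.env.VETTLY_API_KEY}`,
      "Content-Type": "application/json",
    },
    body: JSON.stringify({ content: imageUrl, contentType: "image" }),
  });
  if (!res.ok) throw new Error(`Vettly check failed: ${res.status}`);
  return (await res.json()) as VettlyDecision;
}

// Gate the upload on the decision: only "allow" reaches storage,
// "review" is parked for a human, "block" is rejected outright.
async function handleUpload(
  imageUrl: string
): Promise<"stored" | "queued" | "rejected"> {
  const decision = await moderateImage(imageUrl);
  switch (decision.action) {
    case "allow":
      return "stored"; // safe to write to your bucket
    case "review":
      return "queued"; // hold for manual review
    default:
      return "rejected"; // never touches storage
  }
}
```

Because decisions come back synchronously (sub-500ms per the numbers above), this check can sit inline in an upload handler rather than in a background job.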
Compared to AWS Rekognition
Vettly returns decisions and actions, not just labels, and pairs image checks with text and video moderation under shared policies.
Keep exploring
Content Moderation API
One endpoint for text, image, and video moderation.
Image Moderation API
Policy-driven image checks with clear allow, review, and block actions.
Video Moderation API
Async video moderation without stitching together multiple vendors.
AI Chatbot Moderation API
Moderate inputs and LLM outputs in real time. Block prompt injection, NSFW content, and policy violations before users see them.
Get an API key
Start making decisions in minutes with a Developer plan and clear upgrade paths.