Profile Moderation API
Stop bad profiles at signup, not after they are reported.
Display names, bios, photos, and links - all under one API.
What it detects
- Slurs in display names
- Impersonation patterns
- NSFW profile photos
- Suspicious link signatures
- Underage profile signals
- Custom rules
Why developers choose Vettly
- Run on signup and every edit
- Image checks for profile photos
- Custom rules for impersonation lists
- Audit trail for every reject
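The "run on signup and every edit" pattern amounts to a small moderation gate in the registration handler. Here is a minimal sketch using only the Python standard library; the endpoint and request body follow the example request below, while `check_profile_field`, `on_signup`, and the injectable `opener` parameter are illustrative names, not official client code.

```python
import json
import urllib.request

VETTLY_URL = "https://api.vettly.dev/v1/check"

def check_profile_field(text, api_key, *, opener=urllib.request.urlopen):
    """Send one profile field (e.g. a display name or bio) for moderation.

    `opener` is injectable so the HTTP call can be stubbed in tests.
    """
    req = urllib.request.Request(
        VETTLY_URL,
        data=json.dumps({"content": text, "contentType": "text"}).encode(),
        headers={
            "Authorization": f"Bearer {api_key}",
            "Content-Type": "application/json",
        },
        method="POST",
    )
    with opener(req, timeout=5) as resp:
        return json.load(resp)

def on_signup(display_name, bio, api_key, *, opener=urllib.request.urlopen):
    """Return True if the profile passes moderation, False to reject signup."""
    for field in (display_name, bio):
        result = check_profile_field(field, api_key, opener=opener)
        if result.get("action") == "block":
            return False
    return True
```

The same gate can be reused on every profile edit, so a display name that was clean at signup cannot later be changed to something abusive without a fresh check.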
Example request
```bash
curl -X POST https://api.vettly.dev/v1/check \
  -H "Authorization: Bearer YOUR_KEY" \
  -H "Content-Type: application/json" \
  -d '{"content": "You are terrible.", "contentType": "text"}'
```
Example response
```json
{
  "flagged": true,
  "action": "block",
  "categories": {
    "harassment": 0.93,
    "hate": 0.02
  },
  "policy": "default",
  "latency_ms": 142
}
```
Compared to wordlist-only checks
AI categories catch novel slurs and adversarial spelling that static lists miss.
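One way an integration might consume the response shape shown above is to follow the `action` field directly, and fall back to the category scores only when no action is present. This is a sketch under stated assumptions: the fallback threshold of 0.8 is an illustrative value, not a documented Vettly default.

```python
def apply_decision(result, review_threshold=0.8):
    """Map a moderation response dict to an application decision string.

    Prefers the API's own `action` field; otherwise falls back to the
    highest category score (the threshold is an assumed value).
    """
    action = result.get("action")
    if action in ("allow", "review", "block"):
        return action
    scores = result.get("categories") or {"none": 0.0}
    return "block" if max(scores.values()) >= review_threshold else "allow"

# The example response from this page:
example = {
    "flagged": True,
    "action": "block",
    "categories": {"harassment": 0.93, "hate": 0.02},
}
# apply_decision(example) follows the API's action and returns "block".
```

Branching on `action` rather than raw scores keeps policy decisions on the server side, so threshold changes do not require a client redeploy.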
Keep exploring
Content Moderation API
One endpoint for text, image, and video moderation.
Image Moderation API
Policy-driven image checks with clear allow, review, and block actions.
Video Moderation API
Async video moderation without stitching together multiple vendors.
AI Chatbot Moderation API
Moderate inputs and LLM outputs in real time. Block prompt injection, NSFW content, and policy violations before users see them.
Get an API key
Start making decisions in minutes with a Developer plan and clear upgrade paths.