Content Moderation for Social Media
Trust-and-safety infrastructure for social platforms.
Per-surface policies, multi-modal coverage, and audit trails.
What it detects
- Hate & harassment
- NSFW media
- Bullying campaigns
- Coordinated inauthentic behavior
- Spam & scam DMs
- Custom rules
Why developers choose Vettly
- Per-surface policies (posts, DMs, profiles)
- Image and video coverage in the same API
- Appeals + audit trails for T&S teams
- Scales to millions of decisions per day
Example request
```bash
curl -X POST https://api.vettly.dev/v1/check \
  -H "Authorization: Bearer YOUR_KEY" \
  -H "Content-Type: application/json" \
  -d '{"content": "You are terrible.", "contentType": "text"}'
```
Example response
```json
{
  "flagged": true,
  "action": "block",
  "categories": {
    "harassment": 0.93,
    "hate": 0.02
  },
  "policy": "default",
  "latency_ms": 142
}
```
Compared to per-surface tooling
One policy system across every UGC surface keeps user experience consistent and platform rules legible.
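The flow described above — pick a policy for the surface, call `/v1/check`, act on the decision — can be sketched in a few lines of Python. Only the endpoint and the response fields come from the samples on this page; the surface names, policy ids, `policy` request field, and routing rules are illustrative assumptions, not Vettly's documented configuration.

```python
# Illustrative sketch of a per-surface moderation flow.
# The response shape follows the sample above; surface names,
# policy ids, and the `policy` request field are assumptions.

SURFACE_POLICIES = {
    "post": "public-feed",      # assumed policy ids
    "dm": "private-message",
    "profile": "profile-fields",
}

def build_request(surface: str, content: str) -> dict:
    """Request body for POST https://api.vettly.dev/v1/check."""
    return {
        "content": content,
        "contentType": "text",
        "policy": SURFACE_POLICIES.get(surface, "default"),
    }

def route(response: dict) -> str:
    """Map a moderation response to a platform-side outcome."""
    if not response.get("flagged"):
        return "publish"
    if response.get("action") == "block":
        return "hide_and_notify"
    return "queue_for_review"   # e.g. action == "review"

sample = {"flagged": True, "action": "block",
          "categories": {"harassment": 0.93, "hate": 0.02},
          "policy": "default", "latency_ms": 142}

print(build_request("dm", "You are terrible.")["policy"])  # private-message
print(route(sample))  # hide_and_notify
```

Because every surface routes through the same decision function, changing a rule for DMs cannot silently diverge from the rules applied to posts.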
See the social app pattern
Keep exploring
Content Moderation API
One endpoint for text, image, and video moderation.
Image Moderation API
Policy-driven image checks with clear allow, review, and block actions.
Video Moderation API
Async video moderation without stitching together multiple vendors.
Content Moderation for EdTech
Keep classrooms, tutoring chats, and student forums safe. COPPA-aware policies, parental review flows, and audit trails for school administrators.
Get an API key
Start making decisions in minutes with a Developer plan and clear upgrade paths.