Content Moderation for Social Media

Trust-and-safety infrastructure for social platforms.

Per-surface policies, multi-modal coverage, and audit trails.

What it detects

  • Hate & harassment
  • NSFW media
  • Bullying campaigns
  • Coordinated inauthentic behavior
  • Spam & scam DMs
  • Custom rules

Why developers choose Vettly

  • Per-surface policies (posts, DMs, profiles)
  • Image and video coverage in the same API
  • Appeals + audit trails for T&S teams
  • Scales to millions of decisions per day
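To make the per-surface idea concrete, here is a minimal sketch of routing each UGC surface to its own policy when building a check request. The `policy` field mirrors the `policy` key in the example response below; the exact request schema and the policy names are assumptions, not confirmed API.

```python
# Hypothetical sketch: map each UGC surface (posts, DMs, profiles) to a
# named policy, then attach that policy to the moderation request payload.
# Policy names here are illustrative, not real Vettly identifiers.
SURFACE_POLICIES = {
    "post": "public-feed",
    "dm": "direct-messages",
    "profile": "profile-fields",
}

def build_check_payload(content: str, content_type: str, surface: str) -> dict:
    """Build a /v1/check payload with the surface-specific policy attached."""
    return {
        "content": content,
        "contentType": content_type,
        # Fall back to the default policy for surfaces without an override.
        "policy": SURFACE_POLICIES.get(surface, "default"),
    }

payload = build_check_payload("hey, check this link", "text", "dm")
print(payload["policy"])  # direct-messages
```

Keeping this mapping in one place is what makes platform rules legible: the same content can be acceptable in a DM but blocked on a public feed, and the difference is one table entry rather than a separate tool.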
Example request

```bash
curl -X POST https://api.vettly.dev/v1/check \
  -H "Authorization: Bearer YOUR_KEY" \
  -H "Content-Type: application/json" \
  -d '{"content": "You are terrible.", "contentType": "text"}'
```
Example response

```json
{
  "flagged": true,
  "action": "block",
  "categories": {
    "harassment": 0.93,
    "hate": 0.02
  },
  "policy": "default",
  "latency_ms": 142
}
```
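A response like the one above is typically translated into a platform-side decision. The sketch below uses the field names from the example response; the review threshold and the set of platform actions are assumptions for illustration.

```python
# Minimal sketch of acting on a check response. The response dict copies
# the example above; the 0.5 review threshold is an assumed tuning value.
response = {
    "flagged": True,
    "action": "block",
    "categories": {"harassment": 0.93, "hate": 0.02},
    "policy": "default",
    "latency_ms": 142,
}

def decide(resp: dict, review_threshold: float = 0.5) -> str:
    """Map a moderation response to a platform-side action."""
    if resp["action"] == "block":
        return "remove"  # hide the content immediately
    # For non-blocking responses, send borderline category scores to humans.
    flagged = {c: s for c, s in resp["categories"].items() if s >= review_threshold}
    return "queue_for_review" if flagged else "allow"

print(decide(response))  # remove
```

Keeping the thresholding on your side of the API makes it easy to feed "queue_for_review" items into the appeals and audit-trail workflow described above.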

Compared to per-surface tooling

One policy system across every UGC surface keeps user experience consistent and platform rules legible.

See the social app pattern

Get an API key

Start making moderation decisions in minutes on the Developer plan, with clear upgrade paths as you scale.