AI Safety

How to Moderate AI-Generated Images

Block unsafe generations before users see them.

CSAM signals, NSFW, deepfakes, and policy violations - all in one image check.

What it detects

  • CSAM signals
  • NSFW & nudity
  • Violence & gore
  • Deepfake patterns
  • Hate symbols
  • Custom rules

Why developers choose Vettly

  • Same API for upload and generation flows
  • CSAM signal detection included
  • Sub-500ms image decisions
  • Audit trails for compliance

Example request

curl -X POST https://api.vettly.dev/v1/check \
  -H "Authorization: Bearer YOUR_KEY" \
  -H "Content-Type: application/json" \
  -d '{"content": "https://example.com/image.jpg", "contentType": "image"}'

Example response

{
  "flagged": true,
  "action": "review",
  "categories": {
    "sexual": 0.88,
    "violence": 0.04
  },
  "policy": "marketplace-safe",
  "latency_ms": 318
}
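
A minimal sketch of acting on this response with curl and jq, based only on the fields shown above. The shell logic around the call, including the held/cleared handling, is an illustrative assumption, not an official SDK:

#!/usr/bin/env bash
# Run the check, then gate display on the response fields shown above.
# The endpoint, headers, and body come from the example request on this
# page; everything else is an illustrative assumption.
RESPONSE=$(curl -s -X POST https://api.vettly.dev/v1/check \
  -H "Authorization: Bearer YOUR_KEY" \
  -H "Content-Type: application/json" \
  -d '{"content": "https://example.com/image.jpg", "contentType": "image"}')

if [ "$(echo "$RESPONSE" | jq -r '.flagged')" = "true" ]; then
  # Keep flagged images out of the user-facing flow; "review" actions
  # can be routed to a human moderation queue.
  echo "Image held (action: $(echo "$RESPONSE" | jq -r '.action'))"
else
  echo "Image cleared for display"
fi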

Compared to model-side safety filters

Model-side safety filters are routinely evaded. Vettly runs as an independent check after generation and before display.
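
In a generation flow, that means the check slots between the model call and the point where the image becomes visible. A sketch of the ordering, where generate_image, queue_for_moderation, and publish_image are hypothetical stand-ins for your own functions, and the "block" action value is an assumption (only "review" appears in the example above):

# Generate, check, then display: nothing reaches users unchecked.
IMAGE_URL=$(generate_image "a city skyline at dusk")   # hypothetical model call

ACTION=$(curl -s -X POST https://api.vettly.dev/v1/check \
  -H "Authorization: Bearer YOUR_KEY" \
  -H "Content-Type: application/json" \
  -d "{\"content\": \"$IMAGE_URL\", \"contentType\": \"image\"}" \
  | jq -r '.action')

case "$ACTION" in
  review|block) queue_for_moderation "$IMAGE_URL" ;;  # hypothetical helper
  *)            publish_image "$IMAGE_URL" ;;         # hypothetical helper
esac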

Get an API key

Start making decisions in minutes with a Developer plan and clear upgrade paths.
