Content Moderation for Gaming
Keep games safe without slowing down matches.
Sub-300ms text decisions, image checks for player content, and child-safety presets.
What it detects
- Toxicity in chat
- Hate speech & slurs
- Doxxing & PII leaks
- NSFW player avatars
- Cheating chatter
- Custom rules
Why developers choose Vettly
- Streaming endpoint with sub-300ms target
- Child-safety presets for under-13 lobbies
- Per-region and per-platform policies
- Webhooks for matchmaking penalties
Example request
```bash
curl -X POST https://api.vettly.dev/v1/check \
  -H "Authorization: Bearer YOUR_KEY" \
  -H "Content-Type: application/json" \
  -d '{"content": "You are terrible.", "contentType": "text"}'
```

Example response

```json
{
  "flagged": true,
  "action": "block",
  "categories": {
    "harassment": 0.93,
    "hate": 0.02
  },
  "policy": "default",
  "latency_ms": 142
}
```

Compared to wordlist filters
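A minimal client-side sketch of acting on that response. This is illustrative, not an official SDK: the endpoint and JSON fields come from the example above, while the `requests` usage, function names, and the allow/review/block-to-game-behavior mapping are assumptions you would adapt to your own matchmaking stack.

```python
import requests

API_URL = "https://api.vettly.dev/v1/check"  # endpoint from the example above
API_KEY = "YOUR_KEY"  # placeholder, as in the curl example


def route_action(action: str) -> str:
    """Map the API's allow/review/block actions to a game-side decision.

    Unknown actions fall back to 'hold' so new policy actions fail safe.
    """
    return {"allow": "deliver", "review": "hold", "block": "drop"}.get(action, "hold")


def moderate_chat(message: str) -> str:
    """Check one chat message and return 'deliver', 'hold', or 'drop'."""
    resp = requests.post(
        API_URL,
        headers={"Authorization": f"Bearer {API_KEY}"},
        json={"content": message, "contentType": "text"},
        timeout=1.0,  # keep chat latency bounded; fail open or closed per your policy
    )
    resp.raise_for_status()
    return route_action(resp.json()["action"])
```

For the example response above (`"action": "block"`), `route_action` returns `"drop"`, so the message never reaches the lobby.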
AI categories catch evasions that static wordlists miss (l33t spellings, adversarial misspellings, novel slang) and adapt as players invent new ones.
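To see why static wordlists fall behind, here is a toy sketch (illustrative only, not how Vettly's models work). Even a wordlist upgraded with a leetspeak-normalization table only catches the substitutions it already knows about; the blocklist, mapping table, and sample strings below are all made up for the demonstration.

```python
BLOCKLIST = {"idiot", "loser"}  # toy wordlist; real filters are far larger

# Undo common leetspeak substitutions: 0->o, 1->i, 3->e, 4->a, 5->s, 7->t, @->a, $->s
LEET = str.maketrans("013457@$", "oieastas")


def wordlist_flag(text: str) -> bool:
    """Naive filter: substring match against the wordlist after lowercasing."""
    return any(word in text.lower() for word in BLOCKLIST)


def normalized_flag(text: str) -> bool:
    """Slightly smarter: undo known leetspeak substitutions, then match."""
    return wordlist_flag(text.lower().translate(LEET))


wordlist_flag("you 1d10t")      # False: the plain wordlist misses the obfuscation
normalized_flag("you 1d10t")    # True: normalization recovers this spelling
normalized_flag("i d i o t")    # False: an evasion outside the table still slips through
```

Every new evasion pattern means another table entry, and players iterate faster than the table grows; category models score the intent instead of matching the surface form.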
See the game chat pattern

Keep exploring
Content Moderation API
One endpoint for text, image, and video moderation.
Image Moderation API
Policy-driven image checks with clear allow, review, and block actions.
Video Moderation API
Async video moderation without stitching together multiple vendors.
Content Moderation for EdTech
Keep classrooms, tutoring chats, and student forums safe. COPPA-aware policies, parental review flows, and audit trails for school administrators.
Get an API key
Start making decisions in minutes with a Developer plan and clear upgrade paths.