Building a Content Moderation Pipeline for Marketplace Listings
Marketplaces have a unique moderation challenge. Unlike social apps where the primary concern is harassment or hate speech, marketplaces must also catch fraudulent listings, counterfeit goods, misleading descriptions, and pricing manipulation. A single moderation check isn't enough — you need a pipeline.
This guide covers how to build a multi-stage moderation pipeline for marketplace listings using the Vettly API.
Why Marketplaces Need a Pipeline
A marketplace listing has multiple surfaces that need moderation:
- Title and description — could contain hate speech, scam language, or misleading claims
- Images — could show prohibited items, stolen product photos, or explicit content
- Pricing — absurdly low prices for high-value items often indicate fraud
- Seller metadata — new accounts with dozens of listings may be spam
Checking all of these in a single pass is possible, but staging the checks gives you better control over what gets flagged, what gets auto-blocked, and what needs human review.
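The control that staging buys you comes down to mapping each check's outcome to a listing state. A minimal sketch of that routing, assuming `block`/`flag`/`allow` action names and the listing statuses used in this guide (the mapping itself is an illustration, not part of the Vettly API):

```javascript
// Map a moderation action to the listing status it should produce.
// 'block' rejects outright, 'flag' routes to human review, and
// 'allow' lets the listing continue to the next stage.
const ACTION_TO_STATUS = {
  block: 'rejected',
  flag: 'needs_review',
  allow: 'pending',
};

function routeAction(action) {
  const status = ACTION_TO_STATUS[action];
  if (!status) throw new Error(`Unknown moderation action: ${action}`);
  return status;
}
```

Keeping this mapping in one place means every stage of the pipeline resolves actions the same way, rather than each handler inventing its own status logic.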
Pipeline Architecture
The pipeline has three stages:
Pre-Publish Check
Synchronous. Runs before the listing is saved. Catches obviously prohibited content (hate speech, explicit images, scam patterns). Blocked listings never reach the database.
Async Enrichment
Asynchronous. Runs after the listing is saved but before it's visible to buyers. Checks images, cross-references seller history, and flags suspicious pricing patterns.
Human Review Queue
Flagged listings route to a moderation dashboard. Reviewers approve or reject with one click. The decision is recorded with a decisionId for audit trails.
Stage 1: Pre-Publish Check
When a seller submits a listing, run the text content through Vettly before saving:
```javascript
app.post('/api/listings', async (req, res) => {
  const { title, description, price, images } = req.body;

  // Stage 1: synchronous text check
  const textCheck = await vettly.check({
    content: `${title}\n\n${description}`,
    policy: 'marketplace',
  });

  if (textCheck.action === 'block') {
    return res.status(422).json({
      error: 'Listing violates marketplace policies',
      categories: textCheck.categories,
    });
  }

  // Save listing in 'pending' status
  const listing = await db.listings.create({
    title, description, price, images,
    status: 'pending',
    moderationId: textCheck.decisionId,
    sellerId: req.user.id,
  });

  // Trigger async enrichment
  await queue.publish('listing.moderate', { listingId: listing.id });

  return res.status(201).json({ id: listing.id, status: 'pending' });
});
```
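Because this check is synchronous, you must decide what happens when the moderation API errors or times out: fail closed (block the submission) or fail open (accept it but route it to review). A sketch of one way to wrap the call, assuming a configurable timeout and a `failClosed` flag (both are design assumptions, not Vettly behavior):

```javascript
// Run a moderation check with a timeout. On error or timeout, never
// silently publish unchecked content: either block the submission
// (fail closed) or return a 'flag' action so the listing lands in
// the human review queue (fail open).
async function checkWithFallback(checkFn, { timeoutMs = 2000, failClosed = false } = {}) {
  let timer;
  const timeout = new Promise((_, reject) => {
    timer = setTimeout(() => reject(new Error('moderation check timed out')), timeoutMs);
  });
  try {
    return await Promise.race([checkFn(), timeout]);
  } catch (err) {
    return failClosed
      ? { action: 'block', categories: ['moderation_unavailable'] }
      : { action: 'flag', categories: ['moderation_unavailable'] };
  } finally {
    clearTimeout(timer);
  }
}
```

In the route above you would call `checkWithFallback(() => vettly.check({ ... }))` instead of calling the client directly; failing open keeps listing creation available during an outage while still holding everything for review.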
Stage 2: Async Image and Fraud Checks
A background worker picks up the job and runs deeper checks:
```javascript
queue.subscribe('listing.moderate', async (job) => {
  const listing = await db.listings.findById(job.listingId);

  // Check each image
  for (const imageUrl of listing.images) {
    const imgCheck = await vettly.check({
      imageUrl,
      policy: 'marketplace',
    });

    if (imgCheck.action === 'block') {
      await db.listings.update(listing.id, { status: 'rejected' });
      return;
    }

    if (imgCheck.action === 'flag') {
      await db.listings.update(listing.id, { status: 'needs_review' });
      return;
    }
  }

  // All checks passed — publish the listing
  await db.listings.update(listing.id, { status: 'published' });
});
```
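The worker above only covers images; the suspicious-pricing check described in the architecture section can be a simple heuristic you run in the same job. A sketch, assuming you maintain a median price per category — the function name and the 20% threshold are illustrative choices, not Vettly functionality:

```javascript
// Flag listings priced far below the category median — a common
// signal for counterfeit or bait listings. Returns true when the
// listing should be routed to human review rather than published.
function isPriceSuspicious(price, categoryMedian, { minRatio = 0.2 } = {}) {
  if (!categoryMedian || categoryMedian <= 0) return false; // no baseline yet
  return price > 0 && price < categoryMedian * minRatio;
}
```

A $150 listing in a category with an $800 median would be flagged; in the worker, a `true` result would set the listing's status to `needs_review` instead of `published`.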
Stage 3: Human Review
Listings with a needs_review status appear in your moderation dashboard. Vettly's dashboard shows the flagged content, the categories that triggered the flag, and the confidence scores. Reviewers approve or reject with context.
For custom dashboards, use the Vettly API to fetch pending decisions and submit reviewer actions:
```javascript
// Fetch flagged listings for the review queue
app.get('/api/moderation/queue', async (req, res) => {
  const listings = await db.listings.find({ status: 'needs_review' });
  return res.json(listings);
});

// Reviewer approves or rejects
app.post('/api/moderation/:listingId/decide', async (req, res) => {
  const { action } = req.body; // 'approve' or 'reject'
  await db.listings.update(req.params.listingId, {
    status: action === 'approve' ? 'published' : 'rejected',
    reviewedBy: req.user.id,
    reviewedAt: new Date(),
  });
  return res.json({ status: 'ok' });
});
```
Handling Buyer Reports
Buyers should be able to report listings they suspect are fraudulent or misleading. This is the same reporting mechanism used in social apps, adapted for marketplace context:
```javascript
app.post('/api/listings/:id/report', async (req, res) => {
  await vettly.reports.create({
    contentId: req.params.id,
    reason: req.body.reason, // "counterfeit", "misleading", "prohibited"
    reportedBy: req.user.id,
  });
  return res.status(201).json({ status: 'reported' });
});
```
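Reports are most useful when they feed back into listing state. One common rule is to auto-hide a listing pending review once enough distinct buyers report it. A sketch of that threshold check — the threshold value and the report shape are assumptions for illustration:

```javascript
// Decide whether accumulated buyer reports should pull a listing
// from search results pending review. Counts distinct reporters so
// a single user submitting repeat reports can't hide a listing alone.
function shouldAutoHide(reports, { threshold = 3 } = {}) {
  const distinctReporters = new Set(reports.map((r) => r.reportedBy));
  return distinctReporters.size >= threshold;
}
```

You would run this after recording each report and, when it returns true, move the listing to `needs_review` so a human makes the final call.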
Seller Blocking
When a buyer blocks a seller, that seller's listings should no longer appear in the buyer's search results or recommendations:
```javascript
app.post('/api/sellers/:id/block', async (req, res) => {
  await vettly.blocks.create({
    userId: req.params.id,
    blockedBy: req.user.id,
  });
  return res.status(201).json({ status: 'blocked' });
});
```
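Recording the block is half the job; enforcing it on the read path is yours. When serving search results or recommendations, drop listings from sellers the current buyer has blocked. A minimal sketch, assuming listing objects carry a `sellerId` and you can load the buyer's blocked-seller IDs from your own store:

```javascript
// Remove listings whose seller the current buyer has blocked.
// blockedSellerIds would typically come from your blocks table,
// fetched once per request (or cached per session).
function filterBlockedSellers(listings, blockedSellerIds) {
  const blocked = new Set(blockedSellerIds);
  return listings.filter((listing) => !blocked.has(listing.sellerId));
}
```

Building the `Set` once keeps the filter O(n) over the result page regardless of how many sellers the buyer has blocked.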
Key Takeaways
- Stage your checks: synchronous text filtering first, async image checks second, human review third
- Never publish unchecked listings: use a pending status and only flip to published after all checks pass
- Store decision IDs: every moderation decision should be traceable for dispute resolution and compliance
- Report + block for buyers: give buyers tools to flag bad listings and hide bad sellers
Build your marketplace moderation pipeline
Vettly handles text, image, and video moderation plus reporting and blocking in one API. Free tier includes 15,000 decisions per month.