
Content Moderation Overview

The RC RTC service supports automatically or manually sending audio and video data from voice/video calls, online meetings, or live streaming applications to moderation services. This enables you to review user-generated content in your app, enhancing business security and preventing violations that could harm your operations.

RC delivers moderation results to your application server through callback services. By analyzing these results, you can decide whether to take actions like banning users.

tip

The client side does not provide management or callback interfaces for this feature.

Audio Stream Moderation provides the following capabilities:

  • Political speech recognition: Accurately identifies over 100 types of political content, including political figures, events, separatist rhetoric, and terrorism in various scenarios.
  • Moaning voice recognition: Leverages Bi-GRU and Attention models to precisely detect inappropriate audio such as moaning, groaning, whispering, or shout-singing.
  • National anthem recognition: Uses NAR models built on hybrid deep neural networks to accurately identify standard or distorted renditions of national anthems in complex environments.
  • Pornographic speech recognition: Detects audio containing sexual, vulgar, obscene, or erotic content.
  • Abusive speech recognition: Identifies insults, slurs, defamation, and other abusive content across scenarios.
  • Spam ad recognition: Precisely flags illegal promotional content featuring WeChat IDs, phone numbers, QQ accounts, etc.

Video Stream Moderation provides the following capabilities:

  • Political video recognition: Accurately detects national flags/emblems, political figures, military uniforms, subversive elements, or leader caricatures in videos.
  • Pornographic video recognition: Identifies sexual, suggestive, vulgar, hentai, child exposure, game nudity, or explicit content.
  • Violent/terrorist video recognition: Detects bloody riots, terrorist groups, cults, weapons, and other violent content.
  • Ad video recognition: Real-time identification of variant spam ads containing phone numbers, WeChat/QQ IDs, URLs, QR codes, etc.
  • Logo watermark recognition: High-precision detection of competitor logos or politically sensitive logos to protect brand integrity.

Service Architecture

RC media servers (RTC Server) transcode received audio/video streams into the RTMP format required by moderation services, so you do not need to transcode streams or capture frames yourself.

After a moderation task is initiated, RC delivers third-party moderation results to you via callbacks, either at scheduled intervals or when an interception event is triggered. You can parse these results to determine follow-up actions such as kicking users from rooms or banning them.
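
As a rough illustration of the receiving side, the sketch below shows a minimal application-server handler for these callbacks, written in Go. The payload fields (`roomId`, `userId`, `type`, `result`) and the result values are assumptions made for this example only; consult the callback reference for the actual schema and any signature verification requirements.

```go
package main

import (
	"encoding/json"
	"log"
	"net/http"
)

// ModerationCallback is a hypothetical callback payload used for illustration;
// the actual field names and values are defined in the RC callback reference.
type ModerationCallback struct {
	RoomID string `json:"roomId"` // room whose stream was moderated
	UserID string `json:"userId"` // publisher of the flagged stream
	Type   string `json:"type"`   // e.g. "audio" or "video" (assumed)
	Result string `json:"result"` // e.g. "pass", "review", "block" (assumed)
}

func moderationHandler(w http.ResponseWriter, r *http.Request) {
	var cb ModerationCallback
	if err := json.NewDecoder(r.Body).Decode(&cb); err != nil {
		http.Error(w, "bad request", http.StatusBadRequest)
		return
	}

	// Decide a follow-up action based on the moderation result.
	switch cb.Result {
	case "block":
		// e.g. kick or ban the user through your own business logic
		// or the RC server API.
		log.Printf("blocked %s content in room %s from user %s", cb.Type, cb.RoomID, cb.UserID)
	case "review":
		log.Printf("queue room %s / user %s for manual review", cb.RoomID, cb.UserID)
	}

	// Acknowledge receipt so the callback is considered delivered.
	w.WriteHeader(http.StatusOK)
}

func main() {
	// The path must match the callback URL configured in the Console,
	// e.g. http(s)://your.app.server/any-url-path.
	http.HandleFunc("/any-url-path", moderationHandler)
	log.Fatal(http.ListenAndServe(":8080", nil))
}
```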


Enabling the Service

You can activate the service on the IM & RTC Moderation page of the Console and configure the following:

  • Moderation trigger method:

    • Auto-start: Moderation begins automatically when an RTC session starts.
    • Manual start: Initiate moderation via the server API (termination via API is also supported); see Task Control. A request sketch appears at the end of this section.

    In either case, moderation stops automatically when the RTC session ends.

  • Callback URL for moderation results:

    Example: http(s)://your.app.server/any-url-path. Once configured, every moderation status change for rooms in your app triggers a real-time HTTP callback to this publicly accessible URL.

Configurations take effect within 15 minutes.
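
For the manual-start case, the sketch below shows what a server-side request to begin moderation for a room might look like. The endpoint URL, request body fields, and authentication shown here are placeholders, not the real interface; the actual request format and required signature headers are documented in Task Control.

```go
package main

import (
	"bytes"
	"encoding/json"
	"fmt"
	"net/http"
)

// startModeration is a hypothetical sketch of manually starting a moderation
// task for a room. The URL and body fields below are placeholders; see the
// Task Control documentation for the real endpoint and parameters.
func startModeration(roomID string) error {
	body, err := json.Marshal(map[string]string{
		"roomId": roomID, // assumed field name
	})
	if err != nil {
		return err
	}

	// Placeholder endpoint; replace with the address from Task Control.
	req, err := http.NewRequest(http.MethodPost,
		"https://rtc-api.example.com/moderation/start", bytes.NewReader(body))
	if err != nil {
		return err
	}
	req.Header.Set("Content-Type", "application/json")
	// Add the app key / signature headers required by the server API here.

	resp, err := http.DefaultClient.Do(req)
	if err != nil {
		return err
	}
	defer resp.Body.Close()

	if resp.StatusCode != http.StatusOK {
		return fmt.Errorf("start moderation failed: %s", resp.Status)
	}
	return nil
}

func main() {
	if err := startModeration("room-1001"); err != nil {
		fmt.Println(err)
	}
}
```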