Azure Content Moderator is an Azure Cognitive Service that checks text, image, and video content for material that is potentially offensive, risky, or otherwise undesirable. It combines machine-assisted moderation with human-in-the-loop capabilities to create an optimal moderation process for real-world scenarios.
Typical scenarios where content is analyzed include:
- Online marketplaces that moderate product catalogs and other user-generated content.
- Gaming companies that moderate user-generated game artifacts and chat rooms.
- Social messaging platforms that moderate images, text, and videos added by their users.
- Enterprise media companies that implement centralized moderation for their content.
- K-12 education solution providers filtering out content that is inappropriate for students and educators.
What does it include?
The Azure Content Moderator service consists of several web service APIs, available through both REST calls and a .NET SDK. These APIs offer:
- Text moderation: Scans text for offensive content, sexually explicit or suggestive content, profanity, and personally identifiable information (PII).
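As a rough sketch of how the text moderation REST API is called, the snippet below builds a request against the `ProcessText/Screen` operation. The region (`westus`) and the placeholder subscription key are assumptions; substitute the endpoint and key of your own Content Moderator resource.

```python
# Sketch: build a request for the Content Moderator text-screening endpoint.
# Region and key are placeholders -- use your own resource's values.
import urllib.request

ENDPOINT = "https://westus.api.cognitive.microsoft.com"   # assumed region
PATH = "/contentmoderator/moderate/v1.0/ProcessText/Screen"
# classify=True requests offensive-content classification;
# PII=True requests detection of personally identifiable information.
QUERY = "?classify=True&PII=True&language=eng"

def build_screen_request(text: str, subscription_key: str) -> urllib.request.Request:
    """Build (but do not send) a POST request that screens the given text."""
    return urllib.request.Request(
        ENDPOINT + PATH + QUERY,
        data=text.encode("utf-8"),
        headers={
            "Content-Type": "text/plain",
            "Ocp-Apim-Subscription-Key": subscription_key,
        },
        method="POST",
    )

req = build_screen_request("Contact me at someone@example.com", "<your-key>")
print(req.full_url)
```

Sending the request with `urllib.request.urlopen(req)` returns a JSON response listing classification scores, matched profanity terms, and any detected PII such as email addresses or phone numbers.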