Microsoft is launching a new AI-powered moderation service that it says is designed to foster safer online environments and communities. Called Azure AI Content Safety, the new offering, available through the Azure AI product platform, provides a range of AI models trained to detect “inappropriate” content across images and text.

The models — which can understand text in English, Spanish, German, French, Japanese, Portuguese, Italian, and Chinese — assign a severity score to flagged content, indicating to moderators what content requires action.

“Microsoft has been working on solutions in response to the challenge of harmful content appearing in online communities for over two years. We recognized that existing systems weren’t effectively taking into account context or able to work in multiple languages,” a Microsoft spokesperson said via email. “New [AI] models are able to understand content and cultural context so much better. They are multilingual from the start … and they provide clear and understandable explanations, allowing users to understand why content was flagged or removed.”
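The severity-score workflow described above — a model flags content, and the score tells moderators whether action is needed — can be sketched as a simple thresholding step. This is a hypothetical illustration only: the category names, the 0–7 scale, and the thresholds below are invented for the sketch and are not Azure AI Content Safety's actual schema.

```python
# Hypothetical triage logic for severity-scored moderation flags.
# Categories, the 0-7 scale, and thresholds are illustrative assumptions,
# not the real Azure AI Content Safety API.
from dataclasses import dataclass

@dataclass
class Flag:
    category: str   # e.g. "hate", "violence" (assumed labels)
    severity: int   # assumed 0 (benign) .. 7 (most severe)

def triage(flags, review_at=2, remove_at=5):
    """Map the worst severity score among flags to a moderator action."""
    worst = max((f.severity for f in flags), default=0)
    if worst >= remove_at:
        return "auto-remove"
    if worst >= review_at:
        return "queue for human review"
    return "allow"

print(triage([Flag("hate", 6)]))       # highest severity drives the action
print(triage([Flag("violence", 3)]))   # mid severity goes to a human
print(triage([]))                      # nothing flagged
```

The point of the two thresholds is that a platform rarely wants a binary outcome: low scores pass, mid scores go to a human, and only the most severe content is removed automatically.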

During a demo at Microsoft’s annual Build conference, Sarah Bird, Microsoft’s responsible AI lead, explained that Azure AI Content Safety is a productized version of the safety system powering Microsoft’s Bing chatbot and GitHub Copilot, the AI-powered code-generating service.

“We’re now launching it as a product that third-party customers can use,” Bird said in a statement. Presumably, the tech behind Azure AI Content Safety has improved since it first launched for Bing Chat in early February. Bing Chat went off the rails when it first rolled out in preview; our coverage found the chatbot spouting vaccine misinformation and writing a hateful screed from the perspective of Adolf Hitler. Other reporters got it to make threats and even shame them for admonishing it.

In another knock against Microsoft, the company laid off the ethics and society team within its larger AI organization just a few months ago, leaving it without a dedicated team to ensure its AI principles are closely tied to product design.