In today’s digital era, where information and media are easily accessible to a global audience, content safety has become an essential concern for individuals and organizations across all sectors. With the growth of social media and digital communication, there is a pressing need to protect users from offensive, unsafe, or inappropriate content.
The traditional approach to content safety typically involves manual moderation or rule-based filtering. Both bring challenges: they are resource-intensive, slow to respond, and can struggle to keep pace as the volume of digital content grows.
AI-powered solutions offer a significant advantage over these traditional approaches, bringing advanced algorithms, higher accuracy, and scalability.
Microsoft recently announced Azure AI Content Safety, aiming to create safer online spaces. The offering is part of the Azure AI platform and leverages different AI models to detect inappropriate content in text and images. Its multilingual model can analyze text in eight languages.
The service can prove useful in a variety of scenarios: social messaging platforms, companies seeking centralized moderation of their content, education solutions that require content filtering for students, and gaming platforms.
Let’s explore the features offered (a minimal usage sketch follows the list):
- Content classification for text using language models
- Explicit-image detection powered by Microsoft’s Florence foundation model
- Severity scores assigned to harmful content
- Semantic understanding to grasp nuance and context
- Multilingual support: English, German, French, Spanish, Italian, Japanese, Chinese, and Portuguese
- Customizable moderation that can incorporate organization-specific policies or guidelines
- Data encryption and decryption in transit, handled by the service to uphold security and compliance requirements
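To make these features concrete, here is a minimal sketch of a text-analysis call using the `azure-ai-contentsafety` Python SDK. The endpoint, key, and sample text are placeholders, and attribute names can differ between SDK versions, so treat this as an illustration rather than definitive client code:

```python
# pip install azure-ai-contentsafety
from azure.core.credentials import AzureKeyCredential
from azure.ai.contentsafety import ContentSafetyClient
from azure.ai.contentsafety.models import AnalyzeTextOptions

# Placeholders: substitute your own resource endpoint and key.
client = ContentSafetyClient(
    endpoint="https://<your-resource>.cognitiveservices.azure.com",
    credential=AzureKeyCredential("<your-key>"),
)

# Analyze a piece of user-generated text across the supported harm categories.
response = client.analyze_text(AnalyzeTextOptions(text="Some user-generated text"))

# Each analyzed category is returned with a severity score.
for item in response.categories_analysis:
    print(item.category, item.severity)
```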
The service works as follows:
- When text is submitted to the model, it is classified into four harm categories: Hate, Violence, Sexual, and Self-harm. Classification is multi-label, i.e. the same text can be classified as both Self-harm and Violence. You also have the flexibility to choose which categories should be considered for classification.
- Next, a severity level is assigned for each category based on the classified content: Safe (level 0), Low (level 2), Medium (level 4), and High (level 6).
- Based on its severity, content can then be rejected, sent to a moderator for review, or auto-approved. A simple routing sketch follows this list.
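Acting on those scores is up to the caller. Below is a small, self-contained routing sketch; the thresholds are hypothetical policy choices, not Azure defaults, and each organization would tune them to its own guidelines:

```python
# Route content based on per-category severity scores (0 = Safe ... 6 = High).
# The thresholds here are hypothetical policy choices, not Azure defaults.
def route_content(severities: dict[str, int]) -> str:
    worst = max(severities.values(), default=0)
    if worst >= 6:      # High: reject outright
        return "reject"
    if worst >= 4:      # Medium: queue for a human moderator
        return "review"
    return "approve"    # Safe/Low: auto-approve

# Example: multi-label text flagged as both Self-harm and Violence.
print(route_content({"SelfHarm": 4, "Violence": 2}))  # -> "review"
```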
Azure offers a Text detection API, an Image detection API, and a Content Safety Studio to harness these services. Content Safety Studio is a valuable tool that lets users observe how their content is being moderated and make adjustments to align with company policies. Users can also monitor content usage and trends through the studio.
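The Image detection API can be exercised in much the same way as the text API, via the SDK’s analyze_image call. Again a hedged sketch: the file name is a placeholder and, as before, attribute names may vary by SDK version:

```python
from azure.core.credentials import AzureKeyCredential
from azure.ai.contentsafety import ContentSafetyClient
from azure.ai.contentsafety.models import AnalyzeImageOptions, ImageData

client = ContentSafetyClient(
    endpoint="https://<your-resource>.cognitiveservices.azure.com",  # placeholder
    credential=AzureKeyCredential("<your-key>"),                     # placeholder
)

# Read the image bytes and submit them for analysis.
with open("example.jpg", "rb") as f:  # placeholder file name
    request = AnalyzeImageOptions(image=ImageData(content=f.read()))

response = client.analyze_image(request)
for item in response.categories_analysis:
    print(item.category, item.severity)
```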
The current pricing model charges $1.50 per 1,000 images processed and $0.75 per 1,000 text records.
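At those rates, a rough monthly estimate is simple arithmetic. The volumes below are made up for illustration, and the prices themselves may change:

```python
# Back-of-the-envelope estimate at the rates quoted above (subject to change).
images_per_month = 50_000         # hypothetical volume
text_records_per_month = 200_000  # hypothetical volume

cost = (images_per_month / 1_000) * 1.50 + (text_records_per_month / 1_000) * 0.75
print(f"Estimated monthly cost: ${cost:,.2f}")  # 75.00 + 150.00 -> $225.00
```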
To sum up, Azure AI Content Safety provides a strong and trustworthy solution for businesses and individuals aiming to uphold a safe and secure online environment. By making use of its content safety features, businesses can cultivate a positive online experience for their customers, encourage responsible content sharing, and build a more secure digital ecosystem that benefits everyone involved.