AI, a helping hand for companies in content moderation

In today’s digital age, billions of pieces of content are uploaded to online platforms and websites every day.

Moderating this material has therefore never been more critical and challenging. While the majority of this uploaded content may be positive, we are also seeing a growing volume of harmful and illegal material – from violence and self-harm to extremist rhetoric, sexually explicit images and child sexual abuse material (CSAM).

Tackling this deluge of harmful content is now a decisive challenge for companies, with those unable (or unwilling) to do so exposing themselves to significant penalties and seriously endangering children.

This is borne out by our own research: more than a third (38%) of parents have been contacted by their children after they had seen harmful or illegal content online, with many gaining access to material as graphic and damaging as CSAM within just ten minutes of going online.

That’s why it’s time for stronger content moderation measures and for companies to look beyond traditional manual moderation methods, which have become impractical and unscalable. Instead, they should leverage the complementary capabilities of AI, which are transforming the content moderation landscape through automation, improved accuracy and scalability.

But as with any innovation, companies interested in using AI must implement the technology in a way that ensures regulatory compliance. The decisions companies make today will have a huge impact on their future operations.

The helping hand of AI

AI has dramatically transformed the content moderation landscape by enabling automated scanning of images, pre-recorded videos, live streams and other types of content in near real time. It can identify issues such as the presence of minors in adult content, nudity, sexual activity, extreme violence, self-harm and hate symbols on user-generated content platforms, including social media.

Trained on large amounts of ground-truth data, AI collects and analyzes insights from archives of tagged images and videos, ranging from weapons to explicit content. The accuracy and effectiveness of AI systems are directly related to the quality and quantity of this data. Once trained, AI can reliably detect various forms of harmful content. This is especially important in live-streaming scenarios, where moderation must keep pace across platforms with different legal and community standards.
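
To make the idea concrete, here is a minimal sketch of training a classifier on labeled (ground-truth) examples. The tiny inline dataset, the label names and the choice of scikit-learn are illustrative assumptions only – real moderation systems learn from vast archives of tagged images and videos rather than a handful of text snippets.

```python
# Minimal sketch: training a moderation classifier on labeled examples.
# The dataset and label names below are hypothetical placeholders.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Hypothetical ground-truth examples: 1 = harmful, 0 = benign.
texts = [
    "graphic violence against a person",
    "instructions promoting self-harm",
    "family cooking video featuring a kitchen knife",
    "children playing football in the park",
]
labels = [1, 1, 0, 0]

# TF-IDF features feed a simple logistic regression classifier.
model = make_pipeline(TfidfVectorizer(), LogisticRegression())
model.fit(texts, labels)

# Once trained, the model can score new, unseen content.
score = model.predict_proba(["clip showing extreme violence"])[0][1]
print(f"harm probability: {score:.2f}")
```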

An automated approach not only speeds up the moderation process but also offers scalability – an essential feature in an era where human-only moderation simply cannot keep up with the sheer volume of online content.

A synergy of AI and people

AI automation brings significant benefits, allowing organizations to moderate at scale and reduce costs by shrinking the need for large teams of moderators. But even the most advanced technology requires human judgment, and AI itself is far from perfect. Specific nuances and contextual clues can confuse systems and produce inaccurate results. For example, AI may fail to distinguish between a kitchen knife used in a cooking video and a weapon used in a violent act, or it may mistake a toy gun in a children’s commercial for a real firearm.

When AI flags content as potentially harmful or against guidelines, human moderators can step in to review the content and make the final decision. This hybrid approach ensures that while AI expands the scope of content moderation and streamlines the process, humans retain the ultimate authority, especially in complex cases.
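
As a rough illustration of this hybrid workflow, the sketch below routes each piece of content based on the model’s harm score: confident detections are removed automatically, ambiguous cases go to a human review queue, and the rest is allowed. The threshold values and the queue names are hypothetical, chosen purely to show the routing logic rather than to reflect any particular platform’s policy.

```python
# Minimal sketch of hybrid triage: the AI score decides whether content is
# removed automatically, queued for a human moderator, or allowed.
# Threshold values are illustrative assumptions, not recommended settings.
from dataclasses import dataclass

AUTO_REMOVE_THRESHOLD = 0.95   # very confident the content is harmful
HUMAN_REVIEW_THRESHOLD = 0.60  # uncertain - a person makes the final call

@dataclass
class Decision:
    action: str   # "remove", "human_review" or "allow"
    score: float

def triage(harm_score: float) -> Decision:
    """Route a piece of content based on the model's harm probability."""
    if harm_score >= AUTO_REMOVE_THRESHOLD:
        return Decision("remove", harm_score)
    if harm_score >= HUMAN_REVIEW_THRESHOLD:
        return Decision("human_review", harm_score)
    return Decision("allow", harm_score)

# An ambiguous score - say, a kitchen knife in a cooking video - goes to
# the human review queue rather than being removed outright.
print(triage(0.72))   # Decision(action='human_review', score=0.72)
print(triage(0.98))   # Decision(action='remove', score=0.98)
```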

The sophistication of AI identification and verification techniques will continue to increase in the coming years. This includes improving the accuracy of matching people appearing in different types of content with their identity documents – a next step in ensuring consent and combating unauthorized distribution of content.

Thanks to its learning capabilities, AI will continue to improve in accuracy and efficiency, with the potential to reduce the need for human intervention as it evolves. However, the human element will remain necessary, especially in appeals and disputes over content moderation decisions. Not only do current AI technologies lack nuanced perspective and understanding; humans also serve as a check against potential algorithmic biases or errors.

The global landscape of AI regulation

As AI continues to expand and evolve, many companies will turn to regulators to outline their plans for managing AI applications. The European Union is at the forefront of this legislation, with the Artificial Intelligence Act coming into force in August 2024. Positioned as a regulatory pioneer, the law categorizes AI systems into three types: systems that pose an unacceptable risk, systems that are considered high-risk, and a third category with minimal regulation.

To oversee the implementation of the law, an AI Office has been established within the European Commission, consisting of five units: regulation and compliance; safety; AI innovation and policy coordination; robotics and AI for social good; and excellence in AI. The office will also monitor compliance deadlines, which range from six months for banned AI systems to 36 months for certain high-risk AI systems.

EU companies are therefore advised to monitor legislative developments closely, gauge the impact on their operations and ensure that their AI systems comply with the regulations within the set deadlines. It is also crucial for companies outside the EU to stay informed, as the legislation is expected to influence policy not only within the EU but potentially also in Britain, the US and other regions. UK and US AI regulations are likely to follow suit, so companies need to keep their finger on the pulse and ensure that any tools they implement now stand a good chance of meeting the compliance guidelines these countries roll out in the future.

A joint approach for a safer internet

That said, the successful implementation of AI in content moderation will also require a strong commitment to continuous improvement. Tools will likely be developed before regulations come into effect, so it is important that companies proactively audit them for possible biases, ensure fairness and protect user privacy. Organizations must also invest in ongoing training for human moderators so they can effectively handle the nuanced cases flagged for review by AI.

At the same time, given the psychologically taxing nature of content moderation work, solution providers must prioritize the mental health of their human moderators by offering robust psychological support, wellness resources, and strategies to limit prolonged exposure to distressing content.

By adopting a proactive and responsible approach to AI-powered content moderation, online platforms can cultivate a digital environment that promotes creativity, connection and constructive dialogue while protecting users from harm.

Ultimately, AI-powered content moderation solutions provide organizations with a comprehensive toolkit to address the challenges of the digital age. With real-time monitoring and filtering of massive amounts of user-generated content, this cutting-edge technology helps platforms maintain a safe and compliant online environment and allows them to efficiently scale their moderation efforts.

However, when organizations turn to AI, they should keep a close eye on key regulatory documents, implementation timelines and the implications of upcoming legislation.

If implemented effectively, AI can act as the perfect partner for humans, creating a content moderation solution that keeps children protected when they access the internet and acts as the cornerstone for creating a safe online ecosystem.


Lina Ghazal

Head of Regulatory and Public Affairs at VerifyMy, specializing in online ethics, regulation and safety. She previously held roles at Meta (formerly Facebook) and Ofcom.