Monday, September 26, 2022

Unsung heroes: Moderators on the front lines of internet safety


What, one might ask, does a content moderator do, exactly? To answer that question, let’s start at the beginning.

What is content moderation?

Although the term is often misconstrued, its central goal is clear: to evaluate user-generated content for its potential to harm others. In practice, moderation means preventing extreme or malicious behavior, such as offensive language, exposure to graphic images or videos, and user fraud or exploitation.

There are six types of content moderation:

- No moderation: No content oversight or intervention, where bad actors may inflict harm on others
- Pre-moderation: Content is screened before it goes live based on predetermined guidelines
- Post-moderation: Content is screened after it goes live and removed if deemed inappropriate
- Reactive moderation: Content is only screened if other users report it
- Automated moderation: Content is proactively filtered and removed using AI-powered automation
- Distributed moderation: Inappropriate content is removed based on votes from multiple community members

Why is content moderation important to companies?

Malicious and illegal behaviors, perpetrated by bad actors, put companies at significant risk in the following ways:

- Losing credibility and brand reputation
- Exposing vulnerable audiences, like children, to harmful content
- Failing to protect customers from fraudulent activity
- Losing customers to competitors who can offer safer experiences
- Allowing fake or imposter accounts

The critical importance of content moderation, though, goes well beyond safeguarding businesses. Managing and removing sensitive and egregious content matters for every age group.

As many third-party trust and safety service experts can attest, it takes a multi-pronged approach to mitigate the broadest range of risks. Content moderators must use both preventative and proactive measures to maximize user safety and protect brand trust. In today’s highly politically and socially charged online environment, taking a wait-and-watch “no moderation” approach is no longer an option.

“The virtue of justice consists in moderation, as regulated by wisdom.” — Aristotle

Why are human content moderators so critical?

Many types of content moderation involve human intervention at some point. However, reactive moderation and distributed moderation are not ideal approaches, because harmful content is not addressed until after users have been exposed to it. Post-moderation offers an alternative: AI-powered algorithms monitor content for specific risk factors and then alert a human moderator, who verifies whether certain posts, images, or videos are in fact harmful and should be removed. With machine learning, the accuracy of these algorithms improves over time.
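The flag-then-verify flow described above can be sketched in a few lines of Python. Everything here is an illustrative assumption rather than any vendor's actual API: the Post and ModerationQueue names, the 0.8 escalation threshold, and the blocklist-based toy scorer standing in for a trained classifier.

```python
from dataclasses import dataclass, field
from typing import Callable, List

@dataclass
class Post:
    post_id: int
    text: str

@dataclass
class ModerationQueue:
    """Post-moderation: content goes live, and an automated scorer
    escalates high-risk items to a human reviewer for the final call."""
    score: Callable[[str], float]           # risk scorer returning 0.0-1.0
    threshold: float = 0.8                  # scores at or above this are escalated
    review_queue: List[Post] = field(default_factory=list)

    def ingest(self, post: Post) -> str:
        if self.score(post.text) >= self.threshold:
            self.review_queue.append(post)  # a human verifies before removal
            return "flagged"
        return "published"

# Toy scorer: flags posts containing blocklisted terms. A production
# system would use a trained model here instead.
BLOCKLIST = {"scam", "fraud"}

def toy_score(text: str) -> float:
    words = set(text.lower().split())
    return 1.0 if words & BLOCKLIST else 0.1

queue = ModerationQueue(score=toy_score)
print(queue.ingest(Post(1, "Great product, love it")))  # published
print(queue.ingest(Post(2, "this is a scam")))          # flagged
```

The key design point this sketch illustrates is that automation only routes content; the removal decision stays with the human reviewer, which matches the article's argument that algorithms narrow the workload rather than replace judgment.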

Given the nature of the content human moderators are exposed to (including child sexual abuse material, graphic violence, and other harmful online behavior), it would be ideal to eliminate the need for them altogether, but that is unlikely to ever be possible. Human understanding, comprehension, interpretation, and empathy simply can’t be replicated through artificial means. These human qualities are essential for maintaining integrity and authenticity in communication. In fact, 90% of consumers say authenticity is important when deciding which brands they like and support (up from 86% in 2017).

While the digital age has given us advanced, intelligent tools (such as automation and AI) needed to prevent or mitigate the lion’s share of today’s risks, human content moderators are still needed to act as intermediaries, consciously putting themselves in harm’s way to protect users and brands alike.

Making the digital world a safer place

While the content moderator’s role makes the digital world a safer place for others, it does expose moderators to disturbing content. They are, essentially, digital first responders who shield innocent, unsuspecting users from emotionally unsettling content, especially those users who are more vulnerable, like children.

Some trust and safety service providers believe a more thoughtful, user-centric approach is to treat moderation the way a parent shields a child. That mindset could, and perhaps should, become a baseline for all brands, and it is certainly what motivates the brave moderators around the world to stay the course in combating today’s online abuse.

The next time you’re scrolling through your social media feed with carefree abandon, take a moment to think about more than just the content you see—consider the unwanted content that you don’t see, and silently thank the frontline moderators for the personal sacrifices they make each day.

This content was produced by Teleperformance. It was not written by MIT Technology Review’s editorial staff.
