Content moderation
On websites that allow users to create content, content moderation is the process of detecting contributions that are irrelevant, obscene, illegal, harmful, or insulting. Its purpose is to remove problematic content or apply a warning label to it, or to allow users to block and filter content themselves.<ref name="asm"/> It is part of the wider discipline of trust and safety.
Various types of Internet sites permit user-generated content such as posts, comments and videos, including Internet forums, blogs, and news sites powered by scripts such as phpBB, wikis, and PHP-Nuke. Depending on the site's content and intended audience, the site's administrators will decide what kinds of user comments are appropriate, then delegate the responsibility of sifting through comments to lesser moderators. Most often, they will attempt to eliminate trolling, spamming, or flaming, although this varies widely from site to site.
Major platforms use a combination of algorithmic tools, user reporting and human review.<ref name="asm"/> Social media sites may also employ content moderators to manually review or remove content flagged as hate speech or otherwise objectionable. Other content issues include revenge porn, graphic content, child abuse material and propaganda.<ref name="asm">Template:Cite journal</ref> Some websites must also make their content hospitable to advertisements.<ref name="asm"/>
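How these signals might interact can be illustrated with a minimal sketch of a triage routine that combines an automated classifier score with user reports to decide whether a post is removed, queued for human review, or left up. The thresholds, field names and `triage` function are illustrative assumptions, not any platform's actual system.

```python
from dataclasses import dataclass

# Illustrative thresholds; real platforms tune these per policy area.
AUTO_REMOVE_SCORE = 0.95   # classifier confident enough to act automatically
REVIEW_SCORE = 0.60        # uncertain: route to a human moderator
REPORT_THRESHOLD = 3       # enough user reports to force human review

@dataclass
class Post:
    post_id: str
    text: str
    classifier_score: float = 0.0  # output of an automated model, 0..1
    user_reports: int = 0          # number of times users reported the post

def triage(post: Post) -> str:
    """Return 'remove', 'human_review', or 'keep' for a post."""
    if post.classifier_score >= AUTO_REMOVE_SCORE:
        return "remove"
    if post.classifier_score >= REVIEW_SCORE or post.user_reports >= REPORT_THRESHOLD:
        return "human_review"
    return "keep"

# Example: three user reports push a low-scoring post into the human review queue.
print(triage(Post("p1", "example text", classifier_score=0.2, user_reports=3)))
```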
In the United States, content moderation is governed by Section 230 of the Communications Decency Act, and several cases concerning the issue have reached the United States Supreme Court, such as Moody v. NetChoice, LLC.
Supervisor moderation
Also known as unilateral moderation, this kind of moderation system is often seen on Internet forums. A group of people are chosen by the site's administrators (usually on a long-term basis) to act as delegates, enforcing the community rules on their behalf. These moderators are given special privileges to delete or edit others' contributions and/or exclude people based on their e-mail address or IP address, and generally attempt to remove negative contributions throughout the community.<ref name="idj"/>
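A minimal sketch of this arrangement is given below, assuming a simple forum model in which only appointed moderators may delete posts or ban users by e-mail address or IP address. The class and method names are illustrative, not any specific forum software's API.

```python
# Illustrative model of unilateral (supervisor) moderation: only a small set
# of appointed moderators may delete posts or ban users by e-mail or IP.

class Forum:
    def __init__(self, moderators: set[str]):
        self.moderators = moderators           # user ids appointed by the admins
        self.posts: dict[str, str] = {}        # post_id -> text
        self.banned_emails: set[str] = set()
        self.banned_ips: set[str] = set()

    def delete_post(self, actor: str, post_id: str) -> None:
        if actor not in self.moderators:
            raise PermissionError("only moderators may delete posts")
        self.posts.pop(post_id, None)

    def ban_user(self, actor: str, email: str = "", ip: str = "") -> None:
        if actor not in self.moderators:
            raise PermissionError("only moderators may ban users")
        if email:
            self.banned_emails.add(email)
        if ip:
            self.banned_ips.add(ip)

forum = Forum(moderators={"mod_alice"})
forum.posts["p1"] = "off-topic flame"
forum.delete_post("mod_alice", "p1")            # allowed
forum.ban_user("mod_alice", ip="203.0.113.7")   # a non-moderator would raise PermissionError
```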
Commercial content moderation
Commercial Content Moderation is a term coined by Sarah T. Roberts to describe the practice of "monitoring and vetting user-generated content (UGC) for social media platforms of all types, in order to ensure that the content complies with legal and regulatory exigencies, site/community guidelines, user agreements, and that it falls within norms of taste and acceptability for that site and its cultural context".<ref>Template:Cite news</ref>
Industrial composition
The content moderation industry is estimated to be worth US$9 billion. While no official figures are published, as of 2022 there were an estimated 10,000 content moderators for TikTok, 15,000 for Facebook, and 1,500 for Twitter.<ref name=":0">Template:Cite journal</ref>
The global value chain of content moderation typically includes social media platforms, large multinational enterprise (MNE) firms, and content moderation suppliers. The social media platforms (e.g. Facebook, Google) are largely based in the United States, Europe and China. The MNEs (e.g. Accenture, Foiwe) are usually headquartered in the Global North or India, while suppliers of content moderation are largely located in Global South countries such as India and the Philippines.<ref>Template:Citation</ref>
While this work may at one time have been done by volunteers within the online community, for commercial websites it is largely achieved by outsourcing the task to specialized companies, often in low-wage areas such as India and the Philippines. Outsourcing of content moderation jobs grew as a result of the social media boom: with the overwhelming growth of users and UGC, companies needed many more employees to moderate the content. In the late 1980s and early 1990s, tech companies had begun to outsource jobs to foreign countries with educated workforces willing to work for lower wages.<ref>{{#invoke:citation/CS1|citation |CitationClass=web }}</ref>
Working conditions
Employees work by viewing, assessing and deleting disturbing content.<ref>Template:Cite news</ref> Wired reported in 2014 that such workers may suffer psychological damage.<ref>Template:Cite magazine</ref><ref>{{#invoke:citation/CS1|citation |CitationClass=web }}</ref><ref>{{#invoke:citation/CS1|citation |CitationClass=web }}</ref><ref name="idj">{{#invoke:citation/CS1|citation |CitationClass=web }}</ref><ref>Template:Cite magazine</ref> In 2017, The Guardian reported that secondary trauma may arise, with symptoms similar to PTSD.<ref name="guardian-2017-05-04">{{#invoke:citation/CS1|citation |CitationClass=web }}</ref> Some large companies such as Facebook offer psychological support<ref name="guardian-2017-05-04" /> and increasingly rely on artificial intelligence to sort out the most graphic and inappropriate content, but critics argue that this is insufficient.<ref>{{#invoke:citation/CS1|citation |CitationClass=web }}</ref> In 2019, NPR called it a job hazard.<ref name="NPR Job Hazard">{{#invoke:citation/CS1|citation |CitationClass=web }}</ref> Non-disclosure agreements are the norm when content moderators are hired, which makes moderators more hesitant to speak out about working conditions or to organize.<ref name=":0" />
Psychological hazards, including stress and post-traumatic stress disorder, combined with the precarity of algorithmic management and low wages, make content moderation extremely challenging.<ref>Template:Cite book</ref> The number of tasks completed, for example labeling content as a copyright violation, deleting a post containing hate speech, or reviewing graphic content, is quantified for performance and quality assurance.<ref name=":0" />
In February 2019, an investigative report by The Verge described poor working conditions at Cognizant's office in Phoenix, Arizona.<ref name="Trauma Floor">{{#invoke:citation/CS1|citation |CitationClass=web }}</ref> Cognizant employees tasked with content moderation for Facebook developed mental health issues, including post-traumatic stress disorder, as a result of exposure to graphic violence, hate speech, and conspiracy theories in the videos they were instructed to evaluate.<ref name="Trauma Floor" /><ref>{{#invoke:citation/CS1|citation |CitationClass=web }}</ref> Moderators at the Phoenix office reported drug abuse, alcohol abuse, and sexual intercourse in the workplace, and feared retaliation from terminated workers who threatened to harm them.<ref name="Trauma Floor" /><ref>{{#invoke:citation/CS1|citation |CitationClass=web }}</ref> In response, a Cognizant representative stated the company would examine the issues in the report.<ref name="Trauma Floor" />
The Verge published a follow-up investigation of Cognizant's Tampa, Florida, office in June 2019.<ref name="Bodies in Seats">{{#invoke:citation/CS1|citation |CitationClass=web }}</ref><ref>Template:Cite news</ref> Employees in the Tampa location described working conditions that were worse than those in the Phoenix office.<ref name="Bodies in Seats" /><ref>{{#invoke:citation/CS1|citation |CitationClass=web }}</ref><ref>{{#invoke:citation/CS1|citation |CitationClass=web }}</ref> Similarly, workers at Meta's outsourced moderation contractors in Kenya and Ghana reported mental illness, self-harm, attempted suicide, poor working conditions, low pay, and retaliation for advocating for better working conditions.<ref>{{#invoke:citation/CS1|citation |CitationClass=web }}</ref>
Moderators were required to sign non-disclosure agreements with Cognizant to obtain the job, although three former workers broke the agreements to provide information to The Verge.<ref name="Bodies in Seats" /><ref>{{#invoke:citation/CS1|citation |CitationClass=web }}</ref> In the Tampa office, workers reported inadequate mental health resources.<ref name="Bodies in Seats" /><ref>{{#invoke:citation/CS1|citation |CitationClass=web }}</ref> As a result of exposure to videos depicting graphic violence, animal abuse, and child sexual abuse, some employees developed psychological trauma and post-traumatic stress disorder.<ref name="Bodies in Seats" /><ref>{{#invoke:citation/CS1|citation |CitationClass=web }}</ref> In response to negative coverage related to its content moderation contracts, a Facebook director indicated that Facebook was in the process of developing a "global resiliency team" that would assist its contractors.<ref name="Bodies in Seats" />
Facebook
Facebook increased the number of content moderators from 4,500 to 7,500 in 2017 due to legal requirements and other controversies. In Germany, Facebook was responsible for removing hate speech within 24 hours of when it was posted.<ref>Template:Cite news</ref> In late 2018, Facebook created an oversight board, an internal "Supreme Court", to decide which content remains and which is removed.<ref name="NPR Job Hazard" />
According to Frances Haugen, the number of Facebook employees responsible for content moderation was much smaller as of 2021.<ref>Template:Cite AV media</ref>
Twitter
Social media site Twitter has a suspension policy. Between August 2015 and December 2017, it suspended over 1.2 million accounts for terrorist content to reduce the number of followers and amount of content associated with the Islamic State.<ref name="tnsm">{{#invoke:citation/CS1|citation |CitationClass=web }}</ref> Following the acquisition of Twitter by Elon Musk in October 2022, content rules have been weakened across the platform in an attempt to prioritize free speech.<ref>{{#invoke:citation/CS1|citation |CitationClass=web }}</ref> However, the effects of this campaign have been called into question.<ref>{{#invoke:citation/CS1|citation |CitationClass=web }}</ref><ref>Template:Cite news</ref>
Distributed moderation
User moderation
User moderation allows any user to moderate any other user's contributions. Billions of people currently make decisions every day about what to share, forward or give visibility to.<ref name="anf">Template:Cite journal</ref> On a large site with a sufficiently large active population, this usually works well, since the relatively small number of troublemakers is screened out by the votes of the rest of the community.
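A minimal sketch of this kind of vote-based screening, loosely in the spirit of Slashdot-style distributed moderation, is shown below; the threshold and names are illustrative assumptions.

```python
# Illustrative vote-based screening: any user may vote a contribution up or
# down, and items falling below a visibility threshold are hidden by default.

VISIBILITY_THRESHOLD = -3  # illustrative cut-off

class Comment:
    def __init__(self, text: str):
        self.text = text
        self.score = 0

    def vote(self, delta: int) -> None:
        self.score += delta  # +1 or -1 from an ordinary user

    def is_visible(self) -> bool:
        return self.score > VISIBILITY_THRESHOLD

comment = Comment("spam link")
for _ in range(4):
    comment.vote(-1)           # a handful of community downvotes...
print(comment.is_visible())    # ...screens the comment out: False
```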
User moderation can also take the form of reactive moderation. This type of moderation depends on the users of a platform or site to report content that is inappropriate or breaches community standards. In this process, when users encounter an image or video they deem unfit, they can click the report button; the complaint is then filed and queued for moderators to review.<ref>Grimes-Viort, Blaise (December 7, 2010). "6 types of content moderation you need to know about". Social Media Today.</ref>
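The report-and-queue workflow described above can be sketched as follows; the class and field names are illustrative assumptions rather than any platform's actual implementation.

```python
from collections import deque

# Illustrative report queue: a user report files a complaint, and human
# moderators pull complaints off the queue in the order they arrived.

class ReportQueue:
    def __init__(self) -> None:
        self._pending: deque = deque()

    def report(self, reporter: str, content_id: str, reason: str) -> None:
        """Called when a user clicks the report button."""
        self._pending.append((reporter, content_id, reason))

    def next_for_review(self):
        """A moderator takes the oldest outstanding complaint, or None if empty."""
        return self._pending.popleft() if self._pending else None

queue = ReportQueue()
queue.report("user42", "video123", "graphic content")
print(queue.next_for_review())  # ('user42', 'video123', 'graphic content')
```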
Unionization
On 1 May 2023, 150 content moderators who contracted for Meta, ByteDance and OpenAI gathered in Nairobi, Kenya, to launch the first African Content Moderators Union. The union was launched four years after Daniel Motaung was fired and retaliated against for organizing a union at Sama, which contracts for Facebook.<ref>Template:Cite magazine</ref>
See also
- Content intelligence
- Like button
- Meta-moderation system
- Moody v. NetChoice, LLC
- News aggregator
- Recommender system
- Social network aggregation
- Trust metric
- We Had to Remove This Post
References
Further reading
External links
- Data Workers' Inquiry, a collaboration between the Weizenbaum Institute, Technische Universität Berlin, and DAIR
- Cliff Lampe and Paul Resnick: "Slash(dot) and burn: distributed moderation in a large online conversation space", Proceedings of the SIGCHI Conference on Human Factors in Computing Systems, Vienna, Austria, 2005, pp. 543–550.