Communications of the ACM

ACM TechNews

Trust in Online Content Moderation Depends on Moderator

A human and an artificial intelligence providing content moderation.

Nearly 400 study participants were asked to log in at least twice a day for two days, and were randomly assigned to one of six experiment conditions, varying both the type of content moderation system and the type of harassment comment they saw.

Credit: Analytics India Magazine

An interdisciplinary research team at Cornell University found that an individual's trust in online content moderation systems and decisions depends on whether the moderator is human or artificial intelligence (AI) and on the type of harassing content involved.

The study involved a custom social media site and a simulation engine that uses preprogrammed bots to mimic the behavior of other users.

Almost 400 participants were asked to beta test a new social media platform and were randomly assigned to one of six experimental conditions that differed in the type of content moderation system and the type of harassing content shown.

For inherently ambiguous content, the researchers found that users were more likely to question decisions made by AI moderators. When comments were clearly harassing, however, trust was about the same across all types of moderation.

From Cornell Chronicle


Abstracts Copyright © 2022 SmithBucklin, Washington, DC, USA
