(The Hill) – YouTube will begin warning users before they post comments that may be offensive to other people, the company announced Thursday. The new feature is part of the video-sharing platform’s efforts to address widespread racist and homophobic harassment targeted at creators by commenters and other accounts.

YouTube will also begin proactively asking users to provide demographic information in an effort to find patterns of hate speech “that may affect some communities more than others.” Last December, the company beefed up its harassment policy, saying it would take a stricter stance on “veiled or implied threats.”

The company touts that since the beginning of 2019 it has increased the number of daily hate speech comment removals 46-fold. However, hateful content remains rampant on the platform. The strategy of warning users that their comments may be offensive has been tested by other platforms.

Instagram began showing users pop-ups in July 2019 asking whether they were sure they wanted to post comments that might violate the app’s guidelines, and it expanded these “nudge warnings” this October. Instagram has said that early trials of the pop-up yielded positive results. A study released in September by OpenWeb, conducted with Google’s AI conversation platform, attempted to quantify the effects of comment feedback by analyzing 400,000 comments on news websites.