YouTube now warns users before posting offensive comments

To promote respectful conversations, YouTube is introducing an update that alerts users when a comment they are about to post may be offensive to others, giving them a chance to reconsider before posting.

From the reminder, the commenter can proceed by posting the comment as is, or take a few extra moments to edit it before publishing. The warning appears when YouTube's AI-based system deems a comment likely to be offensive.

Johanna Wright, Vice President of Product Management at YouTube, said that to help creators manage comments and engage with their viewers, the company will also test a new YouTube Studio filter that automatically holds potentially offensive and hurtful comments for review.

The pop-up is rolling out first on Android. If a commenter still wants to post a potentially offensive remark after seeing the pop-up, they may do so, or they can choose to edit or delete it instead. Users who believe their comment has been incorrectly flagged can let YouTube know directly from the pop-up.
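The flow described above (score the draft comment, and warn before publishing if it looks offensive) can be sketched in a few lines. This is a purely illustrative toy: YouTube's actual model is unpublished, so the keyword heuristic, word list, threshold, and function names below are all hypothetical stand-ins.

```python
# Illustrative pre-publish "nudge" flow. The toxicity check is a toy
# keyword heuristic standing in for YouTube's real (unpublished) AI model.

OFFENSIVE_TERMS = {"idiot", "stupid", "trash"}  # hypothetical toy word list
THRESHOLD = 0.5  # hypothetical cutoff, not a real YouTube parameter


def toxicity_score(comment: str) -> float:
    """Fraction of words found in the toy offensive-term list."""
    words = comment.lower().split()
    if not words:
        return 0.0
    hits = sum(1 for w in words if w.strip(".,!?") in OFFENSIVE_TERMS)
    return hits / len(words)


def pre_publish_check(comment: str) -> dict:
    """Decide what the UI should do before publishing a comment."""
    if toxicity_score(comment) >= THRESHOLD:
        # Show the reminder: the user may still post as-is, or edit first.
        return {"warn": True, "options": ["post_anyway", "edit"]}
    return {"warn": False, "options": ["post"]}
```

The key design point the article describes is that the warning is advisory, not blocking: even a flagged comment keeps a "post anyway" option, which the sketch mirrors in the returned `options` list.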

YouTube to Warn Users Before Posting Offensive Comments – but Why?

In an attempt to identify gaps in its systems that may limit creators' ability to reach their full potential, YouTube will, starting next year, voluntarily ask creators to share their gender, sexual orientation, race, and ethnicity. Using this data, the video-streaming site said it will examine how content from different communities is treated in its systems, such as search and discovery and monetization.

The Google-owned company said it will also look for potential patterns of hate, abuse, and discrimination that may affect some groups more than others.

YouTube said it has invested in technologies that help its algorithms better identify and remove hateful comments, taking into account the topic of the video and the context of the message.

Launching of survey

Also, YouTube will launch a survey in 2021 to collect more data about creators and build more equitable opportunities for them.

Starting in 2021, YouTube will voluntarily ask creators to share their gender, sexual orientation, race, and ethnicity. 

“We would then look closely at how content from different communities is treated in our search and discovery and monetization systems. We would also look into potential patterns of hate, abuse, and discrimination that may affect certain groups more than others,” the officials added. 

“This survey will also give creators an optional way to engage in initiatives hosted by YouTube, such as #YouTubeBlack Creator Meetings and FanFest, if they are interested,” the official said. 

The survey will launch first in the United States in early 2021.

The emerging trend explained in more detail

YouTube will soon begin proactively asking creators to share demographic details in an effort to find signs of hate speech that could affect certain groups more than others. 

Last December, the company updated its stance on harassment, saying it would take a tougher approach to “veiled or implied threats” going forward. 

The company reports that since the beginning of 2019, the number of daily hate speech comment removals has increased 46-fold. Nevertheless, hateful language remains rampant on the platform. 

Other platforms have also tried warning users that their remarks might be abusive. 

In July 2019, Instagram started sending users pop-ups asking whether they were sure they wanted to post comments that could break the app's rules. These “nudge warnings” were expanded this October. 

Instagram also said its early pop-up experiments produced promising results. 

Research undertaken by OpenWeb and Google's conversational-AI team, published in September, sought to measure the impact of feedback prompts by evaluating 400,000 comments on news websites. 

The study found that about a third of commenters who saw an alert went on to amend their comments. A little over half of those who edited did so in a way that no longer breached community guidelines, while about a quarter reworded their comments merely to avoid triggering the automated system.



