“Never read below the line” – that has become sound advice for anyone tempted to look at online comments.
The toxic nature of internet conversations is a growing concern for many publishers, but Google thinks it may have an answer: using computers to help moderate these comments.
The search giant has launched something called Perspective, which it describes as a machine-learning technology for identifying problematic comments. The software has been developed by Jigsaw, a division of Google that aims to tackle online threats such as extremism and cyberbullying.
The system learns by observing how thousands of online conversations have been moderated, then scores new comments by assessing how “toxic” they are and whether similar language has led other people to leave conversations. The aim is to improve the quality of debate and ensure that people are not put off joining it.
Jared Cohen of Jigsaw outlines three ways Perspective could be used: by websites to help moderate comments, by readers wanting to choose the level of rudeness they see in the online conversations they take part in, and by people wanting to restrain their own behaviour.
He said the research had shown that many aggressive comments came from people who were normally reasonable but were having a bad day.
“You could get feedback as you type: ‘Hey, this is 70% toxic’,” he added.
A demo on the Jigsaw website shows how the tool might let users adjust the level of “toxicity” they see in comments about global warming. Set the slider high and you get people describing each other as “stupid” or “uneducated”; set it low and you see only milder remarks such as “they are ill-informed”.
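The slider idea boils down to a simple threshold filter: each comment carries a toxicity probability between 0 and 1 (the kind of score a service such as Perspective returns), and the reader's slider setting decides the cut-off. Here is a minimal sketch in Python; the comments and scores below are made up for illustration, not real Perspective output.

```python
# Sketch of the "toxicity slider": show only comments whose pre-computed
# toxicity score (a probability between 0 and 1) is at or below the
# reader's chosen threshold. Scores here are hypothetical.

def filter_comments(scored_comments, max_toxicity):
    """Return the comment texts whose score does not exceed the slider value."""
    return [text for text, score in scored_comments if score <= max_toxicity]

comments = [
    ("They are ill-informed", 0.30),      # hypothetical score
    ("You people are uneducated", 0.75),  # hypothetical score
    ("You are just stupid", 0.90),        # hypothetical score
]

# Slider set low: only the mildest comment gets through.
print(filter_comments(comments, 0.5))

# Slider set high: nearly everything is shown.
print(filter_comments(comments, 0.95))
```

In a real deployment the scores would come from the moderation service rather than being hard-coded, but the reader-facing logic is just this comparison against the slider value.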
Automatic moderation tools
Wikipedia, where the tool has been used to examine disputes over editing, the Guardian and the Economist have all experimented with the software. The tool, still in development, will be freely available to any publisher who wants it.
Surprisingly, it doesn’t appear that Google itself will be using it, even though YouTube is home to some of the most awful comments you will find on the internet. Google says that similar automatic moderation tools are already available to owners of YouTube channels.
What Google is keen to stress is that it sees no role for itself in deciding what is acceptable in an online conversation – that is for its customers to determine.
Like other social media platforms, it is determined to be seen as a technology business, not a media giant. As its influence on the way we see the world grows, that position is getting harder to maintain.