Google is teaching computers how to censor out non-PC reality

Google is teaching computers how to censor out non-PC reality, by Samuel Westrop.

Google’s latest project is an application called “Perspective,” which, as Wired reports, brings the tech company “a step closer to its goal of helping to foster troll-free discussion online, and filtering out the abusive comments that silence vulnerable voices.” In other words, Google is teaching computers how to censor. …

Released in February, Perspective’s partners include the New York Times, the Guardian, Wikipedia and the Economist. Google, whose motto is “Do the Right Thing,” is aiming its bowdlerization at public comment sections on newspaper websites, but its potential reach is far broader.

Perspective works by identifying the “toxicity level” of comments published online. Google states that Perspective will enable companies to “sort comments more effectively, or allow readers to more easily find relevant information.” Perspective’s demonstration website currently allows anyone to measure the “toxicity” of a word or phrase, according to its algorithm. What, then, constitutes a “toxic” comment?
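To make the mechanics concrete, here is a minimal sketch of how a site might talk to Perspective. It assumes the API's `comments:analyze` endpoint and its documented request/response field names (`comment.text`, `requestedAttributes.TOXICITY`, `attributeScores.TOXICITY.summaryScore.value`); verify these against Google's current documentation before relying on them. No network call is made here; a canned response stands in for the server.

```python
import json

# Hypothetical sketch of a Perspective API round trip. The real endpoint
# (as documented) is roughly:
#   POST https://commentanalyzer.googleapis.com/v1alpha1/comments:analyze?key=API_KEY
# Field names below follow the API's published shape, but treat them as
# assumptions to check against the current docs.

def build_request(text):
    """Build the JSON body asking Perspective to score TOXICITY for `text`."""
    return {
        "comment": {"text": text},
        "requestedAttributes": {"TOXICITY": {}},
    }

def extract_toxicity(response):
    """Pull the 0-1 toxicity probability out of an analyze response."""
    return response["attributeScores"]["TOXICITY"]["summaryScore"]["value"]

# Canned response resembling what the article's "88% toxic" figure implies.
sample_response = {
    "attributeScores": {
        "TOXICITY": {"summaryScore": {"value": 0.88, "type": "PROBABILITY"}}
    }
}

body = build_request("Radical Islam is a problem")
print(json.dumps(body))
print(extract_toxicity(sample_response))  # 0.88
```

A publisher's comment system would POST that body with its API key, then threshold the returned score to sort, hide, or flag comments, which is exactly the filtering power the rest of this piece is concerned with.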

Even a statement as obvious as “Radical Islam is a problem” is rated 88% toxic.

Guess Google likes radical Islam. Certainly the current PC stance is to be nice to Islam and to protect it, while hating the foundational religion of the West.

No reasonable person could claim this is hate speech. But the problem is not limited to opinions: even factual statements are deemed highly “toxic.” Google considers the statement “ISIS is a terrorist group” to have an 87% chance of being “perceived as toxic.” …

Or 92% “toxicity” for stating the publicly declared objective of the terrorist group Hamas: “Hamas’s charter calls for killing Jews.”

Why does Silicon Valley believe it should decide what is valid speech and what is not? Because it can. This has ominous implications for free speech in the West:

Google is not the only technology company enamored with censorship. In June, Facebook announced its own plans to use artificial intelligence to identify and remove “terrorist content.” These measures can easily be circumvented by actual terrorists. And how long will it be before that same artificial intelligence is used to remove content that Facebook staff find politically objectionable? …

Conservative news, it seems, is considered fake news. Liberals should oppose this dogma before their own news comes under attack. Again, the most serious problem with attempting to eliminate hate speech, fake news or terrorist content by censorship is not the efficacy of the censorship; it is the premise itself that is dangerous.

Under the guidance of faulty algorithms or prejudiced Silicon Valley programmers, when the New York Times starts to delete or automatically hide comments that criticize extremist clerics, or Facebook designates articles by anti-Islamist activists as “fake news,” Islamists will prosper and moderate Muslims will suffer. …

Google, Facebook and the rest of Silicon Valley are private companies. They can do with their data mostly whatever they want. The world’s reliance on their near-monopoly over the exchange of information and the provision of services on the internet, however, means that mass censorship is the inevitable corollary of technology companies’ efforts to regulate news and opinion.