Silicon Valley escalates its war on white supremacy despite free speech concerns, by Tracy Jan. Isn’t this like burning books?
Silicon Valley significantly escalated its war on white supremacy this week, choking off the ability of hate groups to raise money online, removing them from Internet search engines, and preventing some sites from registering at all.
The new moves go beyond censoring individual stories or posts. Tech companies such as Google, GoDaddy and PayPal are now reversing their hands-off approach to content supported by their services and making it much more difficult for “alt-right” organizations to reach mass audiences.
But the actions are also heightening concerns over how tech companies are becoming the arbiters of free speech in America. And in response, right-wing technologists are building parallel digital services that cater to their own movement.
The non-PC Internet is coming:
Gab.ai, a social network for promoting free speech, was founded shortly after the presidential election by Silicon Valley engineers alienated by the region’s liberalism. Other conservatives have founded Infogalactic, a Wikipedia for the alt-right, as well as crowdfunding tools Hatreon and WeSearchr. The latter was used to raise money for James Damore, a white engineer who was fired after criticizing Google’s diversity policy.
“If there needs to be two versions of the Internet so be it,” Gab.ai tweeted Wednesday morning. The company’s spokesman, Utsav Sanduja, later warned of a “revolt” in Silicon Valley against the way tech companies are trying to control the national debate.
“There will [be] another type of Internet [that] is run by people [who are] politically incorrect, populist, and conservative,” Sanduja said. …
PayPal late Tuesday said it would bar nearly three dozen users from accepting donations on its online payment platform following revelations that the company played a key role in raising money for the white supremacist rally in Charlottesville, Va.
I’ll bet they haven’t banned Antifa or Black Lives Matter.
Technology companies have long relied on a 20-year-old law (Section 230 of the 1996 Communications Decency Act) that shields them from responsibility for illegal content hosted on their platforms. The more they get into the business of policing speech, making subjective decisions about what is offensive and what isn’t, the more they risk undermining their own immunity and opening themselves to regulation, said Susan Benesch, director of the Dangerous Speech Project, a nonprofit group that researches the intersection of harmful online content and free speech.