Watch out for Prebunking, the latest propaganda tool

By Carlisle Kane.

Check out the sheer scale of the censorship in 2020:

In the last quarter leading up to the [2020 US election], Facebook banned over 750,000 political ads. And it algorithmically suppressed the sharing of the New York Post’s bombshell report on Hunter Biden’s laptop. …

Twitter … removed or discredited over 300,000 tweets (mostly from conservatives), including President Trump’s. …

Google banned over 8,000 Republican channels. …

A new type of censorship trialed in 2020:

In 2020, Twitter began experimenting with a new censorship tool they call “prebunking.”

Instead of banning public figures, they started arbitrarily labeling their tweets as misinformation and then providing subjective explanations that supposedly debunk their claims.

On May 27, 2020, Twitter made history by becoming the first social media channel to discredit an American President’s words as propaganda. By November that year, Twitter had flagged nearly 40% of Trump’s tweets.

Later, Twitter didn’t just mark the tweets; it began hiding the flagged tweets behind warning labels, or removing them outright without notice …

Even if they weren’t taken down, Twitter didn’t allow anybody to reply to them or retweet them. And Twitter’s algorithm didn’t recommend them, which significantly limited their reach.

Prebunking proved a huge success during the last election. …In July, Princeton released a paper that found: … “Evidence from survey data, primary elections, and a text analysis of millions of tweets suggests that Twitter’s relatively liberal content may have persuaded voters with moderate views to vote against Donald Trump.” …

Stepping up the censorship for the midterms:

Twitter redesigned “misinformation” warning labels and adjusted the algorithm so flagged tweets would reach even fewer people. And so far, it’s working wonders. From Twitter’s blog about the preparation for the midterms:

“Late last year, we tested new misleading information labels and saw promising results. The new labels increased click-through rates by 17%, meaning more people were clicking labels to read debunking content. We also saw notable decreases in engagement with Tweets labeled with the new design: -13% in replies, -10% in Retweets, and -15% in likes.” …

Unlike last time, these prompts won’t just appear on flagged tweets. They’ll soon start popping up in the search field whenever someone types in related keywords.

In other words, Twitter is preparing to engage in a well-planned, progressive brainwashing of its 90 million American users.

After Twitter’s move, Google announced an even more subtle prebunking approach. It teamed up with scientists from Cambridge to create 90-second cartoon clips that debunk “manipulation techniques” and show them as YouTube ads. The campaign is now piloting in Eastern Europe, tackling some of the local narratives. …

Birdwatch creates the illusion of crowd-sourced censorship:

[Birdwatch is] rallying an “invite-only” army of volunteers to annotate “misleading” tweets. …

This month, Twitter is expanding the program to the first 1,000 members. … The catch is how these members are selected and granted (or stripped of) the right to annotate tweets …

Here’s how it works. First, Twitter will give an aspiring birdwatcher a series of Birdwatch notes to rate. If that person’s rating falls within the consensus, they get a point. If they disagree with the consensus, they lose a point. …
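The point system described above can be sketched in a few lines. This is a minimal illustration only: the point values, the admission threshold, and the data shapes are assumptions, not Twitter’s actual implementation.

```python
# Sketch of the consensus-scoring scheme described above.
# Point values and threshold are illustrative assumptions.

def score_rater(rater_votes, consensus_votes, threshold=1):
    """Return (score, admitted) for an aspiring Birdwatch rater.

    rater_votes / consensus_votes: dicts mapping a note id to
    "helpful" or "not helpful". The rater gains a point for each
    rating that matches the consensus and loses one for each mismatch.
    """
    score = 0
    for note_id, vote in rater_votes.items():
        if note_id not in consensus_votes:
            continue  # no consensus on this note yet; no points either way
        score += 1 if vote == consensus_votes[note_id] else -1
    return score, score >= threshold

consensus = {"n1": "helpful", "n2": "not helpful", "n3": "helpful"}
votes = {"n1": "helpful", "n2": "not helpful", "n3": "not helpful"}
print(score_rater(votes, consensus))  # (1, True): two matches, one mismatch
```

Note the asymmetry the article is pointing at: the scheme rewards agreement with whatever the existing consensus happens to be, regardless of whether that consensus is correct.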

[So,] if a member’s views, be they political or otherwise, don’t align with the largely progressive status quo, they’re out.

And that’s how Twitter is banding together an army of leftist trolls to silence the opposition in the upcoming election. …

Explicitly aiming to change votes to the left:

Twitter came up with what it calls a “bridging algorithm.” It’s designed to show only those annotations that statistically have the best shot of swaying people who hold different views. …

“In order to be shown on a Tweet, Birdwatch notes need to be found helpful by people who have tended to disagree in their past ratings. This means the algorithm takes into account not only how many contributors rated a note as Helpful or Not Helpful, but also whether people who rated it seem to come from different perspectives.” …
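The quoted rule can be sketched as follows. In this minimal illustration, raters carry a hypothetical "perspective" label standing in for the clusters Twitter infers from past rating behavior, and the majority cutoff is an assumption; it is not the company's actual algorithm.

```python
# Sketch of the "bridging" rule quoted above: a note is shown only
# if raters from differing perspectives rate it helpful. Perspective
# labels and the majority cutoff are illustrative assumptions.

def note_is_shown(ratings):
    """ratings: list of (perspective, helpful: bool) tuples."""
    by_perspective = {}
    for perspective, helpful in ratings:
        by_perspective.setdefault(perspective, []).append(helpful)
    if len(by_perspective) < 2:
        return False  # no evidence the note bridges differing views
    # Require a helpful majority within every perspective group.
    return all(sum(votes) * 2 > len(votes)
               for votes in by_perspective.values())

# Helpful majorities on both sides -> shown.
print(note_is_shown([("left", True), ("right", True),
                     ("right", True), ("right", False)]))  # True
# Unanimous support from only one side -> not shown.
print(note_is_shown([("left", True), ("left", True)]))  # False
```

The design choice worth noticing is the second case: even unanimous approval is discarded if it all comes from one cluster, which is precisely the "bridging" filter the article describes.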

Plausible deniability:

Twitter has received, and is still receiving, regulatory heat for removing and shadow-banning content that doesn’t align with its ideology. This would, in essence, make Twitter a publisher of information instead of a platform for user-generated content. From a regulatory perspective, being a publisher means Twitter would be responsible for anything anyone posts.

This is why [Twitter has been testing] the ability to downvote tweets and comments. Only, these downvotes aren’t available for anyone to see.

Twitter describes the button’s purpose as meant to let the platform know when a reply “isn’t adding to the conversation.” Under the heading “help us make Twitter better,” the company says downvote feedback will help the platform prioritize higher-quality content.

In other words, if someone were to challenge Twitter in court for suppressing their tweet, Twitter could respond by saying they never did — it was the audience.

And given that we now know that a considerable portion of Twitter users are bots — thank you, Elon — this essentially gives Twitter de facto power to suppress speech on its platform without recourse.

Jonathan Swift (or maybe it was Mark Twain or Winston Churchill) famously said that “a lie can get round the world before the truth has put its boots on.” Now Big Tech has arrived and tied truth to the bed.