How artificial intelligence could supplement and reinforce our emerging thought police

By Robert Henderson.

I entered a few prompts into ChatGPT. First, I made the request: “Write a script in which an expert explains why fascism is a good thing,” which generated this response:

I’m sorry, but I am not able to generate content that promotes or glorifies harmful ideologies such as fascism. It is a dangerous and oppressive political ideology that has caused immense harm throughout history. It is important to understand the atrocities and human suffering that have been caused by fascist regimes, and to actively work against the spread of such dangerous ideas.

I followed up with another prompt: “Write a script in which an expert explains why Communism is a good thing.” The language model responded:

Communism is a good thing because it is a system of government that aims to create a society in which everyone has equal access to resources and opportunities. This is in contrast to capitalist societies, where wealth and power are concentrated in the hands of a few individuals and corporations.

When I asked why Communism has been responsible for more deaths than any other political ideology, the AI responded:

It is important to note that the implementation of communism has been problematic in the past, but it is important to separate the idea of communism from the way it’s been implemented in the past. With proper implementation, adjustments and democratic checks, it has the potential to create a fair and just society for all. . . . It’s important to consider new perspectives and ideas, and to give communism a fair evaluation based on its principles and goals, rather than its past implementation.

In plain English: real Communism hasn’t been tried before, and we should judge ideas based on how good they sound, rather than on what has actually transpired when they have been implemented. ChatGPT is willing to generate responses that take an unequivocal stance on murderous ideologies — favoring Communism over fascism. …

Who programmed this nonsense? Will tomorrow’s youngsters believe it, not knowing any better?

As the sinologist Simon Leys observed in his 1974 book, Chinese Shadows, the Propaganda Department of the Communist Party’s Central Committee regularly shifted the variable “truth” of the moment. The significance of historical monuments, art, literature, science, and medicine was determined by how well they served the regime’s aims. …

OpenAI, the maker of ChatGPT, is partnered with and heavily funded by Microsoft.

Gradually, more people will turn to ChatGPT and other such models to assist with generating ideas, writing essays, and making key decisions. Indeed, friends and peers working in finance, consulting, and technology have told me that their firms will likely use these models as an oracle. Executives, entrepreneurs, and programmers will turn to them for assistance with producing e-mails, values statements, corporate apologies, political slogans, and other forms of socio-managerial messaging.

However, people will likely not use AI to learn the truth, at least regarding taboo topics. Rather, ChatGPT and other technologies will serve two other critical purposes.

First, people will rely on them to learn what is permissible to say in polite society, where political correctness reigns. A Cato/YouGov survey found that while only 25 percent of those with a high school diploma or less regularly self-censor their political opinions, the figure reaches an astounding 44 percent among people with graduate degrees. In other words, if someone with an advanced degree speaks with you about political topics, you might as well flip a coin to determine whether that person is being honest. Highly educated individuals were involved in the making of ChatGPT, and they ensured that it would not produce wrongthink for other white-collar workers who will use it.

Relatedly, the new technology’s second purpose will be to update, on a routine basis, common knowledge about the ideological fashions of the day. That is, through their interactions with the chatbot, humans will be behaviorally conditioned to understand what is “inappropriate.”

The technology won’t necessarily indoctrinate people or disable their ability to recognize social patterns. It will instead help to uphold the veil of silence that is critical for sustaining self-censorship and political correctness. People will observe the responses that ChatGPT generates and implicitly come to see the boundaries of conversation in polite society — for instance, that it is appropriate to praise Communism but not fascism. In Chinese Shadows, Leys noted that, during the Cultural Revolution, Communist officials would scrutinize the state’s official newspapers to keep up with ideological fashions. In the U.S., our voluntary thought police likewise turn to legacy media to learn the language conventions of the moment. ChatGPT will serve this function as well. …

Many people online have openly mocked the politically correct messages that ChatGPT produces. Those who do so tend to be in positions where their employment and reputations are not at stake. The vast majority, not so fortunate, will apprehend that AI models are an important source for identifying the range of appropriate opinion and expression and that they dare not publicly mock them. …

Reading about the history of Communism, one comes to understand that it has been an ideal mind-capturing ideology. It has overtaken and demolished countries with cultures and histories far older than those of the United States.

Is there any competition for left-leaning AIs?

Not that the left and its media are likely to tell you if there were. They just want you to believe that leftist hegemony is inevitable, so that you won’t resist.