AI Makes Stuff Up

By John Hinderaker.

Tony Venhuizen, a smart guy from South Dakota, operates a website where he writes about the history of that state’s governors. He asked ChatGPT, “Please write a blog post discussing South Dakota’s oldest and youngest governors.” ChatGPT responded with a competent description of South Dakota’s oldest governor, Nils Boe. It then went on to write about the state’s youngest governor, Crawford H. “Chet” Taylor. That part of ChatGPT’s post began like this, and continued for five paragraphs:

Crawford H. “Chet” Taylor served as the 14th governor of South Dakota, from 1949 to 1951. Taylor was born on July 23, 1915, in Sioux Falls, South Dakota, and he grew up in nearby Flandreau. Taylor attended the University of South Dakota, where he earned a law degree.

Remarkably, however, Crawford H. “Chet” Taylor is entirely a figment of ChatGPT’s imagination. Tony Venhuizen wrote:

Crawford H. “Chet” Taylor was never Governor of South Dakota and, in fact, I can find no evidence of such a person, at all. I will credit ChatGPT, though, that Governor Taylor is a plausible-sounding fictional governor.

The 14th Governor of South Dakota was not Chet Taylor (who again, doesn’t exist) but Tom Berry. Taylor is said to have served from 1949 to 1951; in fact, that would coincide with the second gubernatorial term of George T. Mickelson.

AI even produced a fake portrait of the fake governor:

[Fake portrait invented by ChatGPT]

Presumably ChatGPT could have written a pedestrian description of the career of Richard Kneip, who was actually South Dakota’s youngest governor, as it did for Nils Boe. But no: the program decided to act mischievously. I love the fact that it even invented a nickname for its imaginary governor. …

Recently, ChatGPT made up some legal cases:

Some lawyers in New York relied on AI, in the form of ChatGPT, to help them write a brief opposing a motion to dismiss based on the statute of limitations. ChatGPT made up cases, complete with quotes and citations, to support the lawyers’ position. … The lawyers are in deep trouble.

ChatGPT just does pattern recognition, not any logic or “thinking.”

ChatGPT is smart enough to figure out who the oldest and youngest governors of South Dakota are and write standard resumes of their careers. It knows how to do legal research and understands what kinds of cases would be relevant in a brief. It knows how to write something that reads more or less like a court decision, and to include within that decision citations to cases that on their face seem to support the brief’s argument. But instead of carrying out these functions with greater or lesser skill, as one would expect, the program makes stuff up: stuff that satisfies the instructions that ChatGPT has been given, or would, anyway, if it were not fictitious.

Presumably the people who developed ChatGPT didn’t program it to lie. So why does it do so? …

In the meantime, anyone who relies on ChatGPT or other AI programs is foolish.

ChatGPT doesn’t know what lying is. That requires logic and thinking. Instead, it just generates realistic-looking patterns of text, which in some contexts amounts to “lying.”

There are two sorts of AI: pattern recognition by neural nets, and logic engines that reason over databases of rules and facts. ChatGPT is the former. The latter sort does exist in limited forms, but those systems aren’t remotely humanlike, and they cope poorly with natural language.
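The point about pattern generation can be made concrete with a toy sketch. Below is a minimal bigram model in Python. This is nothing like ChatGPT’s actual neural network (the corpus, names, and code here are purely illustrative); it is just the smallest possible pattern-based text generator. Because “the” is followed by both “oldest” and “youngest” in its tiny training text, it will happily emit the fluent but false sentence “nils boe was the youngest governor of south dakota”: a plausible pattern, not a checked fact.

```python
import random

# Toy sketch of pattern-based generation (NOT ChatGPT's real architecture,
# which is a neural network trained on vast amounts of text). A bigram
# model records which word follows which in its training text, then
# generates new text by sampling those patterns. Nothing checks truth.

corpus = (
    "nils boe was the oldest governor of south dakota . "
    "richard kneip was the youngest governor of south dakota ."
).split()

# Record every observed word-to-next-word transition.
follows = {}
for a, b in zip(corpus, corpus[1:]):
    follows.setdefault(a, []).append(b)

def generate(start, length=10):
    """Extend `start` by repeatedly sampling a word that has
    followed the current word somewhere in the training text."""
    words = [start]
    for _ in range(length):
        options = follows.get(words[-1])
        if not options:
            break
        words.append(random.choice(options))
    return " ".join(words)

print(generate("nils"))
```

Roughly half the time, `generate("nils")` splices Boe’s name onto Kneip’s distinction, producing a confident falsehood from nothing but statistics over word order. Scale that principle up enormously and you get prose polished enough to invent a Governor “Chet” Taylor, nickname and all.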