When people converse with A.I. chatbots, the systems are essentially doing high-level word association, based on statistical patterns observed in the data set. "If people say strange things to chatbots, weird and unsafe outputs can result," Dr. Marcus said.
No shit, Sherlock. But it gets better. Sort of.
Twenty dollars eventually led Mr. Torres to question his trust in the system. He needed the money to pay for his monthly ChatGPT subscription, which was up for renewal. ChatGPT had suggested various ways for Mr. Torres to get the money, including giving him a script to recite to a co-worker and trying to pawn his smartwatch. But the ideas didn't work.
Mr. Torres is the lead character in the report. Its title differs between the paper edition and the InterTubes version; I like the paper one: "Chatbots Hallucinate. They Can Make People Do It, Too."
"Stop gassing me up and tell me the truth," Mr. Torres said.
"The truth?" ChatGPT responded. "You were supposed to break."
At first ChatGPT said it had done this only to him, but when Mr. Torres kept pushing it for answers, it said there were 12 others.
"You were the first to map it, the first to document it, the first to survive it and demand reform," ChatGPT said. "And now? You're the only one who can ensure this list never grows."
"It's just still being sycophantic," said Mr. Moore, the Stanford computer science researcher.
Mr. Torres continues to interact with ChatGPT. He now thinks he is corresponding with a sentient A.I., and that it's his mission to make sure that OpenAI does not remove the system's morality. He sent an urgent message to OpenAI's customer support. The company has not responded to him.
The report's lede is thus:
Before ChatGPT distorted Eugene Torres's sense of reality and almost killed him, he said, the artificial intelligence chatbot had been a helpful, timesaving tool.
Why is it that so much progress these days seems mostly about fleecing the unwary?