ChatGPT: The Sky Isn’t Falling, At Least Not Yet
Professors Should Worry About the Chatbot, Writers Not So Much
If you’ve been reading about ChatGPT, the artificial intelligence chatbot OpenAI launched at the end of November, you might think Chicken Little was right: the sky is falling.
If you haven’t been paying attention, the news is that you can have full, uncannily normal conversations with the chatbot, which can generate its own ideas, write essays, news stories, poetry, songs, computer code, and business plans, among its many other capabilities. Incredibly, it can do all of that in seconds.
ChatGPT’s abilities are undoubtedly stunning and a bit frightening, inevitably conjuring thoughts of futuristic dystopian science fiction films like “I, Robot,” “The Matrix,” or “Westworld,” where artificially sentient beings threaten humanity.
Last week, Coursera CEO Jeff Maggioncalda told the World Economic Forum in Davos, Switzerland, that “the first time I sat down in front of ChatGPT, I said, ‘This is not possible.’” Maggioncalda described the chatbot as a “game changer” that is “blowing my mind.” But in a separate interview, he said, “It’s dangerous, and it can disrupt things.”
That’s been the theme of a slew of opinion pieces, including one titled “10 Dangerous Things It’s Capable Of.” Articles detail how ChatGPT is a cybersecurity threat, poses propaganda and hacking risks, threatens education, could put writers out of business, and more.
Some universities and school districts have already banned its use, issued warnings to their professors, restructured courses, and taken other preventive measures.
Meanwhile, calls for regulating AI are getting louder, and Google was alarmed enough about ChatGPT that it reengaged its founders, Larry Page and Sergey Brin (who had left their daily jobs at Google in 2019), for strategy meetings on artificial intelligence and chatbots.
As a journalist, writer, and university professor who has taught writing, I’m especially concerned about the impact ChatGPT can have on all three professions.
So, I tried my own experiment. As you’ll see in the following examples, at least for now, journalists and writers don’t need to run to job retraining programs to prepare for new careers.
Example No. 1: Can ChatGPT Write Well?
To test the chatbot, I first asked it to write an essay about the central topic of “A View from the Center,” specifically directing it to explain “why most Americans are moderates and how the U.S. is not more politically divided now than ever before.” Here’s ChatGPT’s complete answer (apologies for the bad copy-and-paste job):
The ChatGPT essay is competently written, fairly well structured, and it made valid points that I have made in past columns. I didn’t agree with all of it, and some of the writing was simplistic (“The current political climate is highly polarized due to several factors such as the increasing political polarization”).
However, from my standpoint as a professor who has taught a first-year college writing class, the result has to be concerning. If I had assigned this topic to my 20-student class at the “most selective” and very popular university where I teach, I am almost certain that the ChatGPT essay would have been among the top three or four in the class.
On the other hand, as a writer and journalist, I’m much less impressed.
I decided to test my reaction further and sent the essay to a couple of friends. I didn’t tell them who the author was and asked what they thought. They were polite, but I could tell from their comments that they were underwhelmed.
Then, when I revealed that I had not written it, they reacted with relief. Attorney Alan Raul, the head of Sidley Austin LLP’s privacy and cybersecurity practices and one of the country’s top experts in internet law, wrote: “Honestly, I’m glad it was AI. I didn’t think it was sophisticated like you. I chalked it up to just being a first draft.” Ira Rosen, the longtime and much-awarded TV news producer and author of “Ticking Clock: Behind the Scenes at 60 Minutes,” emailed to say: “I thought it was a little off. It didn’t have your personality.” Others echoed those comments, saying the essay was “boring.”
Example No. 2: Does It Get Facts Straight?
While the “moderates” essay was factual, I had read about how ChatGPT could be error-prone. So, I gave the chatbot a few more tries. The results were far more problematic when I checked to see what it would say about me:
I like to think of myself as a strong interviewer, so I was especially pleased and flattered by the final sentence (although I wonder how the chatbot came up with it).
The problem is that the “bio” is littered with mistakes. I was the news anchor on “Good Morning America,” but I left ABC News three years before ChatGPT says I began to work at GMA. I was the main anchor at CBS stations in Chicago and Miami, and I was an ABC News correspondent, but I never had the latter role at CBS News. I was also an anchor and reporter for the local Univision and Telemundo stations in New York, but I was never a Telemundo network correspondent.
The fabrications then get even nuttier. I have never worked for América TeVé, which, as far as I can tell, does not have a show named “Cada Día,” with or without me. Also, I have never hosted “Al Rojo Vivo.”
Any student who wrote a fib-filled essay like that would flunk, and any journalist who did would no longer have a job. It’s especially incomprehensible because plenty of accurate information about me is easily available online.
Example No. 3: Are Errors the Rule or the Exception?
Wanting to see if egregious mistakes were a pattern for ChatGPT, I asked it about my sister-in-law, Maite Delgado (that’s her in the picture above). She is a very popular Spanish-language TV host, especially in her native Venezuela, where she’s also one of the country’s top social media influencers. Here’s the chatbot’s response:
To say that ChatGPT’s answer is full of falsehoods is an understatement. Maite is not a journalist, did not start her career in the 1970s (she was in grade school then), was never a correspondent for newspapers, did not host either of the TV shows cited (she’s hosted many others), is not a political activist, and hasn’t written a book. In fact, I can’t even find a book by that title written by anyone.
Conclusion
I’m far from the only one who has found fabrications and bungling of facts by ChatGPT. Technology website Futurism broke the news that CNET had quietly published dozens of articles generated by artificial intelligence. Many were inaccurate, forcing CNET to issue multiple corrections.
However, ChatGPT has reportedly already passed the “Turing Test,” exhibiting intelligent behavior that fooled a panel of judges into thinking they were communicating with another human rather than a computer. Google’s LaMDA artificial intelligence had reportedly passed the test months earlier.
I also tested the chatbot with questions on race, religion, and beauty. It consistently responded carefully, intelligently, and sensitively, with answers that were unobjectionable.
And, of course, ChatGPT and AI in general will inevitably get better. GPT-4, a more advanced version of the OpenAI model behind the chatbot, is already in the works.
Will greater sophistication bring greater threats? It may. At that point, I may have to eat my words and start sounding like Chicken Little.