ChatGPT Is Here to Stay. What Do We Do With It?
At Provost’s Forum, scholars address benefits and concerns of generative AI technology
“The goal of the model is to predict the next word in the sentence,” Kenney said.
To do this, the GPT part trains the model on a massive amount of text so that it learns the relationships among words, which lets it estimate the probability of each candidate next word. The ‘Chat’ part came second, Kenney said: OpenAI took the GPT architecture and trained it to engage in human dialogue, using reinforcement learning to reward the model for choosing appropriate responses and to penalize it for mistakes.
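To make the next-word idea concrete, here is a toy sketch in Python. It uses a simple bigram counter rather than GPT’s actual neural network, but it illustrates the same core task Kenney described: estimating a probability distribution over possible next words from text the model has seen.

```python
from collections import Counter, defaultdict

# Toy illustration of next-word prediction, not OpenAI's actual model:
# count which words follow which in a tiny corpus, then turn each word's
# counts into a probability distribution over possible next words.

corpus = (
    "the model predicts the next word . "
    "the model learns from data . "
    "the data trains the model ."
).split()

# Count how often each word follows each other word (bigram counts).
following = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    following[prev][nxt] += 1

def next_word_probs(word):
    """Return P(next word | current word) estimated from the counts."""
    counts = following[word]
    total = sum(counts.values())
    return {w: c / total for w, c in counts.items()}

print(next_word_probs("the"))  # e.g. {'model': 0.6, 'next': 0.2, 'data': 0.2}
```

GPT performs the same basic task at vastly larger scale, replacing simple counts with a neural network trained on billions of words.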
This means when you read stories about creepy AI conversations with journalists, you should remember that the model learns from humans, said Duke biostatistics professor and machine learning expert David Page.
“I read one such story this week, and what stood out to me is how hard the journalist had to push to take the conversation down a scary path,” Page said. “If you present it with problematic material, it will continue on that path.
“It seems to lack a sense of self and is easily led. The scarier part is that someone who is not trying to use it for a bad purpose could inadvertently lead it to the darker parts of the web. ChatGPT has strong potential to be ‘rabbit hole 2.0.’”
For similar reasons, generative AI can easily be used to spread disinformation, said Casey Fiesler, a technology ethicist at the University of Colorado Boulder. She described a scenario in which you give ChatGPT a description of a family member with extreme political beliefs and ask it to build an argument around those beliefs.
“With that, you can create a perfect disinformation campaign for a subreddit, and you can do it very quickly,” Fiesler said.
For educators, ChatGPT presents new challenges, but panelists said current discussion focuses too much on policing its use and punishing cheating. Instead, generative AI can be used to teach students how to make critical judgments about the content before them, said Aarthi Vadde, Duke associate professor of English.
“One of the tasks generative AI presents for educators is thinking about what belongs in AI literacy,” said Vadde. “How do you evaluate the text in front of you and the sources you find online? This means studying AI text as an object. In a class on Jane Austen, you could have students examine the model’s attempts to mimic Austen’s style and compare them with the real thing. That can lead to a better understanding of the relationships among words, meaning and creativity, and to interesting conversations about what creativity really is.”
Vadde said she is less worried about policing students who pass off ChatGPT writing as their own work because, right now, the essays it produces aren’t very good. “What it produces is good writing for an AI, not for a human being. If our standard for good writing in the future is what an AI produces, that’s not a standard you want to see.”
Likewise, Fiesler said concerns about the death of the take-home test are overblown. “I’ve had teachers tell me they won’t do take-home tests anymore,” she said. “That’s the wrong response. We don’t want witch hunts for AI-generated writing. What we need to do is work with students so they know when it’s appropriate to use ChatGPT and when it’s not.
“There are ways to integrate GPT tools into the learning process. We’ve done this with other tools before. Should you stop students from using spell check? During a spelling test, yes, but if they’re writing an essay, why would you cut them off from that technology? The question for educators is deciding when the work is a spelling test and when it is not, when you can use GPT and when you can’t.”
There was disagreement on the panel about the potential impact on the workforce. Most panelists said that generative AI will lower the cost of producing information, which will disrupt many industries but not replace people in most information-dependent fields, from coding and writing to law.
“Some professions will shrink but there will be adaptability,” Kenney said. “I’m not concerned, for example, about programmers. Models just aren’t there yet in their quality.”
Fiesler said the ability of generative AI to produce information quickly and to write standard content to a template could change how many professions do their jobs. And as the technology improves, the disruptions will increase.
“We won’t have robot doctors, but we will have doctors with supercomputers in their pockets,” she said. “For now, it will just augment most professions. However, looking far into the future, it could mean we’ll need universal basic income, because a lot of jobs will need fewer people. We may have to rethink the entire economy.”
Page said the disruptions may come sooner. “In journalism, for example, our standard for a written news story has been lowered over time. I think it will impact people now.”
In introducing the session, interim Provost Jennifer Francis said the forum brings the Duke community together to learn from one another and explore solutions to critical issues affecting our personal lives and the wider society.
“These questions are incredibly timely and relevant to questions nearly all of us in higher education—and in society more broadly—are thinking about and wrestling with right now,” Francis said.
Following the Provost’s Forum Friday exploring generative AI technologies and their impact on education and society, a Duke Today editor prompted ChatGPT, one of the most widely used generative AI tools, to write a short story on the forum’s topic. Read the results at “ChatGPT on ChatGPT.”