A.I. Puts Eyes on Durham News
A Sanford publication generates stories through a chatbot. Disclosure and knowing its limitations are key.
How does this process of AI work with your stories?
We’re not replacing student journalism at all. We’re trying to fill a gap where we weren’t getting coverage before. Anything the city government or local organizations like the RDU Airport Authority put out in a press release, we can take and rewrite as a news story [by running it through a chatbot].
What motivated you to generate local news through AI?
Local newsrooms don’t have the resources to cover every story that matters to their readers. The AI project allows us to cover some of the smaller stories that a newsroom (ours included) might not assign a reporter to: road closures, traffic delays, event announcements, etc. A lack of funding or staffing might otherwise let some of those stories fall through the cracks, but we still think they’re valuable to readers, and AI lets us produce, quickly and without much effort, news stories we otherwise wouldn’t have the resources to write.
How can you trust what AI produces?
We’re aware of the dangers that AI poses, particularly what the AI world calls “hallucinating”: the model making something up and stating it as fact. That’s where my job comes in. Once the AI produces something, part of my job is going back and checking the facts to make sure that what it says is accurate and reflects what was originally put out in that statement or press release. There’s value in reporting what governments and officials are saying in press releases. And if someone were to find a falsehood, there would be value in knowing that the government said something false.
How will you know when the project is successful?
I think a good way to go about it is to see what people are reading. Within the first day or two of [the project] being available, we had over 100 people sign up for it, which, to us, indicates that people want this kind of content that they’re not getting elsewhere. If Blackwell Street downtown by the Bulls Ballpark is closed, people need to know that, and that’s not something that they would get reading the Raleigh News & Observer or any other news outlet.
How can we create healthier relationships between AI and readers?
Disclosure is key for us. The other thing is that we’re having fun with it. One thing we do is use an AI image generator to create a funny mashup of the stories. So, we’ll take a few of the AI news briefs, go to an image generator, and say, “make something that represents these stories.” We had a story about a stretch of N.C. 751 that was renamed Coach K Highway. We asked an AI to come up with an image of Coach K Highway, and it showed people dribbling basketballs all over the road, which is not what Coach K Highway actually looks like, obviously, but it’s sort of funny, like a wink at what AI can do, and maybe at what some of the limitations still are.
And what are your concerns about this project overall?
I think the big concern for AI and journalism is the hallucination piece: Are we sure that what we’re putting out is factually accurate? That’s why we emphasize the need for a human editor to review everything the AI produces. On the disclosure front, we also say that the articles were produced by AI and reviewed by human editors. That’s an important piece we’re communicating to readers: they’re still getting the human touch, even if something was generated by AI.
Look What ChatGPT Dragged In
Our cue: Write a story about how The 9th Street Journal at Duke University’s Sanford School of Public Policy is using artificial intelligence to write stories from local government news releases. Read what ChatGPT produced.