Encouraging People to Try New AI Tools — While Still Protecting Sensitive Data

To help the Duke community experiment safely, the Office of Information Technology partnered with IT Security, legal, and procurement teams to build privacy protections into the systems themselves. The idea is simple: people shouldn't have to be privacy experts before trying a new tool.

“It’s not glamorous, and it’s largely invisible,” said Nick Tripp, Duke’s chief information security officer. “But instead of asking every individual to assess privacy risk on their own, Duke builds those protections into the infrastructure people rely on every day.”

One example of this approach is DukeGPT, the university's AI platform for institutional use. It gives students, researchers, and staff access to AI tools inside an environment that follows Duke's data stewardship policies. Duke also negotiated an educational license for ChatGPT Edu; under that agreement, input data stays within Duke's environment and is not used to train external models or for marketing purposes.

The university also set clear rules about internal data access: content from individual AI interactions can be viewed only under a few specific, policy-defined circumstances.

All of this creates a structure that supports responsible experimentation. “Responsible AI isn’t about saying no — it’s about creating the conditions where researchers can say yes with confidence,” MacAlpine said.

For more information on AI tools at Duke, visit the Office of Information Technology website.