
These Works of Art Were Created by Artificial Intelligence

Ruby to showcase Duke’s first A.I. art contest, 6 to 8 p.m., Wednesday, March 20

A new art contest at Duke isn’t limited to human artists -- the contestants in the 'AI for Art' competition also collaborated with machines. Meet the artists and see their work, 6 to 8 p.m. Wednesday, March 20, Rubenstein Arts Center

The thumbnail in the top right? That’s what Duke’s campus might look like if it were painted by a Chinese ink master. If you look closely, you can also see a Picasso version of the red bridge in the Gardens, and the Chapel recreated in the style of Van Gogh’s “Starry Night.”

These pieces may seem to be the work of human artists. But for the roughly two dozen entries in the first A.I. art competition at Duke, a big part of the creative process belonged to a machine.

No brushes. No paints. Instead, the contestants used advanced computer algorithms that can sort through thousands of example images, recognize patterns, and then generate new images of their own.

See the results for yourself at a reception from 6 to 8 p.m. on Wednesday, March 20 at the Rubenstein Arts Center, hosted by the +Data Science initiative and the Vice Provost for the Arts.

Some of the entries in the ‘A.I. for art’ contest used artificial intelligence to create a mash-up of the style of one image and the content of another. By Scott Emmons '19 and Myla Swallow '19.

The contestants used several different approaches in artificial intelligence to make art. Some teams focused on teaching machines to capture a specific style, like Monet's Impressionism or the dreamy surrealism of Dalí, and transfer it to another image.

That technique, called style transfer, is what’s behind a piece called “Bridged Reflections,” by undergraduates Scott Emmons ’19 and Myla Swallow ’19. First they trained the system on thousands of images of everyday objects so it could learn what things like faces and buildings typically look like. Then they gave it two images -- one of the lake at Duke Gardens, and another of Picasso's painting “Les Femmes d'Alger” -- and the algorithm reinterpreted the lake in the style of Picasso’s work.
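
For the technically curious, the heart of style transfer can be sketched in a few dozen lines. The code below is a minimal, illustrative version of the general technique (in the spirit of Gatys et al.), not the students’ actual code: it assumes PyTorch and torchvision are installed, and the image file names are hypothetical placeholders. A pretrained VGG-19 network supplies the features, and the optimizer nudges the output image’s pixels until its content features match the photo and its style statistics (Gram matrices) match the painting.

```python
# Minimal neural style transfer sketch (illustrative only).
# Assumes PyTorch + torchvision; file names are hypothetical placeholders.
import torch
import torch.nn.functional as F
from torchvision import models, transforms
from PIL import Image

device = "cuda" if torch.cuda.is_available() else "cpu"

def load(path, size=512):
    tf = transforms.Compose([transforms.Resize(size),
                             transforms.CenterCrop(size),
                             transforms.ToTensor()])
    return tf(Image.open(path).convert("RGB")).unsqueeze(0).to(device)

content = load("duke_gardens_lake.jpg")      # content image (placeholder path)
style = load("les_femmes_dalger.jpg")        # style image (placeholder path)

# A pretrained VGG-19 serves as a fixed feature extractor.
vgg = models.vgg19(weights=models.VGG19_Weights.DEFAULT).features.to(device).eval()
for p in vgg.parameters():
    p.requires_grad_(False)

CONTENT_LAYERS = {21}               # conv4_2
STYLE_LAYERS = {0, 5, 10, 19, 28}   # conv1_1 through conv5_1

def features(x):
    content_feats, style_feats = {}, {}
    for i, layer in enumerate(vgg):
        x = layer(x)
        if i in CONTENT_LAYERS:
            content_feats[i] = x
        if i in STYLE_LAYERS:
            style_feats[i] = x
    return content_feats, style_feats

def gram(x):
    # Style is captured by correlations between feature channels.
    b, c, h, w = x.shape
    f = x.view(c, h * w)
    return f @ f.t() / (c * h * w)

target_content, _ = features(content)
_, target_style = features(style)
target_grams = {i: gram(f) for i, f in target_style.items()}

# Start from the content photo and optimize its pixels directly.
image = content.clone().requires_grad_(True)
opt = torch.optim.Adam([image], lr=0.02)

for step in range(300):
    opt.zero_grad()
    c_feats, s_feats = features(image)
    content_loss = sum(F.mse_loss(c_feats[i], target_content[i]) for i in CONTENT_LAYERS)
    style_loss = sum(F.mse_loss(gram(s_feats[i]), target_grams[i]) for i in STYLE_LAYERS)
    loss = content_loss + 1e5 * style_loss
    loss.backward()
    opt.step()
    with torch.no_grad():
        image.clamp_(0, 1)   # keep pixel values valid
```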

Duke master’s students Jingwen Wang ’20 and Yicheng Deng ’19, of data science and ECE, respectively, played with a similar technique to create A.I. art inspired by Chinese brush painting.

The secret behind their entry is a class of algorithms called “generative adversarial networks,” or GANs. To get their algorithm to produce a convincing likeness of a traditional Chinese painting, they fed it 9,000 such paintings -- mostly monochrome images of mist-shrouded mountains and meandering rivers -- that they painstakingly scraped from the web.

Based on these, one side of the GAN generates new images, while the other decides if they are “real” or fakes. The generated images get better and better until the system can’t tell the difference.
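
In code, that tug-of-war looks roughly like the toy training loop below. This is an illustrative sketch of the general GAN recipe, not the students’ actual model: it assumes PyTorch, and the tiny fully connected networks and the 64-by-64 grayscale resolution are placeholder choices made for brevity.

```python
# Toy GAN training step (illustrative sketch, not the contest entry's code).
import torch
import torch.nn as nn

latent_dim = 100

generator = nn.Sequential(
    nn.Linear(latent_dim, 256), nn.ReLU(),
    nn.Linear(256, 64 * 64), nn.Tanh(),      # outputs a 64x64 grayscale "painting"
)
discriminator = nn.Sequential(
    nn.Linear(64 * 64, 256), nn.LeakyReLU(0.2),
    nn.Linear(256, 1),                        # real-vs-fake score (logit)
)

opt_g = torch.optim.Adam(generator.parameters(), lr=2e-4)
opt_d = torch.optim.Adam(discriminator.parameters(), lr=2e-4)
bce = nn.BCEWithLogitsLoss()

def train_step(real_images):                  # real_images: (batch, 64*64), in [-1, 1]
    batch = real_images.size(0)
    real_labels = torch.ones(batch, 1)
    fake_labels = torch.zeros(batch, 1)

    # 1) Train the discriminator to separate real paintings from generated fakes.
    noise = torch.randn(batch, latent_dim)
    fake_images = generator(noise).detach()
    d_loss = bce(discriminator(real_images), real_labels) + \
             bce(discriminator(fake_images), fake_labels)
    opt_d.zero_grad(); d_loss.backward(); opt_d.step()

    # 2) Train the generator to fool the discriminator.
    noise = torch.randn(batch, latent_dim)
    g_loss = bce(discriminator(generator(noise)), real_labels)
    opt_g.zero_grad(); g_loss.backward(); opt_g.step()
    return d_loss.item(), g_loss.item()

# Example usage with a random stand-in batch; real data would be the scraped
# paintings, flattened to 64*64 vectors and scaled to [-1, 1].
stand_in_batch = torch.rand(32, 64 * 64) * 2 - 1
print(train_step(stand_in_batch))
```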

A lazuli bunting (top) and a hooded warbler (bottom) as seen through the ‘eyes’ of an image recognition algorithm developed in the lab of Duke professor Cynthia Rudin.

Yet another contest entry might look like a cubist remix of “Peterson’s Field Guide to Birds,” but the algorithm works a bit differently.

A team led by Duke computer science and ECE professor Cynthia Rudin used a neural network they developed to analyze thousands of bird photos, ranging from pelicans to hummingbirds. Then, given a photo of a mystery bird, the A.I. builds a surreal photo collage that shows which parts of the bird are most similar to typical species features it has seen before, remixing and overlaying image patches to create its own representation.
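
The general idea, in the spirit of the Rudin lab’s prototype-based networks, can be sketched as a patch-to-prototype comparison: every region of an image’s convolutional feature map is scored against a bank of learned “prototype” features, and the best-matching patches indicate which parts of the bird resemble which remembered features. The snippet below is a schematic illustration under those assumptions, not the lab’s actual implementation; the ResNet-18 backbone, prototype count, and similarity formula are placeholders.

```python
# Schematic patch-to-prototype comparison (illustrative only).
import torch
import torch.nn as nn
from torchvision import models

# A standard CNN backbone stands in for the team's feature extractor.
backbone = models.resnet18(weights=models.ResNet18_Weights.DEFAULT)
features = nn.Sequential(*list(backbone.children())[:-2]).eval()  # (B, 512, 7, 7)

num_prototypes = 10                              # placeholder count
prototypes = torch.randn(num_prototypes, 512)    # learned in a real system; random here

def prototype_activations(image_batch):
    """For each prototype, return its similarity to the closest image patch."""
    with torch.no_grad():
        fmap = features(image_batch)                     # (B, 512, H, W)
    B, C, H, W = fmap.shape
    patches = fmap.permute(0, 2, 3, 1).reshape(B, H * W, C)
    # Euclidean distance from every patch to every prototype.
    dists = torch.cdist(patches, prototypes.unsqueeze(0).expand(B, -1, -1))
    best_dist, best_patch = dists.min(dim=1)             # closest patch per prototype
    similarity = torch.log((best_dist + 1) / (best_dist + 1e-4))  # higher = better match
    return similarity, best_patch                        # which patch "looked like" what

# Example: score one 224x224 image (random here) against the prototype bank.
img = torch.randn(1, 3, 224, 224)
scores, locations = prototype_activations(img)
```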

Co-creator and computer science Ph.D. student Alina Barnett selected the training data. “My parents are birders,” Barnett says.

The resulting mishmash essentially says: “This isn't just any warbler. It's a hooded warbler, and here are the features -- like its masked head and yellow belly -- that give it away.”

They’re “sort of Frankenbirds,” said co-creator James Hoctor, a master’s student in computer science who also collaborated on the project with Ph.D. student Chaofan Chen and undergraduate Oscar Li ’19.

“It’s what you might come up with if you saw a tern on a safari, and instead of taking a picture of it, you went home and drew a copy of it later from memory. And you’re really bad at drawing,” Hoctor said.

These portraits were produced using artificial intelligence by Duke undergraduate Daniel Zhou ’21.

Art made by artificial intelligence raises difficult questions. How is human creativity different from what A.I. learns to do? How much of the credit for A.I.-generated art should go to the human versus the machine?

Swallow says she wrestles with these issues. Sure, the human artist chooses what images to feed the algorithm. But beyond those inputs, Swallow says, we “had almost no control over the output. You never really knew what you were going to get.”

Few of the contestants consider themselves artists per se. “I’m not the most artistically creative person,” said undergraduate Daniel Zhou ’21, who identifies as a STEM guy. “But I was curious to see if I could produce aesthetic art through machine learning.”

STEM geeks are inspired by notions of beauty too, says math and computer science major Emmons. “I don’t paint or draw in my free time. But I study math because I think it’s beautiful.”

One of the benefits of using machine learning to create art is scale. With the touch of a button, Emmons said, he was able to “output 900 different paintings overnight. I wanted to hang them up on my wall.”

Similarly, Zhou used A.I. to churn out hundreds of trippy-looking portraits and landscapes, lined up like the repetitive heads of a Warhol painting.

Emmons says his take-away from the contest is that A.I. is no replacement for human artists, but it could be a way to enhance the creative process. “I think what machine learning and A.I. are doing is providing new tools for people to use.”

The submissions have been judged by faculty from Duke’s Art, Art History & Visual Studies Department, and from the Rhodes Information Initiative at Duke (Rhodes-iiD).

The top three entries will take home a share of $8,500, with winners announced at the reception on March 20.

Duke Ph.D. student Zach Monge (psychology and neuroscience) used machine learning to transform forest images into abstract paintings and back.