Meet Professor Ahmed Elgammal, who designed a research project on AI-generated art. Before long, his computer was producing a series of breathtaking images. Building on that success, he conducted a deeper, visual Turing-style test to see how his digital art stacked up against dozens of human-created paintings.
In a randomized blind study, subjects were unable to tell the AI art from work created by acclaimed human artists. In fact, the computer-made pictures were often rated by subjects as more “inspiring” and “communicative” than the human-made art.
At the Sonophilia Spark in Frankfurt he will speak about this study and display some of the artworks created by these novel algorithms.
We interviewed him ahead of the event.
Enjoy the read!
Sonophilia: What is GAN and how does it work?
Ahmed Elgammal: Let me explain what GAN is and how we modified it to make it creative (CAN).
Deep neural networks have recently played a transformative role in advancing artificial intelligence across various application domains. Several generative deep networks have been proposed that can generate novel images to emulate a given collection. Generative Adversarial Networks (GAN), introduced by Ian Goodfellow et al. in 2014, have been quite successful in achieving this goal. A GAN learns to generate images through a game between two players. The first (called the discriminator) has access to a collection of images (the training images). The second (called the generator) generates images starting from random input. The discriminator tries to excel at telling real images from generated ones, while the generator tries to excel at generating images that fool the discriminator into believing they are real.
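To make the adversarial game concrete, here is a minimal training-loop sketch in PyTorch. The architectures, image size, and hyperparameters are placeholder assumptions for illustration, not those used in Elgammal's work:

```python
# Minimal GAN training-loop sketch (assumes flattened 64x64 RGB images in [-1, 1]).
import torch
import torch.nn as nn

latent_dim = 100

# Generator: maps random noise to a flattened image.
G = nn.Sequential(
    nn.Linear(latent_dim, 256), nn.ReLU(),
    nn.Linear(256, 64 * 64 * 3), nn.Tanh(),
)

# Discriminator: maps a flattened image to a "real" probability.
D = nn.Sequential(
    nn.Linear(64 * 64 * 3, 256), nn.LeakyReLU(0.2),
    nn.Linear(256, 1), nn.Sigmoid(),
)

bce = nn.BCELoss()
opt_G = torch.optim.Adam(G.parameters(), lr=2e-4)
opt_D = torch.optim.Adam(D.parameters(), lr=2e-4)

def train_step(real_images):
    """One adversarial step; real_images has shape (batch, 64*64*3)."""
    batch = real_images.size(0)
    real_labels = torch.ones(batch, 1)
    fake_labels = torch.zeros(batch, 1)

    # 1) Train the discriminator to separate real images from generated ones.
    noise = torch.randn(batch, latent_dim)
    fake_images = G(noise).detach()  # detach: don't update G on this step
    loss_D = bce(D(real_images), real_labels) + bce(D(fake_images), fake_labels)
    opt_D.zero_grad(); loss_D.backward(); opt_D.step()

    # 2) Train the generator to fool the discriminator into saying "real".
    noise = torch.randn(batch, latent_dim)
    loss_G = bce(D(G(noise)), real_labels)
    opt_G.zero_grad(); loss_G.backward(); opt_G.step()
    return loss_D.item(), loss_G.item()
```

As training alternates between the two steps, each player improves against the other, which is the game described above.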
However, GANs have no motivation to generate anything creative. Since the generator is trained to produce images that fool the discriminator into believing they come from the training distribution, ultimately the generator will just generate images that look like already existing art. There is no force that pushes the generator to explore the creative space.
Sonophilia: Your system goes further than training AI to simply copy art; instead, it creates art that differs from what it’s been taught. What will the impact of this research be in art, but also in other fields?
Ahmed Elgammal: If we simply take a GAN and train it on art data (images of paintings), it will ultimately produce images similar to what it has seen: not exact copies, but things that look like traditional art with traditional styles and genres. That limitation is what motivated us to develop CAN, the Creative Adversarial Network.
CAN is inspired by the psychological theory of art evolution proposed by Colin Martindale (1943-2008). He hypothesized that at any point in time, creative artists try to increase the arousal potential of their art to push against habituation. However, this increase has to be minimal to avoid a negative reaction from observers (the principle of least effort). Martindale also hypothesized that style breaks happen as a way of increasing the arousal potential of art, once artists have exhausted the other means available within the rules of a style.
CAN tries to generate art with increased levels of arousal potential in a constrained way without activating the aversion system and falling into the negative hedonic range. There are several ways to increase the arousal potential. CAN focuses on increasing the stylistic ambiguity and deviations from style norms.
Similar to GAN, CAN has two adversarial networks, a discriminator and a generator. The discriminator has access to a large set of art associated with style labels (Renaissance, Baroque, Impressionism, Expressionism, etc.) and uses it to learn to identify styles. The generator does not have access to any art. It generates art starting from a random input, but unlike in GAN, it receives two signals from the discriminator for any work it generates, and these signals are contradictory by design. The first signal is the discriminator’s classification of “art or not art.” The second is a “style ambiguity” signal that measures how confused the discriminator is when trying to identify the style of the generated work as one of the established styles. The generator uses this signal to improve its ability to generate art that does not follow any of the established styles and has an increased level of style ambiguity. On one hand it tries to fool the discriminator into thinking its output is “art,” and on the other hand it tries to confuse the discriminator about the style of the work generated.
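As a rough sketch of these two signals, the generator's objective can be written as an "art" term plus a style-ambiguity term that pushes the discriminator's style posterior toward the uniform distribution. The head names, tensor shapes, and value of K below are illustrative assumptions, not the exact formulation from the CAN paper:

```python
# Sketch of a CAN-style generator loss, assuming a discriminator with two
# heads: `real_logit` (art vs. not art) and `style_logits` over K styles.
import torch
import torch.nn.functional as F

K = 25  # number of established style labels (an assumed placeholder)

def can_generator_loss(real_logit, style_logits):
    """real_logit: (batch, 1); style_logits: (batch, K); both from the discriminator."""
    # Signal 1: fool the discriminator into classifying the output as "art".
    art_loss = F.binary_cross_entropy_with_logits(
        real_logit, torch.ones_like(real_logit))

    # Signal 2: maximize style ambiguity. Cross-entropy against the uniform
    # distribution is minimized when the style posterior is uniform, i.e.
    # when the discriminator cannot assign the work to any established style.
    log_probs = F.log_softmax(style_logits, dim=1)
    uniform = torch.full_like(log_probs, 1.0 / K)
    ambiguity_loss = -(uniform * log_probs).sum(dim=1).mean()

    return art_loss + ambiguity_loss
```

The two terms pull in opposite directions by design: the first rewards looking like art the discriminator has seen, while the second penalizes fitting any style it knows.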
When we trained CAN on 80K digitized images of Western paintings ranging from the 15th century to the end of the 20th century, the model was successful in generating aesthetically appealing novel art.
Sonophilia: You have conducted an experiment at Art Basel where people found the art generated by your system better than the human artworks.
Ahmed Elgammal: Let me clarify: the goal of our experiments was not to test people’s preference between human and machine art; that is a misperception. Our goal was to see how people perceive the images generated by CAN, and to what degree people would see the generated images as art.
We approached this assessment from a visual Turing-like test point of view. We tested the degree to which human subjects could distinguish whether an artwork was generated by a human artist or by a computer system. We chose two sets of works by real artists. The first set is a collection of works by Abstract Expressionist masters made between 1945 and 2007. The second set is a collection of paintings shown at Art Basel 2016, the flagship art fair of contemporary art.
Human subjects thought that the generated images were art made by an artist 75% of the time, compared to 85% of the time for the Abstract Expressionist collection and 48% of the time for the Art Basel collection.
In another experiment we asked the subjects to rate the degree to which they found the works of art to be intentional, to have visual structure, and to be communicative and inspirational. The goal was to judge aspects related to whether the generated images could be considered art. We hypothesized that human subjects would rate art by real artists higher on these scales than images generated by the proposed system. To our surprise, the results showed that our hypothesis was not true! Human subjects rated the images generated by the proposed system higher than those created by real artists, whether in the Abstract Expressionism set or in the Art Basel set!
Again, the issue is not which art subjects preferred. The fact that subjects found the images generated by the machine intentional, visually structured, communicative, and inspiring, at levels similar to, and even higher than, actual human art indicates that subjects see these images as art!
Sonophilia: How do people react when they find out that the art was in fact generated by a computer?
Ahmed Elgammal: In the experiments subjects were never told the answer, because that would have biased their choices as they rated other images during the course of the experiment!
We’re looking forward to hearing more at the Sonophilia Spark!