Software That Can Paint
Recent advances in the artificial intelligence (AI) community have led to the proliferation of AI-driven generative art. Let's briefly explore the history of generative art (from the 1960s to today) and some of the theory underpinning AI-driven art.
Broadly speaking, generative art aims to create art with the help of computers. The artist, through code, tells the computer what to make (with elements of controlled randomness). Although software normally behaves in a predictable way each time it is executed, the element of randomness can lead to the artist being surprised by the results.
Take an early example by Georg Nees, a German generative artist. His work is among the earliest examples of generative art and starts with a row of twelve squares. As you go down the piece, he randomly increases the amount of rotation applied to the squares.
While this controlled randomness can be achieved with or without a computer, the former lets the artist quickly scale up and add many more rows for little additional effort (and time). If you're curious, you can click here to follow a tutorial and try it yourself.
Here's a different painting by Vera Molnar.
And another early example by Lillian Schwartz. Fun fact: I discovered that she is credited with making the case that da Vinci himself was the model for the Mona Lisa.
But all of these artists and their works, spanning the '60s through the '80s, required specialized knowledge of computer science. The technique didn't become readily accessible until later work at the MIT Media Lab led to the creation of a tool called "Processing".
By the early 2000s, there was a lot more acceptance of generative art — and naturally, more art. Here are a few examples by Jared Tarbell (who, by the way, also co-founded Etsy).
And some more examples from my personal favorite generative artist: Manolo Gamboa Naon. You can read an interview he did here to learn more about him.
In a relatively short period of time, the technique became more accessible, and more sophisticated.
The most exciting development in this field, for me, is the intersection of artificial intelligence and generative art (AI-driven art), where I've particularly enjoyed works by Robbie Barrat.
This isn't a purely academic pursuit. The use of AI with generative art has gained significant attention in auction houses.
Sotheby's recently had a sale featuring work by Mario Klingemann, who devised a machine installation which used neural networks to generate an infinite stream of portraits.
And then there was an auction at Christie's, which sold the first AI-generated portrait for $432,500. The piece was offered by three French students who used Robbie Barrat's algorithm (with some adjustments) to generate the painting, leading to controversy over who gets credit for the resulting art.
Was it Barrat for coming up with the original algorithm or the students for taking it and making minor tweaks before generating the image?
Ownership questions aside, let's attempt to understand the underlying technique behind AI generated art: Generative Adversarial Networks (GANs).
Suppose I want to forge a Picasso. In this imaginary world, I have to create a forgery that a detective thinks is legitimate. Let's say that the detective has access to Picasso's real paintings, and all I have to start with is random noise.
Naturally, the detective wants to minimize its error rate. And as the forger, I want to maximize the detective's error rate.
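This push and pull can be written down as a single minimax objective — this is the standard GAN formulation from Goodfellow et al., where $D$ is the detective, $G$ is the forger, and $z$ is the random noise I start with:

```latex
\min_G \max_D V(D, G) =
  \mathbb{E}_{x \sim p_{\text{data}}}\big[\log D(x)\big]
  + \mathbb{E}_{z \sim p_z}\big[\log\big(1 - D(G(z))\big)\big]
```

The detective $D$ tries to maximize $V$ by correctly labeling real samples as real and forgeries as fake; the forger $G$ tries to minimize it by making $D(G(z))$ look real.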
Here's how this might play out.
I start with my random noise and generate what I think a Picasso looks like. I show it to the detective who, having been trained to distinguish a real Picasso from a fake, says "nope". But instead of just saying that, it gives me feedback to adjust my process to be more correct. Similarly, if I fool the detective, that feedback goes back to improving the detective's determination of real or fake.
This game repeats for a while, where I continue to produce new images each round to trick the detective into thinking what I produce is legitimate. The result of each round is used to improve the quality of my forgeries or the detective's classification, depending on whether or not my forgery gets caught.
Over a sufficient number of iterations, we'll reach an equilibrium where, for each of my forgeries, the detective assigns a 50% chance of it being real or fake (for deeper reading, this is the Nash equilibrium of a non-cooperative zero-sum game).
The effect of this is that the forger has learned a distribution of Picasso's work that is close to the true underlying distribution. And it can now use its learned distribution to generate an image that can pass as being from the original, true distribution.
While this is an oversimplification of state-of-the-art GAN techniques, it should provide a reasonable intuition for what is happening. Diagrammatically, we can present the high-level architecture as follows (where the Generator is the forger, and the Discriminator is the detective from the example).
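The forger-and-detective loop above can be sketched in a few dozen lines of code. This is a deliberately tiny illustration, not a real image GAN: I've shrunk both networks down to single linear functions, the "paintings" are just numbers drawn from a hidden Gaussian distribution, and all parameter values are my own arbitrary choices. The point is only to show the alternating update structure.

```python
import numpy as np

rng = np.random.default_rng(0)

# The "real Picassos": samples from a hidden target distribution
# that the forger never sees directly.
def real_samples(n):
    return rng.normal(4.0, 1.25, n)

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

# Generator (the forger): turns noise z into a sample, G(z) = a*z + b.
a, b = 1.0, 0.0
# Discriminator (the detective): D(x) = sigmoid(w*x + c),
# its estimate of the probability that x is real.
w, c = 0.1, 0.0

lr = 0.02
for step in range(5000):
    z = rng.normal(0.0, 1.0, 32)
    fake = a * z + b
    real = real_samples(32)

    # --- Detective's turn: push D(real) toward 1 and D(fake) toward 0.
    d_real = sigmoid(w * real + c)
    d_fake = sigmoid(w * fake + c)
    grad_w = np.mean(-(1 - d_real) * real + d_fake * fake)
    grad_c = np.mean(-(1 - d_real) + d_fake)
    w -= lr * grad_w
    c -= lr * grad_c

    # --- Forger's turn: adjust (a, b) so the detective's updated
    # verdict on the fakes moves toward "real".
    d_fake = sigmoid(w * fake + c)
    g = -(1 - d_fake) * w          # dLoss/dfake for loss = -log D(fake)
    a -= lr * np.mean(g * z)
    b -= lr * np.mean(g)

# After training, the forger's offset b has drifted toward the real
# distribution's mean (4.0 here).
print(f"learned offset b = {b:.2f}")
```

In a real GAN, the generator and discriminator are deep neural networks, the samples are images rather than numbers, and the gradients come from backpropagation instead of these hand-derived formulas — but the alternating two-player loop is the same.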
For a more concrete application, look no further than Robbie Barrat's recent network that learned how to generate its own Balenciaga outfits.
By taking a collection of poses (silhouettes of people) from Balenciaga runway shows, catalogues, and campaigns, Robbie was able to train a network to generate outfits on silhouettes. It is worth emphasizing that the resulting images aren't just copies: they are novel, but inspired by the original Balenciaga lines the network was trained on.
Here are some examples.
In the second picture, you'll notice a red bag near the model's leg. The network as a whole was focusing on learning details about the outfits and likely saw a pose of someone holding a bag. It (incorrectly) learned that the bag was something that can be attached to the shin instead of something you hold with your hand and generated what Barrat affectionately refers to as the "shin bag".
What's really fascinating about this sort of work is that it can offer a way for someone to bring about fashion lines without necessarily having years of fashion experience.
More broadly, the general technique of using a GAN isn't restricted to art. Imagine taking samples of someone's voice and creating new audio samples that sound the same. This could be used therapeutically for someone who wanted to remember an old friend or family member. But as it goes with technology, there's a dark side, which in this case might involve using a deepfake of a CEO's voice to trick employees into wiring $243,000.
The point remains that techniques like this will shape how we interact with, use, and think about technology. At the very least, it is being used to generate really cool art.