ARTIFICIAL BOTANY
A/V INSTALLATION
2019 / ongoing
Artificial Botany is an ongoing project that explores the latent expressive capacity of botanical illustration through machine learning algorithms. Before the invention of photography, botanical illustration was the only way to visually record the many species of plants. These images were used by physicians, pharmacists, and botanists for identification, analysis, and classification.
While these works are no longer scientifically relevant today, they have become an inspiration for artists who pay homage to life and nature with contemporary tools and methodologies. Artificial Botany draws on public-domain archival images of illustrations by the greatest artists of the genre, including Maria Sibylla Merian, Pierre-Joseph Redouté, Anne Pratt, Marianne North, and Ernst Haeckel.
Developing as an organism in an interweaving of forms that are transmitted and flow into one another, the plant is a symbol of nature's creative power. In this continuous activity of organizing and shaping forms, two opposing forces confront each other in tension: on one hand, the tendency toward the shapeless, the fluidity of passing and changing; on the other, the tenacious power to persist, the principle that crystallizes the flow, without which it would be lost indefinitely. In the dynamic of expansion and contraction that marks the development of the plant, beauty manifests itself in a moment of balance impossible to fix, caught in its formation and already at the point of fading into the next.
PROCESS
The illustrations collected from digital archives became the learning material for a particular machine learning system called a GAN (Generative Adversarial Network), which, after a training phase, is able to generate new artificial images whose morphological elements closely resemble the source illustrations, yet whose details and features seem to come from a genuinely human hand. In this sense the machine re-elaborates the content and creates a new language, capturing the information and the artistic qualities of man and nature.
Recent advances in generative models make it intriguing to exploit their ability to create novel content from a given set of images. Following this direction, we focused our attention on the study of Generative Adversarial Networks (GANs): a technique built from two networks that compete with each other in a zero-sum game.
The first network, called the generator, produces data from a random distribution. These data are then passed to the second network, the discriminator, which, on the basis of what it has acquired during the learning phase, decides whether the distribution of the generator's data is close enough to what it knows as the original data. If not, the process repeats until the generator's output satisfies the discriminator. GANs typically run unsupervised, teaching themselves to mimic any given distribution of data, which means that once trained they are able to generate novel content in the style of a specific dataset.
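To make this adversarial loop concrete, here is a minimal DCGAN-style sketch in PyTorch. It is only an illustration, not the project's actual code; the 64x64 resolution, layer widths, and learning rates are assumptions chosen for brevity.

import torch
import torch.nn as nn

LATENT_DIM = 128  # size of the random noise vector fed to the generator

# Generator: maps a latent vector to a 64x64 RGB image via transposed convolutions.
generator = nn.Sequential(
    nn.ConvTranspose2d(LATENT_DIM, 256, 4, 1, 0), nn.BatchNorm2d(256), nn.ReLU(True),  # 4x4
    nn.ConvTranspose2d(256, 128, 4, 2, 1), nn.BatchNorm2d(128), nn.ReLU(True),         # 8x8
    nn.ConvTranspose2d(128, 64, 4, 2, 1), nn.BatchNorm2d(64), nn.ReLU(True),           # 16x16
    nn.ConvTranspose2d(64, 32, 4, 2, 1), nn.BatchNorm2d(32), nn.ReLU(True),            # 32x32
    nn.ConvTranspose2d(32, 3, 4, 2, 1), nn.Tanh(),                                     # 64x64
)

# Discriminator: maps an image to a single real/fake logit.
discriminator = nn.Sequential(
    nn.Conv2d(3, 32, 4, 2, 1), nn.LeakyReLU(0.2, True),     # 32x32
    nn.Conv2d(32, 64, 4, 2, 1), nn.LeakyReLU(0.2, True),    # 16x16
    nn.Conv2d(64, 128, 4, 2, 1), nn.LeakyReLU(0.2, True),   # 8x8
    nn.Conv2d(128, 256, 4, 2, 1), nn.LeakyReLU(0.2, True),  # 4x4
    nn.Conv2d(256, 1, 4, 1, 0), nn.Flatten(),                # one logit per image
)

opt_g = torch.optim.Adam(generator.parameters(), lr=2e-4, betas=(0.5, 0.999))
opt_d = torch.optim.Adam(discriminator.parameters(), lr=2e-4, betas=(0.5, 0.999))
bce = nn.BCEWithLogitsLoss()

def train_step(real_images):
    """One adversarial update; real_images is a (batch, 3, 64, 64) tensor in [-1, 1]."""
    batch = real_images.size(0)
    noise = torch.randn(batch, LATENT_DIM, 1, 1)
    fake_images = generator(noise)

    # Discriminator: push real logits toward 1, fake logits toward 0.
    d_loss = (bce(discriminator(real_images), torch.ones(batch, 1))
              + bce(discriminator(fake_images.detach()), torch.zeros(batch, 1)))
    opt_d.zero_grad(); d_loss.backward(); opt_d.step()

    # Generator: try to make the discriminator label its fakes as real.
    g_loss = bce(discriminator(fake_images), torch.ones(batch, 1))
    opt_g.zero_grad(); g_loss.backward(); opt_g.step()
    return d_loss.item(), g_loss.item()

The zero-sum framing shows up directly in the two losses: the discriminator is rewarded exactly where the generator is penalized, and training alternates between the two updates.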
The distribution the network learns is often called the latent space of the model. It is usually high-dimensional, though far lower-dimensional than the raw medium. For example, a full-colour (RGB) image at 1024x1024 pixel resolution has roughly 3.1 million dimensions (1024 x 1024 x 3 = 3,145,728 features), whereas we might use a latent space of only 512 dimensions.
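A quick back-of-the-envelope check of those numbers (the 512-dimensional latent size is a common choice, not necessarily the one used in this project):

import numpy as np

raw_dims = 1024 * 1024 * 3   # pixel space of a 1024x1024 RGB image: 3,145,728 features
latent_dims = 512            # a typical GAN latent size

z = np.random.randn(latent_dims)           # one point in the latent space: a 512-d Gaussian vector
print(raw_dims, latent_dims, raw_dims / latent_dims)   # 3145728 512 6144.0

Every generated image is thus the decoding of a single 512-dimensional point; walking between two such points produces the smooth morphing typical of GAN animations.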
The first step in establishing a GAN is to identify the desired output and gather an initial training dataset based on those parameters. This data is then randomized and fed into the training loop until the generator acquires a basic accuracy in its outputs.
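In practice, this step amounts to building a shuffled data pipeline over the collected scans. A minimal PyTorch sketch follows; the folder path is hypothetical, and the 64-pixel resolution matches the toy generator above.

import torch
from torchvision import datasets, transforms

# Normalize scans to a fixed resolution and to the [-1, 1] range expected by a Tanh generator.
preprocess = transforms.Compose([
    transforms.Resize(64),
    transforms.CenterCrop(64),
    transforms.ToTensor(),
    transforms.Normalize([0.5] * 3, [0.5] * 3),
])

# Hypothetical folder of public-domain illustration scans.
dataset = datasets.ImageFolder("data/botanical_illustrations", transform=preprocess)

# shuffle=True randomizes the order in which batches reach the networks each epoch.
loader = torch.utils.data.DataLoader(dataset, batch_size=32, shuffle=True, drop_last=True)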
In an unconditioned generative model there is no direct control over the data being generated. By conditioning the model on additional information, however, it is possible to direct the data-generation process. Such conditioning could be based on class labels, on part of the data itself (as in inpainting tasks), or even on data from different modalities.
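As a sketch of class-label conditioning, the generator below concatenates a learned label embedding to the noise vector, so each class steers the output. The class count (say, one label per illustrator) is an assumption invented for the example.

import torch
import torch.nn as nn

NUM_CLASSES = 5   # hypothetical: one label per illustrator
LATENT_DIM = 128
EMBED_DIM = 32

class ConditionalGenerator(nn.Module):
    """Generator conditioned on a class label: the label embedding is
    concatenated to the noise vector before upsampling."""
    def __init__(self):
        super().__init__()
        self.embed = nn.Embedding(NUM_CLASSES, EMBED_DIM)
        self.net = nn.Sequential(  # same kind of upsampling stack as before, wider input
            nn.ConvTranspose2d(LATENT_DIM + EMBED_DIM, 256, 4, 1, 0), nn.ReLU(True),
            nn.ConvTranspose2d(256, 128, 4, 2, 1), nn.ReLU(True),
            nn.ConvTranspose2d(128, 64, 4, 2, 1), nn.ReLU(True),
            nn.ConvTranspose2d(64, 3, 4, 2, 1), nn.Tanh(),   # 32x32 RGB output
        )

    def forward(self, noise, labels):
        cond = self.embed(labels)                        # (batch, EMBED_DIM)
        z = torch.cat([noise, cond], dim=1)              # (batch, LATENT_DIM + EMBED_DIM)
        return self.net(z.unsqueeze(-1).unsqueeze(-1))   # reshape to (batch, C, 1, 1)

g = ConditionalGenerator()
images = g(torch.randn(4, LATENT_DIM), torch.tensor([0, 1, 2, 3]))  # 4 images, 4 classes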
TRANSFER LEARNING
We developed the process further by applying transfer learning to the previously trained models. In practice, this consists of reusing information from previously learned tasks when learning new tasks, with the potential to significantly improve the efficiency of the network. In this case, we started from the model trained to create the synthetic botanical illustrations and began a new training process with a dataset composed of images of forests and leaves.
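Continuing the toy sketch from above, fine-tuning might look like the following: load the weights trained on the botanical illustrations, optionally freeze the earliest layers, and resume training on the new dataset. The checkpoint name and the choice of which layers to freeze are assumptions.

import torch

# Resume from the generator trained on botanical illustrations (hypothetical checkpoint).
generator.load_state_dict(torch.load("generator_botany.pt"))

# Optionally freeze the earliest layers, which capture coarse structure,
# and fine-tune only the later layers on the forest/leaf dataset.
for i, layer in enumerate(generator):
    for p in layer.parameters():
        p.requires_grad = i >= 6   # assumption: keep the first two conv blocks fixed

opt_g = torch.optim.Adam(
    (p for p in generator.parameters() if p.requires_grad), lr=5e-5)

# ...then keep calling train_step(...) with batches drawn from the new dataset.

Because the new training starts from features already shaped by the illustrations, early outputs blend plant-like contours with the emerging forest imagery, which is exactly the transitional effect described below.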
This animation shows the intermediate steps of the new training phase on the forest dataset. It is particularly fascinating to watch the previously learned features, which once defined parts of a plant illustration, slowly shift in meaning as they come to outline parts of a new, mixed and complex structure.