Presentation at the Angewandte Interdisciplinary Lab, Vienna, 2022
AI in Art: Exploration of AI-Generated Art

From the Media Art Nexus Studio – Die Angewandte
Artistic duo Ina Conradi and Mark Chavez speak about their current work – a cooperation with Digital Arts, Die Angewandte Interdisciplinary Lab. 18:00, 31 May 2022
Former Postsparkasse, Georg-Coch-Platz 2, 1010 Vienna
https://angewandteinnovationlab.at/

“We are not special. We are not crap or trash, either. We just are. We just are, and what happens just happens.”

― Chuck Palahniuk, Fight Club

For the past five years, Media Art Nexus Studio has been creating innovative content for a large-scale LED platform at Nanyang Technological University in Singapore. This platform serves as a meeting point for artists, scientists, and engineers to exchange ideas and collaborate. Our talk will delve into a series of creative research initiatives stemming from this project. These range from artworks that use cultural archetypes to elucidate aspects of quantum mechanics for casual viewers, to cutting-edge animated, co-immersive spaces viewed daily by hundreds, if not thousands, at the university. Our latest works explore the unique artistic potential of AI and Machine Learning, investigating how these technologies can enhance the creative process both as sources of inspiration and as final mediums of expression. This collaboration with machine learning enables the creation of new landscapes where artists, poets, musicians, scientists, and philosophers can more easily bring incredible imaginary worlds to life.

Currently, we are investigating the novel field of AI visual art based on Generative Adversarial Networks (GANs). Many of these art-generation tools use word prompts to generate visual art. This field of image creation has recently attracted popular interest on social media platforms and among Web 3.0 entrepreneurs. However, these techniques have existed in various forms for several years. The newest iterations leverage vast libraries of meta-tagged data that have only recently become available. As a result, large new reference libraries of parseable images can be organized into image-centric database models. These databases also include video from live-action sources, traditionally animated motion, and computer-generated animation. Unique to this new reference-based platform is the way visuals are called into the image-composition space by word-prompted language structures. This gives artists unique opportunities to blend visual art with language in new prompt-based image-creation techniques.

Our first case study explored quantum ideas through Mexican ethnic cultural tropes. Having recently completed Quantum LOGOS (vision serpent) and Nocturne, we wanted a respite from the complex technical animation techniques used to interpret scientific data and personal feelings. VQGAN+CLIP was the first technique we used to generate imagery from word prompts. VQGAN (Vector Quantized Generative Adversarial Network) and CLIP (Contrastive Language-Image Pre-Training) are two separate machine learning algorithms that can be used together to generate images from a text prompt. VQGAN is a generative adversarial neural network that is good at generating images that look similar to others (but not from a prompt), and CLIP is another neural network that can determine how well a caption (or prompt) matches an image. (Russell, 2021) (Tanzi, 2022) [1] The idea of using words to assemble images was originally developed in theoretical papers by OpenAI (“Publications,” 2022) [2], published ahead of a planned release of the DALL-E software designed for artists. (Ho et al., 2020) [3] The two algorithms were emulated in various forms by AI-generated-art enthusiasts, originally by Ryan Murdock (Music & Murdock, 2022) [4] and Katherine Crowson (Crowson, 2013) (Crowson et al., 2022) [5] in early 2021.

These implementations of VQGAN+CLIP were made public on GitHub as installable software, with links to the machine learning papers describing the techniques used. They also included clickable links to Google Colab notebooks where the software ran in demo mode. Anyone could fork the code, copy it, add user-centric code to make the program more user-friendly, use the code to run a business, generate their own art, or invent entirely new methods focused on their needs. These techniques were offered to the generative art community as open-source code, available for download and use on GitHub, a shared software repository. They are packaged as Python notebooks that run on Google Colab, a Linux-based service that gives users access to an array of computers at a subscription cost.
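To make the matching step concrete, here is a minimal sketch, assuming the Hugging Face transformers port of OpenAI's CLIP, of the role CLIP plays in this pipeline: scoring how well each text prompt fits an image. The image file and prompts below are hypothetical placeholders.

```python
# Minimal sketch: score how well text prompts match an image with CLIP.
# Assumes the Hugging Face `transformers` port of OpenAI's CLIP; the image
# file and the prompts are hypothetical placeholders.
import torch
from PIL import Image
from transformers import CLIPModel, CLIPProcessor

model = CLIPModel.from_pretrained("openai/clip-vit-base-patch32")
processor = CLIPProcessor.from_pretrained("openai/clip-vit-base-patch32")

image = Image.open("candidate.png")  # e.g. a frame produced by an image generator
prompts = ["a vision serpent in a quantum landscape", "a photo of a cat"]

inputs = processor(text=prompts, images=image, return_tensors="pt", padding=True)
with torch.no_grad():
    logits = model(**inputs).logits_per_image  # shape: (1, num_prompts)

# Higher scores indicate a closer prompt-image match.
for prompt, score in zip(prompts, logits.softmax(dim=-1)[0]):
    print(f"{score:.3f}  {prompt}")
```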
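In the VQGAN+CLIP notebooks themselves, that match score is not just measured but optimized: the latent code feeding the image generator is adjusted by gradient descent until CLIP rates the output as a better fit for the prompt. The toy sketch below illustrates only this feedback loop; the tiny stand-in networks are our own placeholders, not the pretrained VQGAN and CLIP models the notebooks actually load.

```python
# Toy sketch of the VQGAN+CLIP feedback loop: optimize a latent code so the
# decoded image better matches a "text embedding". The tiny networks here are
# illustrative stand-ins, not the real pretrained VQGAN or CLIP.
import torch
import torch.nn as nn

decoder = nn.Sequential(nn.Linear(64, 3 * 32 * 32), nn.Tanh())  # stand-in VQGAN decoder
image_encoder = nn.Linear(3 * 32 * 32, 128)                     # stand-in CLIP image tower
text_embedding = torch.randn(128)                               # stand-in CLIP prompt embedding

latent = torch.randn(64, requires_grad=True)  # the latent code being optimized
optimizer = torch.optim.Adam([latent], lr=0.05)

for step in range(200):
    image = decoder(latent)  # latent code -> flattened image
    score = torch.cosine_similarity(image_encoder(image), text_embedding, dim=0)
    loss = -score            # maximize the prompt-image similarity
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()

print(f"final similarity score: {score.item():.3f}")
```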

Since then, numerous Google Colab notebooks have appeared, combining new image database models and animation techniques and yielding many different approaches and visual looks. We found that these tools allow us to design imagery with a refreshingly new approach to image creation. With AI toolsets, the exceptional quality of blending meaning and generating visuals through descriptive language allows for imagery with counterintuitive messages embedded in its design. Combined with video motion tracking and styled expressions overlaid onto the image plane, new generated looks and means of expression become possible.

For more, please see “The Thirst for Illusion.” dieAngewandte. Accessed June 9, 2024.

Ina Conradi and Mark Chavez, excerpt from the lecture

Angewandte Interdisciplinary Lab in the Postsparkasse in the heart of Vienna

“In spring 2021, after seven years at Franz-Josefs-Kai 3, AIL moved to the former Postsparkasse in the heart of Vienna – a historic building designed by architect Otto Wagner – thus joining other departments of Angewandte and a newly emerging neighborhood comprising several research institutions from the field of art and science. The new location provides the opportunity to further expand and strengthen networks for interdisciplinary work and research on an area of about 300 square meters, divided into three rooms on the mezzanine floor, with Café Exchange (former Kassenhalle) as its centerpiece.
Angewandte Interdisciplinary Lab is a space and a platform for projects at the intersection of art, science and artistic research. Founded in 2014 by the University of Applied Arts Vienna as an initiative by Gerald Bast, it was launched to enable exchange among different disciplines and to open up art and artistic research. AIL is dedicated to facilitating dialogue between all visitors and participants as well as various fields of knowledge and connects partners from the fields of science, arts, design, research with the resources of the University of Applied Arts Vienna. AIL makes space for a broad variety of projects, such as exhibitions, panels and thought experiments, exploring crucial and future-oriented topics to make them available to an interested public and community. Originally called Angewandte Innovation Lab, AIL relaunches as Angewandte Interdisciplinary Lab in 2022 to put interdisciplinarity at the forefront as its main approach and way of acting and thinking…”
Taken from https://ail.angewandte.at/about/

[1] Russell, A. (2021). How to use VQGAN+CLIP to generate images from a text prompt — a complete, non-technical tutorial. Medium. Retrieved from https://medium.com/nightcafe-creator/vqgan-clip-tutorial-a411402cf3ad.

Tanzi, L. (2022). VQ-GAN, explained: A straightforward introduction to Vector Quantized Generative Adversarial Networks. Medium. Retrieved 30 July 2022, from https://medium.com/geekculture/vq-gan-explained-4827599b7cf2.

[2] Publications. OpenAI. (2022). Retrieved 17 July 2022, from https://openai.com/publications/.

[3] Ho, J., Jain, A., & Abbeel, P. (2020). Denoising Diffusion Probabilistic Models. arXiv.org. Retrieved 17 July 2022, from https://arxiv.org/abs/2006.11239.

[4] Music, J., & Murdock, R. (2022). Making images with CLIP. deeplearn.art. Retrieved 17 July 2022, from https://deeplearn.art/making-images-with-clip/.

[5] Crowson, K. (2013). crowsonkb – Overview. GitHub. Retrieved 17 July 2022, from https://github.com/crowsonkb.

Crowson, K., Biderman, S., Kornis, D., Stander, D., Hallahan, E., Castricato, L., & Raff, E. (2022). VQGAN-CLIP: Open Domain Image Generation and Editing with Natural Language Guidance. arXiv.org. Retrieved from https://arxiv.org/abs/2204.08583.
