AI in Art Workshop, Pecha Kucha Seminar and Exhibition, Singapore, November 2022
Pecha Kucha Seminar and Exhibition
Monday, 14 November, 1630h-1830h
Prompt Battle with DALL-E
1830h-1930h
NTU School of Art, Design and Media in collaboration with NISTH co:Lab
NTU Institute of Science and Technology for Humanity
Interdisciplinary Collaborative Research Program
Student presentations explore the ways in which AI can serve as both muse and vehicle for developing novel forms of storytelling, visual art, music, and performance. This workshop aims to connect the active NTU machine learning community with the exciting artistic potential of AI and ML in the arts at the School of Art, Design and Media.
ADM undergraduate and postgraduate students are joined by Master’s students from the Global Innovation Design Programme at the Royal College of Art and Imperial College London. Students are supervised by Ina Conradi (Associate Professor, ADM NTU).
Participating artists from the undergraduate class DA5009 Explorations in A.I. Generated Art: Cammie Toh, Joshua Chen, Shiao-ya Huang (Maggie), Jonathan Tan, Tamaki Kobayashi, LuQi Li, Benjamin Lim, Manasi Kumar, Ng Jing Xuan, Ng Teng Han, Nur Haidah Arib, Pena Castro Conzales Nicolas, Phuah Jun Ting Alvin, Rachel Lim, Renete Chan, Rodolfo Barcelli Jo, Megan Tang, Noel Cheng, Siobhan Yeow, Sheng Yip, Ziyi Zhang
Participating artists from Master class AP7055 Art in the Age of the Creative Machine: Gong Ze, Ong Kian Peng, Wang Mengdie, Xie Shujiao, Yi XiaoHan, Lim Shu Min, Clara Chow, Agcaoili John Gabriel Chua, Xin Wen, Al-Sahlani Maroa-Isabell, Mathew Mukachirayil Savio, Andlay Adira, Cardall Hannah Quinn, Li Peixuan, Lall Vedika, Ha Phuong Thao, Peng Xiaolin, Li ZongRui, Li Xingran, Robitzki Sascha Roman
Special guest: Professor Verena Kraemer and students from the Department of Visualization and Interaction in Digital Media, Ansbach University of Applied Sciences, Germany.
About NISTH co:Labs: The NTU Institute of Science and Technology for Humanity (NISTH) runs a program of collaborative labs called NISTH co:Labs, which aims to bring together NTU faculty from disciplines cutting across STEM and non-STEM with established researchers and professionals from academia, industry, government, and the community.
About DA5009 Explorations in A.I. Generated Art: This interdisciplinary elective course introduces students to how artificial intelligence is used in the arts and how to use AI techniques to create their own art. Students learn about the unique artistic potential of AI and machine learning and how to apply them to the creative process, both as inspiration and as a medium. Instructor: Assoc Prof Ina Conradi / OSS Class Blog
About AP7055 Art in the Age of the Creative Machine: This graduate-level course introduces you to the most recent research and critical machine learning theories in creative fields such as media art, music, performance, and literature. You will review and analyze how machine learning has transformed art and culture by examining and comparing human-based and machine-based art practices and the artificial intelligence tools used to enhance creativity and production.
Memories of Ming by Maggie Huang is “an AI-generated immersive space that brings to life the memories of Ming, an AI. Ming (明) also means ‘tomorrow’ in Chinese. It symbolizes the near-existence of Ming and of the spaces generated by the AI. They look as if they could exist somewhere in the world, but not quite, similar to those spaces stored deep in our memories that have become a syncretism of reality and our imagination. With Ming, we create a new world with a new image that has never existed before.
I wanted to use Dream Studio but switched to DALL-E 2, as it was easier to generate an equirectangular-format image. I used the prompt at the left as a reference and did not have any initial image; it was purely text-to-image. I outpainted the equirectangular 360 images with DALL-E 2 and fixed the parts that did not stitch well in Photoshop. Later, I imported the images into DaVinci Resolve and used the Reframe360 plugin to make the space immersive and allow rotation within it. I also had to play around with the project settings in DaVinci Resolve so the view covered the screen. I animated the surroundings and then added a particle overlay onto the video. I tried generating 3D particles in Fusion, but I needed more time to make it work.
Ming “narrates” his own memories via the AI narration platform Murf, where I could pick the voice, pitch, speed, accent, and pauses. The biggest challenge was figuring out how to make the image 360° and immersive.”
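For readers curious about the outpainting step described above, here is a minimal sketch of how horizontal outpainting toward a panorama could be scripted with the OpenAI Python library as it existed in late 2022 (openai.Image.create_edit). The file names, canvas placement, and prompt are illustrative assumptions, not details from the artist’s project.

```python
# Minimal outpainting sketch, 2022-era OpenAI Python library
# (pip install openai pillow). Assumes OPENAI_API_KEY is set in the environment.
import openai
from PIL import Image

SIZE = 1024  # DALL-E 2 edits operate on square PNGs

def outpaint_right(src_path: str, prompt: str, shift: int = 512) -> str:
    """Slide the source tile left on a transparent canvas; DALL-E 2 fills the
    transparent right portion, extending the panorama. Returns the result URL."""
    src = Image.open(src_path).convert("RGBA")
    canvas = Image.new("RGBA", (SIZE, SIZE), (0, 0, 0, 0))
    canvas.paste(src, (-shift, 0))  # right part of src lands on the left of the canvas
    canvas.save("canvas.png")       # remaining transparency acts as the edit mask

    resp = openai.Image.create_edit(
        image=open("canvas.png", "rb"),
        prompt=prompt,
        n=1,
        size=f"{SIZE}x{SIZE}",
    )
    return resp["data"][0]["url"]

print(outpaint_right("tile_00.png", "dreamlike misty courtyard, 360 panorama"))
```

Repeating this step tile by tile and retouching the seams (as the artist did in Photoshop) yields the equirectangular strip.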
Feedback* by Ong Kian Peng is “an audiovisual toolkit: a collection of modules in TouchDesigner that connect via Open Sound Control (OSC) or MIDI to external software such as Ableton Live or custom applications. With this framework in mind, a first performance is conceptualized and proposed: Future-Past-Corals, a performance that speculates on a fictional future in which corals have ceased to exist as a result of global warming.”
Tools: coralGAN, Stable Diffusion, Disco Diffusion, Google Magenta, WaveGAN, GPT-2.
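The OSC bridge mentioned above is the glue between the TouchDesigner modules and external software. As a rough illustration, this is how such control messages could be sent from Python using the python-osc package; the address namespace and port are assumptions, not the toolkit’s actual interface.

```python
# Illustrative OSC sender (pip install python-osc). The /feedback/... addresses
# and port 9000 are hypothetical, chosen only to show the message pattern.
from pythonosc.udp_client import SimpleUDPClient

client = SimpleUDPClient("127.0.0.1", 9000)  # host and port of the OSC receiver

# A module might stream a normalized parameter and fire a trigger per event:
client.send_message("/feedback/coral/brightness", 0.75)
client.send_message("/feedback/coral/trigger", 1)
```

On the receiving side, Ableton Live (via a Max for Live device) or a custom patch maps such addresses onto synthesis or visual parameters.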
Box-Body by John Gabriel Chua and Xin Wen is “a collaborative new media exploration of the meaning of life within a busy urban environment, centered around the concept of a ‘human in a box’. AI was used for rapid generation of visual ambience and narrative concepts to facilitate communication between team members and collaborators. Prompt-based generative AI tools such as Midjourney and DALL-E were used to create both abstract ambience and human-realistic conceptual images, helping us to iterate on ideas quickly without the need to recreate them in real life.
We primarily used Midjourney and DALL-E in our visual exploration of the Box-Body concept. The biggest challenge in the process was coming up with the right prompt to create the vision we had in our minds. We found that Midjourney tended toward a more digital-fantasy aesthetic, which let us discuss only abstract ambience. DALL-E, on the other hand, provided much more helpful, human-realistic images for similar prompts. We experimented with different lighting modifiers and found that “low studio light” and “dim warm light” worked best for our purpose. Once we figured that out, it was easier to experiment with different character visuals for our box concept. The character visuals were extremely helpful in our conversations with external collaborators. If we had more time, it would be interesting to see whether AI can generate any kind of video from a prompt, as this is a film project.”
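The lighting-modifier sweep the team describes can also be automated. Below is a minimal sketch using the 2022-era OpenAI library (openai.Image.create); the base prompt and modifier list are invented examples, not the team’s actual prompts.

```python
# Sweep a base prompt across lighting modifiers and print the result URLs.
# 2022-era OpenAI Python library; assumes OPENAI_API_KEY is set.
import openai

base = "a person curled up inside a cardboard box in a cramped city apartment"
modifiers = ["low studio light", "dim warm light", "harsh fluorescent light"]

for mod in modifiers:
    resp = openai.Image.create(prompt=f"{base}, {mod}", n=1, size="1024x1024")
    print(mod, "->", resp["data"][0]["url"])
```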