Guest Speaker
Vadim Mirgorodskii is the Co-Founder of Interactive Items, a company that creates digital art experiences for events, exhibitions, and advertising campaigns. Their work blends art and design with programming, engineering, and emerging technologies. More recently, they have been exploring real-time generative AI image and video models that use live video feeds as input.
Topics Covered and Helpful Links
TouchDesigner
TouchDesigner is a visual programming software used for creating interactive multimedia experiences. Popular in live events, installations, and virtual production, it allows artists and developers to design real-time visuals, interactive environments, and complex generative art through a node-based interface, combining audio, video, and 3D elements.
https://derivative.ca/download
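TouchDesigner's node networks can also be driven from its built-in Python scripting layer. Here is a minimal sketch of that idea; it runs inside TouchDesigner (not as a standalone script) and assumes a network containing a Noise TOP named 'noise1' and a Level TOP named 'level1', which are hypothetical names. Placed in an Execute DAT, it nudges those nodes' parameters every frame so the visuals change in real time.

```python
# Minimal sketch of TouchDesigner's Python scripting layer.
# Assumes a Noise TOP called 'noise1' and a Level TOP called 'level1'
# exist in the network (hypothetical names for illustration).

import math

def onFrameStart(frame):
    t = absTime.seconds                                          # global timeline, in seconds
    op('noise1').par.period = 2 + math.sin(t)                    # slowly vary the noise pattern
    op('level1').par.opacity = 0.5 + 0.5 * math.sin(t * 0.25)    # fade the layer in and out
    return
```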
AI Tools for TouchDesigner
DotSimulate is a developer who has created a set of real-time AI tools for TouchDesigner. The link below goes to his Patreon, where you can find a real-time AI engine, a connection to ChatGPT, a voice-command API, and more.
https://www.patreon.com/dotsimulate
LoRA Training Tutorial
LoRA (Low-Rank Adaptation) is a way to quickly and cheaply "teach" big AI models new skills or information. Instead of retraining the whole model, LoRA tweaks only a small part of it, making it faster and easier to customize AI for specific tasks. A small code sketch of the idea follows the tutorial link below.
https://youtu.be/70H03cv57-o?si=JEmFtgkgaaMG1Ye-
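For anyone curious about what "tweaking only a small part" means in practice, here is a minimal, hypothetical PyTorch sketch of the core LoRA idea (not code from the tutorial above): the original weight matrix stays frozen, and only two small low-rank matrices are trained.

```python
import torch
import torch.nn as nn

class LoRALinear(nn.Module):
    """Wraps a frozen nn.Linear and adds a small trainable low-rank update."""
    def __init__(self, base: nn.Linear, rank: int = 4, alpha: float = 8.0):
        super().__init__()
        self.base = base
        for p in self.base.parameters():          # freeze the big pretrained weights
            p.requires_grad_(False)
        self.A = nn.Parameter(torch.randn(rank, base.in_features) * 0.01)
        self.B = nn.Parameter(torch.zeros(base.out_features, rank))  # zero init: no effect until trained
        self.scale = alpha / rank

    def forward(self, x):
        # frozen original path + tiny low-rank correction (W_eff = W + B @ A)
        return self.base(x) + (x @ self.A.T @ self.B.T) * self.scale

# Only the small LoRA matrices are trained, which is why fine-tuning is cheap.
layer = LoRALinear(nn.Linear(512, 512), rank=4)
trainable = [p for p in layer.parameters() if p.requires_grad]
optimizer = torch.optim.Adam(trainable, lr=1e-3)
```

Because only the low-rank matrices are updated, the resulting LoRA file is tiny compared to the full model, which is why sites like Civitai (below) can host thousands of them.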
Civitai Web AI Library
Civitai.com is a website where people share and download AI models, tools, and images, especially for creating art. It's like a library for AI creations, letting users find models for specific styles or effects and easily share their own work with others. You can find lots of trained LoRA models there that work with real-time AI tools.
https://civitai.com
Interactive Items Instagram: https://www.instagram.com/interactive.items/profilecard/?igsh=MWtnbjR0Z3c5OThvbw==
Lecture
Today's lecture will start with a short history of the moving image, animation, and motion graphics, and then focus on generative AI video models such as Stable Diffusion + Deforum, Runway ML, Luma Labs, Kling AI, Pika, Viggle, OpenAI's Sora, Adobe Firefly, and Wonder Dynamics. We will also discuss generative AI avatars using D-ID and Synthesia, as well as generative AI audio models like ElevenLabs and Uberduck. Lastly, we will discuss AI-assisted video editing tools like Adobe Premiere Pro's video extend feature and Descript's video editing features.
Homework
- Watch the AI documentary, The Wizard of AI
- Explore the world of generative AI video and audio:
- Generate videos and audio and edit them together to make a short 30-60 second video to share with the class next week. Use ChatGPT to help you conceptualize the video and prompts to use.
- I would suggest exporting the AI-generated videos and editing them together in QuickTime (Edit > Add Clip to End); you can also click and drag an audio track into the window to add sound. Feel free to use another video editing program if you prefer.
- Upload your video to a platform like YouTube or Vimeo, write a blog post about your experience and embed the video into the blog post.
- Share your blog post and video on Slack before we meet next week.