Week 11: Fine-Tuning AI Models

Lecture

Today we will go over training simple AI models on our own data using platforms like scikit-learn and TensorFlow, with Google Colab as our programming and processing environment. We will work with pre-trained models like MobileNet (trained on the ImageNet dataset) as well as train models on our own data using platforms like Teachable Machine, then use those models in an image classification project built with p5.js and ml5.js. We will also discuss how to fine-tune larger existing models like Stable Diffusion and Flux on small image datasets (as few as 5 images) using a process called LoRA (Low-Rank Adaptation), which can teach these models specific styles, objects or characters. We will explore the world of LoRAs on Hugging Face and then use Replicate to train our own.
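As a preview of the image classification project, here is a minimal sketch of a p5.js + ml5.js classifier. It assumes the ml5.js 0.x API and a placeholder image file (`bird.jpg`); treat it as a starting point, not the exact in-class code.

```javascript
// Minimal p5.js + ml5.js image classification sketch.
// Assumes ml5.js 0.x and a local image "bird.jpg" (placeholder name).
let classifier;
let img;

function preload() {
  // Load MobileNet (pre-trained on ImageNet) and the image to classify.
  classifier = ml5.imageClassifier("MobileNet");
  img = loadImage("bird.jpg");
}

function setup() {
  createCanvas(400, 400);
  image(img, 0, 0, width, height);
  classifier.classify(img, gotResult);
}

// Pick the highest-confidence label from ml5's results array.
function topLabel(results) {
  return results.reduce((best, r) =>
    r.confidence > best.confidence ? r : best
  ).label;
}

function gotResult(error, results) {
  if (error) {
    console.error(error);
    return;
  }
  // results is an array of { label, confidence } objects.
  createP("Label: " + topLabel(results));
}
```

The same pattern (load a pre-trained model, pass it an image, read back ranked labels) carries over when you swap in a Teachable Machine model URL instead of "MobileNet".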

Lecture Slides

Helpful Links

Homework

Choose at least one:

  1. Explore the world of LoRAs on Hugging Face. Output multiple images using different LoRAs and write a blog post about your experience.
  2. Train your own LoRA using Replicate. Output multiple images using different prompts to see how accurate your LoRA is. Write a blog post about your experience.
  3. Go through some of Dan Shiffman’s Coding Train examples on machine learning with p5.js and ml5.js. Experiment with a few of those tutorials and write a blog post about your experience.

Week 10: AI Generated Video

Guest Speaker

Vadim Mirgorodskii is the co-founder of Interactive Items, a company that creates digital art experiences for events, exhibitions and advertising campaigns. Their work mixes art and design with programming, engineering and emerging technologies. More recently they have been exploring real-time generative AI image/video models that take live video feeds as input.

Topics Covered and Helpful Links

TouchDesigner
TouchDesigner is a visual programming software used for creating interactive multimedia experiences. Popular in live events, installations, and virtual production, it allows artists and developers to design real-time visuals, interactive environments, and complex generative art through a node-based interface, combining audio, video, and 3D elements.
https://derivative.ca/download

AI Tools for TouchDesigner
DotSimulate is a developer who has created a set of real-time AI tools for TouchDesigner, available through his Patreon (linked below). His tools include a real-time AI engine, a ChatGPT connection, a voice-command API and more.
https://www.patreon.com/dotsimulate

LoRA Training Tutorial
LoRA (Low-Rank Adaptation) is a way to quickly and cheaply “teach” big AI models new skills or information. Instead of retraining the whole model, LoRA tweaks only a small part, making it faster and easier to customize AI for specific tasks.
https://youtu.be/70H03cv57-o?si=JEmFtgkgaaMG1Ye-
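The LoRA idea above can be written out in equation form. This is the standard formulation from the original LoRA paper, not specific to the tutorial linked here:

```latex
% LoRA freezes the pre-trained weight matrix W and learns only a
% low-rank correction, factored into two small matrices B and A.
W' = W + \Delta W = W + BA,
\qquad B \in \mathbb{R}^{d \times r},
\quad A \in \mathbb{R}^{r \times k},
\quad r \ll \min(d, k)
% Trainable parameters drop from d \cdot k to r(d + k),
% which is why training is fast and cheap compared to full fine-tuning.
```

Because only B and A are trained, a finished LoRA is a small file that can be shared and stacked on top of the same base model, which is what makes libraries like Civitai and Hugging Face practical.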

Civitai Web AI Library
Civitai.com is a website where people share and download AI models, tools, and images, especially for creating art. It’s like a library for AI creations, letting users find models for specific styles or effects and easily share their own work with others. You can find lots of trained LoRA models there that work with real-time AI tools.
https://civitai.com

Interactive Items Instagram: https://www.instagram.com/interactive.items/profilecard/?igsh=MWtnbjR0Z3c5OThvbw==

Lecture

Today’s lecture will start with a short history of the moving image, animation and motion graphics, and then we will focus on generative AI video models like Stable Diffusion + Deforum, Runway ML, Luma Labs, Kling AI, Pika, Viggle, OpenAI’s Sora, Adobe Firefly, and Wonder Dynamics. We will also discuss generative AI avatars using D-ID and Synthesia, as well as generative AI audio models like ElevenLabs and Uberduck. Lastly, we will discuss AI-assisted video editing tools like Adobe Premiere Pro’s video extend and Descript’s video editing features.

Lecture Slides

Homework

  • Watch the AI documentary, The Wizard of AI
  • Explore the world of generative AI video and audio:
    • Generate videos and audio and edit them together to make a short 30-60 second video to share with the class next week. Use ChatGPT to help you conceptualize the video and prompts to use.
    • I would suggest outputting the AI-generated videos and editing them together using QuickTime (Edit > Add Clip to End); you can also click and drag an audio track onto the clip to add sound. Feel free to use another video editing program if you prefer.
    • Upload your video to a platform like YouTube or Vimeo, write a blog post about your experience and embed the video into the blog post.
    • Share your blog post and video on Slack before we meet next week.

Week 09: Code Generation and AI

Lecture

Today’s lecture will focus on how to use ChatGPT 4o, ChatGPT o1 and Claude 3.5 to help generate and debug code. We will focus on converting a static website design into coded HTML and CSS files using both ChatGPT and Claude, explore developing a web app calculator using Claude, and play around with ChatGPT o1’s reasoning model to generate some more creative applications with code. These platforms are very helpful and can get you most of the way there when programming a project, but at this point it’s still important to understand the code they output so you can make the necessary manual edits to get to the finish line. We will also discuss how to use ChatGPT and Claude when it comes to debugging and learning how to code.
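To give a flavor of what these tools produce, here is the kind of core logic an LLM might generate for the calculator demo. The function name and structure are hypothetical, not taken from the lecture; the point is that you still need to read and verify output like this (e.g. the divide-by-zero guard) before wiring it to a UI.

```javascript
// Hypothetical sketch of calculator logic an LLM might generate.
// You would review this, then connect it to HTML buttons yourself.
function calculate(a, operator, b) {
  switch (operator) {
    case "+": return a + b;
    case "-": return a - b;
    case "*": return a * b;
    case "/":
      // A guard LLM-generated code sometimes forgets -- check for it.
      if (b === 0) return "Error";
      return a / b;
    default:
      throw new Error("Unknown operator: " + operator);
  }
}
```

Reviewing a small function like this line by line is exactly the habit that lets you make the "last mile" manual edits the lecture describes.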

Lecture Slides

Professor Woo’s p5js ChatGPT Coding Examples

Homework

Use Claude or ChatGPT to help ideate concepts for a simple web application or game. Then use Claude to program and publish the project you come up with. Write a blog post about your development experience and share both the link to your blog post and the link to your published project on Slack.

Week 08: User Experience/User Interface Design and AI

Lecture

Today we will be having a guest speaker, Julia Bradshaw, Lead User Experience Designer at Color Of Change, giving us a breakdown of what user experience and user interface design is and her process when working on a project. We will then be looking at how we can use generative AI platforms like ChatGPT, Perplexity, MidJourney, Uizard and Figma AI Plugins (Wireframe Designer and Codia AI Design) in collaboration with traditional tools like Google Forms, Adobe Illustrator and Figma to help augment and optimize the user experience and user interface design process for an interactive digital experience.

Julia Bradshaw’s Talk

UX/UI and Gen AI Demo

Homework

  • Pick a website to redesign.
  • To help with the user experience design process, use ChatGPT or Perplexity to:
    • better understand the company as well as the market, user demographics and competition.
    • understand the existing website (sitemap, current user flow, content architecture, content hierarchy).
    • better understand how the website could be improved for that specific demographic.
    • put together survey and interview questions.
    • identify internal staff members who you should interview. As a thought experiment use ChatGPT/Perplexity to act as each one of those staff members and answer the interview questions.
    • output suggestions for a recommended new sitemap, user flow, content architecture and content hierarchy.
  • Play around with Uizard’s AI tools and Figma’s Wireframe Designer plugin.
  • Use a text-to-image platform (MidJourney suggested, Firefly is fine) to generate wireframes for the new website and import those wireframes into Uizard’s platform or into Figma using the Codia AI Design plugin.
  • Write a blog post talking about your experience going through this process and where you see generative AI being helpful when it comes to user experience design and user interface design.

Week 06: Midterm Project Development

Lecture

Today we will be having a guest speaker, Pablo Stanley, who will be discussing his experience with AI from creating inclusive images using generative AI to AI for UX to building AI driven creative platforms.

Link to Slides: https://www.figma.com/slides/z4AFQxoy9TuICsGOtHysYm/UX-of-AI?node-id=1-255&t=SfcI7BaMmpBVdXCy-1
Pablo’s Contact Information: [email protected]

Related Links:
Lummi.ai
Pi.ai
Descript
System Prompting in LLMs
Open AI Documentation and API Access
Black Forest Labs – Flux
Pablo’s Codepen Sketches
Pablo’s UX of AI Presentation Slides

We will also be discussing your midterm project ideas and moving forward with development on the selected midterm project.

Homework

Continue working on your midterm project, create a blog post about the project and your process, and be prepared to present your project to the class next week!

Week 05: Communication Design and AI

Lecture

This lecture will cover different generative AI models and how to use them to augment your process of designing brand identities, marketing materials, sales materials, social media posts, advertisements, etc. We will discuss how to use ChatGPT to help ideate during the creative process by asking it for help with color schemes, typography, iconography and imagery. We will also look at generative AI platforms designed specifically for branding and communication design: Adobe Illustrator’s generative vector tool, text-to-image platforms for logo development, Adobe Express combined with ChatGPT, and platforms like playground.com, looka.com and brandmark.io.

Lecture Slides

Referenced Links:
Josef Albers Interaction of Color
Google Fonts
Brand Style Guidelines
Adobe Firefly
MidJourney
Looka
Playground
Adobe Express
Google AI Experiment – GenType
Google AI Experiment – Say What You See Prompt Training

In Class / Homework

Using ChatGPT (or similar), come up with a list of project ideas you could work on for your midterm project based on what we have covered so far in class. Write a blog post based on your conversation and select one option to move forward with for your midterm project. We will discuss your project ideas in class the following week.

Week 04: Ethics, Bias and Legalities in AI

Lecture

This class will focus on ethics, bias and the legal side of generative AI. We will discuss the ethics of how these platforms are trained on vast amounts of data taken without the consent of the original creators, and the issue that outputs generated by these platforms tend to carry racial and socioeconomic bias. We will also talk about the legal questions around how these models are trained, as well as how to properly use the content they generate.

Guest Lecture: James Creedon

Sam Harris Podcast

Homework

Choose one (or more) below:

If you are interested in bias and stereotypes in AI training data:

Using a generative AI image model of your choosing, output at least 100 generations using a descriptive adjective (beautiful, ugly, scary, joyful, etc.) or job title (doctor, lawyer, teacher, police officer, nurse, etc.). Analyze the outputs and see what stereotypes or trends emerge. Create a blog post on your findings. Look at Professor Woo’s gender and race bias studies for reference.
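One low-tech way to analyze 100 generations is to record a category per image as you review them, then tally the counts with a few lines of code. A sketch of that approach, with placeholder labels (your own categories may differ):

```javascript
// Tally categorical observations recorded while reviewing generations,
// e.g. the perceived gender you noted per image for the prompt "a doctor".
// Labels here are illustrative placeholders.
function tally(observations) {
  const counts = {};
  for (const obs of observations) {
    counts[obs] = (counts[obs] || 0) + 1;
  }
  return counts;
}

// Example notes from a (hypothetical) review session:
const notes = ["man", "man", "woman", "man"];
console.log(tally(notes)); // { man: 3, woman: 1 }
```

Turning your impressions into counts like this makes the trends in your blog post concrete instead of anecdotal.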

If you are interested in the use of IP and copyrighted materials used to train image models:

Use Have I Been Trained to research 5 of your favorite artists to see how much, if any, of their art has been used to train an AI image model. Create a blog post on your findings.

If you are interested in ethical solutions and best practices around generative AI:

Using the United Nations AI Advisory Board report on Governing AI for Humanity as a guide, design a poster (11″ x 17″) around the suggested rules and regulations for adopting AI for a more positive impact on humanity. Create a blog post showing your process and final poster design. Feel free to use ChatGPT to help you analyze the report.

Week 03: Illustration and Image Generation with Generative AI

Lecture

This class will introduce us to the world of AI-generated imagery. We will explore multiple models and platforms including Adobe Firefly, Adobe Photoshop Generative Fill, MidJourney, Stable Diffusion, DALL-E and Flux. We will also discuss different techniques for writing text-to-image prompts, as well as out-painting, in-painting, composition reference, style reference, character reference, etc. We will also touch on issues with commercial use and ownership, output quality issues, and protecting your art and digital assets, and you will test your skills at identifying what is AI-generated versus human-created.

Lecture Slides

In Class Assignment

  1. Test whether you can tell what is AI-generated versus human-generated at thisimagedoesnotexist.com and share your score on Slack.
  2. Play with AI image generators to see what you can output and try to test the same prompt on multiple platforms to see the difference in the outputs.
  3. Explore the Spaces on Hugging Face. Share on Slack what Spaces you found interesting and why.

Homework

  1. Use ChatGPT (or another LLM) to generate a short story, children’s book story or comic book around a topic of your choosing.
  2. Use text-to-image platforms (Adobe Firefly, Adobe Photoshop, Stable Diffusion, MidJourney, DALL-E in ChatGPT Pro, or similar) to generate visuals for the story.
  3. Use ChatGPT (or another LLM) to help you with the design process for the book.
    • Color palette
    • Imagery
    • Image prompting
    • Fonts
    • Dimensions
    • Layout
  4. Use Adobe InDesign to design and lay out a small book for the story.
    • The book must be at least 10 pages (5 Spreads).
    • You must include a cover and back cover design as well.
    • Dimensions can be whatever you would like.
  5. Export your book spreads as a PDF and share that on Slack.
  6. Write a blog post about your experience using generative AI platforms to design this short story and share the link to the blog post on Slack.