Week 11: Fine-Tuning AI Models

Lecture

Today we will go over training simple AI models on our own data, using libraries like scikit-learn and TensorFlow with Google Colab as a programming and processing environment. We will use existing models such as MobileNet, pretrained on the ImageNet dataset, and also train models on our own data with platforms like Teachable Machine. We will then use those models in an image classification project built with p5.js and ml5.js. Finally, we will discuss how to fine-tune larger existing models such as Stable Diffusion and Flux on small image datasets (as few as 5 images) using a process called LoRA (Low-Rank Adaptation), teaching these models specific styles, objects, or characters. We will explore the world of LoRAs on Hugging Face and then use Replicate to train our own.
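To make the first step concrete, here is a minimal sketch of training a simple classifier with scikit-learn, the kind of exercise that runs well in Google Colab. It uses scikit-learn's built-in digits dataset as a stand-in for "our own data", and a k-nearest-neighbors model chosen only for simplicity; the dataset, model choice, and split parameters are illustrative, not part of the course materials.

```python
# Minimal scikit-learn training sketch (illustrative; digits dataset
# stands in for a dataset you collect yourself).
from sklearn.datasets import load_digits
from sklearn.model_selection import train_test_split
from sklearn.neighbors import KNeighborsClassifier

digits = load_digits()

# Hold out a quarter of the data to measure how well the model generalizes.
X_train, X_test, y_train, y_test = train_test_split(
    digits.data, digits.target, test_size=0.25, random_state=42
)

# Fit a simple k-nearest-neighbors classifier on the training split.
clf = KNeighborsClassifier(n_neighbors=3)
clf.fit(X_train, y_train)

# Accuracy on the held-out test split.
acc = clf.score(X_test, y_test)
print(round(acc, 2))
```

Swapping the digits arrays for your own feature vectors and labels follows the same `fit`/`score` pattern.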

Lecture Slides

Helpful Links

Homework

Choose at least one:

  1. Explore the world of LoRAs on Hugging Face. Generate multiple images using different LoRAs and write a blog post about your experience.
  2. Train your own LoRA using Replicate. Generate multiple images with different prompts to see how faithfully your LoRA reproduces its style or subject. Write a blog post about your experience.
  3. Go through some of Dan Shiffman's Coding Train examples on machine learning with p5.js and ml5.js. Experiment with a few of those tutorials. Write a blog post about your experience.
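If you pick one of the LoRA options, it helps to see what "low-rank adaptation" means numerically. The sketch below shows the core idea with NumPy: the pretrained weight matrix stays frozen, and training only updates two small matrices whose product forms the adaptation. The layer dimensions and rank here are hypothetical, chosen just to make the parameter savings visible.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical frozen weight matrix of one pretrained layer (d_out x d_in).
d_out, d_in, r = 64, 64, 4  # r is the LoRA rank, much smaller than the layer size
W = rng.standard_normal((d_out, d_in))

# LoRA trains only two small matrices, B (d_out x r) and A (r x d_in).
# B starts at zero, so before any training the adapted model equals the original.
B = np.zeros((d_out, r))
A = rng.standard_normal((r, d_in))

# The effective weights are the frozen weights plus the low-rank update B @ A.
W_adapted = W + B @ A

# Parameter count comparison: full fine-tuning vs. LoRA for this layer.
full_params = d_out * d_in          # every entry of W
lora_params = d_out * r + r * d_in  # only B and A
print(full_params, lora_params)     # → 4096 512
```

With rank 4, this toy layer trains 512 parameters instead of 4096, which is why a handful of images can be enough to adapt a large model to a new style or character.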
