A Journey Through Eight Practical AI Lessons
A workshop led by Tom Lauwers
When: Saturday, January 10, 2026, 10:00 am to noon Eastern Time (USA)
Where: Everywhere via Zoom
Cost: $10 per person; free for people who participated in the 2025 Logo Summer Institute

Over the last several years, BirdBrain Technologies has created a growing collection of classroom-ready lessons centered on practical and creative applications of Artificial Intelligence. Our focus with these lessons has been twofold:
- Help students understand how Machine Learning works, including data collection, the fragility of models when they encounter unfamiliar inputs, and the inherently statistical nature of their predictions.
- Show students how to use AI models as intelligent sensors that can drive robot behavior—so robots can recognize and respond to patterns in images, sounds, and movement.
Our suite of lessons allows you to build and explore models that recognize objects, speech, or gestures, then use those models to control Snap! games or physical robots. Depending on the lesson, the activities run in Snap! or MakeCode and use micro:bits, Finch robots, Hummingbird projects, or other micro:bit-enabled robots.
In this two-hour workshop, we’ll give an overview of all eight
lessons—what each one focuses on, the grade bands they best
support, and what tools and materials you’ll need to run them in
your classroom. Next, we’ll go hands-on with two model-building
tools, Google Teachable Machine and Create AI, and show you how to
connect the models you create to a micro:bit to control physical
devices.
You’re welcome to treat this as a watch-and-learn session, or
follow along hands-on. To participate interactively, you’ll need
at least one micro:bit v2. A Finch or Hummingbird robot is
helpful but not required. You can request a demo of either a Finch or a Hummingbird from BirdBrain Technologies, but note that demo requests typically take 4-6 weeks to process.
The Zoom session will be recorded, and all registered participants will have access to the recording for post-workshop review. If you can't attend the live session, you may register anyway and watch the recording later. Not the same as being there, but valuable nonetheless.
About Machine Learning

Arthur C. Clarke's third law states that any sufficiently advanced technology is indistinguishable from magic. Modern computing certainly seems magical in this sense, especially to anyone who remembers Windows 3.1, punch cards, or difference engines. A large part of the magic of our newest technology is powered by machine learning, which is a method for transforming predictable, highly efficient computing machines into overly emotional conversationalists that have a hard time with simple math.

In all seriousness, while chatbots get much of the press, Machine Learning (ML) has powered many of the most important computing inventions of the last 15 years. In addition to text, image, video, and music generators, ML also powers:
1. Recommendation engines such as those you encounter on Amazon, TikTok, or Netflix
2. Language translation systems
3. Spam filters
4. Face, fingerprint, and other unlock methods for your phone or computer, customized to your biometrics

and much more. The (human) creators of AlphaFold, an ML algorithm for identifying protein structure, even won the Nobel Prize in Chemistry in 2024. This, despite the fact that none of them were chemists.

So, how does Machine Learning work? By examining a large number of specific cases that have been sorted into categories, a computer program learns to take a new case and determine which category it best fits into. That identification can then be used by the program to take action. With modern computing power, it's possible to train a model on a computer and then connect that model to something as tiny as a micro:bit.
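To make that concrete, here is a minimal Python sketch of the idea using a nearest-neighbor classifier: it stores a handful of labeled example readings and assigns a new reading to the category of its closest match. The gesture labels and accelerometer-style numbers are invented for illustration; tools like Teachable Machine and Create AI train far richer models, but the basic pattern of learning categories from labeled examples is the same.

```python
import math

# Labeled training examples: (sensor readings, category).
# The readings and labels below are made-up accelerometer-style values.
training_examples = [
    ([0.1, 0.2, 9.8], "resting"),
    ([0.0, 0.3, 9.7], "resting"),
    ([4.5, 3.9, 2.1], "shake"),
    ([5.1, 4.2, 1.8], "shake"),
]

def distance(a, b):
    """Euclidean distance between two feature vectors."""
    return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))

def classify(new_case, examples):
    """Assign the new case to the category of its closest training example."""
    closest = min(examples, key=lambda ex: distance(new_case, ex[0]))
    return closest[1]

# A new, unseen reading is placed in whichever category it best fits;
# a program could then act on that label (light an LED, move a robot, etc.).
print(classify([4.8, 4.0, 2.0], training_examples))  # -> shake
```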