Here are all the projects I completed as part of Udacity's Self-Driving Car Nanodegree. Each project has a README that describes the project, usually with GIFs or links to videos of the end result.
Capstone Project: Using ROS, write code that runs on Udacity's physical test car, Carla. This was a group project; I owned red light detection end-to-end, building a TensorFlow classifier with the TF Object Detection API and training a ResNet on sample traffic light datasets.
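For flavor, here is roughly what inference with a frozen Object Detection API model looks like (TensorFlow 1.x style; the file name, tensor handling, and threshold below are illustrative, not taken from the project):

```python
import numpy as np
import tensorflow as tf  # TF 1.x-era API


class TrafficLightDetector:
    def __init__(self, graph_path='frozen_inference_graph.pb'):  # placeholder path
        self.graph = tf.Graph()
        with self.graph.as_default():
            graph_def = tf.GraphDef()
            with tf.gfile.GFile(graph_path, 'rb') as f:
                graph_def.ParseFromString(f.read())
            tf.import_graph_def(graph_def, name='')
        self.sess = tf.Session(graph=self.graph)
        # Tensor names the Object Detection API exports by default.
        self.image_tensor = self.graph.get_tensor_by_name('image_tensor:0')
        self.scores = self.graph.get_tensor_by_name('detection_scores:0')
        self.classes = self.graph.get_tensor_by_name('detection_classes:0')

    def classify(self, rgb_image, threshold=0.5):
        """Return the class id of the best detection, or None below threshold."""
        batch = np.expand_dims(rgb_image, axis=0)
        scores, classes = self.sess.run(
            [self.scores, self.classes],
            feed_dict={self.image_tensor: batch})
        if scores[0][0] < threshold:
            return None
        return int(classes[0][0])
```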
Vehicle Detection: Given an input video, annotate and track car locations in each frame. I wanted a smooth solution, so I cached detections between keyframes and used optical flow to move the bounding boxes, meaning the full-frame search only had to run every 15 frames or so. This lets the algorithm run in real time.
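A minimal sketch of the box-propagation idea, using OpenCV's Lucas-Kanade optical flow to shift a cached box by the median motion of features inside it (function name and parameters here are illustrative, not lifted from the project):

```python
import cv2
import numpy as np


def propagate_box(prev_gray, curr_gray, box):
    """Shift a (x, y, w, h) box by the median optical-flow displacement
    of corner features found inside it."""
    x, y, w, h = box
    mask = np.zeros_like(prev_gray)
    mask[y:y + h, x:x + w] = 255
    pts = cv2.goodFeaturesToTrack(prev_gray, maxCorners=50,
                                  qualityLevel=0.01, minDistance=5, mask=mask)
    if pts is None:
        return box  # nothing trackable; keep the old box
    new_pts, status, _ = cv2.calcOpticalFlowPyrLK(prev_gray, curr_gray, pts, None)
    good = status.ravel() == 1
    if not good.any():
        return box
    dx, dy = np.median(new_pts[good] - pts[good], axis=0).ravel()
    return (int(x + dx), int(y + dy), w, h)
```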
Path Planning: Plan paths for a vehicle on the highway to pass other vehicles, optimizing for speed while respecting jerk, braking, and acceleration limits.
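One standard building block for this kind of planning, covered in the course material, is the jerk-minimizing quintic polynomial: fix position, velocity, and acceleration at both ends of a segment and solve for the remaining coefficients. A sketch of that piece (not necessarily how the project's planner is structured):

```python
import numpy as np


def jerk_minimizing_trajectory(start, end, T):
    """Quintic coefficients for a jerk-minimizing 1-D trajectory.

    start/end are (position, velocity, acceleration) at t=0 and t=T.
    Returns [a0..a5] for s(t) = a0 + a1*t + ... + a5*t^5.
    """
    a0, a1, a2 = start[0], start[1], start[2] / 2.0
    A = np.array([[T**3,     T**4,      T**5],
                  [3 * T**2, 4 * T**3,  5 * T**4],
                  [6 * T,    12 * T**2, 20 * T**3]])
    b = np.array([end[0] - (a0 + a1 * T + a2 * T**2),
                  end[1] - (a1 + 2 * a2 * T),
                  end[2] - 2 * a2])
    a3, a4, a5 = np.linalg.solve(A, b)
    return [a0, a1, a2, a3, a4, a5]
```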
Advanced Lane Lines: Annotate the drivable lane area on a highway in a video feed. This required calibrating the camera to remove lens distortion, transforming the perspective to a top-down view, and fitting a polynomial to each lane line.
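The core pipeline steps look roughly like this with OpenCV (the calibration matrices and perspective points are inputs you compute elsewhere; treat this as a sketch rather than the project's exact code):

```python
import cv2
import numpy as np


def undistort_and_warp(img, mtx, dist, src, dst):
    """Remove lens distortion, then warp to a top-down view.
    mtx/dist come from cv2.calibrateCamera on chessboard images;
    src/dst are four hand-picked perspective points."""
    undist = cv2.undistort(img, mtx, dist, None, mtx)
    M = cv2.getPerspectiveTransform(np.float32(src), np.float32(dst))
    h, w = img.shape[:2]
    return cv2.warpPerspective(undist, M, (w, h))


def fit_lane_line(ys, xs):
    """Fit x = a*y^2 + b*y + c to detected lane pixels
    (fitting in y handles near-vertical lines)."""
    return np.polyfit(ys, xs, 2)
```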
Behavioral Cloning: Record footage of driving on a video game track with steering angles annotated, then use that footage to train the car to drive itself on the same track. Like the GTA V stuff on Twitch.
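A typical model for this project is an NVIDIA-style convolutional steering regressor; a sketch in Keras (the exact architecture and preprocessing in my project may differ):

```python
from keras.models import Sequential
from keras.layers import Conv2D, Cropping2D, Dense, Flatten, Lambda


def build_model():
    """Steering-angle regressor for 160x320 RGB simulator frames."""
    model = Sequential()
    model.add(Lambda(lambda x: x / 255.0 - 0.5, input_shape=(160, 320, 3)))
    model.add(Cropping2D(cropping=((70, 25), (0, 0))))  # drop sky and hood
    model.add(Conv2D(24, (5, 5), strides=(2, 2), activation='relu'))
    model.add(Conv2D(36, (5, 5), strides=(2, 2), activation='relu'))
    model.add(Conv2D(48, (5, 5), strides=(2, 2), activation='relu'))
    model.add(Conv2D(64, (3, 3), activation='relu'))
    model.add(Flatten())
    model.add(Dense(100, activation='relu'))
    model.add(Dense(50, activation='relu'))
    model.add(Dense(1))  # single steering angle
    model.compile(loss='mse', optimizer='adam')
    return model
```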
MPC Project: Drive a car over a simulated drive-by-wire interface with actuation latency, using cost functions to keep the controller from overcorrecting at high speeds.
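The anti-overcorrection behavior comes from how the cost is weighted: steering magnitude and steering changes are penalized much more heavily than speed error. A toy version of such a cost (the controller itself optimizes this over a predicted horizon; the weights below are placeholders, not the project's tuned values):

```python
import numpy as np


def mpc_cost(cte, epsi, v, delta, a, ref_v=40.0):
    """Illustrative MPC cost over a horizon.

    cte, epsi, v: per-step cross-track error, heading error, speed.
    delta, a: per-step steering and throttle actuations.
    """
    cost = 0.0
    cost += np.sum(cte**2) + np.sum(epsi**2) + np.sum((v - ref_v)**2)  # tracking
    cost += 100.0 * np.sum(delta**2) + np.sum(a**2)                    # actuator use
    cost += 500.0 * np.sum(np.diff(delta)**2)                          # steering smoothness
    cost += np.sum(np.diff(a)**2)                                      # throttle smoothness
    return cost
```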