This project implements a basic autonomous driving system for an RC car, built around a Raspberry Pi 5, a USB webcam, an electronic speed controller (ESC), a speed encoder, and a steering servo. The car came with the ESC and steering servo, which we wired to the Pi's GPIO pins, and we attached a speed encoder via two extra wheels on the back that the Pi also reads through GPIO. The webcam connects over USB.
The car can keep to its lane, take turns while staying within the lane lines, and stop at "stop signs". The lane lines are blue painter's tape, and the stop sign is a red sheet of paper placed between the lanes.
Throttle Control
Using a custom device tree overlay and an encoder driver, we use GPIO pin 5 to record each time the encoder output switches from LOW to HIGH, which happens once every 1/20 of an encoder wheel revolution. The driver registers an interrupt routine on GPIO pin 5 that computes the time between the current encoder trigger and the previous one and stores it in a module parameter accessible to user programs at /sys/module/speed_driver/parameters/elapsedTime. The encoder driver code is available at driver_code/speed_driver.c and the custom overlay is at overlay.dts.
The user program main.py polls this parameter every few ticks. If elapsedTime is above a threshold value (the wheel is turning too slowly), it speeds up the throttle by increasing the duty cycle; if elapsedTime is below another threshold value (the wheel is turning too fast), it slows the throttle by decreasing the duty cycle.
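As a rough illustration of this logic, here is a minimal Python sketch of the polling step; the threshold values, duty-cycle step, and function names are placeholders rather than the tuned values in main.py (only the sysfs path comes from the driver described above):

```python
# Illustrative sketch of the throttle adjustment described above.
# Thresholds, step size, and names are placeholders, not the real tuned values.
ELAPSED_TIME_PATH = "/sys/module/speed_driver/parameters/elapsedTime"

SLOW_THRESHOLD = 60_000_000   # encoder interval above this -> wheel too slow (placeholder)
FAST_THRESHOLD = 40_000_000   # encoder interval below this -> wheel too fast (placeholder)
DUTY_STEP = 0.0001            # small change applied to the PWM duty cycle (placeholder)

def read_elapsed_time() -> int:
    """Read the latest encoder interval exposed by the speed_driver module."""
    with open(ELAPSED_TIME_PATH) as f:
        return int(f.read().strip())

def adjust_throttle(duty_cycle: float) -> float:
    """Nudge the throttle duty cycle based on the latest encoder interval."""
    elapsed = read_elapsed_time()
    if elapsed > SLOW_THRESHOLD:      # ticks far apart -> speed up
        duty_cycle += DUTY_STEP
    elif elapsed < FAST_THRESHOLD:    # ticks close together -> slow down
        duty_cycle -= DUTY_STEP
    return duty_cycle
```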
Lane Recognition
The goal of the project was to make the car follow lane lines made of blue painter's tape. To convert an image of the lane lines into an angle that informs the steering duty cycle, we first filter the image into a binary mask: pixels outside the color range of the painter's tape become black (0) and pixels within the range become white (1). This color range was determined experimentally so that it captures the painter's tape and nothing else.
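A minimal sketch of how such a mask can be produced with OpenCV's inRange; the HSV bounds shown here are illustrative placeholders, not the experimentally tuned range we used:

```python
import cv2
import numpy as np

def blue_tape_mask(frame):
    """Return a binary mask that is nonzero where pixels fall in the blue-tape color range."""
    hsv = cv2.cvtColor(frame, cv2.COLOR_BGR2HSV)
    # Placeholder bounds; the real range was tuned for our tape and lighting.
    lower_blue = np.array([90, 80, 50])
    upper_blue = np.array([130, 255, 255])
    return cv2.inRange(hsv, lower_blue, upper_blue)   # 255 inside the range, 0 outside
```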
This black-and-white mask is then run through Canny edge detection, which marks the boundaries between 0 and 1 regions with 1 values and sets everything else to 0. With only the edges of the tape remaining, we run the image through OpenCV's HoughLinesP, which extracts line segments of at least a minimum length within two regions of interest, one for the left lane and one for the right. The angles of these segments are calculated and averaged to get the final lane angle. The difference between this angle and the straight-ahead angle is the error that drives our steering PD control algorithm.
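The edge-and-line step could look roughly like the following sketch; the Canny and HoughLinesP parameters are illustrative, and roi stands for a binary mask selecting one lane's region of interest:

```python
import cv2
import numpy as np

def lane_angle(mask, roi):
    """Estimate the lane angle (degrees) from the tape mask within one region of interest."""
    edges = cv2.Canny(mask, 50, 150)       # highlight boundaries between tape and background
    edges = cv2.bitwise_and(edges, roi)    # keep only the left- or right-lane region
    segments = cv2.HoughLinesP(edges, 1, np.pi / 180, 20,
                               minLineLength=30, maxLineGap=10)
    if segments is None:
        return None
    angles = [np.degrees(np.arctan2(y2 - y1, x2 - x1))
              for x1, y1, x2, y2 in segments[:, 0]]
    return float(np.mean(angles))          # average segment angle for this lane
```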
Steering PD Control
Our steering control algorithm uses the error produced by the lane recognition step, together with two experimentally derived coefficients: a proportional gain and a derivative gain. Each tick, we set the steering duty cycle to the neutral (straight-ahead) value, plus the proportional gain times the current error, plus the derivative gain times the difference between the current and previous error. The proportional term does most of the steering; the derivative term comes into play when the proportional term alone is not enough to correct the angle, or when the error is being corrected too quickly.
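A minimal sketch of that per-tick update; KP, KD, and NEUTRAL_DUTY are placeholders for the experimentally derived gains and the neutral steering duty cycle:

```python
# Placeholder gains and neutral duty cycle; the real values were tuned on the car.
KP = 0.03           # proportional gain
KD = 0.01           # derivative gain
NEUTRAL_DUTY = 7.5  # duty cycle that points the wheels straight ahead

prev_error = 0.0

def steering_duty(error: float) -> float:
    """Compute the steering duty cycle from the lane-angle error for one tick."""
    global prev_error
    duty = NEUTRAL_DUTY + KP * error + KD * (error - prev_error)
    prev_error = error
    return duty
```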
Stop Box and Object Recognition
The stop box is recognized by taking a region of interest covering the bottom half of the view and measuring what percentage of it falls within a threshold of red. That color threshold was determined experimentally so that the stop sign registers as red but other colors do not, with some margin so the sign is not missed. The percentage threshold is also experimental: we set it to roughly the fraction of red filling the bottom half of the frame as the car approaches the stop, so that it stops in time.
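A sketch of this check, assuming an HSV red range and a placeholder percentage threshold rather than our tuned values:

```python
import cv2
import numpy as np

def stop_box_detected(frame, red_fraction_threshold=0.25):
    """Return True when enough of the bottom half of the frame falls within the red range."""
    h = frame.shape[0]
    bottom = frame[h // 2:, :]                         # region of interest: bottom half of the view
    hsv = cv2.cvtColor(bottom, cv2.COLOR_BGR2HSV)
    # Red wraps around the hue axis, so combine two ranges; bounds are placeholders.
    mask1 = cv2.inRange(hsv, np.array([0, 100, 80]), np.array([10, 255, 255]))
    mask2 = cv2.inRange(hsv, np.array([170, 100, 80]), np.array([180, 255, 255]))
    red_mask = cv2.bitwise_or(mask1, mask2)
    red_fraction = np.count_nonzero(red_mask) / red_mask.size
    return red_fraction >= red_fraction_threshold
```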
Our Vehicle
Our code also outputs plots of important data about the car: one showing error, proportional response, and derivative response versus frame number, and another showing error, steering duty cycle, and speed duty cycle versus frame number. These plots can be seen below:
Our car was tested on a track consisting of three turns and two stop signs. The car navigated this environment successfully, as can be seen in this demo video.
Image Detection
In our project, we also used YOLOv5 to identify objects with machine learning, which allows the car to detect and label objects that it sees. Video demos of our car running the code, as well as what it sees, can be seen here:
The script also estimates the framerate of the video and prints it to the console when the code is terminated. For this run, the average framerate was 2.02 frames per second.
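For reference, here is a self-contained sketch of this kind of loop, loading a pretrained YOLOv5 model through torch.hub and printing the average framerate on exit; the model size (yolov5s), camera index, and display window are assumptions, not necessarily what our script uses:

```python
import time
import cv2
import torch

# Load a pretrained YOLOv5 model via torch.hub (yolov5s chosen as an example).
model = torch.hub.load("ultralytics/yolov5", "yolov5s")

cap = cv2.VideoCapture(0)      # USB webcam (index is an assumption)
frames, start = 0, time.time()
try:
    while True:
        ok, frame = cap.read()
        if not ok:
            break
        rgb = cv2.cvtColor(frame, cv2.COLOR_BGR2RGB)          # model expects RGB input
        results = model(rgb)                                   # run detection on this frame
        annotated = cv2.cvtColor(results.render()[0],          # draw labeled boxes,
                                 cv2.COLOR_RGB2BGR)            # convert back for display
        cv2.imshow("detections", annotated)
        frames += 1
        if cv2.waitKey(1) & 0xFF == ord("q"):
            break
finally:
    elapsed = time.time() - start
    if elapsed > 0:
        print(f"Average framerate: {frames / elapsed:.2f} frames per second")
    cap.release()
    cv2.destroyAllWindows()
```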
Citations
[1] User raja_961, "Autonomous Lane-Keeping Car Using Raspberry Pi and OpenCV". Instructables. URL: https://www.instructables.com/Autonomous-Lane-Keeping-Car-Using-Raspberry-Pi-and/
[2] Team Houston Dynamics, "We Have Cybertruck at Home". URL: https://www.hackster.io/houston-dynamics-spencer-darwall-noah-elzner-agha-sukerim-anthony-zheng/we-have-cybertruck-at-home-0e429f
[3] Team M.E.G.G., "The Magnificent M.E.G.G. Car". URL: https://www.hackster.io/m-e-g-g/the-magnificent-m-e-g-g-car-28ec89
[4] OpenAI. (2025). Futuristic robotic vehicle on neon-lit path [AI-generated image]. ChatGPT. https://chat.openai.com/











