For our ME 461 (Computer Control of Mechanical Systems) final project at UIUC, we built a line-following robot-car. It uses Python OpenCV blob detection to identify an orange line and a switch-based controller to correct its driving trajectory so that it follows this line. The robot also detects crashes by thresholding the IMU's Y-axis acceleration. An additional two-line following algorithm was implemented to simulate road boundaries.
Subsystems

The project comprises two main subsystems:
Robot-car
The robot-car is powered by two DC motors, controlled by the TI F28379D Launchpad board. Motor encoder data is read as feedback for the two control variables: Vref and turn. Vref controls the motor power and speed; turn is a signed variable that steers the robot left or right. The onboard IMU monitors acceleration spikes that correspond to collisions with obstacles on the track.
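The write-up does not show the motor-mixing code itself, but a common differential-drive scheme for a (Vref, turn) pair looks like the sketch below; the sign convention (positive turn steers right) is an assumption.

# Hypothetical differential-drive mixing; the actual C implementation on the
# Launchpad may differ in scaling and sign convention.
def mix(vref, turn):
    """Combine forward command vref and signed turn into per-motor efforts."""
    u_left = vref + turn   # positive turn: left wheel speeds up...
    u_right = vref - turn  # ...right wheel slows down, steering right
    return u_left, u_right

print(mix(0.15, 0.05))  # gentle right turn: (0.2, 0.1)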
Raspberry Pi/Camera system
The Raspberry Pi/Camera system implements the Python OpenCV blob-detect function. Camera input is filtered into a binary black-and-white image using an HSV (Hue-Saturation-Value) color model: regions that fall within the specified HSV range appear white and are identified as blobs, and the X and Y coordinates of each blob's centroid are saved.
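A minimal Pi-side sketch of that pipeline, assuming OpenCV 4 and placeholder HSV bounds (the project's tuned values are not reproduced here); contours stand in for OpenCV's blob detector for brevity:

import cv2
import numpy as np

# Placeholder HSV bounds; the experimentally tuned values differ.
LOWER = np.array([5, 120, 120])
UPPER = np.array([20, 255, 255])

def blob_centroids(frame_bgr):
    """Threshold a BGR frame in HSV space; return (cx, cy, area) per blob."""
    hsv = cv2.cvtColor(frame_bgr, cv2.COLOR_BGR2HSV)
    mask = cv2.inRange(hsv, LOWER, UPPER)  # binary black-and-white image
    contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
    blobs = []
    for c in contours:
        m = cv2.moments(c)
        if m["m00"] > 0:  # skip degenerate zero-area contours
            blobs.append((m["m10"] / m["m00"], m["m01"] / m["m00"], cv2.contourArea(c)))
    return blobs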
One-line following

The Pi and the Launchpad communicate over UART, using the Launchpad's serial port on one end and pySerial on the other. pySerial packs the X and Y blob-centroid data, and the Launchpad receives these values, as shown in the controller code (F28379DSerial.c).
Experiments under varied ambient lighting established reasonable HSV thresholds for identifying the neon orange line. Python OpenCV blob-detect functions then identify the largest blob by area, and the selected blob's centroid data is sent serially to the Launchpad over the established UART link.
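A sketch of the Pi-side selection and handoff, reusing blob_centroids from above; the port name, baud rate, and ASCII framing are assumptions, since the real protocol lives in the Pi script and F28379DSerial.c:

import serial  # pySerial

ser = serial.Serial("/dev/ttyUSB0", 115200)  # placeholder port and baud rate

def send_largest(blobs):
    """Send the centroid of the largest-area blob to the Launchpad."""
    if not blobs:
        return  # no orange in view this frame
    cx, cy, _ = max(blobs, key=lambda b: b[2])  # largest blob by area
    ser.write(f"{int(cx)} {int(cy)}\n".encode())  # assumed framing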
The Launchpad controller code uses a switch function to alter Vref and turn based on the received blob-centroid coordinates. The switch implements an interconnected state flow whose states hold the executable commands for the two broad scenarios: line following and crash recovery.
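The controller itself is C on the Launchpad; purely to illustrate the state flow, a Python transliteration might look like this (the state names and the collapsed detour step are hypothetical):

LINE_FOLLOW, DETOUR = 0, 1  # hypothetical state labels

def controller_step(state, cx, crash):
    """One pass of the switch: returns (Vref, turn, next_state)."""
    if state == LINE_FOLLOW:
        if crash:
            return 0.0, 0.0, DETOUR  # hand off to the avoidance routine
        return 0.15, 0.01 * (cx - 80), LINE_FOLLOW
    # DETOUR: the open-loop sequence, collapsed here to one placeholder step
    return -0.15, 0.0, LINE_FOLLOW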
The line-following state is driven by the centroid coordinate error (centerr): the lateral difference between the X centroid coordinate and the center of the camera's view. The captured video has a resolution of 160x120 pixels, so
centerr is X_centroid_coord - (160/2), or cx - 80 in the code
Proportional control is implemented to scale turn based on the value of centerr.
turn = 0.01*(centerr)
The gain of 0.01 was determined experimentally to provide smooth line following. Vref is held constant at 0.15.
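As a quick worked example under this law, a blob centered 40 pixels right of the view center gives:

cx = 120               # blob centroid, 40 px right of center
centerr = cx - 80      # = 40
turn = 0.01 * centerr  # = 0.4, steering the robot toward the line
Vref = 0.15            # forward speed is unchanged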
One limitation of this control scheme is the large turn radius, as the robot is nearly a square foot in area. This makes it easy for the camera to lose sight of the line on tighter turns.
Crash Detection

Crashes are detected by identifying Y-acceleration spikes above an experimentally determined threshold. An open-loop avoidance command then detours the robot around the obstacle and back onto its intended path.
To determine the Y-acceleration threshold, an oscilloscope was connected to the robot-car's 6-channel IMU to read its output signals. Subsequent testing established the robot speed (Vref) needed to distinguish collision spikes from ambient noise and the accelerations of normal turns. The threshold was found to be around 7 (readouts are scaled to metric acceleration values in the code); we raised it slightly for more reliable, object-specific avoidance.
A switch-based control flow runs a detour algorithm upon detection of a collision-acceleration peak. This detour algorithm alters turn and the robot speed (Vref) to reverse, turn 90 degrees right, go straight, turn 90 degrees left, go straight, turn 90 degrees left again, go straight, and turn 90 degrees right. This traces a rectangle (of fixed dimensions) around the right side of the obstacle and returns the robot to the line (assuming the line passes straight through the obstacle). If the line is not perpendicular to the obstacle's front and back faces, or the robot collides at an angle, the robot fails to identify the obstacle appropriately and simply continues its one-line-following routine.
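The magnitudes and durations below are placeholders (the project's tuned values are not listed), but the structure, a timed open-loop schedule of (Vref, turn) commands, mirrors the detour just described:

import time

# Placeholder (Vref, turn, seconds) schedule tracing a rectangle around the
# right side of the obstacle; positive turn is assumed to steer right.
DETOUR_SCHEDULE = [
    (-0.15, 0.0, 1.0),  # reverse away from the obstacle
    (0.0, 0.3, 1.0),    # turn ~90 degrees right
    (0.15, 0.0, 2.0),   # straight, alongside the obstacle
    (0.0, -0.3, 1.0),   # turn ~90 degrees left
    (0.15, 0.0, 2.0),   # straight, passing the obstacle
    (0.0, -0.3, 1.0),   # turn ~90 degrees left again
    (0.15, 0.0, 2.0),   # straight, back over the line's path
    (0.0, 0.3, 1.0),    # turn ~90 degrees right, realigned with the line
]

def run_detour(set_motors):
    """Play the open-loop schedule through a motor-command callback."""
    for vref, turn, seconds in DETOUR_SCHEDULE:
        set_motors(vref, turn)
        time.sleep(seconds)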
Combining the line-following and crash-detection routines at first led to an instant (false) crash detection the moment the robot-car was placed down. To address this, a 5-second timer delays the crash-detection routine, so the robot can line-follow for at least 5 seconds before a crash can be registered. The delay timer can be restarted by pressing the yellow reset button on the Launchpad board, which resets the robot's entire program.
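A sketch of that arming logic, in Python for illustration (on the Launchpad it is a CPU-timer delay in C, and the threshold constant here is a placeholder):

import time

ARM_DELAY_S = 5.0        # line-follow-only window after startup/reset
ACCEL_Y_THRESHOLD = 7.0  # placeholder for the tuned, scaled threshold

_start = time.monotonic()  # effectively restarted by the reset button

def crash_detected(accel_y):
    """Ignore Y-acceleration spikes until the 5-second delay has elapsed."""
    armed = (time.monotonic() - _start) > ARM_DELAY_S
    return armed and abs(accel_y) > ACCEL_Y_THRESHOLD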
Two-line following

The OpenCV blob-detect centroid calculator iterates through a vector of all visible blobs in a frame and saves the centroids of the two largest. Instead of sharing the Y coordinate of the largest blob's centroid, the Pi now shares the X coordinate of the second-largest blob's centroid with the Launchpad, as sketched below.
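Reusing the blob_centroids sketch from earlier, the Pi-side change is small (the packing is again an assumption):

def two_line_message(blobs):
    """Return (cx1, cx2): X centroids of the two largest blobs, or None."""
    if len(blobs) < 2:
        return None  # upstream code falls back to one-line following
    largest, second = sorted(blobs, key=lambda b: b[2], reverse=True)[:2]
    return int(largest[0]), int(second[0])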
The Launchpad averages the two X coordinates and applies the pre-established proportional control, so the control law becomes

turn = 0.01*(centerr), where centerr = (cx1 + cx2)/2 - 80
On tight turns, this control law fails by veering the robot-car too far off the track; the robot then reverts to one-line following for a segment of the track before it reacquires the second line. To address this, the two lines were placed closer together so that both remain visible in the camera's FOV (field of view).
Two-line following could be improved by pointing the camera farther forward and increasing the FOV, which would reduce the control-law failure discussed above. A camera mount providing this optimal FOV could be 3D printed, allowing the lateral distance between the two lines to be increased.