The AutoWheelchair is an autonomous vehicle capable of lane keeping and of recognizing stop boxes marked on the ground. The first time the car encounters a stop box, it stops, waits several seconds, and then continues moving. The second time it encounters one, it stops permanently.
A Linux-based BeagleBone AI-64 (BBAI-64) board runs the algorithm, and a webcam feeds real-time video to the board. With this setup, the car keeps to the given lanes and turns at bends. A USB WiFi adapter connected to the BeagleBone provides SSH access and remote management of the car's maneuvers. OpenCV supplies the lane-keeping and stop-box detection capabilities.
Our team, AutoWheelchair, carried out the project as part of the ELEC 553 Mobile and Embedded Systems course at Rice University, Houston, TX, during the Fall 2023 semester.
The project is based on code and descriptions from the following sources:
1. User raja_961, “Autonomous Lane-Keeping Car Using Raspberry Pi and OpenCV,” Instructables
2. Karan V., Yufei G., Jie G., and Jikun, “COVID Debuff (Semi-Autonomous RC Car Platform)”
The AutoWheelchair first regulates the resolution of the incoming video. To match the computational capacity of the BBAI-64 and cut per-frame processing time, the 1080p webcam input is downsampled to 160x120. The lower resolution not only speeds up steering-angle updates but also suppresses the fine-detail interference that a high-resolution image would introduce.
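A minimal sketch of this capture-and-downscale step in OpenCV (the camera index and the use of cv2.resize are assumptions; only the 160x120 target comes from our design):

import cv2

cap = cv2.VideoCapture(0)  # camera index 0 is an assumption

while cap.isOpened():
    ret, frame = cap.read()  # frame arrives at the webcam's native resolution
    if not ret:
        break
    # Downsample to 160x120 before any further processing
    small = cv2.resize(frame, (160, 120))
    # ... lane detection and steering run on `small` ...

cap.release()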
To extract the target driving trajectory from the video, our team first converts each input frame from the BGR color space to HSV. We then mask the frame so that only the blue lane color remains, apply Canny edge detection to that mask, and define a polygon covering the lower half of the screen as the region of interest. Within this lower half, the HoughLinesP algorithm detects line segments in the cropped edges, as sketched below.
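A sketch of the pipeline under stated assumptions: the blue HSV bounds, Canny thresholds, and Hough parameters below are illustrative values in the style of the raja_961 tutorial, not our calibrated numbers:

import cv2
import numpy as np

def detect_line_segments(frame):
    # BGR -> HSV, then keep only the blue lane color
    hsv = cv2.cvtColor(frame, cv2.COLOR_BGR2HSV)
    lower_blue = np.array([90, 120, 0])     # illustrative bounds; tune to the tape
    upper_blue = np.array([150, 255, 255])
    mask = cv2.inRange(hsv, lower_blue, upper_blue)

    # Edge detection on the masked image
    edges = cv2.Canny(mask, 200, 400)

    # Keep only the lower half of the frame (polygon region of interest)
    height, width = edges.shape
    polygon = np.array([[(0, height // 2), (width, height // 2),
                         (width, height), (0, height)]], np.int32)
    roi = np.zeros_like(edges)
    cv2.fillPoly(roi, polygon, 255)
    cropped_edges = cv2.bitwise_and(edges, roi)

    # Probabilistic Hough transform returns candidate lane segments
    return cv2.HoughLinesP(cropped_edges, 1, np.pi / 180, threshold=10,
                           minLineLength=8, maxLineGap=4)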
With the video preparation done, we can now construct the target driving trajectory. We first classify the detected line segments as left or right lane lines based on their slopes and positions, then average the slopes and intercepts within each group to obtain one representative line per side. Converting each (slope, intercept) pair to pixel coordinates and drawing a line from the bottom of the frame to the middle yields the target driving trace, as sketched below.
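A sketch of this classify-and-average step, again patterned on the raja_961 tutorial (the one-third region split and the np.polyfit fit are assumptions rather than a verbatim copy of our code); it relies on the make_points helper shown immediately after:

import numpy as np

def average_slope_intercept(frame, line_segments):
    # Average the detected segments into at most one line per side
    lane_lines = []
    if line_segments is None:
        return lane_lines
    height, width, _ = frame.shape
    left_fit, right_fit = [], []
    boundary = 1 / 3  # illustrative split: left lane stays in the left 2/3
    left_region = width * (1 - boundary)
    right_region = width * boundary
    for segment in line_segments:
        for x1, y1, x2, y2 in segment:
            if x1 == x2:
                continue  # skip vertical segments (undefined slope)
            slope, intercept = np.polyfit((x1, x2), (y1, y2), 1)
            # In image coordinates the left lane slopes negatively
            if slope < 0 and x1 < left_region and x2 < left_region:
                left_fit.append((slope, intercept))
            elif slope > 0 and x1 > right_region and x2 > right_region:
                right_fit.append((slope, intercept))
    if left_fit:
        lane_lines.append(make_points(frame, np.average(left_fit, axis=0)))
    if right_fit:
        lane_lines.append(make_points(frame, np.average(right_fit, axis=0)))
    return lane_lines

The make_points helper converts a (slope, intercept) pair into pixel endpoints: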
def make_points(frame, line):
    height, width, _ = frame.shape
    slope, intercept = line
    y1 = height  # bottom of the frame
    y2 = int(y1 / 2)  # make points from the middle of the frame down
    if slope == 0:
        slope = 0.1  # avoid dividing by zero on horizontal segments
    x1 = int((y1 - intercept) / slope)
    x2 = int((y2 - intercept) / slope)
    return [[x1, y1, x2, y2]]

To address the inherent instability of the remote-controlled car platform, our team implemented a PD controller. This closed-loop controller keeps the vehicle consistently on the lane line, between the two lanes and close to the center of the road, thereby enhancing control around the set point (the lane center). A full PID controller applies three control terms, proportional, integral, and derivative; as the sections below explain, our design uses the proportional and derivative terms.
Proportional:
P stands for proportional. Picture the robot in a two-dimensional world with coordinates (x, y) and a target path of y = 0, so the current error from the target is y - 0 = y. Based on this error, we adjust the car's steering accordingly. The most direct method is to steer proportionally to the error: the farther the car is from the target, the larger the steering angle. In practice, however, the steering was sometimes so intense that the car overshot the target, and when it turned back it overshot the target again. To handle this case, we introduce the derivative term.
Differential:
As the error decreases, the car should already be turning back to avoid overshoot, rather than continuing to turn toward the target, which causes oscillation. The term added to the control law is ΔY/Δt; since Δt is one frame (Δt = 1), it reduces to ΔY = current_y - last_y. After tuning the PD parameters in the code, our team stabilizes the remote-control car platform almost perfectly.
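A minimal sketch of the resulting PD update, assuming the error is the lateral offset between the detected lane center and the frame center; the gains are placeholders to be tuned on the track, not our actual values:

KP = 0.4  # proportional gain (placeholder)
KD = 0.8  # derivative gain (placeholder)

last_error = 0.0

def pd_steering(error):
    # error: lateral offset of the lane center from the frame center, in pixels
    global last_error
    derivative = error - last_error  # delta-t is one frame, so dY/dt reduces to dY
    last_error = error
    # The correction is added to the neutral steering angle / PWM duty cycle
    return KP * error + KD * derivative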
The stop box detection is implemented in a relatively primitive way: the remote-controlled car platform stops when it encounters a red square on the track, and on the second encounter the controller stops the platform permanently. Detection applies a calibrated red HSV mask to the same OpenCV frame used for steering, followed by a region-of-interest (ROI) mask, as sketched below.
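A sketch of that detection step; the HSV bounds, ROI fraction, and pixel threshold are illustrative rather than our calibration (red wraps around hue 0 in HSV, so two ranges are combined):

import cv2
import numpy as np

def stop_box_visible(frame, min_pixels=500):
    # Red mask in HSV; red straddles hue 0, so combine a low and a high range
    hsv = cv2.cvtColor(frame, cv2.COLOR_BGR2HSV)
    mask = cv2.inRange(hsv, np.array([0, 100, 100]), np.array([10, 255, 255])) | \
           cv2.inRange(hsv, np.array([160, 100, 100]), np.array([179, 255, 255]))

    # ROI mask: only look at the strip just ahead of the car
    height, width = mask.shape
    roi = np.zeros_like(mask)
    roi[int(height * 0.7):, :] = 255  # bottom 30% of the frame (illustrative)
    mask = cv2.bitwise_and(mask, roi)

    # Enough red pixels inside the ROI means a stop box is in front of the car
    return cv2.countNonZero(mask) > min_pixels

On the first frame where this fires, the car waits several seconds and resumes; on the second, it latches a permanent stop.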
YOLOP:
To run drivable-space identification with YOLOP, we set up a Python environment with a compatible Torch version, downloaded the YOLOP code from its official GitHub repository, and installed the necessary dependencies. We also added a frame-rate calculation to the YOLOP part (running on a MacBook Pro with a 4-core Intel i5 CPU). It works well when we try it on the blue lane lines.
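The frame-rate calculation itself is a simple wall-clock measurement; a sketch is below, with the YOLOP forward pass left as a commented placeholder because the exact inference call depends on the repository's demo code:

import time
import cv2

cap = cv2.VideoCapture(0)  # camera index 0 is an assumption
frame_count, start = 0, time.time()

while cap.isOpened():
    ret, frame = cap.read()
    if not ret:
        break
    # run_yolop(frame)  # hypothetical hook for the YOLOP forward pass
    frame_count += 1
    elapsed = time.time() - start
    if elapsed >= 1.0:  # report once per second
        print(f"FPS: {frame_count / elapsed:.1f}")
        frame_count, start = 0, time.time()

cap.release()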
Limitation:
We also strove to sustain a consistent speed using a speed encoder. This functionality is handled by our driver, gpiod_driver.c, which measures the duration between encoder pulses and relays that data to our main.py file for further analysis; the main script then fine-tunes the speed as needed. However, we encountered an issue with the hardware board: although the code is correct, it does not function as expected on our system.
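For reference, the intended Python-side adjustment would look roughly like the sketch below; every name and constant here is illustrative, since this path never ran end-to-end on our board:

WHEEL_CIRCUMFERENCE_M = 0.2  # illustrative wheel circumference, meters
PULSES_PER_REV = 20          # illustrative encoder resolution
TARGET_SPEED_MPS = 0.5       # illustrative cruise speed
K_SPEED = 0.01               # illustrative correction gain

def adjust_throttle(pulse_interval_s, throttle):
    # Convert the time between encoder pulses (from gpiod_driver.c) into
    # linear speed, then nudge the throttle duty cycle toward the target.
    if pulse_interval_s <= 0:
        return throttle  # no valid encoder reading; leave throttle unchanged
    speed = (WHEEL_CIRCUMFERENCE_M / PULSES_PER_REV) / pulse_interval_s
    return throttle + K_SPEED * (TARGET_SPEED_MPS - speed)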
Because of the problem with the speed encoder, we could not record the speed PWM value. We could still record the steering values: in the plot, the green line shows the steer PWM and the orange line shows the steer error, which stays constant at 0.
The next plot shows our PID error responses versus frame number. The derivative response is very sensitive, while the proportional response is more stable, reflecting the algorithm of our project.
References:
D. Rothfusz, L. Ivory, R. Bose, I. Scott, and B. Stanley, “Autonomous Path Following Car,” Hackster.io, 13-Dec-2021. [Online]. Available: https://www.hackster.io/really-bad-idea/autonomous-path-following-car-6c4992. [Accessed: 08-Dec-2023].
Fredotran, “Traffic-sign-detector-yolov4: YOLOv4 with OpenCV DNN to detect four classes of traffic road signs (traffic lights, speed limit signs, crosswalks, and stop signs),” GitHub. [Online]. Available: https://github.com/fredotran/traffic-sign-detector-yolov4. [Accessed: 08-Dec-2023].
J. G. Ziegler and N. B. Nichols, “Optimum settings for automatic controllers,” Journal of Dynamic Systems, Measurement, and Control, vol. 115, no. 2B, pp. 220–222, 1993.
“OpenCV modules,” OpenCV. [Online]. Available: https://docs.opencv.org/4.x/. [Accessed: 08-Dec-2023].
raja_961, “Autonomous Lane-Keeping Car Using Raspberry Pi and OpenCV,” Instructables, 07-Oct-2022. [Online]. Available: https://www.instructables.com/Autonomous-Lane-Keeping-Car-Using-Raspberry-Pi-and/. [Accessed: 08-Dec-2023].
hustvl, “YOLOP,” GitHub. [Online]. Available: https://github.com/hustvl/YOLOP. [Accessed: 08-Dec-2023].