The following describes how the various steps and functions used in the process work and how they were tested.
A* Path Planning: The pathfinding system for the robot is based on the A* algorithm, which efficiently calculates optimal paths through environments with obstacles. The system uses a distributed architecture: the computationally intensive pathfinding runs on the robot's Raspberry Pi 4, while the robot's TI board executes the calculated path.
The robot navigates its environment using a 2D grid representation, where the map (a 14 x 13 grid of roughly 1 ft cells) distinguishes between obstacles ('x') and free space ('0'). This map is stored in character arrays (map and mapstart) that reflect the current state of the environment. When the robot needs to calculate a new path, it condenses this map data into a more compact binary format for transmission. Because our project was run in the hallway, we needed to create a new map different from the one in the starter code. The hallway's sharp 90-degree corner and the predefined box obstacle were emulated by marking those cells as obstacles with 'x's.
char map[182] = //14x13
{ 'x', 'x', 'x', 'x', 'x', 'x', 'x', '0', '0', '0', '0', '0', '0', '0',
'x', 'x', 'x', 'x', 'x', 'x', 'x', '0', '0', '0', '0', '0', '0', '0',
'x', 'x', 'x', 'x', 'x', 'x', 'x', '0', '0', '0', '0', '0', '0', '0',
'x', 'x', 'x', 'x', 'x', 'x', 'x', '0', '0', '0', '0', '0', '0', '0',
'x', 'x', 'x', 'x', 'x', 'x', 'x', '0', '0', '0', '0', '0', '0', '0',
'x', 'x', 'x', 'x', 'x', 'x', 'x', '0', '0', '0', '0', '0', '0', '0',
'0', '0', '0', '0', '0', '0', '0', '0', '0', '0', '0', '0', '0', '0',
'0', '0', '0', '0', '0', '0', '0', '0', '0', '0', '0', '0', '0', '0',
'0', '0', '0', '0', '0', '0', '0', '0', 'x', 'x', 'x', 'x', 'x', '0',
'0', '0', '0', '0', '0', '0', '0', '0', 'x', 'x', 'x', 'x', 'x', '0',
'0', '0', '0', '0', '0', '0', '0', '0', 'x', 'x', 'x', 'x', 'x', '0',
'0', '0', '0', '0', '0', '0', '0', '0', '0', '0', '0', '0', '0', '0',
'0', '0', '0', '0', '0', '0', '0', '0', '0', '0', '0', '0', '0', '0' };
The pathfinding process begins when the robot determines it needs a new path (triggered by the StartAstar flag). The robot packages its current position, destination coordinates, and the condensed map data in the SendAStarInfo structure and sends this to the Raspberry Pi 4 via serial communication. During this processing time, the robot pauses its movement (controlled by the AstarRunning flag) until the new path is received.
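A minimal sketch of this hand-off is shown below. Only the SendAStarInfo name and the StartAstar and AstarRunning flags come from our code; the structure's fields and the serial_send helper are placeholders assumed for illustration.

/* Sketch of the A* request hand-off. Field names and serial_send are
 * assumptions; SendAStarInfo, StartAstar and AstarRunning are from the code. */
#include <stdint.h>
#include <string.h>

#define MAP_SIZE 182

typedef struct {
    float   startX, startY;              /* robot's current pose            */
    float   destX,  destY;               /* goal coordinates                */
    uint8_t mapBits[(MAP_SIZE + 7) / 8]; /* map condensed to 1 bit per cell */
} SendAStarInfo;

extern char map[MAP_SIZE];
extern void serial_send(const void *buf, int len); /* UART to the Pi (assumed) */

int StartAstar   = 0;   /* set when a new path is needed    */
int AstarRunning = 0;   /* motion is paused while this is 1 */

void request_new_path(float x, float y, float destX, float destY)
{
    SendAStarInfo info;
    memset(&info, 0, sizeof info);
    info.startX = x;      info.startY = y;
    info.destX  = destX;  info.destY  = destY;

    /* Condense the character map: one bit per cell, 1 = obstacle ('x'). */
    for (int i = 0; i < MAP_SIZE; i++) {
        if (map[i] == 'x')
            info.mapBits[i / 8] |= (uint8_t)(1 << (i % 8));
    }

    serial_send(&info, sizeof info);
    AstarRunning = 1;     /* pause xy_control until the path comes back */
    StartAstar   = 0;     /* request has been issued                    */
}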
Once the external system calculates the path, it sends back a sequence of waypoints that avoid obstacles. The robot processes this information and stores the path coordinates in the pathRow and pathCol arrays, with the system supporting up to 40 waypoints. The robot's progress along this path is tracked using variables like statePos and numpts, with the xy_control function handling the movement between waypoints.
Extensive testing was conducted to tune the pathfinding parameters and ensure reliable navigation through various environments. Another key aspect was transforming between coordinate systems: the robot operates in a global coordinate frame, while the A* algorithm works on a grid-based representation. The mapping between them addresses three issues: the path sequence must be reversed (since A* returns the path from destination to start), grid columns convert directly to x-coordinates, and the y-coordinate is flipped with the operation 10 - pathRow to account for the different origins of the grid and robot frames. Without this coordinate transformation, the robot would attempt to navigate to incorrect positions in the physical world. With the coordinate systems properly bridged, the xy_control() function that moves the robot between waypoints accurately follows the optimal path calculated by the A* algorithm while avoiding obstacles in the environment.
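The conversion can be summarized in a few lines; pathRow, pathCol and numpts are the names used in our code, while the array sizes, element types, and the robotdestAstar output array are assumptions for illustration.

/* Grid-to-world conversion: reverse the destination-to-start order returned
 * by A*, map grid columns to x, and flip rows with y = 10 - row. */
#define MAX_WAYPOINTS 40

int   numpts = 0;                        /* number of waypoints in the path */
float pathRow[MAX_WAYPOINTS];            /* grid rows received from the Pi  */
float pathCol[MAX_WAYPOINTS];            /* grid columns received from the Pi */
float robotdestAstar[MAX_WAYPOINTS][2];  /* world-frame list (assumed name) */

void grid_path_to_world(void)
{
    for (int i = 0; i < numpts; i++) {
        int j = numpts - 1 - i;                    /* reverse dest->start    */
        robotdestAstar[i][0] = pathCol[j];         /* grid column -> x (ft)  */
        robotdestAstar[i][1] = 10.0f - pathRow[j]; /* flip row -> y (ft)     */
    }
}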
The robotdest array is very similar to the one we used in the lab. The main difference is that the A* algorithm used the standard robotdest array, so to iterate through waypoints after A* finished we created a second robotdest array driven by a separate xy_control. The second control uses the new array, a new statePos variable, and a path-selection flag to switch between the two controls, as sketched below.
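A small sketch of that selection logic follows; robotdest, statePos and xy_control are from our code, while the second array, second index and flag names are assumptions.

/* Path-selection sketch: pick the destination fed to xy_control from either
 * the A* list or the post-A* list depending on a flag. */
float robotdest[10][2];    /* original destination list (A* goals)       */
float robotdest2[10][2];   /* second list iterated after A* is finished  */
int   statePos  = 0;       /* progress along robotdest                   */
int   statePos2 = 0;       /* progress along robotdest2 (assumed name)   */
int   postAstar = 0;       /* path-selection flag: 0 = A* path, 1 = second list */

void select_destination(float *destX, float *destY)
{
    if (postAstar == 0) {
        *destX = robotdest[statePos][0];
        *destY = robotdest[statePos][1];
    } else {
        *destX = robotdest2[statePos2][0];
        *destY = robotdest2[statePos2][1];
    }
    /* The chosen destX/destY pair is handed to xy_control each period. */
}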
Ball Collection and Sorting: The ball collection system for the robot is based on a switch-case structure that uses multiple cases to set and control the speed, turn, motor control, and delays for the system. The gate and tongue are controlled by servo motors set by the control variables Gangle and Tangle, as seen below, which are checked and updated every 40 milliseconds.
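A sketch of the 40 ms servo update is shown below. Gangle and Tangle are the control variables named above; the 1 ms call rate and the set_gate_servo/set_tongue_servo helpers are assumptions standing in for whatever PWM call the board actually uses.

/* Push the latest commanded gate and tongue angles to the servos every 40 ms. */
float Gangle = 0.0f;     /* commanded gate servo angle   */
float Tangle = 0.0f;     /* commanded tongue servo angle */

extern void set_gate_servo(float angle);    /* assumed PWM helpers */
extern void set_tongue_servo(float angle);

void timer_isr_1ms(void)                    /* called every 1 ms (assumed) */
{
    static int ms = 0;
    if (++ms >= 40) {
        ms = 0;
        set_gate_servo(Gangle);
        set_tongue_servo(Tangle);
    }
}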
The robot switches between the various functions using a state machine that controls what state the robot is in and what action it takes under the current conditions. For the obstacle avoidance, ball collection, and sorting functions, certain parameters trigger a state change once minimum conditions are met. The first state of the robot is a check state that looks at whether any sensor meets the minimum requirements to change into its respective state. The first check is for obstacle avoidance: the robot uses the lidar sensor to slow down if it detects an obstacle 6 inches in front of it and switches to the right-wall-following state. The second and third checks use the camera to scan for magenta and green golf balls, with OpenMV processing the image to determine which color is in front of the robot. The robot switches to the magenta or green collection state when it detects at least 6 pixels of the corresponding color for 2 seconds, entering case 20 or case 30 respectively, as sketched below.
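The check state might look like the following. The state numbers, the 6-inch lidar trigger, and the 6-pixel / 2-second color condition come from the description above; the variable names and the 1 ms call rate are assumptions.

/* Check state: dispatch to wall following or to a ball-collection case. */
int   robotState    = 1;       /* current state of the state machine     */
float frontLidarFt  = 10.0f;   /* distance straight ahead, in feet       */
int   magentaPixels = 0;       /* pixel counts reported by the OpenMV cam */
int   greenPixels   = 0;
int   right_wall_follow_state = 0;

void check_state_1ms(void)
{
    static int magentaMs = 0, greenMs = 0;

    if (robotState != 1) return;

    /* Obstacle closer than 6 inches: slow down and go wall-follow. */
    if (frontLidarFt < 0.5f) {
        right_wall_follow_state = 1;
        robotState = 10;
        return;
    }

    /* Require >= 6 pixels of a color continuously for 2 s before collecting. */
    magentaMs = (magentaPixels >= 6) ? magentaMs + 1 : 0;
    greenMs   = (greenPixels   >= 6) ? greenMs   + 1 : 0;

    if (magentaMs >= 2000)      { magentaMs = 0; robotState = 20; }
    else if (greenMs >= 2000)   { greenMs   = 0; robotState = 30; }
}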
The ball collection and sorting for the magenta balls is done in cases 20 to 26 and follows a process of tracking the ball, deciding on the collection bin, grabbing the ball, and then closing the gate and resetting the collector. Case 20, shown in the next figure, is the tracking case that follows and centers the ball in front of the robot. In this state the center of the robot's camera vision is defined as colcentroid, the middle of the camera's column values. A series of if statements then sets the vref and turn values so the robot follows and approaches the ball until it is a set distance in front of the robot, determined by the number of pixels the ball is from the top of the vision range.
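A minimal sketch of that tracking behaviour is shown below. colcentroid, vref and turn are from our code; the gain, the pixel thresholds, and the names for the ball's row and column centroid are assumptions.

/* Case 20 sketch: steer toward the ball and creep forward until it is close. */
float vref = 0.0f, turn = 0.0f;
float colcentroid = 80.0f;             /* image center column (assumed resolution) */
float ballCol = 0.0f, ballRow = 0.0f;  /* ball centroid reported by the camera     */
int   robotState = 20;

void track_ball_case20(void)
{
    /* Steer so the ball centroid stays on the image center column. */
    turn = 0.01f * (colcentroid - ballCol);

    /* Move forward until the ball sits low enough (close enough) in the frame. */
    if (ballRow < 100.0f) {
        vref = 0.5f;
    } else {
        vref = 0.0f;
        robotState = 21;               /* close enough: start the collection steps */
    }
}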
Once the ball reaches the minimum row threshold, the robot changes to state 21 and follows the structure seen in the following figure. State 21 is a simple delay state that stops the robot for 3 seconds, letting the balls in the collection bins settle so they do not roll out when the gate opens. State 22 opens the gate by changing the commanded gate angle, sets the tongue angle toward the left bin for sorting, and then changes to state 24. In that case the robot moves forward for 1 second and advances to state 25, which continues moving the robot forward for another second while closing the gate behind it. Finally, case 26 takes another 2 seconds to reset the tongue and gate angles before the robot continues on its path to collect more balls. The collection process for the green balls is the same but uses the color threshold values set for green and funnels the collected balls into the right bin.
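A compact sketch of this timed sequence for the magenta balls follows. The state numbers, delays, and gate/tongue behaviour are taken from the description above; the specific angle values, the 1 ms call rate, and the variable names are assumptions.

/* Cases 21-26 sketch: settle, open gate, drive over the ball, close, reset. */
float vref = 0.0f, Gangle = 0.0f, Tangle = 0.0f;
int   robotState = 21;

void collect_magenta_1ms(void)
{
    static int ms = 0;
    ms++;

    switch (robotState) {
    case 21:                      /* 3 s pause so balls in the bin settle   */
        vref = 0.0f;
        if (ms >= 3000) { ms = 0; robotState = 22; }
        break;
    case 22:                      /* open gate, aim tongue at the left bin  */
        Gangle = 90.0f;           /* open far enough not to block the camera */
        Tangle = -45.0f;          /* left bin for magenta balls             */
        ms = 0; robotState = 24;
        break;
    case 24:                      /* drive forward over the ball for 1 s    */
        vref = 0.5f;
        if (ms >= 1000) { ms = 0; robotState = 25; }
        break;
    case 25:                      /* keep moving 1 s while closing the gate */
        vref = 0.5f;
        Gangle = 0.0f;
        if (ms >= 1000) { ms = 0; robotState = 26; }
        break;
    case 26:                      /* 2 s to reset the gate and tongue       */
        vref = 0.0f;
        Gangle = 0.0f; Tangle = 0.0f;
        if (ms >= 2000) { ms = 0; robotState = 1; }
        break;
    }
}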
Multiple tests were conducted to tune the collection thresholds, delays, and angles and to ensure a reliable, reproducible collection process. The color thresholds were set by placing the golf balls in the target environment, capturing an image with the camera, and processing it to isolate the colors while accounting for glare from lights and reflections. The collection testing served multiple purposes and was useful in troubleshooting the robot to avoid interferences. The standard test procedure was to place the two colored golf balls along the robot's path and observe how it tracked and collected them. In the initial tests the gate position blocked the camera's view of the ball, preventing the robot from leaving its control state and leaving it stuck in front of the ball. To fix this, the gate's open angle was widened to clear the camera's view. Testing also revealed that during collection some balls would roll out due to the momentum of the balls in the bin. As a result, the delay cases were added to the collection steps so the balls would not continue to roll forward during collection and while the gate is open. To further account for rolling, the case 25 procedure was added to close the gate and scoop balls back into the bin if they managed to roll forward. The ball collection and sorting was considered complete when the robot could reliably and repeatedly collect and sort the various balls as expected without getting stuck or spilling its contents.
AprilTag Detection: To allow the robot to detect AprilTags, a main.py file was provided. The only changes needed were to set runtag=1 and to change the tag.id <= comparison to the number of tags desired.
The localization algorithm starts with the detection of the AprilTag in the dead-reckoning case structure. If the AprilTag has not been detected before, the robot switches to case 60, which stops its movement for half a second (timed by a count variable). Another switch then sets the robot to case 61, where vref is set to 1.0 for another half second (again timed by the count variable). This moves the robot closer to the AprilTag to improve the accuracy of the OpenMV camera's built-in z-distance and z-angle tag measurements. Finally, the robot switches to case 62, where the distance calculation between the AprilTag and the camera takes place.
In case 62 the robot's velocity and turn speed are set to 0. A running sum of the z distance between the camera and the AprilTag is started (the readings are negative because the tag is treated as the origin and the camera lies a negative distance away from it). The sum accumulates for 5 seconds and is then divided by the number of samples counted (5000, tracked by the count variable) and by a scaling factor of 12 so that the average is in feet. Averaging reduces the error from the noise in the z distance produced by the camera. Expected x and y coordinates for the AprilTag are set in global variables, and depending on whether the robot is facing the global positive-x or negative-x direction, the robot's current location is calculated from them; these equations are sketched below. The current position of the robot is then set to the calculated values to account for drift, using x = xpred[0][0] and y = xpred[1][0]; a counter variable is set so the tag will not be repeatedly detected, and the robot switches back to zig-zag mode.
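A sketch of the case 62 math follows. The 5-second / 5000-sample average, the factor of 12, and the x = xpred[0][0], y = xpred[1][0] assignments come from the description above; the tag coordinates, the facing-direction flag, and the handling of y are assumptions.

/* Case 62 sketch: average the camera's z readings and correct the pose. */
float tagZsum = 0.0f;      /* running sum of camera z readings           */
int   count   = 0;         /* number of samples accumulated              */
float tagGlobalX = 10.0f;  /* known tag location (assumed values)        */
float tagGlobalY = 4.0f;
float x = 0.0f, y = 0.0f;  /* robot pose estimate                        */
extern float xpred[3][1];  /* estimator state used elsewhere (size assumed) */
int   facingPosX = 1;      /* 1 if the robot faces global +x, 0 if -x    */

void case62_update(float cameraZ)
{
    if (count < 5000) {                  /* accumulate samples for 5 s    */
        tagZsum += cameraZ;
        count++;
        return;
    }

    /* Average the (negative) z readings and scale by 12 to get feet. */
    float avgZft = tagZsum / (5000.0f * 12.0f);

    /* The robot sits |avgZft| from the tag along the global x axis. */
    float correctedX = facingPosX ? (tagGlobalX + avgZft)
                                  : (tagGlobalX - avgZft);
    float correctedY = tagGlobalY;       /* y handling here is an assumption */

    /* Overwrite the estimator state so the regular x = xpred[0][0];
     * y = xpred[1][0]; assignments carry the corrected pose forward. */
    xpred[0][0] = correctedX;
    xpred[1][0] = correctedY;
    x = xpred[0][0];
    y = xpred[1][0];
}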
Obstacle Avoidance: The obstacle avoidance follows a simple right-wall-following program to avoid anything in the robot's path. When the lidar detects an obstacle 6 inches in front of the robot, a flag called right_wall_follow_state is set to 1 and the robot switches to case 10, shown below in the figure. When the flag is 1 the robot turns left at a rate depending on the lidar distance in front of it; once the front distance is greater than the minimum threshold, the left turn stops and the flag changes to 2. When the flag is 2 the robot switches to right wall following and balances its distance from the right wall using a set distance and the front-right lidar values.
The turn speed is limited by a turn threshold that saturates the values and prevents the robot from spinning too quickly. If another wall appears in front below the threshold, the robot turns again and repeats the process until a full 5 seconds pass and there is at least 1.5 feet of clearance in front, at which point it returns to state 1. The thresholds and values were fine-tuned by running the right wall following in a square testing area, placing multiple obstacles in its path, and measuring how well it avoided them. In addition to the front and front-right readings from the lidar, multiple other ranges were defined, as seen in the next figure, to allow for other obstacle-avoidance processes or check parameters. These values could also have been used to keep the robot centered in hallways and passages.
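The case 10 logic might be sketched as follows. The right_wall_follow_state flag, the 6-inch trigger, the 1.5 ft exit distance, and the 5-second timeout come from the description above; the gains, the desired wall spacing, and the 1 ms call rate are assumptions.

/* Case 10 sketch: turn left until the path opens, then follow the right wall. */
float vref = 0.0f, turn = 0.0f;
float frontLidarFt = 10.0f, frontRightLidarFt = 10.0f;
int   right_wall_follow_state = 0;
int   robotState = 10;

#define TURN_SAT     2.0f     /* saturation so the robot cannot spin too fast */
#define WALL_DIST_FT 1.0f     /* desired spacing from the right wall          */

static float saturate(float v, float lim)
{
    if (v >  lim) return  lim;
    if (v < -lim) return -lim;
    return v;
}

void wall_follow_case10_1ms(void)
{
    static int ms = 0;
    ms++;

    if (right_wall_follow_state == 1) {
        /* Turn left, faster when the wall ahead is closer. */
        vref = 0.2f;
        turn = saturate(2.0f / frontLidarFt, TURN_SAT);
        if (frontLidarFt > 1.5f)
            right_wall_follow_state = 2;
    } else if (right_wall_follow_state == 2) {
        /* Hold a fixed distance from the right wall using the front-right
         * reading; a new wall ahead restarts the left turn. */
        vref = 0.5f;
        turn = saturate(1.5f * (WALL_DIST_FT - frontRightLidarFt), TURN_SAT);
        if (frontLidarFt < 0.5f)
            right_wall_follow_state = 1;
    }

    /* After 5 s with at least 1.5 ft clear ahead, resume normal driving. */
    if (ms >= 5000 && frontLidarFt > 1.5f) {
        ms = 0;
        right_wall_follow_state = 0;
        robotState = 1;
    }
}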
Ball Drop-off: As with most of the other processes, a case structure was created; this time the case structure is entered when the robot reads a specific AprilTag corresponding to a drop-off zone. The drop-off zones are color coordinated, so each of the two tags has its own case structure that drops off either the green or the magenta golf balls. The structure consists of an approach, a delay (to keep the golf balls from rolling much), a gate open and tongue actuation, a reverse case, and a gate close case. Each case structure also requires a flag to be 0; the flag is set to 1 during the drop-off sequence so the sequence runs only once, as sketched below.
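A brief sketch of that gating follows; the tag IDs, state numbers, and variable names are assumptions, and only the one-shot flag behaviour follows the description above.

/* Drop-off gating sketch: each zone's sequence starts only while its flag is 0. */
int magentaDroppedOff = 0;   /* 0 until the magenta drop-off has been run */
int greenDroppedOff   = 0;
int robotState = 1;

void on_apriltag(int tagId)
{
    /* Tag IDs 3 and 4 stand in for the two color-coded drop-off zones. */
    if (tagId == 3 && magentaDroppedOff == 0) {
        magentaDroppedOff = 1;   /* prevents re-triggering on the same tag   */
        robotState = 40;         /* approach, delay, open gate and tongue,   */
    }                            /* reverse, close gate (cases 40 onward)    */
    if (tagId == 4 && greenDroppedOff == 0) {
        greenDroppedOff = 1;
        robotState = 50;
    }
}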