This project demonstrates an autonomous humanoid robot utilizing a Particle Tachyon with a Qualcomm AI Accelerator to detect and kick balls using a YOLO-Pro model trained on Edge Impulse. The robot leverages ROS 2 for seamless integration of perception, decision-making, and motion control. The YOLO-Pro model enables real-time ball detection with high accuracy, optimized for edge deployment on the Qualcomm AI Accelerator. The system processes visual input, plans trajectories, and executes precise kicking actions autonomously. This innovative setup showcases the synergy of embodied AI, edge computing, and robotics for dynamic, real-world tasks.
Hardware
We are using a Particle Tachyon with the Qualcomm Dragonwing QCM6490 SoC. The Dragonwing QCM6490 features an octa-core Qualcomm Kryo CPU, a Qualcomm Adreno 643 GPU, and a Qualcomm Hexagon 770 DSP containing an AI accelerator capable of delivering 12 TOPS.
We selected a HiWonder AiNex Biped Humanoid robot equipped with 24 serial bus servos and a 640x480 resolution USB camera fitted on the head. It comes with a preinstalled Raspberry Pi 4, which we will swap out for the Particle Tachyon.
We opted for the Particle Tachyon because its Qualcomm AI Accelerator provides the rapid inferencing needed to detect the ball in a dynamic environment with tight response times. The Particle Tachyon shares the Raspberry Pi 4 form factor, but replaces the USB and LAN connectors with a 5G antenna. Swapping the existing Raspberry Pi 4 for the Particle Tachyon was straightforward: remove the robot's back plate, then unscrew and detach the robot controller hat and the Raspberry Pi 4.
Mount the Particle Tachyon using the four hex standoffs.
Attach the robot controller hat to the 40-pin header of the Particle Tachyon and fix it using the screws.
The robot controller hat connects to the 24 serial bus servos and supplies power to the Particle Tachyon (via the 5V power pin on the 40-pin header) and the servos from an 11.1 V, 3500 mAh LiPo battery. We assessed the power usage of the Particle Tachyon while connected over 5G, with the USB webcam attached, and with the AI accelerator running. Consumption generally remained below 800 mA, with occasional peaks reaching 1.2 A.
To configure the Particle Tachyon, follow the setup guidelines provided at https://developer.particle.io/tachyon/setup/install-setup. Once completed, SSH access to the device over WiFi should be fully enabled and operational. We chose Headless mode installation because the device will be deployed on a walking humanoid robot and does not need a desktop interface.
Qualcomm AI Engine Direct SDK Installation
To execute inference with the QNN model compiled for the Qualcomm AI Accelerator, please follow the installation instructions for the required packages outlined in my other project at the provided link.
Humanoid Robot Setup
The HiWonder AiNex driver and SDK rely on ROS 1 Noetic, which reached end-of-life on May 31, 2025. Its Kinematics library, built for Python 3.8 on Ubuntu 20.04, is incompatible with Ubuntu 22.04 or 24.04 because of their newer Python versions. Fortunately, the Particle Tachyon natively supports Ubuntu 20.04. Although ROS 2 Humble and Jazzy cannot be installed via APT on Ubuntu 20.04, ROS 2 Humble can be compiled from source. Thus, the Particle Tachyon running Ubuntu 20.04 is the optimal choice for using ROS 2 Humble together with the AiNex Kinematics library. Additionally, since the RPi.GPIO library is not supported on the Particle Tachyon, it can be substituted with Adafruit Blinka, which uses libgpiod for GPIO access and is compatible with the Particle Tachyon. We fully rewrote and ported all the provided code from ROS 1 to ROS 2.
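As a quick illustration of the substitution (a minimal sketch only, not part of the robot code; the pin name D17 is arbitrary, and the exact pin names exposed by Blinka depend on the board definition), toggling a GPIO with Blinka looks like this:
# Minimal Blinka GPIO sketch (assumes the Adafruit-Blinka package is installed:
# pip3 install Adafruit-Blinka; it uses libgpiod under the hood).
import time
import board
import digitalio

pin = digitalio.DigitalInOut(board.D17)  # D17 is an illustrative pin name
pin.direction = digitalio.Direction.OUTPUT

for _ in range(5):
    pin.value = not pin.value  # toggle the output
    time.sleep(0.5)

pin.deinit()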
ROS 2 Humble Installation
Follow the instructions below to install the essential packages needed to build ROS 2 from source.
Set Locales
$ sudo apt update && sudo apt install locales
$ sudo locale-gen en_US en_US.UTF-8
$ sudo update-locale LC_ALL=en_US.UTF-8 LANG=en_US.UTF-8
$ export LANG=en_US.UTF-8
Set the ROS 2 Apt Repository
$ sudo apt install software-properties-common
$ sudo add-apt-repository universe
$ sudo apt update && sudo apt install curl -y
$ export ROS_APT_SOURCE_VERSION=$(curl -s https://api.github.com/repos/ros-infrastructure/ros-apt-source/releases/latest | grep -F "tag_name" | awk -F\" '{print $4}')
$ curl -L -o /tmp/ros2-apt-source.deb "https://github.com/ros-infrastructure/ros-apt-source/releases/download/${ROS_APT_SOURCE_VERSION}/ros2-apt-source_${ROS_APT_SOURCE_VERSION}.$(. /etc/os-release && echo ${UBUNTU_CODENAME:-${VERSION_CODENAME}})_all.deb"
$ sudo dpkg -i /tmp/ros2-apt-source.deb
Install development tools and ROS tools
$ sudo apt update && sudo apt install -y \
python3-flake8-docstrings \
python3-pip \
python3-pytest-cov \
ros-dev-tools
$ python3 -m pip install -U \
flake8-blind-except \
flake8-builtins \
flake8-class-newline \
flake8-comprehensions \
flake8-deprecated \
flake8-import-order \
flake8-quotes \
"pytest>=5.3" \
pytest-repeat \
pytest-rerunfailures \
empy==3.3.4
Get the ROS 2 source code and install dependencies
$ mkdir -p ~/ros2_humble/src && cd ~/ros2_humble
$ vcs import --input https://raw.githubusercontent.com/ros2/ros2/humble/ros2.repos src
$ sudo apt upgrade
$ sudo rosdep init
$ rosdep update
$ rosdep install --from-paths src --ignore-src -y --skip-keys "fastcdr rti-connext-dds-6.0.1 urdfdom_headers"
Build ROS 2 Humble from source
$ cd ~/ros2_humble/
$ colcon build --symlink-install
If the build process fails due to missing Ubuntu packages, check the log to identify the missing package, install it using apt, and then resume the build process with the following command.
$ colcon build --symlink-install --packages-skip-build-finished
We omitted RViz2 and its dependencies because of conflicts with the Qualcomm graphics driver, and we won’t need it in headless mode anyway.
Data Acquisition
We need to sign up for an account at Edge Impulse Studio and create a new project. We will be using the webcam mounted on the robot's head to capture images of a ball. Install the Edge Impulse CLI tool by following the instructions at https://docs.edgeimpulse.com/tools/clis/edge-impulse-cli/installation.
Run the command below and follow the onscreen instructions to connect the device to Edge Impulse Studio for data collection.
$ edge-impulse-linux
We captured 232 images with different distances and lighting settings and labeled them using the Labeling Queue tab in Edge Impulse Studio’s Data Acquisition page.
We deliberately collected samples with motion blur and bright-light artifacts caused by camera movement, overhead lighting, and reflective surfaces, to improve the model's accuracy and prevent missed detections.
For model development, we need to create an impulse, a custom pipeline integrating image processing and machine learning models. Navigate to the Impulse Design > Create Impulse page, select Add a processing block, and choose Image to preprocess and normalize image data. On the same page, select Add a learning block and choose Object Detection (Images). We are using a 320x320 image size. Then, click the Save Impulse button.
Next, go to the Impulse Design > Image page, set the Color depth parameter to RGB, and click the Save parameters button, which redirects to another page where we click the Generate features button. Feature generation usually takes a couple of minutes to complete.
We can see the 2D visualization of the generated features in the Feature Explorer.
To train the model, navigate to the Impulse Design > Object detection page. The object detection settings we selected are displayed below.
We opted for the basic spatial and color space augmentations in the Advanced training settings.
We selected the latest YOLO-Pro model, ideal for detecting balls of varying sizes due to different distances from the camera, and capable of running efficiently on the Qualcomm AI Accelerator. Click on the Save & train button to begin training.
Note: YOLO-Pro is currently in active development. Edge Impulse does not guarantee accuracy and does not encourage the use of this model in production.
Once training is finished, the training performance is shown as follows. The model attained a 98.7% precision score on the training data.
To evaluate the model's performance on the test dataset, navigate to the Model testing page and click the Classify All button. The model shows improved performance on unseen data, achieving 100% accuracy.
As the model will run inference using the Qualcomm AI Accelerator on the Particle Tachyon, we chose the Linux (AARCH64 with Qualcomm QNN) option on the Deployment page.
Select the Quantized (int8) option for Model Optimizations, since the Qualcomm AI Accelerator is incompatible with float32 models.
Click the Build button to compile and download the EIM (Edge Impulse Model) binary.
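Before wiring the model into ROS 2, a quick standalone check confirms the EIM loads and initializes on the device (a minimal sketch; the filename shown is the one used later in this project, and the .eim file must be made executable with chmod +x first):
# Quick sanity check that the downloaded EIM initializes on the device.
import sys
from edge_impulse_linux.image import ImageImpulseRunner

model_file = sys.argv[1] if len(sys.argv) > 1 else "object_detection-linux-aarch64-qnn-v42.eim"
runner = ImageImpulseRunner(model_file)
try:
    model_info = runner.init()
    # Reports input size, labels, resize mode, and other model parameters.
    print(model_info["model_parameters"])
finally:
    runner.stop()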
Application Development
The application is built as distributed ROS 2 nodes, adhering to the separation-of-concerns principle. It captures images from a camera, processes them to detect a ball, and uses the inference results to track and approach the ball with precise gait control. The results are streamed over HTTP for real-time monitoring. This modular design ensures a clear division of tasks, enhancing efficiency and maintainability. Each node handles a specific function, from image capture to motion control, while integrating seamlessly to achieve the application's objectives.
Here is the code for the object_detection node.
import rclpy
import cv2
import os
from rclpy.node import Node
from sensor_msgs.msg import Image
from std_msgs.msg import Header
from vision_msgs.msg import Detection2DArray, Detection2D, ObjectHypothesisWithPose
from cv_bridge import CvBridge
from edge_impulse_linux.image import ImageImpulseRunner
class ObjectDetectionNode(Node):
def __init__(self):
super().__init__("object_detection")
self.declare_parameter("resource_path", "")
resource_path = (
self.get_parameter("resource_path").get_parameter_value().string_value
)
model_file = os.path.join(
resource_path, "object_detection-linux-aarch64-qnn-v42.eim"
)
self.cvbridge = CvBridge()
self.sub = self.create_subscription(
Image, "/camera/image_rect_color", self.image_rgb_callback, 10
)
self.pub_dets = self.create_publisher(
Detection2DArray, "/edge_impulse/detections", 10
)
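# Load the Edge Impulse EIM model; the QNN build runs inference on the Qualcomm AI Accelerator.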
self.runner = ImageImpulseRunner(model_file)
model_info = self.runner.init()
resizeMode = model_info["model_parameters"].get(
"image_resize_mode", "not-reported"
)
self.get_logger().info(f"resizeMode = {resizeMode}")
def image_rgb_callback(self, msg):
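# Convert the ROS image to OpenCV, extract features using the Studio's resize settings,
# run inference, and republish the bounding boxes as a Detection2DArray.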
img = self.cvbridge.imgmsg_to_cv2(msg)
features, cropped = self.runner.get_features_from_image_auto_studio_settings(img)
res = self.runner.classify(features)
vision_msg_dets = Detection2DArray()
vision_msg_dets.header = msg.header
for bb in res["result"]["bounding_boxes"]:
det2D = Detection2D()
# det2D.header.stamp = ts
objHyp = ObjectHypothesisWithPose()
det2D.bbox.center.position.x = float(bb["x"] + bb["width"] / 2)
det2D.bbox.center.position.y = float(bb["y"] + bb["height"] / 2)
det2D.bbox.size_x = float(bb["width"])
det2D.bbox.size_y = float(bb["height"])
objHyp.hypothesis.class_id = str(bb["label"])
objHyp.hypothesis.score = float(bb["value"])
det2D.results.append(objHyp)
vision_msg_dets.detections.append(det2D)
self.pub_dets.publish(vision_msg_dets)
def destroy_node(self):
if self.runner:
self.runner.stop()
if rclpy.ok():
self.get_logger().info("Object detection node stopped")
super().destroy_node()
def main(args=None):
rclpy.init(args=args)
object_detection_node = ObjectDetectionNode()
try:
rclpy.spin(object_detection_node)
except KeyboardInterrupt:
pass
finally:
object_detection_node.destroy_node()
if rclpy.ok():
rclpy.shutdown()
if __name__ == "__main__":
main()
Here is the code for the find_and_kick_ball node.
import rclpy
import math
from rclpy.node import Node
from rclpy.duration import Duration
from std_msgs.msg import Int32MultiArray, MultiArrayDimension
from vision_msgs.msg import Detection2DArray
from ainex_sdk import pid, common
from find_and_kick_ball.pid_track import PIDTrack
from find_and_kick_ball.approach_object import ApproachObject
from ainex_interfaces.srv import SetServosPosition, GetServosPosition, RunAction
from ainex_kinematics.gait_manager import GaitManager
class KickBallNode(Node):
def __init__(self):
super().__init__("kick_ball_node")
self.img_width = 640
self.img_height = 480
self.start = True
self.left_shot_action_name = "left_shot"
self.right_shot_action_name = "right_shot"
self.detections = []
self.count_miss = 0
self.start_index = 0
self.start_find_ball = False
self.head_pan_init = 500
self.head_tilt_init = 300
self.head_time_stamp = self.get_clock().now()
self.create_subscription(
Detection2DArray,
"/edge_impulse/detections",
self.object_detection_callback,
1,
)
self.run_action_client = self.create_client(RunAction, "run_action")
self.set_servos_pos_client = self.create_client(
SetServosPosition, "set_servos_position"
)
self.get_servos_pos_client = self.create_client(
GetServosPosition, "get_servos_position"
)
self.wait_for_services()
self.declare_parameter("calib_config_file", "")
self.calib_config_file = (
self.get_parameter("calib_config_file").get_parameter_value().string_value
)
self.calib_config = common.get_yaml_data(self.calib_config_file)
self.rl_dis = None
self.ud_dis = None
self.last_rl_dis = None
self.last_ud_dis = None
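# GaitManager drives the walking gait; ApproachObject implements the closed-loop approach
# (step ranges and stop thresholds) used to line the ball up for a kick.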
self.gait_manager = GaitManager(self)
self.approach_object = ApproachObject(
self.gait_manager, self.calib_config, step_mode=0
)
self.approach_object.update_gait(dsp=[400, 0.2, 0.02])
self.approach_object.update_stop_count(1)
self.approach_object.update_gait_range(x_range=[-0.013, 0.013])
self.approach_object.update_approach_stop_value(30, 0, 3)
# self.approach_object.update_approach_stop_value(20, 0, 3)
self.head_pan_range = [125, 875]
self.head_tilt_range = [260, 500]
# self.pid_rl = pid.PID(0.1, 0.0, 0.001)
# self.pid_ud = pid.PID(0.1, 0.0, 0.001)
self.pid_rl = pid.PID(0.05, 0.0, 0.005)
self.pid_ud = pid.PID(0.05, 0.0, 0.005)
self.head_pan_init = 500
self.head_tilt_init = 300
self.rl_track = PIDTrack(self.pid_rl, self.head_pan_range, self.head_pan_init)
self.ud_track = PIDTrack(self.pid_ud, self.head_tilt_range, self.head_tilt_init)
self.yaw_stop = 40
self.find_ball_position = [
[650, 300, 2000],
[650, 500, 2000],
[500, 500, 2000],
[350, 500, 2000],
[350, 300, 2000],
]
self.call_run_action("walk_ready")
self.init_action(self.head_pan_init, self.head_tilt_init)
def init_action(self, head_pan_init, head_tilt_init, delay=200):
self.call_set_servos_position(
delay, [[23, head_pan_init], [24, head_tilt_init]]
)
self.gait_manager.stop()
self.get_clock().sleep_for(Duration(seconds=delay / 1000))
def object_detection_callback(self, msg):
self.detections = msg.detections
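# Map detections from the 320x320 model input back to 640x480 frame coordinates:
# the frame is center-cropped to 480x480 (80 px off each side) before resizing to 320x320,
# so scale by 480/320 and add 80 on x, then apply the calibrated center offset.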
for detection in self.detections:
detection.bbox.center.position.x = (
detection.bbox.center.position.x * 480 / 320 + 80
)
detection.bbox.center.position.y = (
detection.bbox.center.position.y * 480 / 320
)
detection.bbox.center.position.x -= self.calib_config["center_x_offset"]
def head_track_process(self, detection):
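# Head tracking: PID controllers convert the ball's pixel error from the image center
# into pan/tilt pulses for servos 23 and 24, keeping the ball centered in view.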
if abs(detection.bbox.center.position.x - self.img_width / 2) < 10:
detection.bbox.center.position.x = float(self.img_width / 2)
if abs(detection.bbox.center.position.y - self.img_height / 2) < 10:
detection.bbox.center.position.y = float(self.img_height / 2)
rl_dis = self.rl_track.track(
detection.bbox.center.position.x, self.img_width / 2
)
ud_dis = self.ud_track.track(
detection.bbox.center.position.y, self.img_height / 2
)
if rl_dis is None or ud_dis is None:
self.get_logger().error(
f"Invalid PID output: rl_dis={rl_dis}, ud_dis={ud_dis}"
)
return None, None
try:
servo_positions = [[23, int(rl_dis)], [24, int(ud_dis)]]
# self.call_set_servos_position(20, servo_positions)
self.call_set_servos_position(50, servo_positions)
# self.get_logger().info(f'head track: {rl_dis}, {ud_dis}')
return rl_dis, ud_dis
except ValueError as e:
self.get_logger().error(f"Error converting PID outputs to int: {e}")
return None, None
def body_track_process(self, rl_dis, ud_dis, detection):
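# Body tracking: walk toward the ball using the head tilt as a distance proxy and the head pan
# as a bearing. Once ApproachObject reports the approach is complete, disable the gait and
# run the left or right kick action depending on which side the ball is on (pan > 500 -> left).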
if detection is not None:
yaw_stop = 500 + math.copysign(self.yaw_stop, rl_dis - 500)
yaw_stop_ = (yaw_stop - rl_dis) / 9
if 15 < abs(yaw_stop - rl_dis) < 27:
yaw_stop_ = math.copysign(4, yaw_stop - rl_dis)
if self.approach_object.process(
500 - ud_dis,
0 + self.calib_config["center_x_offset"],
yaw_stop_,
self.head_tilt_range[0],
0,
0,
self.img_width,
self.img_height,
):
# self.get_logger().info('diable gait_manager')
self.gait_manager.disable()
if rl_dis > 500:
self.call_run_action(self.left_shot_action_name)
else:
self.call_run_action(self.right_shot_action_name)
def find_ball_process(self):
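# Ball search: when no ball is detected, sweep the head through preset pan/tilt positions
# while the gait manager keeps the robot stepping, until a detection comes in.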
if self.get_clock().now() > self.head_time_stamp:
if self.start_index > len(self.find_ball_position) - 1:
self.start_index = 0
rl_dis = self.find_ball_position[self.start_index][0]
ud_dis = self.find_ball_position[self.start_index][1]
self.rl_track.update_position(rl_dis)
self.ud_track.update_position(ud_dis)
duration = self.find_ball_position[self.start_index][2]
self.call_set_servos_position(duration, [[23, rl_dis], [24, ud_dis]])
self.gait_manager.move(2, 0, 0, 5)
self.head_time_stamp = self.get_clock().now() + Duration(
seconds=int(self.find_ball_position[self.start_index][2] / 1000)
)
self.start_index += 1
def run(self):
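# Main loop: when a detection is available, track it with the head and approach/kick;
# after more than 20 consecutive iterations without a detection, switch to the search behaviour.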
while rclpy.ok():
rclpy.spin_once(self, timeout_sec=0.1)
if self.start:
detection = None
for detection in self.detections:
self.rl_dis, self.ud_dis = self.head_track_process(detection)
self.get_logger().info(
f"[{detection.bbox.center.position.x}, {detection.bbox.center.position.y}] rl:{self.rl_dis} ud:{self.ud_dis}"
)
if self.rl_dis is not None:
self.body_track_process(self.rl_dis, self.ud_dis, detection)
self.rl_dis = None
self.count_miss = 0
self.start_find_ball = False
else:
if not self.start_find_ball:
self.count_miss += 1
if self.count_miss > 20:
self.count_miss = 0
self.start_find_ball = True
self.start_index = 0
else:
self.find_ball_process()
self.get_clock().sleep_for(Duration(seconds=0.01))
def wait_for_services(self):
services = [
("run_action", self.run_action_client),
("set_servos_pos", self.set_servos_pos_client),
("get_servos_pos", self.get_servos_pos_client),
]
for service_name, client in services:
while not client.wait_for_service(timeout_sec=1.0):
self.get_logger().info(f"Waiting for service {service_name}...")
self.get_logger().info(f"Service {service_name} is available.")
def call_set_servos_position(self, duration, servo_positions):
request = SetServosPosition.Request()
request.duration = duration
request.positions = Int32MultiArray()
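# Pack [servo_id, position] pairs as a flat Int32MultiArray with a 2-D layout
# (servos x id_pos) expected by the set_servos_position service.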
request.positions.layout.dim = [
MultiArrayDimension(
label="servos",
size=len(servo_positions),
stride=2 * len(servo_positions),
),
MultiArrayDimension(label="id_pos", size=2, stride=2),
]
request.positions.data = [val for pair in servo_positions for val in pair]
future = self.set_servos_pos_client.call_async(request)
rclpy.spin_until_future_complete(self, future)
if future.result() is not None:
response = future.result()
# self.get_logger().info(f'SetServosPosition response: success={response.success}, message="{response.message}"')
else:
self.get_logger().error(
f"SetServosPosition service call failed: {future.exception()}"
)
def call_get_servos_position(self, servo_ids):
request = GetServosPosition.Request()
request.servo_ids = servo_ids
future = self.get_servos_pos_client.call_async(request)
rclpy.spin_until_future_complete(self, future)
if future.result() is not None:
response = future.result()
# Parse the Int32MultiArray response into [[servo_id, pulse], ...]
positions = []
if response.success:
dims = response.positions.layout.dim
if len(dims) == 2 and dims[1].size == 2:
num_servos = dims[0].size
stride = dims[1].stride
data = response.positions.data
for i in range(num_servos):
start_idx = i * stride
servo_id = data[start_idx]
pulse = data[start_idx + 1]
positions.append([servo_id, pulse])
self.get_logger().info(
f"GetServosPosition response: success={response.success}, "
f'message="{response.message}", positions={positions}'
)
return positions
else:
self.get_logger().error(
f"GetServosPosition service call failed: {future.exception()}"
)
return []
def call_run_action(self, action_name):
request = RunAction.Request()
request.action_name = action_name
future = self.run_action_client.call_async(request)
rclpy.spin_until_future_complete(self, future)
if future.result() is not None:
response = future.result()
# self.get_logger().info(f'RunAction response: success={response.success}, message="{response.message}"')
return response
else:
self.get_logger().error(
f"RunAction service call failed: {future.exception()}"
)
return None
def destroy_node(self):
try:
self.start = False
# self.init_action(self.head_pan_init, self.head_tilt_init)
# self.call_run_action('walk_ready')
if rclpy.ok():
self.get_logger().info("Shutting down.")
except Exception as e:
if rclpy.ok():
self.get_logger().warning(f"Failed to destroy node: {e}")
super().destroy_node()
def main(args=None):
rclpy.init(args=args)
kick_ball_node = KickBallNode()
try:
kick_ball_node.run()
except KeyboardInterrupt:
pass
finally:
kick_ball_node.destroy_node()
if rclpy.ok():
rclpy.shutdown()
if __name__ == "__main__":
main()
The functionality of the find_and_kick_ball node can be understood through the flowchart diagram shown below.
The complete project code can be found in the GitHub repository listed in the Code section. Copy or clone the code into the ws_ainex_robot directory and build it.
$ cd ws_ainex_robot
$ colcon build
Launch the Application
Execute the following commands to set up the ROS 2 workspace environment.
$ source /home/particle/ros2_humble/install/setup.bash
$ source /home/particle/ws_ainex_robot/install/setup.bash
Here is the launch file code.
from launch import LaunchDescription
from launch.actions import DeclareLaunchArgument, IncludeLaunchDescription
from launch.launch_description_sources import PythonLaunchDescriptionSource
from launch.substitutions import LaunchConfiguration
from launch_ros.actions import Node
from ament_index_python.packages import get_package_share_directory
import os
def generate_launch_description():
find_and_kick_ball_pkg = get_package_share_directory('find_and_kick_ball')
calib_config_file = os.path.join(find_and_kick_ball_pkg, 'config/calib.yaml')
action_group_path = os.path.join(find_and_kick_ball_pkg, 'config/action_groups')
ainex_controller_launch = IncludeLaunchDescription(
PythonLaunchDescriptionSource(
os.path.join(
get_package_share_directory('ainex_kinematics'),
'launch', 'kinematics.launch.py'
)
)
)
object_detection_launch = IncludeLaunchDescription(
PythonLaunchDescriptionSource(
os.path.join(
get_package_share_directory('find_and_kick_ball'),
'launch', 'object_detection_node.launch.py'
)
)
)
# Define the kick_ball node
kick_ball_node = Node(
package='find_and_kick_ball',
executable='find_and_kick_ball_node',
name='find_and_kick_ball',
output='screen',
parameters=[
{
'calib_config_file' : calib_config_file,
}
]
)
# Return the launch description
return LaunchDescription([
ainex_controller_launch,
object_detection_launch,
kick_ball_node
])
Execute the following command to launch the application.
$ ros2 launch find_and_kick_ball find_and_kick_ball_node.launch.py
We can monitor the detection results on the /edge_impulse/detections topic.
$ ros2 topic echo /edge_impulse/detections
header:
stamp:
sec: 1760333393
nanosec: 249222000
frame_id: default_cam
detections:
bbox:
center:
position:
x: 134.5
y: 166.5
theta: 0
size_x: 77
size_y: 71
results:
- hypothesis:
class_id: sports ball
score: 0.9219977259635925
pose:
pose:
position:
x: 0
y: 0
z: 0
orientation:
x: 0
y: 0
z: 0
w: 1
covariance:
- 0
- 0
- 0
- 0
- 0
- 0
- 0
- 0
- 0
- 0
- 0
- 0
- 0
- 0
- 0
- 0
- 0
- 0
- 0
- 0
- 0
- 0
- 0
- 0
- 0
- 0
- 0
- 0
- 0
- 0
- 0
- 0
- 0
- 0
- 0
- 0
id: ''
ROS 2 Nodes
Below is a concise description of the ROS 2 nodes used in the application:
/camera/rectify_color: Rectifies raw camera images using calibration data.
/find_and_kick_ball: Subscribes to the detections topic, decides on approaching, and initiates ball-kicking actions.
/controller_node: Runs a control loop at 50 Hz to manage robot motion and gait through a low-level serial servo driver.
/object_detection: Detects balls in the rectified image stream using the Edge Impulse model.
/image_bbox_node: Subscribes to rectified images and detection results, draws bounding boxes, and streams the annotated images.
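The image_bbox_node source is not reproduced here; the sketch below shows one way it could look, assuming it republishes annotated frames on an /image_bbox topic (the topic name and the simple latest-detection caching are illustrative assumptions, not taken from the project code).
import rclpy
import cv2
from rclpy.node import Node
from sensor_msgs.msg import Image
from vision_msgs.msg import Detection2DArray
from cv_bridge import CvBridge


class ImageBBoxNode(Node):
    def __init__(self):
        super().__init__("image_bbox_node")
        self.cvbridge = CvBridge()
        self.latest_detections = []
        self.create_subscription(
            Detection2DArray, "/edge_impulse/detections", self.detections_callback, 10
        )
        self.create_subscription(
            Image, "/camera/image_rect_color", self.image_callback, 10
        )
        # web_video_server can stream any sensor_msgs/Image topic; /image_bbox is an assumed name.
        self.pub_image = self.create_publisher(Image, "/image_bbox", 10)

    def detections_callback(self, msg):
        # Cache the most recent detections; they are drawn on every incoming frame.
        self.latest_detections = msg.detections

    def image_callback(self, msg):
        img = self.cvbridge.imgmsg_to_cv2(msg, desired_encoding="bgr8")
        # Detections are in the 320x320 model-input frame: the 640x480 image is
        # center-cropped to 480x480 (80 px off each side) and resized to 320x320,
        # so reproduce that here and draw the boxes directly.
        img = cv2.resize(img[:, 80:560], (320, 320))
        for det in self.latest_detections:
            cx, cy = det.bbox.center.position.x, det.bbox.center.position.y
            w, h = det.bbox.size_x, det.bbox.size_y
            pt1 = (int(cx - w / 2), int(cy - h / 2))
            pt2 = (int(cx + w / 2), int(cy + h / 2))
            cv2.rectangle(img, pt1, pt2, (0, 255, 0), 2)
            if det.results:
                label = det.results[0].hypothesis.class_id
                score = det.results[0].hypothesis.score
                cv2.putText(img, f"{label} {score:.2f}", (pt1[0], max(pt1[1] - 5, 10)),
                            cv2.FONT_HERSHEY_SIMPLEX, 0.5, (0, 255, 0), 1)
        out = self.cvbridge.cv2_to_imgmsg(img, encoding="bgr8")
        out.header = msg.header
        self.pub_image.publish(out)


def main(args=None):
    rclpy.init(args=args)
    node = ImageBBoxNode()
    try:
        rclpy.spin(node)
    except KeyboardInterrupt:
        pass
    finally:
        node.destroy_node()
        if rclpy.ok():
            rclpy.shutdown()


if __name__ == "__main__":
    main()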
The diagram of the node graph is provided below.
The First-Person View (FPV) of the detection results on the 320x320 pixel images can be viewed in a web browser at http://<tachyon-ip-address>:8080 once the web_video_server node is started.
$ ros2 run web_video_server web_video_server
This project showcases the power of combining Edge Impulse's AI tooling with the Particle Tachyon's efficient on-device processing on the Qualcomm AI Accelerator to enable real-time, autonomous robotic tasks. ROS 2 provides low-latency communication, enhanced interoperability across robotic platforms, and streamlined development with reusable components. Future enhancements could include multi-ball tracking or adaptive learning from missed kicks to improve accuracy in dynamic environments.