Today, we will demonstrate a highly automated simulation scenario showcasing an efficient sorting system in which multiple robotic arms and a conveyor belt work in coordination. In this scenario, robotic arms use visual recognition to classify objects and place them accurately in designated locations through precise mechanical operations. This system not only improves sorting speed and accuracy but also illustrates the significant potential of modern automation technology in the industrial sector. Whether it is handling large quantities of everyday items or operating in complex industrial processes, this automated sorting solution demonstrates exceptional flexibility and efficiency.
Scenario Description
In this highly automated sorting scenario, the main equipment includes two robotic arms and an 800mm conveyor belt. The loading robot on the right side is responsible for identifying and grabbing the marked objects and placing them onto the conveyor belt. The conveyor belt transports the marked objects into the working range of the unloading robot on the left side. The unloading robot then identifies the marked objects, sorts them according to the classification requirements, and places them neatly in the designated areas.
Next, we will briefly introduce the relevant parameters of the products.
Products
The ultraArm is a 4-DOF robotic arm with a classic all-metal structure and a footprint only half the size of an A4 sheet of paper. It is equipped with high-performance stepper motors, achieving a repeat positioning accuracy of ±0.1mm with high stability.
The high-performance stepper motors can operate continuously 24/7 while maintaining excellent performance, making them an ideal choice for highly automated scenarios.
This conveyor belt system is also driven by stepper motors and requires an Arduino Mega 2560 development board as the controller. It serves as the equipment that transports objects, providing reliable and efficient conveyance within the automated sorting system.
The camera, an essential part of any machine vision setup, is the device that detects the marked objects. A USB camera provides the images, and machine vision algorithms determine the exact position and coordinates of the marked objects. This information is then fed back to the robotic arm to execute the grabbing operation.
The entire project is divided into several functional modules to realize the automated sorting scenario.
Let's take a closer look at how each functional module is implemented in the code.
Visual Recognition Module
In this project, the marked objects carry ArUco codes, a widely used type of binary square fiducial marker, primarily used in augmented reality and robot navigation. ArUco codes have several key features:
1. Ease of Detection and Recognition: ArUco codes are designed to be easily detected and recognized in images.
2. Uniqueness and Error Resistance: Each ArUco code has a unique ID and possesses error-correction capabilities.
3. Pose Estimation: ArUco codes can be used to recognize and locate objects, as well as to estimate the camera's pose (position and orientation) relative to the marker.
4. Open Source and Easy to Use: The OpenCV library fully supports ArUco codes, including generation, detection, and decoding (a short generation sketch follows this list).
5. Flexibility and Diversity: ArUco codes can be generated in various sizes and complexities to suit different application needs.
6. Low Cost: Generating and using ArUco codes is very cost-effective. They can simply be printed on paper or applied to the surface of objects, with no need for expensive hardware.
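As an illustration of point 4, here is a minimal generation sketch using OpenCV's legacy aruco API (the same API the detection code below uses); the marker ID 23 and the 200-pixel output size are arbitrary choices for this example:

import cv2 as cv
import cv2.aruco as aruco

# Generate a 6x6 ArUco marker with ID 23 as a 200x200-pixel image
aruco_dict = aruco.Dictionary_get(aruco.DICT_6X6_250)
marker = aruco.drawMarker(aruco_dict, 23, 200)
cv.imwrite("aruco_23.png", marker)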
# import libs
import cv2 as cv
import cv2.aruco as aruco

# load the ArUco dictionary and detector parameters (legacy OpenCV aruco API)
aruco_dict = aruco.Dictionary_get(aruco.DICT_6X6_250)
aruco_params = aruco.DetectorParameters_create()

# grayscale conversion and marker detection; img is a BGR frame from the USB camera
gray = cv.cvtColor(img, cv.COLOR_BGR2GRAY)
corners, ids, rejected_points = aruco.detectMarkers(
    gray, aruco_dict, parameters=aruco_params
)

# read the ID of the first detected marker
if len(corners) > 0 and ids is not None:
    marker_id = int(ids[0][0])
Determining the pose of the ArUco codes is crucial for grasping, as it provides the feedback the control algorithm needs to adjust the robot's actions. After pose estimation, the data is transformed and compensated to calculate and adjust the angles, yielding the final position and pose angle. This ensures precise and accurate operation by the robotic arm, improving the efficiency and effectiveness of the automated sorting system.
# pose estimation (marker side length: 0.022 m)
ret = cv.aruco.estimatePoseSingleMarkers(
    corners, 0.022, self.camera_matrix, self.dist_coeffs
)
(rvec, tvec) = (ret[0], ret[1])

# position calculation: convert to mm and apply the hand-eye offsets
xyz = tvec[0, 0, :]
xyz = [
    round(xyz[0] * 1000 + self.pump_x, 2),
    round(xyz[1] * 1000 + self.pump_y, 2),
    round(xyz[2] * 1000, 2),
]

# rotation vector processing (requires numpy, imported as np)
try:
    rvec = np.reshape(rvec, (3, 1))
except ValueError as e:
    # fall back to a default rotation vector if reshaping fails
    print("reshape error:", e)
    print("rvec1=", rvec)
    rvec = np.array([[[-2.86279729, -0.00687534, -0.05316529]]])
    print("rvec2=", rvec)

# calculate the rotation matrix and Euler angles; keep the yaw component
rotation_matrix, _ = cv.Rodrigues(rvec)
euler_angles = cv.RQDecomp3x3(rotation_matrix)[0]
yaw_angle = int(euler_angles[2])

# draw the detected markers and return the result after 100 frames
cv.aruco.drawDetectedMarkers(img, corners, ids)
if num < 100:
    num += 1
elif num == 100:
    cv.destroyAllWindows()
    print("final_x:", xyz[0])
    print("final_y:", xyz[1])
    print("final_yaw_angle=", -yaw_angle)
    return xyz[0], xyz[1], -yaw_angle, marker_id
Before this, hand-eye calibration is required. This calibration determines the relative position and orientation between the camera and the robot's end effector, so that coordinates detected in the camera frame can be converted into coordinates the arm can move to.
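The code above folds this relationship into the fixed offsets pump_x and pump_y. Below is a minimal sketch of how such offsets could be measured; the calibrate_offset helper and the sample coordinates are hypothetical, assuming the camera is fixed relative to the arm and only a planar translation offset is needed:

import numpy as np

# Hypothetical helper: the arm is jogged to touch a reference marker at a
# known coordinate, the camera detects the same marker, and the difference
# gives the fixed offset (pump_x, pump_y) added to tvec above.
def calibrate_offset(robot_xy, camera_xy):
    # robot_xy: marker position in the arm's frame (mm)
    # camera_xy: the same marker as reported by the camera (mm)
    return np.round(np.subtract(robot_xy, camera_xy), 2)

# Example with made-up coordinates:
pump_x, pump_y = calibrate_offset((200.0, -35.5), (180.2, -30.1))
print(pump_x, pump_y)  # 19.8 -5.4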
Robotic Arm Control Module
The ultraArm is controlled through a Python library called pymycobot. Once the environment is set up, you can use the library to control the robotic arm. Below is sample code for basic control:

from pymycobot.ultraArm import ultraArm

# Create an instance; pass the serial port the arm is connected to,
# e.g. "COM3" on Windows or "/dev/ttyUSB0" on Linux
ua = ultraArm("COM3", 115200)

# Angle control: move the joints to the given angles (example values)
ua.send_angles([0.0, 0.0, 0.0], 50)

# Coordinate control; mode determines whether to move in a straight line
# (example values; adjust to your workspace)
ua.send_coords([230.0, 0.0, 100.0], 50, 0)

# Vacuum pump control via GPIO (active-low): 1 - open; 0 - close
def pub_pump(flag):
    if flag:
        ua.set_gpio_state(0)
    else:
        ua.set_gpio_state(1)

pub_pump(True)
Controlling the robotic arm is straightforward with simple function calls. However, it is important to design the movement trajectory carefully to avoid collisions with objects and to accurately reach the points derived from the ArUco markers. This includes defining starting positions, grabbing points, and other critical waypoints.
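As a sketch of what such a trajectory might look like, the hypothetical helper below approaches each target from a safe height before descending to grab it. The heights, speeds, delays, and the pick_at name are illustrative assumptions, not part of the original project code:

import time

SAFE_Z = 120.0  # assumed clearance height in mm
GRAB_Z = 35.0   # assumed pick height in mm

# Hypothetical pick routine: hover, descend in a straight line, grab, lift
def pick_at(ua, x, y, speed=50):
    ua.send_coords([x, y, SAFE_Z], speed, 0)  # hover above the target
    time.sleep(1.5)                           # crude wait for the move to finish
    ua.send_coords([x, y, GRAB_Z], speed, 1)  # descend straight down
    time.sleep(1.5)
    ua.set_gpio_state(0)                      # pump on (active-low GPIO)
    ua.send_coords([x, y, SAFE_Z], speed, 1)  # lift straight up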
Conveyor Control Module
The conveyor belt is driven by a stepper motor and controlled by a microcontroller (an Arduino Mega 2560). The Mega 2560 provides sufficient I/O pins and processing power to precisely control the step and direction signals of the stepper motor, allowing the conveyor belt to start, stop, adjust its speed, and change direction.
Below is a sample code to control the conveyor belt:
import serial

class ConveyorControl:
    def __init__(self, port, baudrate):
        # Initialize the serial connection to the Mega 2560
        self.serial = serial.Serial(port, baudrate, timeout=1)

    # Write a command string to the microcontroller
    def write_command(self, command):
        self.serial.write(command.encode())

    # Set the direction of the conveyor belt
    def set_direction(self, direction):
        self.write_command(f'DIR {direction}\n')

    # Set the speed of the conveyor belt
    def set_speed(self, speed):
        self.write_command(f'SPD {speed}\n')

    # Start the conveyor belt
    def start(self):
        self.write_command('START\n')

    # Stop the conveyor belt
    def stop(self):
        self.write_command('STOP\n')

# Example usage:
# conveyor = ConveyorControl('/dev/ttyUSB0', 9600)
# conveyor.set_direction('FORWARD')
# conveyor.set_speed(100)
# conveyor.start()
# conveyor.stop()
Data Processing and Communication Module
In this automated sorting system project, data processing and communication are crucial components. Ensuring that each part of the system is aware of the others' activities and maintaining overall coherence is essential. If any part fails, the program halts. Below is the step-by-step process:
1. Vision Detection by Loading Robot: if the detected objects do not meet the requirements (e.g., they are not placed correctly), the subsequent steps will not execute.
2. Collaboration between Loading and Unloading Robots: The loading robot first performs de-palletizing work and transports objects to the conveyor belt.
3. Conveyor Belt Transport: The conveyor belt moves objects into the detection range of the unloading robot's camera and within the robot arm's working radius.
4. Pose Recognition by Unloading Robot: The unloading robot identifies the pose of the objects, adjusts based on the feedback, grabs the objects, and places them in designated areas.
5. Periodic Vision Check: After every six objects are grabbed, the loading robot performs another vision check. If any object changes during this period, it can lead to a grabbing failure (this logic can be adjusted as needed).
Below is a sample implementation of the described logic in Python:
import time

class AutomatedSortingSystem:
    def __init__(self, obj, cam, robot, conveyor, robot2):
        self.obj = obj            # loading-side vision detector
        self.cam = cam            # unloading-side camera
        self.robot = robot        # loading robot
        self.conveyor = conveyor  # conveyor belt controller
        self.robot2 = robot2      # unloading robot

    def run(self):
        # 18 objects in total: three batches of six
        while self.robot2.count < 18:
            data = self.obj.detect()
            # wait until all six QR codes are visible and recognized
            while len(data) < 6:
                print("The number of QR codes detected is incorrect. Please ensure the QR codes are within the camera range and can be correctly recognized.")
                time.sleep(1)
                data = self.obj.detect()
            for i in range(len(data)):
                # the loading robot moves the object onto the conveyor,
                # which carries it to the unloading robot's working area
                self.robot.move(data[i][2], data[i][1])
                self.conveyor.open_conveyor(100)
                time.sleep(5.2)
                self.conveyor.close_conveyor()
                # try pose detection up to three times
                pose = None
                for j in range(3):
                    try:
                        print(f"Attempt {j+1}")
                        pose = self.cam.detect()
                        if pose is not None:
                            break
                    except Exception:
                        if j == 2:
                            # all attempts failed: report, recover the
                            # object, and send it down the belt again
                            self.message()
                            temp = self.obj.exception_handling()
                            self.robot.special_handling(temp[0][2], temp[0][1], temp[0][3])
                            self.conveyor.open_conveyor(100)
                            time.sleep(5.2)
                            self.conveyor.close_conveyor()
                if pose is not None:
                    # the unloading robot grabs the object and sorts it by ID
                    marker_id = self.robot2.move(pose[0], pose[1], pose[2], pose[3])
                    self.robot2.judge(marker_id)

# Example usage:
# sorting_system = AutomatedSortingSystem(obj, cam, robot, conveyor, robot2)
# sorting_system.run()
In this implementation:
● Detection and Initial Checks: The system continuously detects objects and ensures that a sufficient number of QR codes are correctly identified.
● Movement and Transport: The loading robot moves the detected objects to the conveyor, which then transports them to the unloading robot's working area.
● Pose Detection and Handling: The unloading robot attempts to detect the pose of the objects up to three times. If successful, it moves the object accordingly; if not, it handles exceptions and retries the process.
● Feedback Loop: After a set number of operations, the system rechecks the loading area to ensure continued accuracy and handling.
This modular approach ensures robust operation and efficient communication between different parts of the automated sorting system.
Summary
This project showcases an automated sorting system based on the ultraArm P340 robotic arm and a conveyor belt, designed primarily for educational purposes: teaching and demonstrating automated sorting technology. The system integrates computer vision, stepper motor control, hand-eye calibration, and robotic arm motion control to achieve an efficient automated sorting process.
If you have any suggestions for improving this project, please leave a comment below. Your feedback and support are the greatest encouragement for us to continue updating and enhancing the system.