In modern industrial settings, automation is critical for improving efficiency, reducing costs, and maintaining high quality standards. Traditional sorting systems often rely on manual labor, simple sensors (e.g., color or weight sensors), or basic mechanical methods, which are slow, error-prone, and poorly adaptable to varying product types. Recent advances in computer vision and edge AI have enabled more sophisticated systems, but many are either too complex for small-scale industries or require significant computational resources.
This project addresses these challenges by combining machine learning with industrial control to create a versatile and efficient sorting solution. The goal is to develop an advanced industrial automation system that leverages computer vision and finite state machine (FSM) principles to sort small blocks using microcontrollers alone. The system integrates a Programmable Logic Controller (PLC) for real-time control and Edge Impulse for efficient, edge-based machine learning model development and deployment.
Hardware components selection

This project is designed to tackle the challenges of automated sorting and precise control in industrial environments through an entirely microcontroller-centric approach, eliminating the need for external processors or cloud dependencies. For seamless real-time actuation and control, we opted for the M5Stack StamPLC, an ESP32-S3 based programmable logic controller (PLC).
To support the computer vision requirements, we have chosen the M5Stack AtomS3R Camera, driven by the ESP32-S3 microcontroller, equipped with 8MB of PSRAM for efficient processing of AI models and image data.
We require an RS485 Atomic base to provide communication and power to the camera from the StamPLC.
We'll use an M5Stack 6060-PUSH Linear Motion Control Module to smoothly and accurately move the blocks. This steady platform is perfect for an overhead camera, ensuring clear, blur-free images for better object detection.
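The module is addressed over RS485 with short ASCII commands. The fragment below is an illustrative condensation of the helper functions in the StamPLC sketch later in this article; the RS485 serial setup, direction-pin handling, and the module ID of 123 all come from that sketch.

void push6060Demo() {
RS485_SERIAL.print("ID=123\r\n"); // assign ID 123 to the module
RS485_SERIAL.print("ID123Z\r\n"); // home the carriage to position zero
RS485_SERIAL.print("ID123:X35\r\n"); // move the carriage to position 35 (0..50 used in this project)
RS485_SERIAL.print("ID123?\r\n"); // query status; the reply contains "WPos:<x>,..."
}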
We will use the M5Stack Servo Kit 180° to tilt the tray holding blocks, enabling sorting and placement into specific bins.
The M5Stack Puzzle Unit 8x8 LED Matrix adds visual flair by showing the progress of the blocks' forward and backward movement on the Linear Motion Control Module. It displays a yellow/green filled matrix when the camera detects blocks and a red filled matrix when the tube is empty.
We will employ a Kitronik Linear Actuator Kit to block and unblock the tube, controlling the holding and releasing of blocks in sync with system states. It also gently pushes the block to prevent it from getting stuck at the tube's opening.
We need a way to detect if the tube is full or empty to control the start and stop of operations. We'll use a small Pololu 38 kHz IR Proximity Sensor to detect the presence of blocks in the tube. The sensor's typical detection range is up to 30 cm, but it is adjusted to detect objects at approximately 15 cm to avoid false triggers.
The proximity sensor is attached to the linear actuator using a zip tie to hold it in place, ensuring a firm and adjustable mount. Its IR receiver and transmitter are positioned to face the opening of the aluminum tube, allowing it to detect blocks near the tube's entrance.
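To make the sensor wiring concrete, here is a minimal, illustrative read loop. The pin numbers match the StamPLC sketch later in this article, and the sensor output is active-low (LOW means an object is within range):

#define IR_OUT_PIN 40 // sensor output (active-low)
#define IR_ENABLE_PIN 41 // sensor enable

void setup() {
Serial.begin(115200);
pinMode(IR_OUT_PIN, INPUT);
pinMode(IR_ENABLE_PIN, OUTPUT);
digitalWrite(IR_ENABLE_PIN, HIGH); // turn on the 38 kHz IR emitter
}

void loop() {
if (digitalRead(IR_OUT_PIN) == LOW) {
Serial.println("Block detected near the tube opening");
}
delay(100);
}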
We 3D-printed a tray with a LEGO axle connector hole to hold the blocks.
The tray snap-fits onto the M5Stack Servo Kit using the included LEGO connector adapter.
For the object sorting demonstration, we chose yellow and green foam blocks.
We are utilizing a 30mm x 30mm x 250mm Aluminum Alloy Square Tube Pipe to stack the blocks.
We employed 15x15 aluminum extrusion profiles to securely mount the camera in an overhead position.
The completed setup appears as follows.
We need to sign up for an account at Edge Impulse Studio and create a new project for data processing, model training, and deployment. We used the AtomS3R Camera to take pictures of the yellow and green blocks under various orientations, backgrounds, and lighting conditions. The AtomS3R Camera ships with preinstalled firmware that functions as a webcam via the UVC protocol. If the firmware has been changed, it can be downloaded from this link and built/flashed using ESP-IDF. Connect the AtomS3R Camera to your computer, go to the Data Acquisition page, click the "Connect a device" link, select "Connect to your computer", and begin capturing images with appropriate labels. We can see a grid view of the captured images on the Data Acquisition page.
For model development, we need to design an impulse, which is a custom processing pipeline combining signal processing and machine learning blocks. Go to the Impulse Design > Create Impulse page, click Add a processing block, and choose Image, which preprocesses and normalizes the image data. On the same page, click Add a learning block and choose Object Detection (Images). We use a 64x64 image size, which is sufficient for detecting simple, distinctly colored shapes at a reasonable inference rate on the microcontroller. Now click the Save Impulse button.
Next, go to the Impulse Design > Image page, set the Color depth parameter to RGB, and click the Save parameters button, which redirects to another page where we click the Generate features button. Feature generation usually takes a couple of minutes to complete.
We can see the 2D visualization of the generated features in the Feature Explorer.
To train the model, navigate to the Impulse Design > Object Detection page. The training settings we selected are shown below.
We chose the FOMO MobileNetV2 model. Click on the Save & train button to begin training.
The FOMO model uses an architecture similar to a standard image classification model: it splits the input image into a grid and runs the equivalent of image classification on every grid cell independently and in parallel. By default the cells are 8x8 pixels, so for a 64x64 input image the output is an 8x8 grid, as shown in the image below.
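As a purely illustrative aside (these helper names are ours, not part of the project code), mapping a FOMO output-grid cell back to input-image pixels is a single multiplication:

// Illustrative only: FOMO's default 8x8-pixel cells turn a 64x64 input
// into an 8x8 output grid.
const int kCellSize = 8; // default FOMO cell size in pixels

int cellToPixelX(int col) { return col * kCellSize; } // left edge of the cell
int cellToPixelY(int row) { return row * kCellSize; } // top edge of the cell
// Example: the cell at row 2, column 5 covers pixels x = 40..47, y = 16..23.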
After training is complete, the confusion matrices are displayed as shown below. The model achieved a 100% F1 score on the training data.
On the Model testing page, click on the Classify All button which will initiate model testing with the trained model. The testing accuracy is 100%.
Since the deployed device will not expose the detected image stream, we wanted to verify the model's performance before installing it on the actual hardware. Edge Impulse Studio lets us connect the camera over UVC, just as during data collection, to run inference and review the results. Navigate to the Impulse > Live Classification page and click on the Connect a development board icon.
Choose the Connect to your computer option, then select Switch to classification mode on the following screen.
We could view the inference results of the object detection model, as displayed below.
As we will use an Arduino Sketch for inferencing on the AtomS3R camera, we chose the Arduino library option on the Deployment page.
For Model Optimizations, select the EON Compiler to reduce the model's memory usage. Additionally, choose the Quantized (Int8) model to fit within the ESP32-S3 memory alongside the application. Click the Build button to compile and download the Arduino library zip bundle.
The StamPLC serves as the core controller, managing real-time operations through GPIO (General Purpose Input/Output) and RS485, a serial communication standard, connections to multiple peripherals. The AtomS3R Camera, connected via RS485, captures visual data of the blocks, performs on-device inferencing when requested by the StamPLC, and sends the results back for decision-making. The Linear Actuator executes precise movements to hold or release the blocks, assisted by a Proximity Sensor, which is interfaced through GPIO and detects the presence of blocks, triggering the sorting process. The Push6060 Linear Motion Module, connected via RS485, handles the physical sorting action by directing blocks to appropriate locations. The Tray Servo, controlled via GPIO, tilts the tray to drop the sorted items. Additionally, an LED Matrix, connected through GPIO, provides visual feedback on the system's status, enhancing operator interaction. This setup leverages the StamPLC's real-time control capabilities and Edge Impulse's efficient ML model deployment to create a versatile sorting solution. The connection diagram below provides a visual overview.
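Concretely, the StamPLC and the camera exchange a tiny ASCII protocol over the half-duplex RS485 link. The condensed fragment below mirrors the direction-pin pattern used by the full StamPLC sketch later in this article; the P?/P:n labels are this project's own convention, and the serial and pin setup are as in that sketch.

void requestPrediction() {
digitalWrite(STAMPLC_PIN_485_DIR, RS485_WRITE); // drive the bus to transmit
RS485_SERIAL.print("P?\r\n"); // ask the camera to run inference
RS485_SERIAL.flush(); // wait until all bytes are sent
digitalWrite(STAMPLC_PIN_485_DIR, RS485_READ); // release the bus to listen
// The camera replies "P:1\r\n" (yellow), "P:2\r\n" (green), or "P:3\r\n" (uncertain).
}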
The application is designed using Finite State Machine (FSM) principles. An FSM is a powerful computational model used to design and manage systems by defining a finite number of distinct states and the transitions between them, driven by specific inputs or conditions. In the context of the sorting process, the FSM ensures systematic and reliable operation by clearly outlining states such as idle, detecting, sorting, and rejecting, along with the events or timeouts that trigger transitions between these states. This structured approach makes the system's behavior predictable and robust.
The FSM diagram (below) illustrates the states and transitions described in the following section.
The states and transitions used in the application are outlined below; a minimal SimpleFSM sketch after the list shows how states, triggers, and timed transitions are wired together in code.
- IDLE: The starting state, where the system waits for blocks to enter the tube.
- UNBLOCK_TUBE: Entered from IDLE.
- POSITION_START: Time-transitioned from UNBLOCK_TUBE with a 250 ms delay.
- BLOCK_TUBE: Initiated from POSITION_START when the tube is blocked (TUBE_BLOCKED event).
- DETECT_READY: Reached from BLOCK_TUBE when positioned under the camera, signaling readiness for object detection.
- OBJECT_DETECTION: Entered from DETECT_READY; successful detection leads to SORT_READY, while failure returns to POSITION_ZERO.
- SORT_READY: Achieved from OBJECT_DETECTION (on success), preparing for sorting with the OBJECT_SORT_STARTED event.
- OBJECT_SORTING: Initiated from SORT_READY, executing the sorting process with a 1000 ms delay.
- SORT_FINISH: Reached from OBJECT_SORTING, marking the completion of the sorting process.
- POSITION_ZERO: Triggered by the OBJECT_DETECTION_FAILED event, or time-transitioned from SORT_FINISH with a 1000 ms delay.
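As promised above, here is a stripped-down, illustrative SimpleFSM sketch. It uses the same library calls as the full StamPLC sketch later in this article (State, Transition, TimedTransition, trigger, run) but models only the first three states; treat it as a minimal sketch of the mechanism, not project code.

#include <SimpleFSM.h>

SimpleFSM fsm;

void onIdle() { Serial.println("IDLE"); }
void onUnblock() { Serial.println("UNBLOCK_TUBE"); }
void onStart() { Serial.println("POSITION_START"); }

State s[] = {
State("IDLE", onIdle, NULL, NULL),
State("UNBLOCK_TUBE", onUnblock, NULL, NULL),
State("POSITION_START", onStart, NULL, NULL),
};

enum triggers { TUBE_UNBLOCKED = 1 };

Transition transitions[] = {
Transition(&s[0], &s[1], TUBE_UNBLOCKED), // IDLE -> UNBLOCK_TUBE on event
};

TimedTransition timedTransitions[] = {
TimedTransition(&s[1], &s[2], 250), // UNBLOCK_TUBE -> POSITION_START after 250 ms
};

bool fired = false;

void setup() {
Serial.begin(115200);
fsm.add(transitions, sizeof(transitions) / sizeof(Transition));
fsm.add(timedTransitions, sizeof(timedTransitions) / sizeof(TimedTransition));
fsm.setInitialState(&s[0]);
}

void loop() {
fsm.run(200); // evaluate the FSM every 200 ms
if (!fired) { // fire the event once for demonstration
fired = true;
fsm.trigger(TUBE_UNBLOCKED);
}
}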
We need to obtain the latest version of the Arduino IDE from the official Arduino website (https://www.arduino.cc/en/software).
To install the application (including the Edge Impulse SDK and model) on the AtomS3R Camera, follow the steps outlined below.
- This firmware uses the ESP32 Arduino core version 2.0.17, which is compatible with the Edge Impulse SDK and ensures successful compilation. To install the ESP32 core, go to Tools > Board > Boards Manager in the Arduino IDE. Search for “ESP32” and install version 2.0.17 of the ESP32 Arduino core by Espressif Systems.
- Select the board as M5Stack-ATOMS3 in the menu.
- After downloading the Arduino library zip bundle from the Edge Impulse Deployment page (as described previously), integrate it into the Arduino IDE. Open the Arduino IDE, navigate to the menu, and select Sketch > Include Library > Add.ZIP Library. Browse to the location of the downloaded Edge Impulse zip file and select it.
To install the application on the StamPLC, follow the steps provided below.
- To install the board support package, go to Tools > Board > Boards Manager in the Arduino IDE. Search for “M5Stack” and install the M5Stack boards package.
- Select the board as M5StamPLC in the menu.
- Go to Tools > Manage Libraries (Library Manager). Search for and install the M5StamPLC, SimpleFSM, Adafruit NeoMatrix, and ESP32Servo libraries one at a time.
The complete Arduino sketches, ready for compilation and uploading to the respective target devices, are given below.
AtomS3R Camera Sketch
#include <Industrial_Sorting_inferencing.h>
#include "edge-impulse-sdk/dsp/image/image.hpp"
#include "esp_camera.h"
// M5STACK ATOMS3R CAM Pins
#define PWDN_GPIO_NUM -1
#define RESET_GPIO_NUM -1
#define XCLK_GPIO_NUM 21
#define SIOD_GPIO_NUM 12
#define SIOC_GPIO_NUM 9
#define Y9_GPIO_NUM 13
#define Y8_GPIO_NUM 11
#define Y7_GPIO_NUM 17
#define Y6_GPIO_NUM 4
#define Y5_GPIO_NUM 48
#define Y4_GPIO_NUM 46
#define Y3_GPIO_NUM 42
#define Y2_GPIO_NUM 3
#define VSYNC_GPIO_NUM 10
#define HREF_GPIO_NUM 14
#define PCLK_GPIO_NUM 40
// M5STACK ATOMS3R CAM Power pin
#define POWER_GPIO_NUM 18
/* Constant defines -------------------------------------------------------- */
#define EI_CAMERA_RAW_FRAME_BUFFER_COLS 320
#define EI_CAMERA_RAW_FRAME_BUFFER_ROWS 240
#define EI_CAMERA_FRAME_BYTE_SIZE 3
/* Private variables ------------------------------------------------------- */
static bool debug_nn = false; // Set this to true to see e.g. features generated from the raw signal
static bool is_initialised = false;
uint8_t *snapshot_buf; //points to the output of the capture
static camera_config_t camera_config = {
.pin_pwdn = PWDN_GPIO_NUM,
.pin_reset = RESET_GPIO_NUM,
.pin_xclk = XCLK_GPIO_NUM,
.pin_sscb_sda = SIOD_GPIO_NUM,
.pin_sscb_scl = SIOC_GPIO_NUM,
.pin_d7 = Y9_GPIO_NUM,
.pin_d6 = Y8_GPIO_NUM,
.pin_d5 = Y7_GPIO_NUM,
.pin_d4 = Y6_GPIO_NUM,
.pin_d3 = Y5_GPIO_NUM,
.pin_d2 = Y4_GPIO_NUM,
.pin_d1 = Y3_GPIO_NUM,
.pin_d0 = Y2_GPIO_NUM,
.pin_vsync = VSYNC_GPIO_NUM,
.pin_href = HREF_GPIO_NUM,
.pin_pclk = PCLK_GPIO_NUM,
.xclk_freq_hz = 20000000,
.ledc_timer = LEDC_TIMER_0,
.ledc_channel = LEDC_CHANNEL_0,
.pixel_format = PIXFORMAT_RGB565,
.frame_size = FRAMESIZE_QVGA,
.fb_count = 1,
.fb_location = CAMERA_FB_IN_PSRAM,
.grab_mode = CAMERA_GRAB_WHEN_EMPTY,
.sccb_i2c_port = 0,
};
bool ei_camera_init(void);
void ei_camera_deinit(void);
bool ei_camera_capture(uint32_t img_width, uint32_t img_height, uint8_t *out_buf);
String inference() {
snapshot_buf = (uint8_t *)malloc(EI_CAMERA_RAW_FRAME_BUFFER_COLS * EI_CAMERA_RAW_FRAME_BUFFER_ROWS * EI_CAMERA_FRAME_BYTE_SIZE);
if (snapshot_buf == nullptr) {
ei_printf("ERR: Failed to allocate snapshot buffer!\n");
while (1) {
delay(1);
}
}
ei::signal_t signal;
signal.total_length = EI_CLASSIFIER_INPUT_WIDTH * EI_CLASSIFIER_INPUT_HEIGHT;
signal.get_data = &ei_camera_get_data;
if (ei_camera_capture((size_t)EI_CLASSIFIER_INPUT_WIDTH, (size_t)EI_CLASSIFIER_INPUT_HEIGHT, snapshot_buf) == false) {
ei_printf("Failed to capture image\r\n");
free(snapshot_buf);
while (1) {
delay(1);
}
}
ei_impulse_result_t result = { 0 };
EI_IMPULSE_ERROR err = run_classifier(&signal, &result, debug_nn);
if (err != EI_IMPULSE_OK) {
ei_printf("ERR: Failed to run classifier (%d)\n", err);
while (1) {
delay(1);
}
}
String prediction = "none";
if (result.bounding_boxes_count > 0) {
ei_impulse_result_bounding_box_t bb = result.bounding_boxes[0];
ei_printf(" %s (%f) [ x: %u, y: %u, width: %u, height: %u ]\r\n",
bb.label,
bb.value,
bb.x,
bb.y,
bb.width,
bb.height);
prediction = bb.label;
}
free(snapshot_buf); // release the frame buffer here; loop() may call inference() several times
return prediction;
}
void setup() {
Serial.begin(115200); // USB Serial
Serial2.begin(9600, SERIAL_8N1, 5, 6); // RS485
// power on
pinMode(POWER_GPIO_NUM, OUTPUT);
digitalWrite(POWER_GPIO_NUM, LOW);
delay(500);
if (ei_camera_init() == false) {
ei_printf("Failed to initialize Camera!\r\n");
} else {
ei_printf("Camera initialized\r\n");
}
delay(500);
}
bool send_response = false;
//int counter = 0;
void loop() {
String str = "";
while (Serial2.available()) {
char c = Serial2.read();
str += c;
}
if (str.length() > 0) {
int cmdIndex = str.indexOf("P?");
if (cmdIndex > -1) {
Serial.println(str);
Serial.printf("cmdIndex = %d\n", cmdIndex);
send_response = true;
}
}
String prediction = inference();
if (send_response) {
for (int i = 0; i < 3; i++) {
if (prediction == "none") {
prediction = inference();
} else {
break;
}
}
if (prediction == "yellow") {
ei_printf("Yellow\n");
Serial2.print("P:1\r\n");
} else if (prediction == "green") {
ei_printf("Green\n");
Serial2.print("P:2\r\n");
} else {
ei_printf("Uncertain\n");
Serial2.print("P:3\r\n");
}
Serial2.flush();
delay(20);
send_response = false;
}
}
bool ei_camera_init(void) {
if (is_initialised) return true;
esp_err_t err = esp_camera_init(&camera_config);
if (err != ESP_OK) {
Serial.printf("Camera init failed with error 0x%x\n", err);
return false;
}
sensor_t *s = esp_camera_sensor_get();
is_initialised = true;
return true;
}
void ei_camera_deinit(void) {
esp_err_t err = esp_camera_deinit();
if (err != ESP_OK) {
ei_printf("Camera deinit failed\n");
return;
}
is_initialised = false;
return;
}
bool ei_camera_capture(uint32_t img_width, uint32_t img_height, uint8_t *out_buf) {
bool do_resize = false;
if (!is_initialised) {
ei_printf("ERR: Camera is not initialized\r\n");
return false;
}
camera_fb_t *fb = esp_camera_fb_get();
if (!fb) {
ei_printf("Camera capture failed\n");
return false;
}
bool converted = fmt2rgb888(fb->buf, fb->len, PIXFORMAT_RGB565, snapshot_buf);
esp_camera_fb_return(fb);
if (!converted) {
ei_printf("Conversion failed\n");
return false;
}
if ((img_width != EI_CAMERA_RAW_FRAME_BUFFER_COLS)
|| (img_height != EI_CAMERA_RAW_FRAME_BUFFER_ROWS)) {
do_resize = true;
}
if (do_resize) {
ei::image::processing::crop_and_interpolate_rgb888(
out_buf,
EI_CAMERA_RAW_FRAME_BUFFER_COLS,
EI_CAMERA_RAW_FRAME_BUFFER_ROWS,
out_buf,
img_width,
img_height);
}
return true;
}
static int ei_camera_get_data(size_t offset, size_t length, float *out_ptr) {
// we already have a RGB888 buffer, so recalculate offset into pixel index
size_t pixel_ix = offset * 3;
size_t pixels_left = length;
size_t out_ptr_ix = 0;
while (pixels_left != 0) {
// Swap BGR to RGB here
out_ptr[out_ptr_ix] = (snapshot_buf[pixel_ix + 2] << 16) + (snapshot_buf[pixel_ix + 1] << 8) + snapshot_buf[pixel_ix];
// go to the next pixel
out_ptr_ix++;
pixel_ix += 3;
pixels_left--;
}
// and done!
return 0;
}
#if !defined(EI_CLASSIFIER_SENSOR) || EI_CLASSIFIER_SENSOR != EI_CLASSIFIER_SENSOR_CAMERA
#error "Invalid model for current sensor"
#endif
StamPLC Sketch
#include <M5StamPLC.h>
#include <ESP32Servo.h>
#include <SimpleFSM.h>
#include <Adafruit_GFX.h>
#include <Adafruit_NeoMatrix.h>
#include <Adafruit_NeoPixel.h>
#define SERIAL_BAUD 115200
/* StamPLC RS485 */
#define RS485_SERIAL Serial1
#define RS485_READ 0
#define RS485_WRITE 1
#define RS485_SERIAL_BAUD 9600
/* IR Proximity Sensor */
#define IR_OUT_PIN 40
#define IR_ENABLE_PIN 41
#define NEOPIXEL_PIN 5
#define NUMPIXELS 64
void stateIdleEnter();
void stateIdleUpdate();
void stateUnblockTubeEnter();
void statePositionStartEnter();
void statePositionStartUpdate();
void stateBlockTubeEnter();
void statePositionDetectEnter();
void statePositionDetectUpdate();
void stateObjectDetectionEnter();
void stateObjectDetectionUpdate();
void stateSortReadyEnter();
void stateSortReadyUpdate();
void stateObjectSortingEnter();
void stateSortFinishEnter();
void statePositionZeroEnter();
void statePositionZeroUpdate();
Servo linearServo;
Servo tiltServo;
int linearServoPin = 2;
int tiltServoPin = 1;
int servo_linear_current_angle = 180;
int servo_tilt_current_angle = 95;
bool isActive = false;
bool emptyTube = false;
int push6060_id = 123;
SimpleFSM fsm;
typedef struct {
int prediction = -1;
} FSMContext;
FSMContext context;
//Adafruit_NeoPixel pixels(NUMPIXELS, NEOPIXEL_PIN, NEO_GRB + NEO_KHZ800);
Adafruit_NeoMatrix matrix = Adafruit_NeoMatrix(8, 8, NEOPIXEL_PIN,
NEO_MATRIX_TOP + NEO_MATRIX_RIGHT +
NEO_MATRIX_COLUMNS + NEO_MATRIX_PROGRESSIVE,
NEO_GRB + NEO_KHZ800);
uint32_t red_color = matrix.Color(255, 0, 0);
uint32_t green_color = matrix.Color(0, 255, 0);
uint32_t blue_color = matrix.Color(0, 0, 255);
uint32_t yellow_color = matrix.Color(255, 255, 0);
uint32_t no_color = matrix.Color(0, 0, 0);
const uint8_t blank[8][8] = {
{0, 0, 0, 0, 0, 0, 0, 0},
{0, 0, 0, 0, 0, 0, 0, 0},
{0, 0, 0, 0, 0, 0, 0, 0},
{0, 0, 0, 0, 0, 0, 0, 0},
{0, 0, 0, 0, 0, 0, 0, 0},
{0, 0, 0, 0, 0, 0, 0, 0},
{0, 0, 0, 0, 0, 0, 0, 0},
{0, 0, 0, 0, 0, 0, 0, 0}
};
// Arrow patterns for 4 directions
const uint8_t arrowLeft[8][8] = {
{0, 0, 0, 1, 0, 0, 0, 0},
{0, 0, 1, 0, 0, 0, 0, 0},
{0, 1, 0, 0, 0, 0, 0, 0},
{1, 0, 0, 0, 0, 0, 0, 0},
{0, 1, 0, 0, 0, 0, 0, 0},
{0, 0, 1, 0, 0, 0, 0, 0},
{0, 0, 0, 1, 0, 0, 0, 0},
{0, 0, 0, 0, 0, 0, 0, 0}
};
const uint8_t arrowRight[8][8] = {
{0, 0, 0, 0, 1, 0, 0, 0},
{0, 0, 0, 0, 0, 1, 0, 0},
{0, 0, 0, 0, 0, 0, 1, 0},
{0, 0, 0, 0, 0, 0, 0, 1},
{0, 0, 0, 0, 0, 0, 1, 0},
{0, 0, 0, 0, 0, 1, 0, 0},
{0, 0, 0, 0, 1, 0, 0, 0},
{0, 0, 0, 0, 0, 0, 0, 0}
};
const uint8_t arrowUp[8][8] = {
{0, 0, 0, 1, 0, 0, 0, 0},
{0, 0, 1, 0, 1, 0, 0, 0},
{0, 1, 0, 0, 0, 1, 0, 0},
{1, 0, 0, 0, 0, 0, 1, 0},
{0, 0, 0, 0, 0, 0, 0, 0},
{0, 0, 0, 0, 0, 0, 0, 0},
{0, 0, 0, 0, 0, 0, 0, 0},
{0, 0, 0, 0, 0, 0, 0, 0}
};
const uint8_t arrowDown[8][8] = {
{0, 0, 0, 0, 0, 0, 0, 0},
{0, 0, 0, 0, 0, 0, 0, 0},
{0, 0, 0, 0, 0, 0, 0, 0},
{0, 0, 0, 0, 0, 0, 0, 0},
{1, 0, 0, 0, 0, 0, 1, 0},
{0, 1, 0, 0, 0, 1, 0, 0},
{0, 0, 1, 0, 1, 0, 0, 0},
{0, 0, 0, 1, 0, 0, 0, 0}
};
bool guard_cb() {
return isActive;
}
/* State definitions */
State s[] = {
State("IDLE", stateIdleEnter, stateIdleUpdate, NULL),
/*1*/ State("UNBLOCK_TUBE", stateUnblockTubeEnter, NULL, NULL),
/*2*/ State("POSITION_START", statePositionStartEnter, statePositionStartUpdate, NULL),
/*3*/ State("BLOCK_TUBE", stateBlockTubeEnter, NULL, NULL),
/*4*/ State("DETECT_READY", statePositionDetectEnter, statePositionDetectUpdate, NULL),
/*5*/ State("OBJECT_DETECTION", stateObjectDetectionEnter, stateObjectDetectionUpdate, NULL),
/*6*/ State("SORT_READY", stateSortReadyEnter, stateSortReadyUpdate, NULL),
/*7*/ State("OBJECT_SORTING", stateObjectSortingEnter, NULL, NULL),
/*8*/ State("SORT_FINISH", stateSortFinishEnter, NULL, NULL),
/*9*/ State("POSITION_ZERO", statePositionZeroEnter, statePositionZeroUpdate, NULL),
};
/* Transition triggers */
enum triggers {
TUBE_UNBLOCKED = 1,
TUBE_BLOCKED,
OBJECT_DETECTION_STARTED,
OBJECT_DETECTION_FAILED,
SORT_READY_REACHED,
POSITION_ZERO_REACHED,
OBJECT_SORT_STARTED,
};
/* Transition definitions */
Transition transitions[] = {
Transition(&s[0], &s[1], TUBE_UNBLOCKED, NULL, "", guard_cb), // IDLE -> UNBLOCK_TUBE
Transition(&s[2], &s[3], TUBE_BLOCKED), // POSITION_START -> BLOCK_TUBE
Transition(&s[4], &s[5], OBJECT_DETECTION_STARTED), // DETECT_READY -> OBJECT_DETECTION
Transition(&s[5], &s[9], OBJECT_DETECTION_FAILED), // OBJECT_DETECTION -> POSITION_ZERO
Transition(&s[5], &s[6], SORT_READY_REACHED), // OBJECT_DETECTION -> SORT_READY
Transition(&s[9], &s[0], POSITION_ZERO_REACHED), // POSITION_ZERO -> IDLE
Transition(&s[6], &s[7], OBJECT_SORT_STARTED), // SORT_READY -> OBJECT_SORTING
};
TimedTransition timedTransitions[] = {
TimedTransition(&s[1], &s[2], 250), // UNBLOCK_TUBE -> POSITION_START
TimedTransition(&s[3], &s[4], 500), // BLOCK_TUBE -> DETECT_READY
TimedTransition(&s[7], &s[8], 1000), // OBJECT_SORTING -> SORT_FINISH
TimedTransition(&s[8], &s[9], 1000), // SORT_FINISH -> POSITION_ZERO
};
void set_push6060_id(unsigned int push6060_id) {
digitalWrite(STAMPLC_PIN_485_DIR, RS485_WRITE);
delay(50);
RS485_SERIAL.printf("ID=%d\r\n", push6060_id);
delay(50);
digitalWrite(STAMPLC_PIN_485_DIR, RS485_READ);
}
void move_push6060_to(int pos) {
if (pos >= 0 && pos <= 50) {
digitalWrite(STAMPLC_PIN_485_DIR, RS485_WRITE);
delay(50);
if (pos == 0) {
RS485_SERIAL.printf("ID%dZ\r\n", push6060_id);
} else {
RS485_SERIAL.printf("ID%d:X%d\r\n", push6060_id, pos);
}
delay(50);
digitalWrite(STAMPLC_PIN_485_DIR, RS485_READ);
}
}
int push6060_read_position() {
digitalWrite(STAMPLC_PIN_485_DIR, RS485_WRITE);
delay(5);
RS485_SERIAL.printf("ID%d?\r\n", push6060_id);
delay(10);
digitalWrite(STAMPLC_PIN_485_DIR, RS485_READ);
delay(5);
String response;
while (RS485_SERIAL.available()) {
char c = RS485_SERIAL.read();
response += c;
}
int wposIndex = response.indexOf("WPos:");
if (wposIndex == -1) {
return -1;
}
int commaIndex = response.indexOf(',', wposIndex);
if (commaIndex == -1) {
return -1;
}
String xPosStr = response.substring(wposIndex + 5, commaIndex);
float xPos = xPosStr.toFloat();
int pos = (int)xPos;
if (xPosStr.length() == 0 || (xPos == 0 && xPosStr != "0.000")) {
return -1;
}
return pos;
}
void move_linear_servo_to(int angle) {
angle = constrain(angle, 0, 180);
if (angle != servo_linear_current_angle) {
int direction = (angle > servo_linear_current_angle) ? 1 : -1;
int stepSize = 10;
while (abs(angle - servo_linear_current_angle) > 0) {
if (abs(angle - servo_linear_current_angle) < stepSize) {
servo_linear_current_angle = angle; // Set to exact target
} else {
servo_linear_current_angle += direction * stepSize;
}
linearServo.write(servo_linear_current_angle);
delay(25);
}
}
}
void move_tilt_servo_to(int angle) {
angle = constrain(angle, 0, 180);
if (angle != servo_tilt_current_angle) {
int direction = (angle > servo_tilt_current_angle) ? 1 : -1;
int stepSize = 10;
while (abs(angle - servo_tilt_current_angle) > 0) {
if (abs(angle - servo_tilt_current_angle) < stepSize) {
servo_tilt_current_angle = angle; // Set to exact target
} else {
servo_tilt_current_angle += direction * stepSize;
}
tiltServo.write(servo_tilt_current_angle);
delay(25);
}
}
}
int read_prediction() {
String response;
int pred = -1;
while (RS485_SERIAL.available()) {
char c = RS485_SERIAL.read();
response += c;
}
if (response.length() > 0) {
Serial.println(response);
int rposIndex = response.indexOf("P:");
if (rposIndex == -1) {
return -1;
}
int crIndex = response.indexOf('\r', rposIndex);
if (crIndex == -1) {
return -1;
}
String res = response.substring(rposIndex + 2, crIndex);
Serial.println(res);
pred = res.toInt();
M5StamPLC.Display.fillRect(0, 50, 240, 80, BLACK);
M5StamPLC.Display.setTextSize(2);
M5StamPLC.Display.setCursor(10, 50);
if (pred == 0) {
M5StamPLC.Display.setTextColor(TFT_WHITE);
M5StamPLC.Display.print("Background");
} else if (pred == 1) {
M5StamPLC.Display.setTextColor(TFT_YELLOW);
M5StamPLC.Display.print("Yellow Cube");
} else if (pred == 2) {
M5StamPLC.Display.setTextColor(TFT_GREEN);
M5StamPLC.Display.print("Green Cube");
} else {
M5StamPLC.Display.setTextColor(TFT_WHITE);
M5StamPLC.Display.print("Uncertain");
}
}
return pred;
}
void stateIdleEnter() {
Serial.println("Entering IDLE");
}
void stateIdleUpdate() {
int val = digitalRead(IR_OUT_PIN);
/* check if cube is in the pipe */
if (val == LOW) {
bool ret = fsm.trigger(TUBE_UNBLOCKED);
emptyTube = false;
M5StamPLC.Display.fillRect(0, 50, 240, 80, BLACK);
M5StamPLC.Display.setTextColor(TFT_CYAN);
M5StamPLC.Display.setTextSize(2);
M5StamPLC.Display.setCursor(10, 50);
M5StamPLC.Display.print("Tube Filled");
matrix.fillScreen(no_color);
matrix.show();
if (ret) {
Serial.println("Triggering TUBE_UNBLOCKED");
M5StamPLC.tone(880, 50);
}
} else {
if (emptyTube == false) {
M5StamPLC.Display.fillRect(0, 50, 240, 80, BLACK);
M5StamPLC.Display.setTextColor(TFT_PINK);
M5StamPLC.Display.setTextSize(2);
M5StamPLC.Display.setCursor(10, 50);
M5StamPLC.Display.print("Empty Tube");
matrix.fillScreen(red_color);
matrix.show();
}
emptyTube = true;
}
}
void stateUnblockTubeEnter() {
Serial.println("Entering UNBLOCK_TUBE");
move_linear_servo_to(20);
}
void statePositionStartEnter() {
Serial.println("Entering POSITION_START");
move_push6060_to(5);
}
void statePositionStartUpdate() {
int pos = push6060_read_position();
arrowScroll(1, blue_color);
if (pos == 5) {
Serial.println("Triggering TUBE_BLOCKED");
fsm.trigger(TUBE_BLOCKED);
}
}
void stateBlockTubeEnter() {
Serial.println("Entering BLOCK_TUBE");
move_linear_servo_to(180);
}
void statePositionDetectEnter() {
Serial.println("Entering DETECT_READY");
move_push6060_to(35);
}
void statePositionDetectUpdate() {
int pos = push6060_read_position();
arrowScroll(1, blue_color);
if (pos == 30) {
Serial.println("Triggering OBJECT_DETECTION_STARTED");
fsm.trigger(OBJECT_DETECTION_STARTED);
}
}
void getPredRequest() {
/* send request to the camera */
delay(100);
digitalWrite(STAMPLC_PIN_485_DIR, RS485_WRITE);
delay(5);
RS485_SERIAL.print("P?\r\n");
RS485_SERIAL.flush();
delay(10);
digitalWrite(STAMPLC_PIN_485_DIR, RS485_READ);
delay(5);
}
void stateObjectDetectionEnter() {
Serial.println("Entering OBJECT_DETECTION");
getPredRequest();
}
void stateObjectDetectionUpdate() {
int prediction = read_prediction();
if (prediction > -1) {
if (prediction == 1) {
matrix.fillScreen(yellow_color);
matrix.show();
fsm.trigger(SORT_READY_REACHED);
} else if (prediction == 2) {
matrix.fillScreen(green_color);
matrix.show();
fsm.trigger(SORT_READY_REACHED);
} else {
fsm.trigger(OBJECT_DETECTION_FAILED);
}
context.prediction = prediction;
}
}
void stateSortReadyEnter() {
Serial.println("Entering SORT_READY");
move_push6060_to(43);
}
void stateSortReadyUpdate() {
int pos = push6060_read_position();
if (pos == 42) {
Serial.println("Triggering OBJECT_SORT_STARTED");
fsm.trigger(OBJECT_SORT_STARTED);
}
}
void stateObjectSortingEnter() {
Serial.println("Entering OBJECT_SORTING");
if (context.prediction == 1) {
move_tilt_servo_to(55);
}
if (context.prediction == 2) {
move_tilt_servo_to(135);
}
}
void stateSortFinishEnter() {
Serial.println("Entering SORT_FINISH");
move_tilt_servo_to(95);
}
void statePositionZeroEnter() {
Serial.println("Entering POSITION_ZERO");
move_push6060_to(0);
}
void statePositionZeroUpdate() {
int pos = push6060_read_position();
arrowScroll(3, blue_color);
if (pos == 0) {
Serial.println("Triggering POSITION_ZERO_REACHED");
fsm.trigger(POSITION_ZERO_REACHED);
}
}
void transition_cb()
{
String state_name = fsm.getState()->getName();
M5StamPLC.Display.fillRect(0, 0, 240, 50, BLACK);
M5StamPLC.Display.setTextSize(2);
M5StamPLC.Display.setTextColor(TFT_WHITE);
M5StamPLC.Display.setCursor(10, 20);
M5StamPLC.Display.print(state_name);
if (state_name == "POSITION_ZERO") {
M5StamPLC.Display.fillRect(0, 50, 240, 80, BLACK);
}
}
struct DirectionData {
const uint8_t (*pattern)[8][8];
int dx;
int dy;
};
// Define the sequence of directions
DirectionData directions[] = {
{&blank, 0, 0},
{&arrowRight, 1, 0},
{&arrowDown, 0, 1},
{&arrowLeft, -1, 0},
{&arrowUp, 0, -1}
};
void arrowScroll(int currentDirIndex, uint16_t color)
{
static int offset = 0;
static int xOffset = 0;
static int yOffset = 0;
static int prevDirIndex = 0;
if (currentDirIndex < 0 || currentDirIndex > 4) {
return;
}
if (currentDirIndex != prevDirIndex) {
offset = 0;
xOffset = 0;
yOffset = 0;
}
prevDirIndex = currentDirIndex;
// Get current direction data
DirectionData& current = directions[currentDirIndex];
const uint8_t (*currentArrowPattern)[8][8] = current.pattern; // Corrected: pointer to a 2D array
int dx = current.dx;
int dy = current.dy;
matrix.fillScreen(no_color);
for (int y = 0; y < 8; y++) {
for (int x = 0; x < 8; x++) {
int newX = (x + xOffset) % 8;
if (newX < 0) newX += 8;
int newY = (y + yOffset) % 8;
if (newY < 0) newY += 8;
// Corrected: dereference the pointer and then access the 2D array
if ((*currentArrowPattern)[y][x] == 1) {
matrix.drawPixel(newX, newY, color);
}
}
}
matrix.show();
// Update offsets based on direction
xOffset += dx;
yOffset += dy;
offset++;
}
void setup() {
M5StamPLC.begin();
Serial.begin(SERIAL_BAUD);
RS485_SERIAL.begin(RS485_SERIAL_BAUD, SERIAL_8N1, STAMPLC_PIN_485_RX, STAMPLC_PIN_485_TX);
pinMode(STAMPLC_PIN_485_DIR, OUTPUT);
digitalWrite(STAMPLC_PIN_485_DIR, RS485_READ);
pinMode(IR_OUT_PIN, INPUT);
pinMode(IR_ENABLE_PIN, OUTPUT);
digitalWrite(IR_ENABLE_PIN, HIGH);
linearServo.attach(linearServoPin);
tiltServo.attach(tiltServoPin);
linearServo.write(servo_linear_current_angle);
tiltServo.write(servo_tilt_current_angle);
set_push6060_id(push6060_id);
move_push6060_to(0);
matrix.begin();
matrix.setBrightness(50);
matrix.fillScreen(no_color);
matrix.show();
int num_transitions = sizeof(transitions) / sizeof(Transition);
int num_timed_transitions = sizeof(timedTransitions) / sizeof(TimedTransition);
fsm.add(transitions, num_transitions);
fsm.add(timedTransitions, num_timed_transitions);
fsm.setTransitionHandler(transition_cb);
fsm.setInitialState(&s[0]);
M5StamPLC.Display.fillScreen(TFT_BLACK);
M5StamPLC.Display.setTextColor(TFT_WHITE);
M5StamPLC.Display.setTextSize(2);
M5StamPLC.Display.setCursor(10, 20);
M5StamPLC.Display.print("Press button A to");
M5StamPLC.Display.setCursor(60, 50);
M5StamPLC.Display.print("start");
Serial.println(fsm.getDotDefinition());
}
void loop() {
M5StamPLC.update();
if (M5StamPLC.BtnA.wasClicked()) {
if (isActive == true) {
isActive = false;
M5StamPLC.setStatusLight(1, 0, 0);
} else {
isActive = true;
M5StamPLC.setStatusLight(0, 1, 0);
M5StamPLC.Display.fillScreen(TFT_BLACK);
M5StamPLC.Display.setTextColor(TFT_WHITE);
M5StamPLC.Display.setTextSize(2);
M5StamPLC.Display.setCursor(10, 20);
M5StamPLC.Display.print("IDLE");
}
}
if (isActive) {
fsm.run(200);
}
}
In-Action Demo

Conclusion

This project delivers a simple, reliable, and cost-effective sorting solution using computer vision and finite state machines. Future improvements will focus on enhanced machine learning models, optimized FSM logic, and broader industrial applications to increase the system's versatility and efficiency.