Kutluhan Aktar
Published © CC BY

Multi-Model AI-Based Mechanical Anomaly Detector w/ BLE

Apply sound data-based anomalous behavior detection, diagnose the root cause via object detection concurrently, and inform the user via SMS.

Expert • Full instructions provided • 1,031 views

Things used in this project

Hardware components

PCBWay Custom PCB
×1
DFRobot FireBeetle 2 ESP32-S3
×1
DFRobot Beetle ESP32 - C3
×1
DFRobot Fermion: I2S MEMS Microphone
×1
DFRobot Fermion 2.0" IPS TFT LCD Display
×1
LattePanda 3 Delta 864
×1
SparkFun Long-Shaft Linear Potentiometer
×1
Button (6x6)
×2
5 mm Common Anode RGB LED
×1
Anycubic Kobra 2 Max
×1
Arduino Mega 2560
×1
Short-Shaft Linear Potentiometer
×2
SG90 Micro-servo motor
×2
Power Jack
×1
External Battery
×2
Solderless Breadboard Half Size
×1
Potentiometer Knob
×3
Jumper wires (generic)
×1

Software apps and online services

Edge Impulse Studio
Arduino IDE
MIT App Inventor 2
Twilio SMS Messaging API
Autodesk Fusion 360
KiCad
Ultimaker Cura
Thonny
XAMPP
Microsoft Visual Studio 2017

Hand tools and fabrication machines

Soldering iron (generic)
Hot glue gun (generic)

Story


Custom parts and enclosures

Edge Impulse Model (Arduino Library)

Edge Impulse FOMO Model (Arduino Library)

Gerber Files

Fabrication Files

Mechanical_Anomaly_Detector.apk

Mechanical_Anomaly_Detector.aia

mechanical_anomaly_detector_main_case.stl

mechanical_anomaly_detector_top_cover.stl

mechanical_anomaly_detector_pcb_holder.stl

mechanical_anomaly_test.stl

mechanical_anomaly_test_part.stl

mechanical_anomaly_test_pulley_connect.stl

logo.h

Schematics

PCB_0

PCB_1

PCB_2

PCB_3

PCB_4

PCB_5

PCB_6

FireBeetle 2 ESP32-S3

Beetle ESP32-C3

Code

AIoT_Mechanical_Anomaly_Detector_Audio.ino

Arduino
         /////////////////////////////////////////////  
        //   Multi-Model AI-Based Mechanical       //
       //        Anomaly Detector w/ BLE          //
      //             ---------------             //
     //           (Beetle ESP32 - C3)           //           
    //             by Kutluhan Aktar           // 
   //                                         //
  /////////////////////////////////////////////

//
// Apply sound data-based anomalous behavior detection, diagnose the root cause via object detection concurrently, and inform the user via SMS.
//
// For more information:
// https://www.theamplituhedron.com/projects/Multi_Model_AI_Based_Mechanical_Anomaly_Detector
//
//
// Connections
// Beetle ESP32 - C3 :
//                                Fermion: I2S MEMS Microphone
// 3.3V    ------------------------ 3v3
// 0       ------------------------ WS
// GND     ------------------------ SEL
// 1       ------------------------ SCK
// 4       ------------------------ DO
//                                Long-Shaft Linear Potentiometer 
// 2       ------------------------ S
//                                Control Button (A)
// 8       ------------------------ +
//                                Control Button (B)
// 9       ------------------------ +
//                                5mm Common Anode RGB LED
// 5       ------------------------ R
// 6       ------------------------ G
// 7       ------------------------ B
//                                FireBeetle 2 ESP32-S3
// RX (20) ------------------------ TX (43) 
// TX (21) ------------------------ RX (44)


// Include the required libraries:
#include <Arduino.h>
#include <ArduinoBLE.h>
#include <driver/i2s.h>

// Include the Edge Impulse neural network model converted to an Arduino library:
#include <AI-Based_Mechanical_Anomaly_Detector_Audio__inferencing.h>

// Define the required parameters to run an inference with the Edge Impulse neural network model.
#define sample_buffer_size 512
int16_t sampleBuffer[sample_buffer_size];

// Define the threshold value for the model outputs (predictions).
float threshold = 0.60;

// Define the anomaly class names:
String classes[] = {"anomaly", "normal"};

// Create the BLE service:
BLEService Anomaly_Detector("e1bada10-a728-44c6-a577-6f9c24fe980a");

// Create the data characteristics and allow the remote device (central) to read, write, and receive notifications:
BLEFloatCharacteristic audio_detected_Characteristic("e1bada10-a728-44c6-a577-6f9c24fe984a", BLERead | BLENotify);
BLEByteCharacteristic selected_img_class_Characteristic("e1bada10-a728-44c6-a577-6f9c24fe981a", BLERead | BLEWrite);
BLEByteCharacteristic given_command_Characteristic("e1bada10-a728-44c6-a577-6f9c24fe983a", BLERead | BLEWrite);
 
// Define the Fermion I2S MEMS microphone configurations.
#define I2S_SCK    1
#define I2S_WS     0
#define I2S_SD     4
#define DATA_BIT   (16) //16-bit
// Define the I2S processor port.
#define I2S_PORT I2S_NUM_0

// Define the potentiometer settings.
#define potentiometer_pin 2

// Define the RGB pin settings.
#define red_pin 5
#define green_pin 6
#define blue_pin 7

// Define the control buttons.
#define control_button_1 8
#define control_button_2 9

// Define the data holders:
volatile boolean _connected = false;
int predicted_class = -1;
int pre_pot_value = 0, selected_img_class = 0, audio_model_interval = 80;
long timer;

void setup(){
  Serial.begin(115200);
  // Initiate the serial communication between Beetle ESP32 - C3 and FireBeetle 2 ESP32-S3.
  Serial1.begin(9600, SERIAL_8N1, /*RX=*/20, /*TX=*/21);  

  pinMode(red_pin, OUTPUT); pinMode(green_pin, OUTPUT); pinMode(blue_pin, OUTPUT);
  digitalWrite(red_pin, HIGH); digitalWrite(green_pin, HIGH); digitalWrite(blue_pin, HIGH);

  pinMode(control_button_1, INPUT_PULLUP);
  pinMode(control_button_2, INPUT_PULLUP);
 
  // Configure the I2S port for the I2S microphone.
  i2s_install(EI_CLASSIFIER_FREQUENCY);
  i2s_setpin();
  i2s_start(I2S_PORT);
  delay(1000);

  // Check the BLE initialization status:
  while(!BLE.begin()){
    Serial.println("BLE initialization is failed!");
  }
  Serial.println("\nBLE initialization is successful!\n");
  // Print this peripheral device's address information:
  Serial.print("MAC Address: "); Serial.println(BLE.address());
  Serial.print("Service UUID Address: "); Serial.println(Anomaly_Detector.uuid()); Serial.println();

  // Set the local name this peripheral advertises: 
  BLE.setLocalName("BLE Anomaly Detector");
  // Set the UUID for the service this peripheral advertises:
  BLE.setAdvertisedService(Anomaly_Detector);

  // Add the given data characteristics to the service:
  Anomaly_Detector.addCharacteristic(audio_detected_Characteristic);
  Anomaly_Detector.addCharacteristic(selected_img_class_Characteristic);
  Anomaly_Detector.addCharacteristic(given_command_Characteristic);

  // Add the given service to the advertising device:
  BLE.addService(Anomaly_Detector);

  // Assign event handlers for connected and disconnected devices to/from this peripheral:
  BLE.setEventHandler(BLEConnected, blePeripheralConnectHandler);
  BLE.setEventHandler(BLEDisconnected, blePeripheralDisconnectHandler);

  // Assign event handlers for the data characteristics modified (written) by the central device (via the Android application).
  // In this regard, obtain the transferred (written) data packets from the Android application over BLE.
  selected_img_class_Characteristic.setEventHandler(BLEWritten, get_central_BLE_updates);
  given_command_Characteristic.setEventHandler(BLEWritten, get_central_BLE_updates);

  // Start advertising:
  BLE.advertise();
  Serial.println("Bluetooth device active, waiting for connections...");

  delay(5000);

  // Update the timer.
  timer = millis();
}
 
void loop(){
  // Poll for BLE events:
  BLE.poll();

  // If the potentiometer value is altered, change the selected image class for data collection manually.
  int current_pot_value = map(analogRead(potentiometer_pin), 360, 4096, 0, 10); 
  delay(100);
  if(abs(current_pot_value-pre_pot_value) > 1){
    if(current_pot_value == 0){ adjustColor(true, true, true); }
    if(current_pot_value > 0 && current_pot_value <= 3){ adjustColor(true, false, false); selected_img_class = 0; }
    if(current_pot_value > 3 && current_pot_value <= 7){ adjustColor(false, true, false); selected_img_class = 1; }
    if(current_pot_value > 7){ adjustColor(false, false, true); selected_img_class = 2; }
    pre_pot_value = current_pot_value;
  }

  // If the control button (A) is pressed, transfer the given image class (manually or via BLE) to FireBeetle 2 ESP32-S3 via serial communication.
  if(!digitalRead(control_button_1)){ Serial1.print("IMG_Class=" + String(selected_img_class)); delay(500); adjustColor(false, true, true); }

  // If the control button (B) is pressed, force FireBeetle 2 ESP32-S3 to run the object detection model even if the neural network model (microphone) has not detected a mechanical anomaly.
  if(!digitalRead(control_button_2)){ Serial1.print("Run Inference"); delay(500); adjustColor(true, false, true); }

  // At the model interval configured via the Android application (default: 80 seconds), run the Edge Impulse neural network model to detect mechanical anomalies via the I2S microphone.
  if(millis() - timer > audio_model_interval*1000){
    // Run inference.
    run_inference_to_make_predictions();
    // If the Edge Impulse neural network model detects a mechanical anomaly successfully:
    if(predicted_class > -1){
      // Update the audio detection characteristic via BLE.
      update_characteristics(predicted_class);
      delay(2000);
      // Make FireBeetle 2 ESP32-S3 run the object detection model to diagnose the root cause of the detected mechanical anomaly.
      if(classes[predicted_class] == "anomaly"){ Serial1.print("Run Inference"); delay(500); adjustColor(true, false, true); }
      // Clear the predicted class (label).
      predicted_class = -1;
    }
    // Update the timer:
    timer = millis();
  }
}

void run_inference_to_make_predictions(){
  // Summarize the Edge Impulse neural network model inference settings (from model_metadata.h):
  ei_printf("\nInference settings:\n");
  ei_printf("\tInterval: "); ei_printf_float((float)EI_CLASSIFIER_INTERVAL_MS); ei_printf(" ms.\n");
  ei_printf("\tFrame size: %d\n", EI_CLASSIFIER_DSP_INPUT_FRAME_SIZE);
  ei_printf("\tSample length: %d ms.\n", EI_CLASSIFIER_RAW_SAMPLE_COUNT / 16);
  ei_printf("\tNo. of classes: %d\n", sizeof(ei_classifier_inferencing_categories) / sizeof(ei_classifier_inferencing_categories[0]));

  // If the I2S microphone generates an audio (data) buffer successfully:
  bool sample = microphone_sample(2000);
  if(sample){
    // Run inference:
    ei::signal_t signal;
    // Create a signal object from the resized (scaled) audio buffer.
    signal.total_length = EI_CLASSIFIER_RAW_SAMPLE_COUNT;
    signal.get_data = &microphone_audio_signal_get_data;
    // Run the classifier:
    ei_impulse_result_t result = { 0 };
    EI_IMPULSE_ERROR _err = run_classifier(&signal, &result, false);
    if(_err != EI_IMPULSE_OK){
      ei_printf("ERR: Failed to run classifier (%d)\n", _err);
      return;
    }

    // Print the inference timings on the serial monitor.
    ei_printf("\nPredictions (DSP: %d ms., Classification: %d ms., Anomaly: %d ms.): \n",
        result.timing.dsp, result.timing.classification, result.timing.anomaly);

    // Obtain the prediction results for each label (class).
    for(size_t ix = 0; ix < EI_CLASSIFIER_LABEL_COUNT; ix++){
      // Print the prediction results on the serial monitor.
      ei_printf("%s:\t%.5f\n", result.classification[ix].label, result.classification[ix].value);
      // Obtain the predicted label (class) exceeding the given threshold.
      if(result.classification[ix].value >= threshold) predicted_class = ix;
    }
    if(predicted_class > -1) ei_printf("\nPredicted Class: %d [%s]\n", predicted_class, classes[predicted_class].c_str());

    // Detect anomalies, if any:
    #if EI_CLASSIFIER_HAS_ANOMALY == 1
      ei_printf("Anomaly: ");
      ei_printf_float(result.anomaly);
      ei_printf("\n");
    #endif 

    // Release the audio buffer.
    //ei_free(sampleBuffer);
  }
}

static int microphone_audio_signal_get_data(size_t offset, size_t length, float *out_ptr){
  // Convert the given microphone (audio) data (buffer) to the out_ptr format required by the Edge Impulse neural network model.
  numpy::int16_to_float(&sampleBuffer[offset], out_ptr, length);
  return 0;
}

bool microphone_sample(int range){
  // Display the collected audio data according to the given range (sensitivity).
  // Serial.print(range * -1); Serial.print(" "); Serial.print(range); Serial.print(" ");
 
  // Obtain the information generated by the I2S microphone and save it to the input buffer  sampleBuffer.
  size_t bytesIn = 0;
  esp_err_t result = i2s_read(I2S_PORT, &sampleBuffer, sample_buffer_size, &bytesIn, portMAX_DELAY);

  // If the I2S microphone generates audio data successfully:
  if(result == ESP_OK){
    Serial.println("\nAudio Data Generated Successfully!");
    
    // Amplify (scale) the audio buffer collected by the I2S microphone; otherwise, the sound may be too quiet for the given model.
    for(int x = 0; x < bytesIn/2; x++) {
      sampleBuffer[x] = (int16_t)(sampleBuffer[x]) * 8;
    }
      
    /*
    // Display the average audio data reading on the serial plotter.
    int16_t samples_read = bytesIn / 8;
    if(samples_read > 0){
      float mean = 0;
      for(int16_t i = 0; i < samples_read; ++i){ mean += (sampleBuffer[i]); }
      mean /= samples_read;
      Serial.println(mean);
    }
    */

    return true;
  }else{
    Serial.println("\nAudio Data Failed!");
    return false;
  }
}

void update_characteristics(float detection){
  // Update the selected characteristics over BLE.
  audio_detected_Characteristic.writeValue(detection);
  Serial.println("\n\nBLE: Data Characteristics Updated Successfully!\n");
}

void get_central_BLE_updates(BLEDevice central, BLECharacteristic characteristic){
  delay(500);
  // Obtain the recently transferred data packets from the central device over BLE.
  if(characteristic.uuid() == selected_img_class_Characteristic.uuid()){
    // Get the given image class for data collection.
    selected_img_class = selected_img_class_Characteristic.value();
    if(selected_img_class == 0) adjustColor(true, false, false);
    if(selected_img_class == 1) adjustColor(false, true, false);
    if(selected_img_class == 2) adjustColor(false, false, true);
    Serial.print("\nSelected Image Data Class (BLE) => "); Serial.println(selected_img_class);
    // Transfer the passed image class to FireBeetle 2 ESP32-S3 via serial communication.
    Serial1.print("IMG_Class=" + String(selected_img_class)); delay(500);
  }
  if(characteristic.uuid() == given_command_Characteristic.uuid()){
    int command = given_command_Characteristic.value();
    // Change the interval for running the neural network model (microphone) to detect mechanical anomalies.
    if(command < 130){
      audio_model_interval = command;
      Serial.print("\nGiven Model Interval (Audio) => "); Serial.println(audio_model_interval);
    // Force FireBeetle 2 ESP32-S3 to run the object detection model even if the neural network model (microphone) has not detected a mechanical anomaly.
    }else if(command > 130){
      Serial1.print("Run Inference"); delay(500); adjustColor(true, false, true);
    }
  }
}

void i2s_install(uint32_t sampling_rate){
  // Configure the I2S processor port for the I2S microphone (ONLY_LEFT).
  const i2s_config_t i2s_config = {
    .mode = i2s_mode_t(I2S_MODE_MASTER | I2S_MODE_RX),
    .sample_rate = sampling_rate,
    .bits_per_sample = i2s_bits_per_sample_t(DATA_BIT),
    .channel_format = I2S_CHANNEL_FMT_ONLY_LEFT,
    .communication_format = i2s_comm_format_t(I2S_COMM_FORMAT_STAND_I2S),
    .intr_alloc_flags = 0,
    .dma_buf_count = 8,
    .dma_buf_len = sample_buffer_size,
    .use_apll = false
  };
 
  i2s_driver_install(I2S_PORT, &i2s_config, 0, NULL);
}
 
void i2s_setpin(){
  // Set the I2S microphone pin configuration.
  const i2s_pin_config_t pin_config = {
    .bck_io_num = I2S_SCK,
    .ws_io_num = I2S_WS,
    .data_out_num = -1,
    .data_in_num = I2S_SD
  };
 
  i2s_set_pin(I2S_PORT, &pin_config);
}

void blePeripheralConnectHandler(BLEDevice central){
  // Central connected event handler:
  Serial.print("\nConnected event, central: ");
  Serial.println(central.address());
  _connected = true;
}

void blePeripheralDisconnectHandler(BLEDevice central){
  // Central disconnected event handler:
  Serial.print("\nDisconnected event, central: ");
  Serial.println(central.address());
  _connected = false;
}

void adjustColor(boolean r, boolean g, boolean b){
  if(r){ digitalWrite(red_pin, LOW); }else{digitalWrite(red_pin, HIGH);}
  if(g){ digitalWrite(green_pin, LOW); }else{digitalWrite(green_pin, HIGH);}
  if(b){ digitalWrite(blue_pin, LOW); }else{digitalWrite(blue_pin, HIGH);}
}
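
Before moving on to the camera sketch, the BLE interface above can also be exercised from a desktop. The snippet below is a minimal test client, assuming the Python bleak package (the project itself uses the MIT App Inventor application instead): it subscribes to the audio detection characteristic and writes the same byte values the Android application transfers to the image class and command characteristics.

# ble_test_client.py — a minimal desktop sketch, assuming the Python "bleak" package.
import asyncio
import struct
from bleak import BleakScanner, BleakClient

AUDIO_DETECTED_UUID     = "e1bada10-a728-44c6-a577-6f9c24fe984a"  # float, read/notify
SELECTED_IMG_CLASS_UUID = "e1bada10-a728-44c6-a577-6f9c24fe981a"  # byte, read/write
GIVEN_COMMAND_UUID      = "e1bada10-a728-44c6-a577-6f9c24fe983a"  # byte, read/write

def on_detection(_, data: bytearray):
    # BLEFloatCharacteristic transfers the predicted class index as a little-endian 32-bit float.
    print("Audio model detection =>", struct.unpack("<f", data)[0])

async def main():
    # Find the peripheral advertised as "BLE Anomaly Detector" by Beetle ESP32 - C3.
    devices = await BleakScanner.discover()
    device = next((d for d in devices if d.name == "BLE Anomaly Detector"), None)
    if device is None:
        print("Peripheral not found!")
        return
    async with BleakClient(device) as client:
        await client.start_notify(AUDIO_DETECTED_UUID, on_detection)
        # Select image class 1 (green) for data collection.
        await client.write_gatt_char(SELECTED_IMG_CLASS_UUID, bytes([1]))
        # Values below 130 set the audio model interval (in seconds); values above 130 force the camera inference.
        await client.write_gatt_char(GIVEN_COMMAND_UUID, bytes([60]))
        # Keep the connection open to receive audio model notifications.
        await asyncio.sleep(120)

asyncio.run(main())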

AIoT_Mechanical_Anomaly_Detector_Camera.ino

Arduino
         /////////////////////////////////////////////  
        //   Multi-Model AI-Based Mechanical       //
       //        Anomaly Detector w/ BLE          //
      //             ---------------             //
     //         (FireBeetle 2 ESP32-S3)         //           
    //             by Kutluhan Aktar           // 
   //                                         //
  /////////////////////////////////////////////

//
// Apply sound data-based anomalous behavior detection, diagnose the root cause via object detection concurrently, and inform the user via SMS.
//
// For more information:
// https://www.theamplituhedron.com/projects/Multi_Model_AI_Based_Mechanical_Anomaly_Detector
//
//
// Connections
// FireBeetle 2 ESP32-S3 :
//                                Fermion 2.0" IPS TFT LCD Display (320x240)
// 3.3V    ------------------------ V
// 17/SCK  ------------------------ CK
// 15/MOSI ------------------------ SI
// 16/MISO ------------------------ SO
// 18/D6   ------------------------ CS
// 38/D3   ------------------------ RT
// 3/D2    ------------------------ DC
// 21/D13  ------------------------ BL
// 9/D7    ------------------------ SC
//                                Beetle ESP32 - C3
// RX (44) ------------------------ TX (21)
// TX (43) ------------------------ RX (20)


// Include the required libraries:
#include <WiFi.h>
#include "esp_camera.h"
#include "FS.h"
#include "SD.h"
#include "DFRobot_GDL.h"
#include "DFRobot_Picdecoder_SD.h"

// Add the icons to be shown on the Fermion TFT LCD display.
#include "logo.h"

// Include the Edge Impulse FOMO model converted to an Arduino library:
#include <AI-Based_Mechanical_Anomaly_Detector_Camera__inferencing.h>
#include "edge-impulse-sdk/dsp/image/image.hpp"

// Define the required parameters to run an inference with the Edge Impulse FOMO model.
#define CAPTURED_IMAGE_BUFFER_COLS        240
#define CAPTURED_IMAGE_BUFFER_ROWS        240
#define EI_CAMERA_FRAME_BYTE_SIZE         3
uint8_t *ei_camera_capture_out;

// Define the component (part) class names:
String classes[] = {"red", "green", "blue"};

char ssid[] = "<__SSID__>";      // your network SSID (name)
char pass[] = "<__PASSWORD__>";  // your network password (use for WPA, or use as key for WEP)
int keyIndex = 0;                // your network key Index number (needed only for WEP)

// Define the server on LattePanda 3 Delta 864.
char server[] = "192.168.1.22";
// Define the web application path.
String application = "/mechanical_anomaly_detector/update.php";

// Initialize the WiFiClient object.
WiFiClient client; /* WiFiSSLClient client; */

// Define the onboard OV2640 camera pin configurations.
#define PWDN_GPIO_NUM     -1
#define RESET_GPIO_NUM    -1
#define XCLK_GPIO_NUM     45
#define SIOD_GPIO_NUM     1
#define SIOC_GPIO_NUM     2

#define Y9_GPIO_NUM       48
#define Y8_GPIO_NUM       46
#define Y7_GPIO_NUM       8
#define Y6_GPIO_NUM       7
#define Y5_GPIO_NUM       4
#define Y4_GPIO_NUM       41
#define Y3_GPIO_NUM       40
#define Y2_GPIO_NUM       39
#define VSYNC_GPIO_NUM    6
#define HREF_GPIO_NUM     42
#define PCLK_GPIO_NUM     5

// Since FireBeetle 2 ESP32-S3 has an independent camera power supply circuit, enable the AXP313A power output when using the camera.
#include "DFRobot_AXP313A.h"
DFRobot_AXP313A axp;

// Utilize the built-in MicroSD card reader on the Fermion 2.0" TFT LCD display (320x240).
#define SD_CS_PIN D7

// Define the Fermion TFT LCD display pin configurations.
#define TFT_DC  D2
#define TFT_CS  D6
#define TFT_RST D3

// Define the Fermion TFT LCD display object and integrated JPG decoder.
DFRobot_Picdecoder_SD decoder;
DFRobot_ST7789_240x320_HW_SPI screen(/*dc=*/TFT_DC,/*cs=*/TFT_CS,/*rst=*/TFT_RST);

// Create a struct (_data) including all resulting bounding box parameters:
struct _data {
  String x;
  String y;
  String w;
  String h;
};

// Define the data holders:
uint16_t class_color[3] = {COLOR_RGB565_RED, COLOR_RGB565_GREEN, COLOR_RGB565_BLUE};
volatile boolean s_init = true;
String data_packet = "";
int sample_number[3] = {0, 0, 0};
int predicted_class = -1;
struct _data box;

void setup() {
  Serial.begin(115200);
  // Initiate the serial communication between FireBeetle 2 ESP32-S3 and Beetle ESP32 - C3.
  Serial1.begin(9600, SERIAL_8N1, /*RX=*/44,/*TX=*/43);

  // Enable the independent camera power supply circuit (AXP313A) for the built-in OV2640 camera.
  while(axp.begin() != 0){
    Serial.println("Camera power init failed!");
    delay(1000);
  }
  axp.enableCameraPower(axp.eOV2640);

  // Define the OV2640 camera pin and frame settings.
  camera_config_t config;
  config.ledc_channel = LEDC_CHANNEL_0;
  config.ledc_timer = LEDC_TIMER_0;
  config.pin_d0 = Y2_GPIO_NUM;
  config.pin_d1 = Y3_GPIO_NUM;
  config.pin_d2 = Y4_GPIO_NUM;
  config.pin_d3 = Y5_GPIO_NUM;
  config.pin_d4 = Y6_GPIO_NUM;
  config.pin_d5 = Y7_GPIO_NUM;
  config.pin_d6 = Y8_GPIO_NUM;
  config.pin_d7 = Y9_GPIO_NUM;
  config.pin_xclk = XCLK_GPIO_NUM;
  config.pin_pclk = PCLK_GPIO_NUM;
  config.pin_vsync = VSYNC_GPIO_NUM;
  config.pin_href = HREF_GPIO_NUM;
  config.pin_sscb_sda = SIOD_GPIO_NUM;
  config.pin_sscb_scl = SIOC_GPIO_NUM;
  config.pin_pwdn = PWDN_GPIO_NUM;
  config.pin_reset = RESET_GPIO_NUM;
  config.xclk_freq_hz = 10000000;          // Set XCLK_FREQ_HZ to 10 MHz to avoid the EV-VSYNC-OVF error.
  config.frame_size = FRAMESIZE_240X240;   // FRAMESIZE_QVGA (320x240), FRAMESIZE_SVGA
  config.pixel_format = PIXFORMAT_RGB565;  // PIXFORMAT_JPEG
  config.grab_mode = CAMERA_GRAB_LATEST;   // CAMERA_GRAB_WHEN_EMPTY 
  config.fb_location = CAMERA_FB_IN_PSRAM;
  config.jpeg_quality = 10;
  config.fb_count = 2;                     // for CONFIG_IDF_TARGET_ESP32S3   

  // Initialize the OV2640 camera.
  esp_err_t err = esp_camera_init(&config);
  if (err != ESP_OK) {
    Serial.printf("Camera init failed with error 0x%x", err);
    return;
  }

  // Initialize the Fermion TFT LCD display. 
  screen.begin();
  screen.setRotation(2);
  delay(1000);

  // Initialize the MicroSD card module on the Fermion TFT LCD display.
  while(!SD.begin(SD_CS_PIN)){
    Serial.println("SD Card => No module found!");
    delay(200);
    return;
  }

  // Connect to WPA/WPA2 network. Change this line if using an open or WEP network.
  WiFi.mode(WIFI_STA);
  WiFi.begin(ssid, pass);
  // Attempt to connect to the given Wi-Fi network.
  while(WiFi.status() != WL_CONNECTED){
    // Wait for the network connection.
    delay(500);
    Serial.print(".");
  }
  // If connected to the network successfully:
  Serial.println("Connected to the Wi-Fi network successfully!");
}

void loop(){
  // Display the initialization screen.
  if(s_init){
    screen.fillScreen(COLOR_RGB565_BLACK);
    screen.drawXBitmap(/*x=*/(240-iron_giant_width)/2,/*y=*/(320-iron_giant_height)/2,/*bitmap gImage_Bitmap=*/iron_giant_bits,/*w=*/iron_giant_width,/*h=*/iron_giant_height,/*color=*/COLOR_RGB565_PURPLE);
    delay(1000);
  } s_init = false;

  
  // Obtain the data packet transferred by Beetle ESP32 - C3 via serial communication.
  if(Serial1.available() > 0){
    data_packet = Serial1.readString();
  }

  if(data_packet != ""){
    Serial.println("\nReceived Data Packet => " + data_packet);
    // If Beetle ESP32 - C3 transfers a component (part) class via serial communication:
    if(data_packet.indexOf("IMG_Class") > -1){
      // Decode the received data packet to elicit the passed class.
      int delimiter_1 = data_packet.indexOf("=");
      // Glean information as substrings.
      String s = data_packet.substring(delimiter_1 + 1);
      int given_class = s.toInt();
      // Capture a new frame (RGB565 buffer) with the OV2640 camera.
      camera_fb_t *fb = esp_camera_fb_get();
      if(!fb){ Serial.println("Camera => Cannot capture the frame!"); return; }
      // Convert the captured RGB565 buffer to JPEG buffer.
      size_t con_len;
      uint8_t *con_buf = NULL;
      if(!frame2jpg(fb, 10, &con_buf, &con_len)){ Serial.println("Camera => Cannot convert the RGB565 buffer to JPEG!"); return; }
      delay(500);
      // Depending on the given component (part) class, save the converted frame as a sample to the SD card.
      String file_name = "";
      file_name = "/" + classes[given_class] + "_" + String(sample_number[given_class]) + ".jpg";
      // After defining the file name by adding the sample number, save the converted frame to the SD card.
      if(save_image(SD, file_name.c_str(), con_buf, con_len)){
        screen.fillScreen(COLOR_RGB565_BLACK);
        screen.setTextColor(class_color[given_class]);
        screen.setTextSize(2);
        // Display the assigned class icon.
        screen.drawXBitmap(/*x=*/10,/*y=*/250,/*bitmap gImage_Bitmap=*/save_bits,/*w=*/save_width,/*h=*/save_height,/*color=*/class_color[given_class]);
        screen.setCursor(20+save_width, 255);
        screen.println("IMG Saved =>");
        screen.setCursor(20+save_width, 275);
        screen.println(file_name);
        delay(1000);
        // Increase the sample number of the given class.
        sample_number[given_class]+=1;
        Serial.println("\nImage Sample Saved => " + file_name);
        // Draw the recently saved image sample on the screen to notify the user.
        decoder.drawPicture(/*filename=*/file_name.c_str(),/*sx=*/0,/*sy=*/0,/*ex=*/240,/*ey=*/240,/*screenDrawPixel=*/screenDrawPixel);
        delay(1000);
      }else{
        screen.fillScreen(COLOR_RGB565_BLACK);
        screen.setTextColor(class_color[given_class]);
        screen.setTextSize(2);
        screen.drawXBitmap(/*x=*/10,/*y=*/250,/*bitmap gImage_Bitmap=*/save_bits,/*w=*/save_width,/*h=*/save_height,/*color=*/class_color[given_class]);
        screen.setCursor(20+save_width, 255);
        screen.println("SD Card =>");
        screen.setCursor(20+save_width, 275);
        screen.println("File Error!");
        delay(1000);
      }
      // Release the image buffers.
      free(con_buf);
      esp_camera_fb_return(fb); 
    // If requested, run the Edge Impulse FOMO model to make predictions on the root cause of the detected mechanical anomaly.       
    }else if(data_packet.indexOf("Run") > -1){
      // Capture a new frame (RGB565 buffer) with the OV2640 camera.
      camera_fb_t *fb = esp_camera_fb_get();
      if(!fb){ Serial.println("Camera => Cannot capture the frame!"); return; }
      // Run inference.
      run_inference_to_make_predictions(fb);
      // If the Edge Impulse FOMO model detects a component (part) class successfully:
      if(predicted_class > -1){
        // Define the query parameters, including the passed bounding box measurements.
        String query = "?results=OK&class=" + classes[predicted_class]
                     + "&x=" + box.x
                     + "&y=" + box.y
                     + "&w=" + box.w
                     + "&h=" + box.h;
        // Make an HTTP POST request to the given web application so as to transfer the model results, including the resulting image and the bounding box measurements.
        make_a_post_request(fb, query);
        // Notify the user of the detected component (part) class on the Fermion TFT LCD display.
        screen.fillScreen(COLOR_RGB565_BLACK);
        screen.setTextColor(class_color[predicted_class]);
        screen.setTextSize(4);
        // Display the gear icon with the assigned class color.
        screen.drawXBitmap(/*x=*/(240-gear_width)/2,/*y=*/(320-gear_height)/2,/*bitmap gImage_Bitmap=*/gear_bits,/*w=*/gear_width,/*h=*/gear_height,/*color=*/class_color[predicted_class]);
        screen.setCursor((240-(classes[predicted_class].length()*20))/2, ((320-gear_height)/2)+gear_height+30);
        String t = classes[predicted_class];
        t.toUpperCase();
        screen.println(t);
        delay(3000);                    
        // Clear the predicted class (label).
        predicted_class = -1;
      }else{
        screen.fillScreen(COLOR_RGB565_BLACK);
        screen.setTextColor(COLOR_RGB565_WHITE);
        screen.drawXBitmap(/*x=*/(240-gear_width)/2,/*y=*/(320-gear_height)/2,/*bitmap gImage_Bitmap=*/gear_bits,/*w=*/gear_width,/*h=*/gear_height,/*color=*/COLOR_RGB565_WHITE);
        delay(3000);            
      }
      // Release the image buffer.
      esp_camera_fb_return(fb);
    }
    // Clear the received data packet.
    data_packet = "";
    // Return to the initialization screen.
    delay(500);
    s_init = true;
  }
}

void run_inference_to_make_predictions(camera_fb_t *fb){
  // Summarize the Edge Impulse FOMO model inference settings (from model_metadata.h):
  ei_printf("\nInference settings:\n");
  ei_printf("\tImage resolution: %dx%d\n", EI_CLASSIFIER_INPUT_WIDTH, EI_CLASSIFIER_INPUT_HEIGHT);
  ei_printf("\tFrame size: %d\n", EI_CLASSIFIER_DSP_INPUT_FRAME_SIZE);
  ei_printf("\tNo. of classes: %d\n", sizeof(ei_classifier_inferencing_categories) / sizeof(ei_classifier_inferencing_categories[0]));
  
  if(fb){
    // Convert the captured RGB565 buffer to RGB888 buffer.
    ei_camera_capture_out = (uint8_t*)malloc(CAPTURED_IMAGE_BUFFER_COLS * CAPTURED_IMAGE_BUFFER_ROWS * EI_CAMERA_FRAME_BYTE_SIZE);
    if(!fmt2rgb888(fb->buf, fb->len, PIXFORMAT_RGB565, ei_camera_capture_out)){ Serial.println("Camera => Cannot convert the RGB565 buffer to RGB888!"); return; }

    // Depending on the given model, resize the converted RGB888 buffer by utilizing built-in Edge Impulse functions.
    ei::image::processing::crop_and_interpolate_rgb888(
      ei_camera_capture_out, // Output image buffer, can be same as input buffer
      CAPTURED_IMAGE_BUFFER_COLS,
      CAPTURED_IMAGE_BUFFER_ROWS,
      ei_camera_capture_out,
      EI_CLASSIFIER_INPUT_WIDTH,
      EI_CLASSIFIER_INPUT_HEIGHT);

    // Run inference:
    ei::signal_t signal;
    // Create a signal object from the converted and resized image buffer.
    signal.total_length = EI_CLASSIFIER_INPUT_WIDTH * EI_CLASSIFIER_INPUT_HEIGHT;
    signal.get_data = &ei_camera_cutout_get_data;
    // Run the classifier:
    ei_impulse_result_t result = { 0 };
    EI_IMPULSE_ERROR _err = run_classifier(&signal, &result, false);
    if(_err != EI_IMPULSE_OK){
      ei_printf("ERR: Failed to run classifier (%d)\n", _err);
      return;
    }

    // Print the inference timings on the serial monitor.
    ei_printf("\nPredictions (DSP: %d ms., Classification: %d ms., Anomaly: %d ms.): \n",
        result.timing.dsp, result.timing.classification, result.timing.anomaly);

    // Obtain the object detection results and bounding boxes for the detected labels (classes). 
    bool bb_found = result.bounding_boxes[0].value > 0;
    for(size_t ix = 0; ix < EI_CLASSIFIER_OBJECT_DETECTION_COUNT; ix++){
      auto bb = result.bounding_boxes[ix];
      if(bb.value == 0) continue;
      // Print the calculated bounding box measurements on the serial monitor.
      ei_printf("    %s (", bb.label);
      ei_printf_float(bb.value);
      ei_printf(") [ x: %u, y: %u, width: %u, height: %u ]\n", bb.x, bb.y, bb.width, bb.height);
      // Obtain the predicted label (class) and the detected object's bounding box measurements.
      if(strcmp(bb.label, "red") == 0) predicted_class = 0;
      if(strcmp(bb.label, "green") == 0) predicted_class = 1;
      if(strcmp(bb.label, "blue") == 0) predicted_class = 2;
      box.x = String(bb.x);
      box.y = String(bb.y);
      box.w = String(bb.width);
      box.h = String(bb.height);
      ei_printf("\nPredicted Class: %d [%s]\n", predicted_class, classes[predicted_class].c_str()); 
    }
    if(!bb_found) ei_printf("\nNo objects found!\n");

    // Detect anomalies, if any:
    #if EI_CLASSIFIER_HAS_ANOMALY == 1
      ei_printf("Anomaly: ");
      ei_printf_float(result.anomaly);
      ei_printf("\n");
    #endif 

    // Release the image buffer.
    free(ei_camera_capture_out);
  }
}

static int ei_camera_cutout_get_data(size_t offset, size_t length, float *out_ptr){
  // Convert the given image data (buffer) to the out_ptr format required by the Edge Impulse FOMO model.
  size_t pixel_ix = offset * 3;
  size_t pixels_left = length;
  size_t out_ptr_ix = 0;
  // Since the image data is converted to an RGB888 buffer, directly recalculate offset into pixel index.
  while(pixels_left != 0){  
    out_ptr[out_ptr_ix] = (ei_camera_capture_out[pixel_ix] << 16) + (ei_camera_capture_out[pixel_ix + 1] << 8) + ei_camera_capture_out[pixel_ix + 2];
    // Move to the next pixel.
    out_ptr_ix++;
    pixel_ix+=3;
    pixels_left--;
  }
  return 0;
}

void make_a_post_request(camera_fb_t * fb, String request){
  // Connect to the web application named mechanical_anomaly_detector. Replace '80' with '443' if you are using an SSL connection.
  if (client.connect(server, 80)){
    // If successful:
    Serial.println("\nConnected to the web application successfully!\n");
    // Create the query string:
    String query = application + request;
    // Make an HTTP POST request:
    String head = "--AnomalyResult\r\nContent-Disposition: form-data; name=\"resulting_image\"; filename=\"new_image.txt\"\r\nContent-Type: text/plain\r\n\r\n";
    String tail = "\r\n--AnomalyResult--\r\n";
    // Get the total message length.
    uint32_t totalLen = head.length() + fb->len + tail.length();
    // Start the request:
    client.println("POST " + query + " HTTP/1.1");
    client.println("Host: 192.168.1.22");
    client.println("Content-Length: " + String(totalLen));
    client.println("Connection: Keep-Alive");
    client.println("Content-Type: multipart/form-data; boundary=AnomalyResult");
    client.println();
    client.print(head);
    client.write(fb->buf, fb->len);
    client.print(tail);
    // Wait for the image buffer transfer to complete.
    delay(2000);
    // If successful:
    Serial.println("HTTP POST => Data transfer completed!\n");
  }else{
    Serial.println("\nConnection failed to the web application!\n");
    delay(2000);
  }
}

bool save_image(fs::FS &fs, const char *file_name, uint8_t *data, size_t len){
  // Create a new file on the SD card.
  volatile boolean sd_run = false;
  File file = fs.open(file_name, FILE_WRITE);
  if(!file){ Serial.println("SD Card => Cannot create file!"); return sd_run; }
  // Save the given image buffer to the created file on the SD card.
  if(file.write(data, len) == len){
      Serial.printf("SD Card => IMG saved: %s\n", file_name);
      sd_run = true;
  }else{
      Serial.println("SD Card => Cannot save the given image!");
  }
  file.close();
  return sd_run;  
}

void screenDrawPixel(int16_t x, int16_t y, uint16_t color){
  // Draw a pixel (point) on the Fermion TFT LCD display.
  screen.writePixel(x,y,color);
}
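
If you want to exercise this sketch without the Beetle ESP32 - C3 attached, the serial command protocol above ("IMG_Class=N" and "Run Inference") can be reproduced from a computer. The snippet below is only a hypothetical bench test: it assumes the Python pyserial package and a 3.3V USB-UART adapter wired to pins 44 (RX) and 43 (TX) at 9600 baud; the port name is an assumption as well.

# serial_bench_test.py — a hypothetical bench test, assuming the "pyserial" package.
import time
import serial

ser = serial.Serial("/dev/ttyUSB0", 9600, timeout=1)  # adjust the port name for your adapter

ser.write(b"IMG_Class=2")    # save a new "blue" class image sample to the SD card
time.sleep(5)
ser.write(b"Run Inference")  # force the Edge Impulse FOMO model to run on a new frame
ser.close()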

AIoT_Mechanical_Anomaly_Detector_Tester.ino

Arduino
         /////////////////////////////////////////////  
        //   Multi-Model AI-Based Mechanical       //
       //        Anomaly Detector w/ BLE          //
      //             ---------------             //
     //             (Arduino Mega)              //           
    //             by Kutluhan Aktar           // 
   //                                         //
  /////////////////////////////////////////////

//
// Apply sound data-based anomalous behavior detection, diagnose the root cause via object detection concurrently, and inform the user via SMS.
//
// For more information:
// https://www.theamplituhedron.com/projects/Multi_Model_AI_Based_Mechanical_Anomaly_Detector
//
//
// Connections
// Arduino Mega :
//                                Short-Shaft Linear Potentiometer (R)
// A0      ------------------------ S
//                                Short-Shaft Linear Potentiometer (L)
// A1      ------------------------ S
//                                SG90 Mini Servo Motor (R)
// D2      ------------------------ Signal
//                                SG90 Mini Servo Motor (L)
// D3      ------------------------ Signal


// Include the required libraries:
#include <Servo.h>

// Define servo motors.
Servo right;
Servo left;

// Define control potentiometers.
#define pot_right A0
#define pot_left A1

// Define the data holders.
int turn_right = 0, turn_left = 0;

void setup(){
  Serial.begin(9600);

  // Initiate servo motors.
  right.attach(2); 
  delay(500);
  left.attach(3);
  delay(500);
}

void loop(){
  // Depending on the potentiometer positions, turn servo motors (0 - 180).
  turn_right = map(analogRead(pot_right), 0, 1023, 0, 180);
  turn_left = map(analogRead(pot_left), 0, 1023, 0, 180);
  right.write(turn_right);                  
  delay(15);                          
  left.write(turn_left);
  delay(15);
}

class.php

PHP
<?php

// Include the Twilio PHP Helper Library. 
require_once 'twilio-php-main/src/Twilio/autoload.php';
use Twilio\Rest\Client;

// Define the _main class and its functions:
class _main {
	public $conn;
	private $twilio;
	
	public function __init__($conn){
		$this->conn = $conn;
		// Define the Twilio account information and object.
		$_sid    = "<__SID__>";
        $token  = "<__TOKEN__>";
        $this->twilio = new Client($_sid, $token);
		// Define the user and the Twilio-verified phone numbers.
		$this->user_phone = "+___________";
		$this->from_phone = "+___________";
	}
	
    // Database -> Insert Model Detection Results:
	public function insert_new_results($date, $img_name, $class){
		$sql_insert = "INSERT INTO `detections`(`date`, `img_name`, `class`) 
		               VALUES ('$date', '$img_name', '$class');"
			          ;
		if(mysqli_query($this->conn, $sql_insert)){ return true; } else{ return false; }
	}
	
	// Retrieve all model detection results and the assigned resulting image names from the detections database table, transferred by FireBeetle 2 ESP32-S3.
	public function get_model_results(){
		$date=[]; $class=[]; $img_name=[];
		$sql_data = "SELECT * FROM `detections` ORDER BY `id` DESC";
		$result = mysqli_query($this->conn, $sql_data);
		$check = mysqli_num_rows($result);
		if($check > 0){
			while($row = mysqli_fetch_assoc($result)){
				array_push($date, $row["date"]);
				array_push($class, $row["class"]);
				array_push($img_name, $row["img_name"]);
			}
			return array($date, $class, $img_name);
		}else{
			return array(["Not Found!"], ["Not Found!"], ["waiting.png"]);
		}
	}
	
	// Send an SMS to the registered phone number via Twilio so as to inform the user of the model detection results.
	public function Twilio_send_SMS($body){
		// Configure the SMS object.
        $sms_message = $this->twilio->messages
			->create($this->user_phone,
				array(
					   "from" => $this->from_phone,
                       "body" => $body
                     )
                );
		// Send the SMS.
		echo("SMS SID: ".$sms_message->sid);	  
	}
}

// Define database and server settings:
$server = array(
	"name" => "localhost",
	"username" => "root",
	"password" => "",
	"database" => "mechanical_anomaly"
);

$conn = mysqli_connect($server["name"], $server["username"], $server["password"], $server["database"]);

?>
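
class.php assumes a detections table already exists in the mechanical_anomaly database. The column names and the id ordering come from the queries above, while the column types below are assumptions. This setup sketch uses the mysql-connector-python package with the XAMPP defaults defined in class.php.

# create_table.py — a hedged setup sketch; the column types are assumptions.
import mysql.connector

conn = mysql.connector.connect(host="localhost", user="root", password="", database="mechanical_anomaly")
cur = conn.cursor()
cur.execute(
    "CREATE TABLE IF NOT EXISTS detections ("
    "  id INT AUTO_INCREMENT PRIMARY KEY,"
    "  `date` VARCHAR(255) NOT NULL,"
    "  `img_name` VARCHAR(255) NOT NULL,"
    "  `class` VARCHAR(255) NOT NULL"
    ")"
)
conn.commit()
conn.close()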

update.php

PHP
<?php

include_once "assets/class.php";

// Define the new 'anomaly' object:
$anomaly = new _main();
$anomaly->__init__($conn);

// Get the current date and time.
$date = date("Y_m_d_H_i_s");

// Define the resulting image file name.
$img_file = "%s_".$date;

// If FireBeetle 2 ESP32-S3 sends the model detection results, save the received information to the detections MySQL database table.
if(isset($_GET["results"]) && isset($_GET["class"]) && isset($_GET["x"])){
	$img_file = sprintf($img_file, $_GET["class"]);
	if($anomaly->insert_new_results($date, $img_file.".jpg", $_GET["class"])){
		echo "Detection Results Saved Successfully!";
	}else{
		echo "Database Error!";
	}
}

// If FireBeetle 2 ESP32-S3 transfers a resulting image after running the object detection model, save the received raw image buffer (RGB565) as a TXT file to the detections folder.
if(!empty($_FILES["resulting_image"]['name'])){
	// Image File:
	$received_img_properties = array(
	    "name" => $_FILES["resulting_image"]["name"],
	    "tmp_name" => $_FILES["resulting_image"]["tmp_name"],
		"size" => $_FILES["resulting_image"]["size"],
		"extension" => pathinfo($_FILES["resulting_image"]["name"], PATHINFO_EXTENSION)
	);
	
    // Check whether the uploaded file's extension is in the allowed file formats.
	$allowed_formats = array('jpg', 'png', 'bmp', 'txt');
	if(!in_array($received_img_properties["extension"], $allowed_formats)){
		echo 'FILE => File Format Not Allowed!';
	}else{
		// Check whether the uploaded file size exceeds the 5 MB data limit.
		if($received_img_properties["size"] > 5000000){
			echo "FILE => File size cannot exceed 5MB!";
		}else{
			// Save the uploaded file (image).
			move_uploaded_file($received_img_properties["tmp_name"], "./detections/".$img_file.".".$received_img_properties["extension"]);
			echo "FILE => Saved Successfully!";
		}
	}
	
	// Convert the recently saved RGB565 buffer (TXT file) to a JPG image file by executing the rgb565_converter.py file.
	// Transmit the passed bounding box measurements (query parameters) as Python Arguments.
	$raw_convert = shell_exec('python "C:\Users\kutlu\New E\xampp\htdocs\mechanical_anomaly_detector\detections\rgb565_converter.py" --x='.$_GET["x"].' --y='.$_GET["y"].' --w='.$_GET["w"].' --h='.$_GET["h"]);
	print($raw_convert);

	// After generating the JPG file, remove the converted TXT file from the server.
	unlink("./detections/".$img_file.".txt");
	
	// After saving the generated JPG file and the received model detection results to the MySQL database table successfully,
	// send an SMS to the given user phone number via Twilio in order to inform the user of the latest detection results, including the resulting image.
	$message_body = " Anomaly Detected "
	                ."\n\r\n\r Faulty Part: ".$_GET["class"]
			        ."\n\r\n\r Date: ".$date
				    ."\n\r\n\r  http://192.168.1.22/mechanical_anomaly_detector/detections/images/".$img_file.".jpg"
				    ."\n\r\n\r Please refer to the Android application to inspect the resulting image index.";
	$anomaly->Twilio_send_SMS($message_body);
}

?>
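
update.php can be tested without the FireBeetle 2 ESP32-S3 by replicating its multipart POST request. The snippet below is a minimal sketch assuming the Python requests package; the bounding box values and the sample_frame.txt payload are placeholders, not files generated by the project.

# post_test.py — mimics the multipart POST made by FireBeetle 2 ESP32-S3.
import requests

url = "http://192.168.1.22/mechanical_anomaly_detector/update.php"
params = {"results": "OK", "class": "red", "x": "8", "y": "16", "w": "48", "h": "48"}
# update.php expects the raw RGB565 frame as a text/plain upload named "resulting_image".
with open("sample_frame.txt", "rb") as payload:  # hypothetical 240x240 RGB565 dump
    files = {"resulting_image": ("new_image.txt", payload, "text/plain")}
    response = requests.post(url, params=params, files=files)
print(response.status_code, response.text)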

results.php

PHP
<?php

include_once "assets/class.php";

// Define the new 'anomaly' object:
$anomaly = new _main();
$anomaly->__init__($conn);

// Obtain the latest model detection results from the detections MySQL database table, including the resulting image names.
$date=[]; $class=[]; $img_name=[];
list($date, $class, $img_name) = $anomaly->get_model_results();
// Print the retrieved results as a list separated by commas.
$web_app_img_path = "http://192.168.1.22/mechanical_anomaly_detector/detections/images/";
$data_packet = "";
for($i=0;$i<3;$i++){
	if(isset($date[$i])){
		$data_packet .= $class[$i].",".$date[$i].",".$web_app_img_path.$img_name[$i].",";
	}else{
		$data_packet .= "Not Found!,Not Found!,".$web_app_img_path."waiting.png,";
	}
}

echo($data_packet);

?>
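
results.php prints the latest three detections as a single comma-separated packet (class, date, image URL per detection), which the MIT App Inventor application splits on the Android side. The snippet below is a small illustration, assuming the Python requests package, of how that packet can be parsed.

# parse_results.py — splits the results.php packet into (class, date, image URL) triplets.
import requests

packet = requests.get("http://192.168.1.22/mechanical_anomaly_detector/results.php").text
fields = [f for f in packet.split(",") if f]  # drop the trailing empty field
for detected_class, date, img_url in zip(fields[0::3], fields[1::3], fields[2::3]):
    print(detected_class, "|", date, "|", img_url)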

rgb565_converter.py

Python
import argparse
from glob import glob
import numpy as np
from PIL import Image, ImageDraw

# Obtain the passed bounding box measurements (query parameters) as Python Arguments.
if __name__ == '__main__':
    parser = argparse.ArgumentParser()
    parser.add_argument("--x", required=True, help="bounding box [X]")
    parser.add_argument("--y", required=True, help="bounding box [Y]")
    parser.add_argument("--w", required=True, help="bounding box [Width]")
    parser.add_argument("--h", required=True, help="bounding box [Height]")
    args = parser.parse_args()
    x = int(args.x)
    y = int(args.y)
    w = int(args.w)
    h = int(args.h)

    # Obtain all RGB565 buffer arrays transferred by FireBeetle 2 ESP32-S3 as text (.txt) files.
    path = "C:\\Users\\kutlu\\New E\\xampp\\htdocs\\mechanical_anomaly_detector\\detections"
    images = glob(path + "/*.txt")

    # Convert each RGB565 buffer (TXT file) to a JPG image file and save the generated image files to the images folder.
    for img in images:
        loc = path + "/images/" + img.split("\\")[8].split(".")[0] + ".jpg"
        size = (240,240)
        # RGB565 (uint16_t) to RGB (3x8-bit pixels, true color)
        raw = np.fromfile(img).byteswap(True)
        file = Image.frombytes('RGB', size, raw, 'raw', 'BGR;16', 0, 1)
        # Modify the converted RGB buffer (image) to draw the received bounding box on the resulting image.
        offset = 50
        m_file = ImageDraw.Draw(file)
        m_file.rectangle([(x, y), (x+w+offset, y+h+offset)], outline=(225,255,255), width=3)
        file.save(loc)
        #print("Converted: " + loc)

3gp_to_WAV.py

Python
from glob import glob
import os
from time import sleep

path = "/home/kutluhan/Desktop/audio_samples"
audio_files = glob(path + "/*.3gp")

for audio in audio_files:
    new_path = audio.replace("audio_samples/", "audio_samples/wav/")
    new_path = new_path.replace(".3gp", ".wav")
    os.system('ffmpeg -i ' + audio + ' ' + new_path)

Credits

Kutluhan Aktar

79 projects • 291 followers
Self-Taught Full-Stack Developer | @EdgeImpulse Ambassador | Maker | Independent Researcher
