Yi-Tung Chan, Tai-Wei Hou, Yu-Lun Huang, Chun-Hao Huang, Chun-Han Chen
Published under the GPL3+ license

Autonomous Drowning-Prevention Edge Device

Implementation of edge computing on the Jetson "Nano", integrated with a drone, to not only prevent drowning but also reduce the risks of rescue.

Intermediate · Full instructions provided · 10 days

Things used in this project

Hardware components

NVIDIA Jetson Nano Developer Kit
×1
Flight Controller Cube Pixhawk 2.1
×1
Here2 GPS Module
×1
Transceiver Telemetry Radio V3 915MHz
×1
RadioLink AT9S 2.4GHz 10CH Transmitter
×1
Hobbywing Platinum 30A Pro 2-6S ESC OPTO For RC Model
×1
Eaglepower Brushless Flight Motor KV390
×1
SJCAM SJ4000 WiFi Camera
×1
Puffer Automatic Inflating Lifebuoy
×1
ZD550 Drone Frame
×1

Software apps and online services

Jetbot
yolov3-tiny-onnx-TensorRT
Mission Planner
Dronekit

Hand tools and fabrication machines

Hex Nut Wrench
Hex Screw Wrench
Soldering iron (generic)

Story


Schematics

Hardware architecture design diagram

Code

main.py

Python
ADED Project Source Code
from implement import image
from ctypes import c_bool
import torch.multiprocessing as mp
import multiprocessing as rmp
from copter_control import takeoff_and_patrolling

# Combine the drowning-inference process (implement.py) and the flight-control process
# (copter_control.py) and run them in parallel with multiprocessing.
if __name__ == '__main__':
    mp.set_start_method('spawn')
    stageA = mp.Value(c_bool, True)
    stageB = mp.Value(c_bool, True)
    p1 = mp.Process(target=image, args=(stageA, stageB, 0))
    p2 = rmp.Process(target=takeoff_and_patrolling, args=(5, stageA, stageB))
    p1.start()
    p2.start()
    p1.join()
    p2.join()
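
The two multiprocessing.Value(c_bool) flags are the only channel between the vision process and the flight-control process: implement.py clears them, copter_control.py polls them. The following minimal sketch of the same pattern (the worker/watcher names are hypothetical, not part of the project) can be run on its own to see the hand-off:

from ctypes import c_bool
import multiprocessing as mp
import time

def watcher(flag):
    # Poll the shared flag, just as takeoff_and_patrolling polls stageA/stageB.
    while flag.value:
        print('patrolling...')
        time.sleep(0.5)
    print('flag cleared -> hover')

def worker(flag):
    # Simulate the vision process deciding that someone is drowning.
    time.sleep(2)
    flag.value = False

if __name__ == '__main__':
    flag = mp.Value(c_bool, True)   # shared boolean, like stageA/stageB
    p1 = mp.Process(target=watcher, args=(flag,))
    p2 = mp.Process(target=worker, args=(flag,))
    p1.start(); p2.start()
    p1.join(); p2.join()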

copter_control.py

Python
from __future__ import print_function
import time
from dronekit import connect, VehicleMode, LocationGlobalRelative

def takeoff_and_patrolling(aTargetAltitude, stageA, stageB):
    """
    First, this function makes the drone patrolling around four points in a field, such as swimming pool, and hovering
    the drone in the sky while it detect someone. Then, if our device inference someone is drowning, it will throw the
    lifebuoy and hover overhead the drowning person to further indicate the position of drowning person.

    Args:
        aTargetAltitude: determine the altitude of the drone after takeoff
        stageA: A status condition of the drone. When the value is True, the drone will keep patrolling. When the drone
                detects a person, its value will become False and begin to hover at the fixed point.
        stageB: A status condition of the drone. When the value is True, the drone will keep patrolling. When the drone
                consider someone is drowning, its value will become False and start hovering at the fixed point to
                indicate the point of drowning person.
    Return:
         None
    """

    connection_string = '/dev/ttyACM0'
    print('Connecting to vehicle on: %s' % connection_string)
    vehicle = connect(connection_string, wait_ready=True)

    while not vehicle.is_armable:
        print(" Waiting for vehicle to initialise...")
        time.sleep(1)

    print("Arming motors")
    vehicle.mode = VehicleMode("GUIDED")

    while not vehicle.armed:
    # Gives the drone operator time to move away from the drone before the flight starts.
        print(" Waiting for arming...")
        time.sleep(1)

    print("Taking off!")
    vehicle.simple_takeoff(aTargetAltitude)  # Take off to target altitude

    while True: # Break and return from function just below target altitude.
        print(" Altitude: ", vehicle.location.global_relative_frame.alt)
        if vehicle.location.global_relative_frame.alt >= aTargetAltitude * 0.95:
            print("Reached target altitude")
            break
        time.sleep(1)

    # Patrol the four corner waypoints in order while both stage flags stay True.
    waypoints = [LocationGlobalRelative(22.703998, 120.288625, 5),
                 LocationGlobalRelative(22.703687, 120.288624, 5),
                 LocationGlobalRelative(22.703697, 120.288459, 5),
                 LocationGlobalRelative(22.704047, 120.288444, 5)]
    mark = 0
    while stageB.value:
        time.sleep(2)
        if stageA.value:
            location_frame = vehicle.location.global_relative_frame
            target = waypoints[mark]
            print("Going towards point %d" % (mark + 1))
            vehicle.simple_goto(target, groundspeed=1)
            # Advance to the next waypoint once latitude, longitude and altitude match
            # to 5, 5 and 0 decimal places respectively.
            if ('%.5f' % location_frame.lat == '%.5f' % target.lat
                    and '%.5f' % location_frame.lon == '%.5f' % target.lon
                    and '%.0f' % location_frame.alt == '%.0f' % target.alt):
                mark = (mark + 1) % len(waypoints)
        else:
            # A person was detected: hover at the current position.
            point = vehicle.location.global_relative_frame
            vehicle.simple_goto(point, groundspeed=1)
    # Someone is drowning: hold position above them to mark the spot.
    point = vehicle.location.global_relative_frame
    vehicle.simple_goto(point, groundspeed=1)
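
takeoff_and_patrolling connects directly to the flight controller on /dev/ttyACM0, so it only runs on the real drone. For bench testing, DroneKit can be pointed at the ArduPilot SITL simulator instead; below is a minimal sketch assuming the optional dronekit-sitl package is installed (the simulator is our suggestion, not part of the project's hardware setup):

import dronekit_sitl
from dronekit import connect

# Start a simulated ArduCopter and connect to it instead of /dev/ttyACM0.
sitl = dronekit_sitl.start_default()
vehicle = connect(sitl.connection_string(), wait_ready=True)

print('Mode:', vehicle.mode.name)
print('Is armable:', vehicle.is_armable)

vehicle.close()
sitl.stop()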

implement.py

Python
import time
import numpy as np
import tensorrt as trt
import pycuda.driver as cuda
import pycuda.autoinit
import cv2
import torchvision
import torch
import torch.nn.functional as F
import torch.multiprocessing as mp
from PIL import ImageDraw, Image, ImageFont
from run_motor.main import save_drowning
from image_support.utils import preprocess
from image_support import common
from image_support.data_processing import PreprocessYOLO, PostprocessYOLO, ALL_CATEGORIES
import os

def load_model():
    """
    Introduce the model of Alexnet
    Returns:
        the model of Alexnet
    """
    alx_categories = ['drowning', 'not_drowning']
    device = torch.device('cuda')
    model = torchvision.models.alexnet(pretrained=True)
    model.classifier[-1] = torch.nn.Linear(4096, len(alx_categories))
    model = model.to(device)
    model.load_state_dict(torch.load('image_support/my_model.pth'))
    return model


def draw_bboxes(image_raw, bboxes, confidences, categories, all_categories, model, stageA, stageB):
    """
    Draw the danger index and the YOLOv3-Tiny detection results on the image.
    Args:
        image_raw: a raw PIL Image
        bboxes: NumPy array containing the bounding-box coordinates of N objects, with shape (N, 4)
        confidences: NumPy array containing the corresponding confidence for each object, with shape (N,)
        categories: NumPy array containing the corresponding category for each object, with shape (N,)
        all_categories: a list of all categories in the correct order (required for looking up the category name)
        model: the AlexNet model
        stageA: a status flag for the drone. While it is True, the drone keeps patrolling. When the drone
                detects a person, the flag becomes False and the drone begins to hover at a fixed point.
        stageB: a status flag for the drone. While it is True, the drone keeps patrolling. When the drone
                infers that someone is drowning, the flag becomes False and the drone hovers at a fixed point
                to indicate the position of the drowning person.
    Returns:
        the processed image
    """
    global start, danger_index
    original_image = image_raw
    draw = ImageDraw.Draw(image_raw)
    alx_categories = ['drowning', 'not_drowning']
    stageA.value = True
    font = ImageFont.truetype(font='/home/jetbot/ttf/time_new_roman.ttf', size=22)
    try:
        for box, score, category in zip(bboxes, confidences, categories):
            x_coord, y_coord, width, height = box
            left = max(0, np.floor(x_coord + 0.5).astype(int))
            top = max(0, np.floor(y_coord + 0.5).astype(int))
            right = min(image_raw.width, np.floor(x_coord + width + 0.5).astype(int))
            bottom = min(image_raw.height, np.floor(y_coord + height + 0.5).astype(int))
            if all_categories[category] == 'person':
                draw.rectangle(((left, top), (right, bottom)), outline='green', width=3)
                draw.text((left, top - 12), '{0} {1:.2f}'.format(all_categories[category], score), fill='green')
                start = time.time()
            else:
                draw.rectangle(((left, top), (right, bottom)), outline='blue', width=3)
                draw.text((left, top - 12), '{0} {1:.2f}'.format(all_categories[category], score), fill='blue')
    except Exception:
        # No detections (bboxes is None) or a malformed box: skip drawing.
        pass

    if time.time() - start <= 7:
        image_ndarray = cv2.cvtColor(np.asarray(original_image), cv2.COLOR_RGB2BGR)
        image_ndarray = cv2.resize(image_ndarray, (224, 224))
        preprocessed = preprocess(image_ndarray)
        output = model(preprocessed)
        output = F.softmax(output, dim=1).detach().cpu().numpy().flatten()
        alx_category_index = output.argmax()
        draw.text((10, 40), 'If someone is drowning: ' + str(alx_categories[alx_category_index] == 'drowning'),
                  fill=(255, 0, 0), font=font)
        print('If someone is drowning: ' + str(alx_categories[alx_category_index] == 'drowning'))
        if alx_categories[alx_category_index] == 'drowning':
            stageA.value = False
            danger_index = calculate_danger(danger_index, True, stageB)
        else:
            danger_index = calculate_danger(danger_index, False, stageB)
        draw.text((10, 70), 'danger confidence: ' + str(danger_index / 20), fill=(255, 0, 0), font=font)
        print('danger_index:', danger_index)

    return image_raw

def get_engine(engine_file_path):
    """
    The aims of the function is used to include trt files of YOLOV3-Tiny.
     Args:
         engine_file_path: file location
     Returns:
         contents of yolov3-tiny.trt
    """
    TRT_LOGGER = trt.Logger()
    with open(engine_file_path, "rb") as f, trt.Runtime(TRT_LOGGER) as runtime:
        return runtime.deserialize_cuda_engine(f.read())


def calculate_danger(danger_index, if_drowning, stageB):
    """
    This function is used to calculate the drowning confidence. The higher confidence values the function inferences,
    the more danger the person is.
     Args:
         danger_index: the danger index is a counter used to calculate drowning confidence of target.
         if_drowning: Whether there is the drowning person in the image.
         stageB: A status condition of the drone. When the value is True, the drone will keep patrolling. When the drone
                inference someone is drowning, its value will become False and start hovering at the fixed point to indicate
                the point of drowning person.
     Returns:
         new risk index
    """

    if if_drowning:
        danger_index = danger_index + 1
    else:
        danger_index = danger_index - 1

    # The index never drops below zero.
    if danger_index < 0:
        danger_index = 0

    # More than twenty net "drowning" frames trigger the lifebuoy drop and reset the counter.
    if danger_index > 20:
        save_drowning()
        danger_index = 0
        stageB.value = False

    return danger_index
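
# Worked example (illustrative, not from the original source): starting from
# danger_index == 0, the model must output 'drowning' for a net 21 frames before
# danger_index exceeds 20 and save_drowning() releases the lifebuoy. A brief false
# positive is harmless: each following 'not_drowning' frame decrements the index
# again, and the index is clamped at zero.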

def image(stageA,stageB,k=0):
    """
    Main function that combines the above functions can decide whether throw a lifebuoy or not.
     Args:
        stageA: A status condition of the drone. When the value is True, the drone will keep patrolling. When the drone
                detects a person, its value will become False and begin to hover at the fixed point.
        stageB: A status condition of the drone. When the value is True, the drone will keep patrolling. When the drone
                inference someone is drowning, its value will become False and start hovering at the fixed point to
                indicate the point of drowning person.
         k: Deciding the video will store in which folder in the Jeston Nano.
     Returns:
         None
    """

    global start, danger_index

    os.makedirs('video{}'.format(k), exist_ok=True)

    input_size = 416
    batch_size = 1
    engine_file_path = 'image_support/yolov3-tiny.trt'
    input_resolution_yolov3_HW = (input_size, input_size)
    preprocessor = PreprocessYOLO(input_resolution_yolov3_HW)
    output_shapes = [(batch_size, 255, 13, 13), (batch_size, 255, 26, 26)]
    postprocessor_args = {"yolo_masks": [(3, 4, 5), (0, 1, 2)],
                          "yolo_anchors": [(10,14),  (23,27),  (37,58),  (81,82),  (135,169),  (344,319)],
                          "obj_threshold": 0.5, 
                          "nms_threshold": 0.35,
                          "yolo_input_resolution": input_resolution_yolov3_HW}
    postprocessor = PostprocessYOLO(**postprocessor_args)
    print('Finished setting up YOLOv3-Tiny')
    # The above configures YOLOv3-Tiny.

    model = load_model()
    print('Loaded the AlexNet model')
    # Load the AlexNet classifier.

    danger_index = 0
    # default value of the danger index

    start = -8
    # default value for the countdown

    fourcc = cv2.VideoWriter_fourcc('m', 'p', '4', 'v')
    # Codec of the recorded video files.

    with get_engine(engine_file_path) as engine, engine.create_execution_context() as context:
        inputs, outputs, bindings, stream = common.allocate_buffers(engine)

        for i in range(60, 1800, 60):
            print('Recording the video from the {}th second to the {}th second'.format(i - 60, i))
            cap = cv2.VideoCapture(0)

            video_out = cv2.VideoWriter('./video{}/{}.mp4'.format(k, i), fourcc, 60.0, (640,480))

            start2 = time.time()

            while cap.isOpened():

                ret, image = cap.read()
                if not ret:
                    # Frame grab failed: stop recording, mirroring the camera-error branch below.
                    print('Camera error, please check it')
                    video_out.release()
                    cap.release()
                    return

                start1 = time.time()

                image = cv2.cvtColor(image, cv2.COLOR_BGR2RGB)
                camera_image = Image.fromarray(image)
                # Convert between the image formats of OpenCV (cv2) and Pillow (PIL).

                image_raw, image = preprocessor.process(camera_image)
                inputs[0].host = image
                trt_outputs = common.do_inference(context, bindings=bindings, inputs=inputs, outputs=outputs,
                                                  stream=stream, batch_size=batch_size)
                trt_outputs = [output.reshape(shape) for output, shape in zip(trt_outputs, output_shapes)]
                # Obtain the ROI by using YOLOV3-Tiny

                shape_orig_WH = image_raw.size
                boxes, classes, scores = postprocessor.process(trt_outputs, (shape_orig_WH))
                obj_detected_img = draw_bboxes(image_raw, boxes, scores, classes, ALL_CATEGORIES,
                                                            model, stageA, stageB)

                image = cv2.cvtColor(np.asarray(obj_detected_img), cv2.COLOR_RGB2BGR)

                end1 = time.time()

                fps = 1 / (end1 - start1)
                print('fps :', fps)
                cv2.putText(image, 'fps: '+str(round(fps,2)),(10,30),cv2.FONT_HERSHEY_COMPLEX_SMALL,1,(0,0,255),1)

                video_out.write(image)

                trt_outputs = []

                end2 = time.time()

                if end2 - start2 >= 60:
                    # Each segment is one minute long, matching the file naming above.
                    video_out.release()
                    cap.release()
                    print('Have recorded the video from {}th second to {}th second'.format(i - 60, i))
                    break
            else:
                print('Camera error, please check it')
                video_out.release()
                cap.release()
                break
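
get_engine() above only deserializes an existing image_support/yolov3-tiny.trt; the engine has to be built once from the ONNX model (the yolov3-tiny-onnx-TensorRT step in the software list). A rough sketch of that one-time conversion, assuming the TensorRT 7 Python API; the yolov3-tiny.onnx file name is illustrative:

import tensorrt as trt

TRT_LOGGER = trt.Logger()

def build_engine(onnx_path, engine_path):
    # Parse the ONNX graph, build a CUDA engine, and serialize it to disk.
    explicit_batch = 1 << int(trt.NetworkDefinitionCreationFlag.EXPLICIT_BATCH)
    with trt.Builder(TRT_LOGGER) as builder, \
            builder.create_network(explicit_batch) as network, \
            trt.OnnxParser(network, TRT_LOGGER) as parser:
        builder.max_workspace_size = 1 << 28  # 256 MiB fits the Jetson Nano
        with open(onnx_path, 'rb') as f:
            if not parser.parse(f.read()):
                for i in range(parser.num_errors):
                    print(parser.get_error(i))
                return None
        engine = builder.build_cuda_engine(network)
        with open(engine_path, 'wb') as f:
            f.write(engine.serialize())
        return engine

if __name__ == '__main__':
    build_engine('yolov3-tiny.onnx', 'image_support/yolov3-tiny.trt')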

Credits

Yi-Tung Chan
1 project • 2 followers

Tai-Wei Hou
0 projects • 1 follower

Yu-Lun Huang
0 projects • 1 follower

Chun-Hao Huang
0 projects • 0 followers

Chun-Han Chen
0 projects • 1 follower
