ashish_8284
Published © Apache-2.0

Enhance Safety of Conveyor belt using OpenCV

Boosting safety with computer vision: detecting obstacles in real time saves $40K by safeguarding conveyor systems and preventing costly damage.

Advanced · Full instructions provided · Over 4 days · 399

Things used in this project

Hardware components

Bullet Network Camera, 2MP
A 2 MP bullet network camera used as the input device for the OpenCV Python library.
×1

Software apps and online services

VS Code
Microsoft VS Code
VS Code is used to write the code.
MQTT
MQTT
The MQTT broker is used to share data between the Python script, the Grafana dashboard, and the OPC gateway for further upward integration with WinCC SCADA.
KepServerEX, MQTT-to-OPC Gateway for SCADA Integration
KEPServerEX is used as the MQTT-to-OPC gateway. WinCC SCADA can read data from the OPC server, and the Python script can easily share data with the MQTT broker, so KEPServerEX is the best candidate to bridge the two protocols.
SIMATIC WINCC SCADA, Siemens
WinCC is the final UI, used to generate alarms in the engineering control room.
Grafana, historian visualization application
Grafana is used to analyze the data from the OpenCV image processing. This historian data is then used to fine-tune the script for better results.
InfluxDB
The Python script sends image-processing data to InfluxDB. This data is then used to fine-tune the script for better results.
Node-RED
Node-RED
Node-RED is used to store MQTT data into InfluxDB for historian analysis.
OpenCV
OpenCV
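As a sketch of the historian path above (Python script → MQTT → Node-RED → InfluxDB), the JSON payload published by the script can be converted into InfluxDB line protocol before it is written. The function below is only an illustration: the measurement name `conveyor` is hypothetical, and the field names simply mirror the keys the script publishes.

```python
import json
import time

def mqtt_payload_to_line_protocol(payload: str, measurement: str = "conveyor") -> str:
    """Convert the script's JSON MQTT payload into an InfluxDB line-protocol string."""
    fields = json.loads(payload)
    # Line protocol: <measurement> <field1>=<v1>,<field2>=<v2> <timestamp_ns>
    field_str = ",".join(f"{k}={float(v)}" for k, v in fields.items())
    return f"{measurement} {field_str} {time.time_ns()}"

# Example payload matching the keys the script publishes
sample = json.dumps({"FPS": 25, "area": 1234.5, "Box_W": 140})
line = mqtt_payload_to_line_protocol(sample)
```

In the actual project this conversion happens inside Node-RED rather than in Python; the sketch just shows what the stored record looks like.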

Story


Custom parts and enclosures

System Architecture

System architecture showing how the components are interconnected and how data flows from the camera to SCADA for alarm generation.

Schematics

System Architecture

System architecture showing how the components are interconnected and how data flows from the camera to SCADA for alarm generation.

Presentation

Detailed presentation of the project. A video of the project in action is embedded for better understanding.

Code

OpenCV Object Detection V01

Python
Python code to read the RTSP stream of an IP camera installed in front of the conveyor.
The script processes each captured frame as follows. The first step is to create the ROI using trackbars.
The second step is to apply perspective warping to map the selected polygon onto a rectangle.
The third step is to apply a mask to the ROI.
The fourth step is to detect contours using OpenCV's contour functions.
The fifth step is to sort the contours and calculate each contour's area using its height and width.
The area and width readings are then filtered using a running-average algorithm.
Finally, the filtered area and width are compared with the set area and width.
The area and width readings, along with FPS and other values, are converted into a JSON string.
The final step of the Python script is to publish the JSON string to the MQTT broker on a specific topic.
The conveyor SCADA system is subscribed to that topic and parses all the data.
The width and area readings are then processed in SCADA by comparing the captured readings with the set readings.
If either reading falls below its set value, the camera's live feed is overlaid on the SCADA screen together with an alarm.
The control room engineer checks the alarm and the camera's live feed, and if a suspect object is found caught in the conveyor, he or she immediately stops the conveyor to protect the belt.
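The running-average step described above can be sketched independently of the camera loop: the script keeps a fixed-size circular buffer of recent readings and compares the buffer mean with a set limit. Below is a minimal, self-contained version of that scheme; the buffer size and threshold here are illustrative, not the production values.

```python
import numpy as np

class RunningAverageFilter:
    """Fixed-size circular buffer, mirroring the arr/indx scheme in the script."""
    def __init__(self, size: int, initial: float):
        # Pre-fill with a healthy reading so startup does not trigger alarms
        self.buf = np.full(size, initial, dtype=np.float64)
        self.idx = 0

    def update(self, reading: float) -> float:
        # Overwrite the oldest slot and return the current buffer mean
        self.buf[self.idx] = reading
        self.idx = (self.idx + 1) % len(self.buf)
        return float(self.buf.mean())

# Example: measured belt widths drop, filtered value falls below the set limit
flt = RunningAverageFilter(size=4, initial=280.0)
for w in (280, 100, 100, 100):
    avg = flt.update(w)
alarm = avg < 270  # compare filtered width with the set value
```

The filtering makes the alarm robust against single-frame contour glitches, at the cost of a short detection delay proportional to the buffer size.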
#Import libraries
import cv2
import pyautogui
import random
import time
import json
import pickle
import numpy as np
import paho.mqtt.client as mqtt_client
from os import path
from datetime import datetime
#Variables
#Mask Setting Gypsum
# mask setting for yellow :- 
#hl,hh,sl,sh,bl,bh = 9,96,47,255,75,255
hl,hh,sl,sh,bl,bh = 12,93,55,250,40,255
#hl,hh,sl,sh,bl,bh = 53,92,23,134,91,202
#hl,hh,sl,sh,bl,bh = 53,103,11,231,30,240
roix,roiy,roiw,roih =162,87,331,95#59,50,538,161
#Reading Screen resolution from OS setting
dw, dh = pyautogui.size()
#Width & height of camera frame
width,height,width_per,height_per  = 630, 340,0.5,0.5
#Calculating final frame size
wa,wb,ha,hb = int((width/2)-(width/2*width_per)),int((height/2)-(height/2*height_per)),int((width/2)+(width/2*width_per)),int((height/2)+(height/2*height_per))
red,green,blue,white,black = (0,0,255),(0,255,0),(255,0,0),(255,255,255),(0,0,0)
blr,krnl = 1,5
file_save_dly =10
#Contour area
area = 0
brk = 0 
camUrl = 'rtsp://admin:adani@123@192.168.0.161:554/cam/realmonitor?channel=1&subtype=0'
#camUrl  = 'rtsp://admin:admin@123@192.168.0.161:554/cam/realmonitor?channel=1&subtype=0'
base_dir = 'C:/Users/dell/Documents/Python/MFImages'
extension = 'jpg'
frameSts = 0
ret = 0
fps = 25
dtct_cnt = 0
box_Width1,box_width2,box_ht1,box_ht2,new_w1,new_a1 = 0,0,0,0,280,12000
#Running average variables
avg_size = 500
indx,arr,arr1 =0,np.zeros(avg_size,dtype=np.int32),np.zeros(avg_size,dtype=np.int32)
arr[:] = new_a1
arr1[:] = new_w1
#Mqtt Broker setting
broker,port,client_id = '192.168.0.36',1883,f'python-mqtt-{random.randint(0, 1000)}'
topic,topic_sub =    "Ashish/Image_Process/PUB","Ashish/Image_Process/SUB"
username,password = 'openhabian','openhabian'
#Image warping settings
eve,mx,my = 0,0,0
x1,x2,x3,x4,x5 = (196,111),(483,135),(199,154),(480,166),0
pt1 = np.float32([[x1[0],x1[1]],[x2[0],x2[1]],[x3[0],x3[1]],[x4[0],x4[1]]])
pt2 = np.float32([[0,0],[width,0],[0,height],[width,height]])
#Path of the pickle file that stores the warp-point coordinates
roi_pkl_path = "C:/Users/dell/Documents/Python/CAM_MQTT/roi.pkl"
#Store warp-point coordinates in the .pkl file
def Writeroipkl():
    with open(roi_pkl_path,"wb") as roipkl:
        pickle.dump(x1,roipkl)
        pickle.dump(x2,roipkl)
        pickle.dump(x3,roipkl)
        pickle.dump(x4,roipkl)
#Read warp-point coordinates back from the .pkl file
def readriopkl():
    global x1,x2,x3,x4
    with open(roi_pkl_path,"rb") as rdpkl:
        print("read pickle")
        x1 = pickle.load(rdpkl)
        x2 = pickle.load(rdpkl)
        x3 = pickle.load(rdpkl)
        x4 = pickle.load(rdpkl)
        print(x1,x2,x3,x4)
#Load stored coordinates only if the pickle file already exists
if path.exists(roi_pkl_path):
    readriopkl()
#Initialized Starting time stamp
old_milli = time.time()
#Position Window for ROI
# Starting X
def cb7(val):
    global roix
    roix=val
# Starting Y    
def cb8(val):
    global roiy
    roiy=val
#Window Width
def cb9(val):
    global roiw
    roiw=val
#Window Height
def cb10(val):
    global roih
    roih=val
#Wrapping image point01
def p1x(val):
    global x1
    x1=(val,x1[1])
    Writeroipkl()
    readriopkl()
def p1y(val):
    global x1
    x1=(x1[0],val)
    Writeroipkl()
    readriopkl()
#Wrapping image point02
def p2x(val):
    global x2
    x2=(val,x2[1])
    Writeroipkl()
    readriopkl()
def p2y(val):
    global x2
    x2=(x2[0],val)
    Writeroipkl()
    readriopkl()
#Wrapping image point03
def p3x(val):
    global x3
    x3=(val,x3[1])
    Writeroipkl()
    readriopkl()
def p3y(val):
    global x3
    x3=(x3[0],val)
    Writeroipkl()
    readriopkl()
#Wrapping image point04    
def p4x(val):
    global x4
    x4=(val,x4[1])
    Writeroipkl()
    readriopkl()
def p4y(val):
    global x4
    x4=(x4[0],val)
    Writeroipkl()
    readriopkl()
#Move warped area
def movx(val):
    global x4
    x4=(val,x4[1])
def movy(val):
    global x4
    x4=(x4[0],val)
#Define window named frame
cv2.namedWindow("frame")
cv2.createTrackbar("move_X","frame",0,width,movx)
cv2.createTrackbar("move_Y","frame",0,height,movy)
#cv2.createTrackbar("ROI X","frame",0,width,cb7)
#cv2.createTrackbar("ROI Y","frame",0,height,cb8)
#cv2.createTrackbar("ROI WIDTH","frame",0,width,cb9)
#cv2.createTrackbar("ROI HEIGHT","frame",0,height,cb10)
#cv2.setTrackbarPos("ROI WIDTH","frame",roiw)
#cv2.setTrackbarPos("ROI HEIGHT","frame",roih)
#cv2.setTrackbarPos("ROI X","frame",roix)
#cv2.setTrackbarPos("ROI Y","frame",roiy)
#Define warp-point selection window
cv2.namedWindow("Wrap Sele")
cv2.resizeWindow("Wrap Sele",650,400)
cv2.createTrackbar("Wrap p1x","Wrap Sele",0,width,p1x)
cv2.createTrackbar("Wrap p1y","Wrap Sele",0,height,p1y)
cv2.setTrackbarPos("Wrap p1x","Wrap Sele",x1[0])
cv2.setTrackbarPos("Wrap p1y","Wrap Sele",x1[1])
cv2.createTrackbar("Wrap p2x","Wrap Sele",0,width,p2x)
cv2.createTrackbar("Wrap p2y","Wrap Sele",0,height,p2y)
cv2.setTrackbarPos("Wrap p2x","Wrap Sele",x2[0])
cv2.setTrackbarPos("Wrap p2y","Wrap Sele",x2[1])
cv2.createTrackbar("Wrap p3x","Wrap Sele",0,width,p3x)
cv2.createTrackbar("Wrap p3y","Wrap Sele",0,height,p3y)
cv2.setTrackbarPos("Wrap p3x","Wrap Sele",x3[0])
cv2.setTrackbarPos("Wrap p3y","Wrap Sele",x3[1])
cv2.createTrackbar("Wrap p4x","Wrap Sele",0,width,p4x)
cv2.createTrackbar("Wrap p4y","Wrap Sele",0,height,p4y)
cv2.setTrackbarPos("Wrap p4x","Wrap Sele",x4[0])
cv2.setTrackbarPos("Wrap p4y","Wrap Sele",x4[1])

#MQTT connect function
def connect_mqtt():
    def on_connect(client, userdata, flags, rc):
        if rc == 0:
            print("Connected to MQTT Broker!")
        else:
            print("Failed to connect, return code %d" % rc)
    client = mqtt_client.Client(client_id)
    client.username_pw_set(username, password)
    client.on_connect = on_connect
    client.connect(broker, port)
    return client

def take_Snap():
    #Construct file name using date and time of system
    date = datetime.now()
    file_name_format = "{:%d-%m-%Y_%H%M%S}.{:s}"
    file_name = file_name_format.format(date,extension)
    file_path = path.normpath(path.join(base_dir,file_name))
    #Resize frame to reduce size
    Small_Frame = cv2.resize(frame,(int(width/2),int(height/2)))
    #Storing image in predefined path
    cv2.imwrite(file_path,Small_Frame)
    #Console log to identify the file-saving instant
    print("Belt rip detected and snap stored at :- ",file_path)

#def publish(client):
#    if(area >1000):
#        msg= {"area":area,"FPS":fps,"Box_W":box_Width1,"Box_H":box_ht1}
#        msg = json.dumps(msg)
#        result = client.publish(topic, msg)
def subscribe(client: mqtt_client):
    def on_message(client, userdata, msg):
        print(f"Received `{msg.payload.decode()}` from `{msg.topic}` topic")
    client.subscribe(topic_sub)
    client.on_message = on_message

def work_with_captured_video(camera):
    global frameSts , brk, old_milli ,area ,ret, frame,fps,last_time,box_Width1,box_width2,box_ht1,box_ht2,new_w1,new_a1,indx,dtct_cnt
    while True:
        ret, frame = camera.read()
        if not ret:
            print("Frame not read")
            camera.release()
            return False
        else:
            #Calculating Loop time
            loop_time  = (time.time()-last_time)
            if loop_time > 0:
                fps_new = (1/loop_time)
                fps = int(fps_new*0.1 + fps*0.9)
            last_time = time.time()
            #Reading Frame
            frame = cv2.resize(frame,(width,height))    
            #Draw region of interest on original frame
            #cv2.rectangle(frame,(roix,roiy),((roix+roiw),(roiy+roih)),(green),1)
            #Crop region of interest from original frame
            frameroi = frame[roiy:roiy+roih,roix:roix+roiw]
            #Convert BGR to HSV of ROI
            framehsv = cv2.cvtColor(frameroi,cv2.COLOR_BGR2HSV)
            #Applying Gaussian blur to the HSV frame if required
            framehsv = cv2.GaussianBlur(framehsv,(krnl,krnl),int(blr))
            #Mask for High value range
            maskL=np.array([hl,sl,bl])
            #Mask for Low Value range
            maskH=np.array([hh,sh,bh])
            ##
            pt1 = np.float32([[x1[0],x1[1]],[x2[0],x2[1]],[x3[0],x3[1]],[x4[0],x4[1]]])
            framehsvwrp = cv2.cvtColor(frame,cv2.COLOR_BGR2HSV)
            matrix = cv2.getPerspectiveTransform(pt1,pt2)
            output = cv2.warpPerspective(framehsvwrp,matrix,(width,height))
            masked = cv2.inRange(output,maskL,maskH)
            contours, hierarchy = cv2.findContours(masked,cv2.RETR_CCOMP, cv2.CHAIN_APPROX_NONE)
            contours = sorted(contours,key=cv2.contourArea,reverse=True)
            wrapped_contor,Wrapped_area,Wrapped_width = 0,0,0
            for cnt in contours:
                area = cv2.contourArea(cnt)
                if wrapped_contor == 0:
                    _,_,wrp_wdt,_ = cv2.boundingRect(contours[0])
                    Wrapped_area = area
                    Wrapped_width = wrp_wdt
                    wrapped_contor = wrapped_contor+1
            cv2.imshow("output",output)  
            cv2.imshow('masked', masked)
            ##
            #Apply Mask on Blurred HSV Region of interest frame
            maskedHSV = cv2.inRange(framehsv,maskL,maskH)
            #Detecting contours from masked image
            #contours, _ = cv2.findContours(maskedHSV,cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_NONE)
            contours, _ = cv2.findContours(masked,cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_NONE)
            #Sorting of contours
            contours = sorted(contours,key=cv2.contourArea,reverse=True)
            #_,_,a,_ = cv2.boundingRect(contours[0])
            #_,_,b,_ = cv2.boundingRect(contours[1])
            #_,_,c,_ = cv2.boundingRect(contours[2])
            #_,_,d,_ = cv2.boundingRect(contours[3])
            #print(a,"   ",b,"   ",c,"   ",d)
            #Put text on ROI for FPS
            cv2.putText(frameroi,str(fps),(250,15),cv2.FONT_HERSHEY_COMPLEX,0.5,green,1)
            total_conours = 0
            #Extracting area, length and width of each contour
            for i,cnt in enumerate(contours):
                #Counting total contours by incrementing the loop count
                total_conours = total_conours+1
                #Extracting area of contour
                area = cv2.contourArea(cnt) 
                #Validating contour area before processing its value 
                if(area >1000):
                    #Calculating new area by applying a low-pass filter
                    new_a1 = (area*0.01 + new_a1*0.99)
                    #Extracting coordinates of contour
                    xb1,yb1,wb1,hb1 = cv2.boundingRect(cnt)
                    #Converting area and width of contour from int to string for display on frame
                    msg = "Pixel Area:" + str(area)
                    msg2 = "Safe Length:"+ str(wb1)
                    #Displaying Area and Width of contour on frame
                    cv2.putText(frameroi,str(msg),(xb1,yb1),cv2.FONT_HERSHEY_COMPLEX,0.5,red,1)
                    cv2.putText(frameroi,str(msg2),(xb1+int(wb1/2),yb1+int(hb1+5)),cv2.FONT_HERSHEY_COMPLEX,0.5,green,1)
                    #Draw contour on original frame
                    cv2.drawContours(frameroi,[cnt], -1, red, 1)  
                    #Extracting first contour width                   
                    if total_conours > 0:
                        _,_,w2,h2 = cv2.boundingRect(contours[0])
                        box_Width1 = w2
                        box_ht1 = h2
                        # Extracting second contour
                        if total_conours >1:
                            _,_,w3,h3 = cv2.boundingRect(contours[1])
                            box_width2 = w3
                            box_ht2 = h3
                    #Calculating new width by applying a low-pass filter
                    new_w1 = (wb1*0.05) + (new_w1*0.99)   
                    if (indx < avg_size):
                        arr[indx] = cv2.contourArea(contours[0])
                        arr1[indx] = wb1
                        indx = indx+1
                    if (indx == avg_size):
                        indx = 0 
                    area_per = int((np.sum(arr)/avg_size)*100/25000)
                    width_per = int((np.sum(arr1)/avg_size)*100/300)
                    weight = int(area_per*0.1+ width_per*0.9)
                    #Validating contour length   
                    if int(np.sum(arr1)/avg_size) < 270*2:
                        if dtct_cnt < 151:
                            dtct_cnt = dtct_cnt + 1
                            old_milli = time.time()
                            if(dtct_cnt == 150):
                                take_Snap()
                        else :
                            if (time.time() - old_milli) >= file_save_dly:
                                take_Snap()
                                old_milli = time.time()
                    else:
                        dtct_cnt = 0
                        old_milli = time.time()
                    #Creating JSON packet for data sharing 
                    msg= {"FPS":fps,"area":area/10,"run_avg":int(np.sum(arr)/avg_size)/10,"Box_W":wb1/2,"run_avgw":int(np.sum(arr1)/avg_size)/2,"weight":weight,"Wrapped_area":Wrapped_area,"Wrap_Wdt":Wrapped_width}
                    msg = json.dumps(msg)   
                    #Posting Json packet to mqtt broker
                    result = client.publish(topic, msg)
                #Consol log  
                #print(dtct_cnt," ",int(time.time())," ",int(time.time()-old_milli)," ",)
                #print(total_conours," ",int(np.sum(arr)/avg_size)," ",box_Width1," ",int(np.sum(arr1)/avg_size)/2," ",weight," ",box_width2," ",dtct_cnt," ",int(time.time()-old_milli),Wrapped_area,Wrapped_width)
            #Posting all-zero values if no contours are detected.
            if (total_conours == 0):
                msg= {"area":0,"FPS":0,"Box_W":0,"Box_H":0,"Flt_w":0,"run_avgw":0,"area_fltr":0,"run_avg":0}
                msg = json.dumps(msg)   
                #Posting Json packet to mqtt broker
                _ = client.publish(topic, msg)
            #Showing Frames on Screen
            frame_resize = cv2.resize(frame,(int(width*2),int(height*2)))
            #cv2.imshow('frameroi', frameroi)
            #cv2.imshow('framehsv', framehsv)
            #cv2.imshow('maskedHSV', maskedHSV)
            cv2.circle(frame,(x1),5,blue,2)
            cv2.circle(frame,(x2),5,blue,2)
            cv2.circle(frame,(x3),5,blue,2)
            cv2.circle(frame,(x4),5,blue,2) 
            cv2.imshow('frame', frame)
        #Waiting for exit key & stop execution
        if cv2.waitKey(1) & 0xFF == ord('q'):
            frameSts = 1
            brk = 1
            break
    return True
#MQTT connection
client = connect_mqtt()
#Looping MQTT Client
client.loop_start()
while True:
    #Subscribing MQTT Broker
    subscribe(client)
    #Creating Camera Object
    camera = cv2.VideoCapture(camUrl)
    #Printing Width & Height of Screen
    print(width,height)
    #Setting default parameters for camera
    camera.set(cv2.CAP_PROP_FRAME_WIDTH,width)
    camera.set(cv2.CAP_PROP_FRAME_HEIGHT,height)
    camera.set(cv2.CAP_PROP_FPS,25)
    #Executed if camera url is opened
    if camera.isOpened():
        CamSts = 1
        print('Camera is connected')
        last_time = time.time()
        #Working with camera feed; returns False if the camera feed stops responding
        response = work_with_captured_video(camera)
        #Validating camera response and setting a retry interval of 10 seconds before restart
        if response == False:
            print("frame failed wait for 10 Sec")
            time.sleep(10)
            continue
    #If camera is disconnected then print console log and set retry interval of 10 sec.
    else:
        print('Camera not connected')
        msg= {"area":0,"FPS":0,"Box_W":0,"Box_H":0,"Flt_w":0,"run_avgw":0,"area_fltr":0,"run_avg":0}
        msg = json.dumps(msg)   
        #Posting Json packet to mqtt broker
        _ = client.publish(topic, msg)
        CamSts = 0
        camera.release()
        time.sleep(10)
        continue
    #Exit from all loops and stop the script if the exit key is detected.
    if brk:
        print("Good Bye")
        break

Credits

ashish_8284
