Home fitness intelligent guidance robot
Application background: During the epidemic, home fitness became one of the most effective ways to promote national fitness; it improved people's immunity and also had a positive psychological effect. Even after the epidemic ended, home fitness has remained popular, and live-streamed classes and fitness apps keep appearing, but few of them offer guidance on, or correction of, the user's posture. To address this, we built an intelligent fitness guidance robot that assists people exercising at home.

Introduction to the work: This work, "Home Fitness Intelligent Guidance Robot", records the user's movements with a camera, transmits the captured frames to a computer in real time, and performs human pose estimation and motion recognition on them. A skeleton model is built from the key points that make up the structure of the human body (ankles, knees, shoulders, elbows, and wrists) together with the orientations of the limbs. The system first captures the person in the frame, then analyzes how standard their movements are, uses a progress bar and on-screen text to indicate completion and correctness, and counts the number of qualifying repetitions. Five exercise states can be distinguished: sit-ups, arm raises, jumping jacks, leg raises, and push-ups. When the program starts, the main page appears, as shown in Figure 1 (interface on startup). After the user selects the exercise they want to perform, the current time and the selected exercise type are shown at the top. The user then moves into the camera's field of view, and the computer displays the range of motion and the number of standard repetitions in real time.

For sit-ups and push-ups, if a movement is not standard, a "Fix Form" prompt appears in the upper-right corner together with a voice announcement. When the movement is standard, "Up" and "Down" prompts indicate the next phase. As shown in Figure 2 (runtime interface), when the user lowers their body during a push-up, "Up" is displayed to indicate that the body can now be raised. At the same time, the progress bar updates in real time to show the progress of the current repetition, and when a standard repetition is completed the current count is announced by voice. For arm raises and leg raises, progress bars and text prompts appear on both the left and right sides, and the movement state and correctness of each side are judged separately. A standard repetition is counted when both sides perform standard movements, or when one side performs two standard movements in a row.

Key innovations: 1. The robot connects the skeleton key points of each action and measures the angles of the limbs to determine the range of motion. It compares the measured pose with the standard exercise posture and checks that the angle difference stays within an allowed margin of error, thereby judging whether the movement is standard. 2. The robot performs motion analysis directly on the live image and displays the results in real time: by drawing the key-point connections and angle values on the frame, the user can see at a glance which part of the body moves through the largest range, i.e. which muscles the movement mainly exercises, helping the user train in a more targeted way. 3. The robot shows the progress of each repetition in several ways (a progress bar and a percentage), so users can check intuitively and comprehensively whether a repetition has been fully completed, and it uses text and voice prompts to keep the movements as standard as possible and improve exercise efficiency.

Promotion and application value: Beyond judging whether an exercise movement is standard, this robot can to some extent serve as a movement warning system. For example, it could alert a hospital patient whose range of motion becomes inappropriate, reminding them not to continue the current posture or movement. For posture correction, it could raise an alarm when a user hunches over severely, reminding them to straighten up in time, and it could remind a tired or drowsy user to take a rest.
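The angle-comparison rule from innovation 1 and the progress-bar mapping can be sketched as follows. This is a minimal, self-contained example: the angle formula and the `np.interp` mapping of a 90–160° elbow angle onto 0–100% match what the project code does, while the example coordinates and the 15° tolerance are illustrative assumptions, not values taken from the project.

```python
import math

import numpy as np


def joint_angle(p1, p2, p3):
    """Angle in degrees (0-180) at point p2, formed by the segments p2-p1 and p2-p3."""
    a = math.degrees(math.atan2(p3[1] - p2[1], p3[0] - p2[0]) -
                     math.atan2(p1[1] - p2[1], p1[0] - p2[0]))
    a = abs(a) % 360
    return 360 - a if a > 180 else a


def is_standard(angle, target, tolerance=15.0):
    """A pose counts as 'standard' when the joint angle is within tolerance of the target."""
    return abs(angle - target) <= tolerance


# Shoulder-elbow-wrist almost collinear: the arm is nearly straight (target 180 degrees)
elbow = joint_angle((0, 0), (1, 0), (2, 0.1))
print(is_standard(elbow, 180))          # within 15 degrees of straight

# Progress-bar mapping as used in the push-up module: elbow angle 90-160 deg -> 0-100 %
per = np.interp(120, (90, 160), (0, 100))   # ~42.9 % of the repetition
```

The same two ingredients drive every detector below: `findAngle` computes the joint angle from three landmarks, and `np.interp` turns that angle into the percentage shown on the progress bar.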
# ---- main program ----
import sys
import time

import cv2
from PyQt5 import QtCore, QtGui, QtWidgets

from mode import Ui_MainWindow  # generated UI class (mode.py)
import fuwo    # push-up detector
import yangwo  # sit-up detector
import arm2    # arm-raise detector
import ttiao   # jumping-jack detector
import tai     # leg-raise detector
class Open_Camera(QtWidgets.QMainWindow, Ui_MainWindow):
    """Main window: opens the camera and routes frames to the chosen exercise detector."""

    def __init__(self):
        super().__init__()
        self.img1 = None
        self.img = None
        self.setupUi(self)
        # One timer per exercise mode; each fires every 50 ms to grab a frame
        self.timer1 = QtCore.QTimer()
        self.timer2 = QtCore.QTimer()
        self.timer3 = QtCore.QTimer()
        self.timer4 = QtCore.QTimer()
        self.timer5 = QtCore.QTimer()
        self.lianjie()
        self.cap_video = cv2.VideoCapture()
        # Exercise detectors
        self.fuwo = fuwo.fuwo()      # push-ups
        self.yang = yangwo.yangwo()  # sit-ups
        self.shou = arm2.hand()      # arm raises
        self.beng = ttiao.he()       # jumping jacks
        self.ku = tai.tui()          # leg raises

    def lianjie(self):
        """Connect the UI buttons to their handlers."""
        self.exit.clicked.connect(self.close)
        self.yangwo.clicked.connect(self.open1)
        self.tiao.clicked.connect(self.open2)
        self.fu.clicked.connect(self.open3)
        self.leg.clicked.connect(self.open4)
        self.connect.clicked.connect(self.open5)
    def _start_exercise(self, timer, handler):
        """Open the camera, start the refresh timer, and show the start time."""
        timer.timeout.connect(handler)
        timer.start(50)
        self.show_mode.setText("")
        self.cap_video = cv2.VideoCapture(0)
        handler()
        self.show_count.setText(time.strftime('%Y-%m-%d %H:%M', time.localtime()))

    def open1(self):
        self._start_exercise(self.timer1, self.show_video1)

    def open2(self):
        self._start_exercise(self.timer2, self.show_video2)

    def open3(self):
        self._start_exercise(self.timer3, self.show_video3)

    def open4(self):
        self._start_exercise(self.timer4, self.show_video4)

    def open5(self):
        self._start_exercise(self.timer5, self.show_video5)

    def _display(self, frame):
        """Convert a processed BGR frame to a QPixmap and show it in the view label."""
        rgb = cv2.cvtColor(frame, cv2.COLOR_BGR2RGB)
        qimg = QtGui.QImage(rgb.data, rgb.shape[1], rgb.shape[0], rgb.shape[1] * 3,
                            QtGui.QImage.Format_RGB888)
        self.view.setPixmap(QtGui.QPixmap(qimg).scaled(self.view.width(), self.view.height()))

    def show_video1(self):
        ret, self.img = self.cap_video.read()
        if ret:
            self._display(self.yang.panduan1(self.img))  # sit-ups

    def show_video2(self):
        ret, self.img = self.cap_video.read()
        if ret:
            self._display(self.beng.zuhe(self.img))      # jumping jacks

    def show_video3(self):
        ret, self.img = self.cap_video.read()
        if ret:
            self._display(self.fuwo.panduan(self.img))   # push-ups

    def show_video4(self):
        ret, self.img = self.cap_video.read()
        if ret:
            self._display(self.shou.panduan4(self.img))  # arm raises

    def show_video5(self):
        ret, self.img = self.cap_video.read()
        if ret:
            self._display(self.ku.panduan5(self.img))    # leg raises
    def closeEvent(self, event):
        """Ask for confirmation before closing the window."""
        reply = QtWidgets.QMessageBox.question(
            self, 'Exit', 'Quit the program?',
            QtWidgets.QMessageBox.Yes | QtWidgets.QMessageBox.No,
            QtWidgets.QMessageBox.No)
        if reply == QtWidgets.QMessageBox.Yes:
            event.accept()
        else:
            event.ignore()


if __name__ == '__main__':
    from PyQt5 import QtCore
    # Render correctly on high-DPI displays
    QtCore.QCoreApplication.setAttribute(QtCore.Qt.AA_EnableHighDpiScaling)
    app = QtWidgets.QApplication(sys.argv)
    ui = Open_Camera()
    ui.show()
    sys.exit(app.exec_())
# ---- mode.py (generated UI) ----
# -*- coding: utf-8 -*-

# Form implementation generated from reading ui file 'mode.ui'
#
# Created by: PyQt5 UI code generator 5.15.9
#
# WARNING: Any manual changes made to this file will be lost when pyuic5 is
# run again. Do not edit this file unless you know what you are doing.

from PyQt5 import QtCore, QtGui, QtWidgets


class Ui_MainWindow(object):
    def setupUi(self, MainWindow):
        MainWindow.setObjectName("MainWindow")
        MainWindow.resize(1120, 855)
        self.centralwidget = QtWidgets.QWidget(MainWindow)
        self.centralwidget.setObjectName("centralwidget")
        self.line = QtWidgets.QFrame(self.centralwidget)
        self.line.setGeometry(QtCore.QRect(20, 80, 1101, 21))
        self.line.setFrameShape(QtWidgets.QFrame.HLine)
        self.line.setFrameShadow(QtWidgets.QFrame.Sunken)
        self.line.setObjectName("line")
        self.view = QtWidgets.QLabel(self.centralwidget)
        self.view.setGeometry(QtCore.QRect(40, 150, 581, 641))
        self.view.setStyleSheet("")
        self.view.setText("")
        self.view.setObjectName("view")
        self.show_mode = QtWidgets.QLabel(self.centralwidget)
        self.show_mode.setGeometry(QtCore.QRect(690, 30, 171, 33))
        font = QtGui.QFont()
        font.setPointSize(20)
        self.show_mode.setFont(font)
        self.show_mode.setText("")
        self.show_mode.setObjectName("show_mode")
        self.fu = QtWidgets.QPushButton(self.centralwidget)
        self.fu.setGeometry(QtCore.QRect(750, 700, 120, 50))
        font = QtGui.QFont()
        font.setPointSize(16)
        self.fu.setFont(font)
        self.fu.setObjectName("fu")
        self.yangwo = QtWidgets.QPushButton(self.centralwidget)
        self.yangwo.setGeometry(QtCore.QRect(750, 530, 120, 50))
        font = QtGui.QFont()
        font.setPointSize(16)
        self.yangwo.setFont(font)
        self.yangwo.setObjectName("yangwo")
        self.tiao = QtWidgets.QPushButton(self.centralwidget)
        self.tiao.setGeometry(QtCore.QRect(750, 620, 120, 50))
        sizePolicy = QtWidgets.QSizePolicy(QtWidgets.QSizePolicy.Expanding, QtWidgets.QSizePolicy.Fixed)
        sizePolicy.setHorizontalStretch(0)
        sizePolicy.setVerticalStretch(0)
        sizePolicy.setHeightForWidth(self.tiao.sizePolicy().hasHeightForWidth())
        self.tiao.setSizePolicy(sizePolicy)
        font = QtGui.QFont()
        font.setPointSize(16)
        self.tiao.setFont(font)
        self.tiao.setObjectName("tiao")
        self.leg = QtWidgets.QPushButton(self.centralwidget)
        self.leg.setGeometry(QtCore.QRect(930, 530, 120, 50))
        font = QtGui.QFont()
        font.setPointSize(16)
        self.leg.setFont(font)
        self.leg.setObjectName("leg")
        self.connect = QtWidgets.QPushButton(self.centralwidget)
        self.connect.setGeometry(QtCore.QRect(930, 620, 120, 50))
        font = QtGui.QFont()
        font.setPointSize(16)
        self.connect.setFont(font)
        self.connect.setObjectName("connect")
        self.exit = QtWidgets.QPushButton(self.centralwidget)
        self.exit.setGeometry(QtCore.QRect(930, 700, 120, 50))
        font = QtGui.QFont()
        font.setPointSize(16)
        self.exit.setFont(font)
        self.exit.setObjectName("exit")
        self.count = QtWidgets.QLabel(self.centralwidget)
        self.count.setGeometry(QtCore.QRect(50, 30, 102, 33))
        font = QtGui.QFont()
        font.setPointSize(20)
        self.count.setFont(font)
        self.count.setObjectName("count")
        self.mode = QtWidgets.QLabel(self.centralwidget)
        self.mode.setGeometry(QtCore.QRect(550, 30, 99, 33))
        font = QtGui.QFont()
        font.setPointSize(20)
        self.mode.setFont(font)
        self.mode.setObjectName("mode")
        self.show_count = QtWidgets.QLabel(self.centralwidget)
        self.show_count.setGeometry(QtCore.QRect(180, 30, 300, 33))
        font = QtGui.QFont()
        font.setPointSize(16)
        self.show_count.setFont(font)
        self.show_count.setText("")
        self.show_count.setObjectName("show_count")
        self.textEdit = QtWidgets.QTextEdit(self.centralwidget)
        self.textEdit.setGeometry(QtCore.QRect(720, 120, 350, 350))
        font = QtGui.QFont()
        font.setPointSize(16)
        self.textEdit.setFont(font)
        self.textEdit.setObjectName("textEdit")
        self.exit.raise_()
        self.line.raise_()
        self.view.raise_()
        self.show_mode.raise_()
        self.fu.raise_()
        self.yangwo.raise_()
        self.tiao.raise_()
        self.leg.raise_()
        self.connect.raise_()
        self.count.raise_()
        self.mode.raise_()
        self.show_count.raise_()
        self.textEdit.raise_()
        MainWindow.setCentralWidget(self.centralwidget)
        self.menubar = QtWidgets.QMenuBar(MainWindow)
        self.menubar.setGeometry(QtCore.QRect(0, 0, 1120, 26))
        self.menubar.setObjectName("menubar")
        MainWindow.setMenuBar(self.menubar)
        self.statusbar = QtWidgets.QStatusBar(MainWindow)
        self.statusbar.setObjectName("statusbar")
        MainWindow.setStatusBar(self.statusbar)

        self.retranslateUi(MainWindow)
        QtCore.QMetaObject.connectSlotsByName(MainWindow)

    def retranslateUi(self, MainWindow):
        _translate = QtCore.QCoreApplication.translate
        MainWindow.setWindowTitle(_translate("MainWindow", "MainWindow"))
        self.fu.setText(_translate("MainWindow", ""))
        self.yangwo.setText(_translate("MainWindow", ""))
        self.tiao.setText(_translate("MainWindow", ""))
        self.leg.setText(_translate("MainWindow", ""))
        self.connect.setText(_translate("MainWindow", ""))
        self.exit.setText(_translate("MainWindow", ""))
        self.count.setText(_translate("MainWindow", ":"))
        self.mode.setText(_translate("MainWindow", ""))
        self.textEdit.setHtml(_translate("MainWindow", "<!DOCTYPE HTML PUBLIC \"-//W3C//DTD HTML 4.0//EN\" \"http://www.w3.org/TR/REC-html40/strict.dtd\">\n"
"<html><head><meta name=\"qrichtext\" content=\"1\" /><style type=\"text/css\">\n"
"p, li { white-space: pre-wrap; }\n"
"</style></head><body style=\" font-family:\'SimSun\'; font-size:16pt; font-weight:400; font-style:normal;\">\n"
"<p align=\"center\" style=\" margin-top:0px; margin-bottom:0px; margin-left:0px; margin-right:0px; -qt-block-indent:0; text-indent:0px;\"> </p>\n"
"<p align=\"center\" style=\"-qt-paragraph-type:empty; margin-top:0px; margin-bottom:0px; margin-left:0px; margin-right:0px; -qt-block-indent:0; text-indent:0px;\"><br /></p>\n"
"<p align=\"center\" style=\"-qt-paragraph-type:empty; margin-top:0px; margin-bottom:0px; margin-left:0px; margin-right:0px; -qt-block-indent:0; text-indent:0px;\"><br /></p>\n"
"<p align=\"center\" style=\"-qt-paragraph-type:empty; margin-top:0px; margin-bottom:0px; margin-left:0px; margin-right:0px; -qt-block-indent:0; text-indent:0px;\"><br /></p>\n"
"<p align=\"center\" style=\" margin-top:0px; margin-bottom:0px; margin-left:0px; margin-right:0px; -qt-block-indent:0; text-indent:0px;\"><span style=\" font-size:20pt; color:#c82063;\"></span></p>\n"
"<p align=\"center\" style=\" margin-top:0px; margin-bottom:0px; margin-left:0px; margin-right:0px; -qt-block-indent:0; text-indent:0px;\"><span style=\" font-size:20pt; color:#c82063;\"></span></p></body></html>"))
# ---- arm2.py: arm-raise detector ----
import cv2
import numpy as np

import arm


class hand(arm.arm, arm.drow):
    """Arm-raise detector: judges the left and right arms separately."""

    def __init__(self, mode=False, complexity=1, smooth_landmarks=True,
                 enable_segmentation=False, smooth_segmentation=True,
                 detectionCon=0.5, trackCon=0.5):
        super().__init__(mode, complexity, smooth_landmarks, enable_segmentation,
                         smooth_segmentation, detectionCon, trackCon)
        self.left_drew = arm.drow()
        self.right_drew = arm.drow()
        self.form_zuo = 1   # left-side form flag
        self.form_you = 1   # right-side form flag
        self.count = 0      # right-side repetitions
        self.count1 = 0     # left-side repetitions
        self.Count = 0      # combined count
        self.direction = 0
        self.direction1 = 0
        self.feedback_you = "Fix Form"
        self.feedback_zuo = "Fix Form"

    def panduan4(self, img_outer):
        """Analyze one frame of the arm-raise exercise and draw the overlay."""
        img_outer = cv2.flip(img_outer, 1)
        img_outer = cv2.resize(img_outer, (600, 600))
        img = self.findPose(img_outer)
        lmList = self.findPosition(img, False)
        if len(lmList) != 0:
            # Right side: shoulder angle (elbow-shoulder-hip) and elbow angle
            shoulder_you = self.findAngle(img_outer, 13, 11, 23)
            elbow_you = self.findAngle(img_outer, 11, 13, 15)
            per_you = np.interp(shoulder_you, (0, 90), (0, 100))   # progress percentage
            bar_you = np.interp(shoulder_you, (0, 90), (380, 50))  # progress-bar height
            # Left side
            shoulder_zuo = self.findAngle(img_outer, 14, 12, 24)
            elbow_zuo = self.findAngle(img_outer, 12, 14, 16)
            per_zuo = np.interp(shoulder_zuo, (0, 90), (0, 100))
            bar_zuo = np.interp(shoulder_zuo, (0, 90), (380, 50))
            if elbow_you > 160 and shoulder_you > 90:
                self.form_you = 1
            if elbow_zuo > 160 and shoulder_zuo > 90:
                self.form_zuo = 1
            # Check for full range of motion on the right side
            if self.form_you == 1:
                if per_you <= 80:
                    if elbow_you >= 160:
                        if self.direction == 0:
                            self.direction = 1
                        self.feedback_you = "up"
                    else:
                        self.feedback_you = "Fix Form"
                if per_you == 100:
                    if elbow_you > 160 and shoulder_you > 90:
                        self.feedback_you = "Down"
                        if self.direction == 1:
                            self.count += 1
                            self.direction = 0
                    else:
                        self.feedback_you = "Fix Form"
            # Draw the right-side bar and feedback
            if self.form_you == 1:
                cv2.rectangle(img_outer, (575, 50), (600, 380), (0, 255, 0), 3)
                cv2.rectangle(img_outer, (575, int(bar_you)), (600, 380), (0, 255, 0), cv2.FILLED)
                cv2.putText(img_outer, f'{int(per_you)}%', (530, 430), cv2.FONT_HERSHEY_PLAIN, 2,
                            (255, 0, 0), 2)
                cv2.rectangle(img_outer, (440, 0), (600, 40), (255, 255, 255), cv2.FILLED)
                cv2.putText(img_outer, self.feedback_you, (500, 40), cv2.FONT_HERSHEY_PLAIN, 1.5,
                            (0, 255, 0), 2)
            # Check for full range of motion on the left side
            if self.form_zuo == 1:
                if per_zuo <= 80:
                    if elbow_zuo >= 150:
                        if self.direction1 == 0:
                            self.direction1 = 1
                        self.feedback_zuo = "up"
                    else:
                        self.feedback_zuo = "Fix Form"
                if per_zuo == 100:
                    if elbow_zuo > 160 and shoulder_zuo > 90:
                        self.feedback_zuo = "Down"
                        if self.direction1 == 1:
                            self.count1 += 1
                            self.direction1 = 0
                    else:
                        self.feedback_zuo = "Fix Form"
                # One combined repetition = a standard movement on both sides
                self.Count = (self.count + self.count1) / 2
            # Draw the left-side bar, combined counter, and feedback
            if self.form_zuo == 1:
                cv2.rectangle(img_outer, (0, 50), (25, 380), (0, 255, 0), 3)
                cv2.rectangle(img_outer, (0, int(bar_zuo)), (25, 380), (0, 255, 0), cv2.FILLED)
                cv2.putText(img_outer, f'{int(per_zuo)}%', (0, 430), cv2.FONT_HERSHEY_PLAIN, 2,
                            (255, 0, 0), 2)
                cv2.rectangle(img_outer, (0, 480), (100, 580), (0, 255, 0), cv2.FILLED)
                cv2.putText(img_outer, str(int(self.Count)), (15, 555), cv2.FONT_HERSHEY_PLAIN, 4,
                            (255, 0, 0), 5)
                cv2.rectangle(img_outer, (40, 0), (200, 40), (255, 255, 255), cv2.FILLED)
                cv2.putText(img_outer, self.feedback_zuo, (50, 40), cv2.FONT_HERSHEY_PLAIN, 1.5,
                            (0, 255, 0), 2)
        return img_outer
# ---- arm.py: pose utilities ----
import math

import cv2
import mediapipe as mp


class arm():
    """Thin wrapper around MediaPipe Pose: detection, landmark extraction, joint angles."""

    def __init__(self, mode=False, complexity=1, smooth_landmarks=True,
                 enable_segmentation=False, smooth_segmentation=True,
                 detectionCon=0.5, trackCon=0.5):
        self.mode = mode
        self.complexity = complexity
        self.smooth_landmarks = smooth_landmarks
        self.enable_segmentation = enable_segmentation
        self.smooth_segmentation = smooth_segmentation
        self.detectionCon = detectionCon
        self.trackCon = trackCon
        self.mpDraw = mp.solutions.drawing_utils
        self.mpPose = mp.solutions.pose
        self.pose = self.mpPose.Pose(self.mode, self.complexity, self.smooth_landmarks,
                                     self.enable_segmentation, self.smooth_segmentation,
                                     self.detectionCon, self.trackCon)

    def findPose(self, img, draw=True):
        """Run pose detection on a BGR frame; optionally draw the skeleton."""
        img = cv2.cvtColor(img, cv2.COLOR_BGR2RGB)
        self.results = self.pose.process(img)
        if self.results.pose_landmarks:
            if draw:
                self.mpDraw.draw_landmarks(img, self.results.pose_landmarks,
                                           self.mpPose.POSE_CONNECTIONS)
        return img

    def findPosition(self, img, draw=True):
        """Collect the landmark pixel coordinates as [id, x, y] triples."""
        self.lmList = []
        if self.results.pose_landmarks:
            for id, lm in enumerate(self.results.pose_landmarks.landmark):
                h, w, c = img.shape
                cx, cy = int(lm.x * w), int(lm.y * h)
                self.lmList.append([id, cx, cy])
                if draw:
                    cv2.circle(img, (cx, cy), 5, (255, 0, 0), cv2.FILLED)
        return self.lmList

    def findAngle(self, img, p1, p2, p3, draw=True):
        """Angle (0-180 degrees) at landmark p2, formed by landmarks p1-p2-p3."""
        x1, y1 = self.lmList[p1][1:]
        x2, y2 = self.lmList[p2][1:]
        x3, y3 = self.lmList[p3][1:]
        # Calculate the angle between the two limb segments
        angle = math.degrees(math.atan2(y3 - y2, x3 - x2) -
                             math.atan2(y1 - y2, x1 - x2))
        if angle < 0:
            angle += 360
        if angle > 180:
            angle = 360 - angle
        if draw:
            cv2.line(img, (x1, y1), (x2, y2), (255, 255, 255), 3)
            cv2.line(img, (x3, y3), (x2, y2), (255, 255, 255), 3)
            for x, y in ((x1, y1), (x2, y2), (x3, y3)):
                cv2.circle(img, (x, y), 5, (0, 0, 255), cv2.FILLED)
                cv2.circle(img, (x, y), 15, (0, 0, 255), 2)
            cv2.putText(img, str(int(angle)), (x2 - 50, y2 + 50),
                        cv2.FONT_HERSHEY_PLAIN, 2, (0, 0, 255), 2)
        return angle


class drow:
    """Fixed-length ring buffer of recent values, used for plotting."""

    def __init__(self, buffer_time=80, default_value=0):
        self.buffer = [default_value for _ in range(buffer_time)]

    def push(self, value):
        self.buffer.pop(0)         # drop the oldest value
        self.buffer.append(value)  # append the newest value
# ---- yangwo.py: sit-up detector ----
import cv2
import numpy as np

from arm import arm


class yangwo(arm):
    """Sit-up detector."""

    def __init__(self, mode=False, complexity=1, smooth_landmarks=True,
                 enable_segmentation=False, smooth_segmentation=True,
                 detectionCon=0.5, trackCon=0.5, form=0, count=0, feedback="Fix Form"):
        super().__init__(mode, complexity, smooth_landmarks, enable_segmentation,
                         smooth_segmentation, detectionCon, trackCon)
        self.feedback = feedback
        self.form = form
        self.direction = 0
        self.Count = count

    def panduan1(self, img):
        """Analyze one frame of the sit-up exercise and draw the overlay."""
        img = cv2.flip(img, 1)
        img1 = self.findPose(img, False)
        lmList = self.findPosition(img1, False)
        img1 = cv2.cvtColor(img1, cv2.COLOR_RGB2BGR)
        if len(lmList) != 0:
            elbow = self.findAngle(img1, 11, 13, 15)  # shoulder-elbow-wrist
            waist = self.findAngle(img1, 11, 23, 25)  # shoulder-hip-knee
            hip = self.findAngle(img1, 23, 25, 27)    # hip-knee-ankle
            # Percentage of completion of the sit-up
            per = np.interp(waist, (40, 120), (0, 100))
            # Bar position showing sit-up progress
            bar = np.interp(waist, (40, 120), (50, 380))
            # Check for full range of motion: knees bent, hands behind the head
            if per == 0:
                if hip < 90 and elbow < 50:
                    self.feedback = "down"
                    if self.direction == 0:
                        self.Count += 0.5
                        self.direction = 1
                else:
                    self.feedback = "Fix Form"
            if per == 100:
                if hip < 90 and elbow < 50:
                    self.feedback = "up"
                    if self.direction == 1:
                        self.Count += 0.5
                        self.direction = 0
                else:
                    self.feedback = "Fix Form"
            # Draw the progress bar
            cv2.rectangle(img1, (580, 50), (600, 380), (0, 255, 0), 3)
            cv2.rectangle(img1, (580, int(bar)), (600, 380), (0, 255, 0), cv2.FILLED)
            cv2.putText(img1, f'{100 - int(per)}%', (565, 430), cv2.FONT_HERSHEY_PLAIN, 2,
                        (255, 0, 0), 2)
            # Sit-up counter
            cv2.rectangle(img1, (0, 380), (100, 480), (0, 255, 0), cv2.FILLED)
            cv2.putText(img1, str(int(self.Count)), (25, 455), cv2.FONT_HERSHEY_PLAIN, 5,
                        (255, 0, 0), 5)
            # Feedback text
            cv2.rectangle(img1, (500, 0), (640, 40), (255, 255, 255), cv2.FILLED)
            cv2.putText(img1, self.feedback, (500, 40), cv2.FONT_HERSHEY_PLAIN, 2,
                        (0, 255, 0), 2)
        return img1
# ---- fuwo.py: push-up detector ----
import cv2
import numpy as np

from arm import arm


class fuwo(arm):
    """Push-up detector."""

    def __init__(self, mode=False, complexity=1, smooth_landmarks=True,
                 enable_segmentation=False, smooth_segmentation=True,
                 detectionCon=0.5, trackCon=0.5, form=0, count=0, feedback="Fix Form"):
        super().__init__(mode, complexity, smooth_landmarks, enable_segmentation,
                         smooth_segmentation, detectionCon, trackCon)
        self.feedback = feedback
        self.form = form
        self.direction = 0
        self.Count = count

    def panduan(self, img):
        """Analyze one frame of the push-up exercise and draw the overlay."""
        img = cv2.flip(img, 1)
        img1 = self.findPose(img, False)
        lmList = self.findPosition(img1, False)
        img1 = cv2.cvtColor(img1, cv2.COLOR_RGB2BGR)
        if len(lmList) != 0:
            elbow = self.findAngle(img1, 11, 13, 15)     # shoulder-elbow-wrist
            shoulder = self.findAngle(img1, 13, 11, 23)  # elbow-shoulder-hip
            hip = self.findAngle(img1, 11, 23, 25)       # shoulder-hip-knee
            # Percentage of completion of the push-up
            per = np.interp(elbow, (90, 160), (0, 100))
            # Bar position showing push-up progress
            bar = np.interp(elbow, (90, 160), (380, 50))
            # Check for full range of motion with the body kept straight
            if per == 0:
                if elbow <= 90 and hip > 160:
                    self.feedback = "Up"
                    if self.direction == 0:
                        self.Count += 0.5
                        self.direction = 1
                else:
                    self.feedback = "Fix Form"
            if per == 100:
                if elbow > 160 and shoulder > 40 and hip > 160:
                    self.feedback = "Down"
                    if self.direction == 1:
                        self.Count += 0.5
                        self.direction = 0
                else:
                    self.feedback = "Fix Form"
            # Draw the progress bar
            cv2.rectangle(img1, (580, 50), (600, 380), (0, 255, 0), 3)
            cv2.rectangle(img1, (580, int(bar)), (600, 380), (0, 255, 0), cv2.FILLED)
            cv2.putText(img1, f'{int(per)}%', (565, 430), cv2.FONT_HERSHEY_PLAIN, 2,
                        (255, 0, 0), 2)
            # Push-up counter
            cv2.rectangle(img1, (0, 380), (100, 480), (0, 255, 0), cv2.FILLED)
            cv2.putText(img1, str(int(self.Count)), (25, 455), cv2.FONT_HERSHEY_PLAIN, 5,
                        (255, 0, 0), 5)
            # Feedback text
            cv2.rectangle(img1, (500, 0), (640, 40), (255, 255, 255), cv2.FILLED)
            cv2.putText(img1, self.feedback, (500, 40), cv2.FONT_HERSHEY_PLAIN, 2,
                        (0, 255, 0), 2)
        return img1
# ---- tai.py: leg-raise detector ----
import cv2
import numpy as np

import arm


class tui(arm.arm, arm.drow):
    """Leg-raise detector: judges the left and right legs separately."""

    def __init__(self, mode=False, complexity=1, smooth_landmarks=True,
                 enable_segmentation=False, smooth_segmentation=True,
                 detectionCon=0.5, trackCon=0.5):
        super().__init__(mode, complexity, smooth_landmarks, enable_segmentation,
                         smooth_segmentation, detectionCon, trackCon)
        self.left_drew = arm.drow()
        self.right_drew = arm.drow()
        self.form_zuo = 1   # left-side form flag
        self.form_you = 1   # right-side form flag
        self.count = 0      # right-side repetitions
        self.count1 = 0     # left-side repetitions
        self.Count = 0      # combined count
        self.direction = 0
        self.direction1 = 0
        self.feedback_you = "Fix Form"
        self.feedback_zuo = "Fix Form"

    def panduan5(self, img_outer):
        """Analyze one frame of the leg-raise exercise and draw the overlay."""
        img_outer = cv2.flip(img_outer, 1)
        img_outer = cv2.resize(img_outer, (600, 600))
        img = self.findPose(img_outer)
        lmList = self.findPosition(img, False)
        if len(lmList) != 0:
            # Right side: hip angle (shoulder-hip-knee) and knee angle (hip-knee-ankle)
            hip_you = self.findAngle(img_outer, 11, 23, 25)
            knee_you = self.findAngle(img_outer, 23, 25, 27)
            per_you = np.interp(hip_you, (90, 160), (0, 100))   # progress percentage
            bar_you = np.interp(hip_you, (90, 160), (50, 380))  # progress-bar height
            # Left side
            hip_zuo = self.findAngle(img_outer, 12, 24, 26)
            knee_zuo = self.findAngle(img_outer, 24, 26, 28)
            per_zuo = np.interp(hip_zuo, (90, 160), (0, 100))
            bar_zuo = np.interp(hip_zuo, (90, 160), (50, 380))
            # Check for full range of motion on the right side
            if self.form_you == 1:
                if per_you > 80:
                    if knee_you >= 150:
                        if self.direction == 0:
                            self.direction = 1
                        self.feedback_you = "up"
                    else:
                        self.feedback_you = "Fix Form"
                if per_you == 0:
                    if knee_you < 90:
                        self.feedback_you = "Down"
                        if self.direction == 1:
                            self.count += 1
                            self.direction = 0
                    else:
                        self.feedback_you = "Fix Form"
            # Draw the right-side bar and feedback
            if self.form_you == 1:
                cv2.rectangle(img_outer, (575, 50), (600, 380), (0, 255, 0), 3)
                cv2.rectangle(img_outer, (575, int(bar_you)), (600, 380), (0, 255, 0), cv2.FILLED)
                cv2.putText(img_outer, f'{100 - int(per_you)}%', (530, 430), cv2.FONT_HERSHEY_PLAIN, 2,
                            (255, 0, 0), 2)
                cv2.rectangle(img_outer, (440, 0), (600, 40), (255, 255, 255), cv2.FILLED)
                cv2.putText(img_outer, self.feedback_you, (500, 40), cv2.FONT_HERSHEY_PLAIN, 1.5,
                            (0, 255, 0), 2)
            # Check for full range of motion on the left side
            if self.form_zuo == 1:
                if per_zuo > 80:
                    if knee_zuo >= 160:
                        if self.direction1 == 0:
                            self.direction1 = 1
                        self.feedback_zuo = "up"
                    else:
                        self.feedback_zuo = "Fix Form"
                if per_zuo == 0:
                    if knee_zuo < 90:
                        self.feedback_zuo = "Down"
                        if self.direction1 == 1:
                            self.count1 += 1
                            self.direction1 = 0
                    else:
                        self.feedback_zuo = "Fix Form"
                # One combined repetition = a standard movement on both sides
                self.Count = (self.count + self.count1) / 2
            # Draw the left-side bar, combined counter, and feedback
            if self.form_zuo == 1:
                cv2.rectangle(img_outer, (0, 50), (25, 380), (0, 255, 0), 3)
                cv2.rectangle(img_outer, (0, int(bar_zuo)), (25, 380), (0, 255, 0), cv2.FILLED)
                cv2.putText(img_outer, f'{100 - int(per_zuo)}%', (0, 430), cv2.FONT_HERSHEY_PLAIN, 2,
                            (255, 0, 0), 2)
                cv2.rectangle(img_outer, (0, 480), (100, 580), (0, 255, 0), cv2.FILLED)
                cv2.putText(img_outer, str(int(self.Count)), (15, 555), cv2.FONT_HERSHEY_PLAIN, 4,
                            (255, 0, 0), 5)
                cv2.rectangle(img_outer, (40, 0), (200, 40), (255, 255, 255), cv2.FILLED)
                cv2.putText(img_outer, self.feedback_zuo, (50, 40), cv2.FONT_HERSHEY_PLAIN, 1.5,
                            (0, 255, 0), 2)
        return img_outer
# ---- ttiao.py: jumping-jack detector ----
import cv2
import mediapipe as mp
import numpy as np

import jump as jp


class he(jp.BufferList):
    """Jumping-jack detector: counts jumps from the vertical motion of the hip center."""

    def __init__(self, buffer_time=80, default_value=0):
        super().__init__(buffer_time, default_value)
        self.mp_pose = mp.solutions.pose
        self.pose = self.mp_pose.Pose()
        self.count1 = 0
        self.direction = 0
        self.form = 0
        self.xia = [23, 24]    # hip landmarks
        self.shang = [11, 12]  # shoulder landmarks
        # Rolling buffers of recent vertical positions and thresholds
        self.center_y = jp.BufferList()
        self.center_y_up = jp.BufferList()
        self.center_y_down = jp.BufferList()
        self.center_y_pref_flip = jp.BufferList()
        self.center_y_flip = jp.BufferList()
        self.cy_max = 100
        self.cy_min = 100
        self.flip_flag = 250
        self.prev_flip_flag = 250
        self.cy = 0
        self.cx = 0
        self.cy_shoulder_hip = 0

    def zuhe(self, dst1):
        """Analyze one frame of the jumping-jack exercise and draw the overlay."""
        dst1 = cv2.flip(dst1, 1)
        imgRGB1 = cv2.cvtColor(dst1, cv2.COLOR_BGR2RGB)
        results = self.pose.process(imgRGB1)
        dst1 = cv2.cvtColor(imgRGB1, cv2.COLOR_RGB2BGR)
        height, width, _ = dst1.shape
        if results.pose_landmarks:
            # Center of the two hip landmarks (23, 24)
            landmarks = [
                (lm.x * width, lm.y * height)
                for i, lm in enumerate(results.pose_landmarks.landmark)
                if i in self.xia
            ]
            cx = int(np.mean([p[0] for p in landmarks]))
            cy = int(np.mean([p[1] for p in landmarks]))
            # Center of the two shoulder landmarks (11, 12)
            landmarks2 = [
                (lm.x * width, lm.y * height)
                for i, lm in enumerate(results.pose_landmarks.landmark)
                if i in self.shang
            ]
            cy2 = int(np.mean([p[1] for p in landmarks2]))
            self.cy_shoulder_hip = cy - cy2  # torso height in pixels
            # Smooth the hip height with the previous buffered value
            cy = int((cy + self.center_y.buffer[-1]) / 2)
            self.center_y.push(cy)
            # Exponentially-smoothed upper and lower bounds of the hip trajectory
            self.cy_max = 0.5 * self.cy_max + 0.5 * self.center_y.max()
            self.center_y_up.push(self.cy_max)
            self.cy_min = 0.5 * self.cy_min + 0.5 * self.center_y.min()
            self.center_y_down.push(self.cy_min)
            self.prev_flip_flag = self.flip_flag
            self.center_y_pref_flip.push(self.prev_flip_flag)
            dy = self.cy_max - self.cy_min
            # The flag flips when the hip crosses 55 % of its vertical range,
            # provided the range is large enough relative to the torso height
            if dy > 0.4 * self.cy_shoulder_hip:
                if cy > self.cy_max - 0.55 * dy and self.flip_flag == 150:
                    self.flip_flag = 250
                if 0 < cy < self.cy_min + 0.55 * dy and self.flip_flag == 250:
                    self.flip_flag = 150
            self.center_y_flip.push(self.flip_flag)
            if self.prev_flip_flag < self.flip_flag:
                self.count1 = self.count1 + 1  # one jump completed
            cv2.circle(dst1, (cx, cy - 75), 15, (0, 0, 255), -1)
        cv2.rectangle(dst1, (400, 0), (600, 70), (255, 255, 255), cv2.FILLED)
        cv2.putText(dst1, "count = " + str(self.count1), (420, 50),
                    cv2.FONT_HERSHEY_SIMPLEX, 1, (0, 0, 255), 2)
        return dst1
# ---- jump.py: ring buffer ----
class BufferList:
    """Fixed-length ring buffer of recent values."""

    def __init__(self, buffer_time=80, default_value=0):
        self.buffer = [default_value for _ in range(buffer_time)]

    def push(self, value):
        self.buffer.pop(0)         # drop the oldest value
        self.buffer.append(value)  # append the newest value

    def max(self):
        return max(self.buffer)

    def min(self):
        # Ignore zero placeholder values when taking the minimum
        buffer = [value for value in self.buffer if value]
        if buffer:
            return min(buffer)
        return 0