Gyms are quite popular these days 💪
While prioritizing health and fitness is excellent, safety remains a major concern in fitness center environments. Accidents and sudden health emergencies can occur during workouts, and quick response times are critical.
In this article, we will create an application that detects potentially injured individuals, specifically, people who have stopped moving near training machines. We will use the Raspberry Pi AI Camera, an edge AI camera module for Raspberry Pi powered by Sony's IMX500 Intelligent Vision Sensor.
Raspberry Pi Components
- Raspberry Pi (this article uses Raspberry Pi 5)
- microSD card with Raspberry Pi OS (64-bit) installed
IMPORTANT This application only works on Raspberry Pi OS Bookworm.
- AC adapter
- Raspberry Pi AI Camera
- Standard accessories: monitor, keyboard, mouse, HDMI cable, and so on.
Set up camera communication with the Raspberry Pi AI Camera according to the official documentation.
3. Install the Application Module Library

The Application Module Library from Sony Semiconductor Solutions is an SDK that provides tools for data visualization and application development.
Follow the instructions in the Application Module Library README file on GitHub to install the SDK.
Verify the installation by running a sample script provided in the Application Module Library repository:
# Activate the virtual environment where the Application Module Library (modlib) is installed
cd aitrios-rpi-application-module-library
python3 examples/aicam/posenet.py

If a window opens and human skeletal tracking is displayed correctly, then the environment is set up!
NOTE The complete code for the application introduced in this section is available from Sony Semiconductor Solutions on GitHub. Please refer to that for complete implementation details and the application structure.

Definition of an "Injured Person" in the Gym
In this application, we define a person as "injured" if they meet the following conditions:
A. They have not moved their limbs for a certain amount of time.
- Just observing the center position of the body cannot determine if someone is training, so we watch the movement of the limbs.
B. They are near a training machine.
- People resting away from the training machines are not included as "injured."
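Combining the two conditions, the core decision reduces to a single conjunction. A minimal sketch (the names here are illustrative, not the repository's API):

```python
def is_injured(in_area, limbs_moved_recently):
    # Flag only people who are near a machine (B) and motionless (A)
    return in_area and not limbs_moved_recently

print(is_injured(True, False))   # True: motionless next to a machine
print(is_injured(False, False))  # False: resting away from the machines
```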
Key features of this application:
- Uses PoseNet as an AI model
- Specifies part of the field of view for monitoring
- Performs simple time-series processing
- Visualizes processing results as a web application
1. Using PoseNet as an AI Model

To detect limb movement, we can use PoseNet, a model that estimates human skeletons.
Set up the model for the Raspberry Pi AI Camera as follows:
# Camera and model initialization function
from modlib.apps import Annotator
from modlib.devices import AiCamera
from modlib.models.zoo import Posenet

def initialize_camera():
    device = AiCamera()
    model = Posenet()
    device.deploy(model)
    return device, Annotator()

device, annotator = initialize_camera()

2. Specifying Part of the Field of View for Monitoring

In app.py, specify the area to monitor in pixel units. The total screen size is 640 × 480 pixels.
monitoring_area = {'x': 100, 'y': 100, 'width': 300, 'height': 300}

Calculate the body's center from the keypoints array (the skeletal feature points that PoseNet obtains), and then determine whether it's within the monitoring area (that is, whether the person is near a training machine).
# Function to calculate the center point of a person
def get_center_point(keypoints):
    valid_points = [kp for kp in keypoints if kp['x'] > 0.0 and kp['y'] > 0.0]
    if not valid_points:
        return None
    sum_x = sum(kp['x'] for kp in valid_points)
    sum_y = sum(kp['y'] for kp in valid_points)
    return {'x': sum_x / len(valid_points), 'y': sum_y / len(valid_points)}

# Function to check if a point is in the monitoring area
def is_point_in_area(point, area):
    if not point:
        return False
    return (point['x'] >= area['x'] and point['x'] <= area['x'] + area['width'] and
            point['y'] >= area['y'] and point['y'] <= area['y'] + area['height'])

3. Performing Simple Time-series Processing

For simple time-series processing, compare the oldest and newest frames within a time window to determine whether the limbs are moving.
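As a standalone illustration of this windowed comparison (toy data, not the application's code): keep only the samples inside the time window, then measure the change between the oldest and newest survivors.

```python
time_window = 6  # seconds
now = 100.0
positions = [
    {'timestamp': 92.0, 'x': 300},  # too old, falls outside the window
    {'timestamp': 95.0, 'x': 310},
    {'timestamp': 99.0, 'x': 312},
]
cutoff = now - time_window  # 94.0
valid = [p for p in positions if p['timestamp'] >= cutoff]
change = abs(valid[-1]['x'] - valid[0]['x'])
print(len(valid), change)  # 2 2
```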
Define the duration in seconds, movement change threshold, and check interval in seconds:
motion_settings = {'time_window': 6, 'movement_threshold': 10000, 'check_interval': 1}

Execute the check_movement() function at the defined check interval:
# Periodically check motion
current_time = time.time()
if current_time - person_tracker.last_check_time >= motion_settings['check_interval']:
    check_movement()
    person_tracker.last_check_time = current_time

In the check_movement() function, set the alert flag when the actual movement change is less than the predefined movement change threshold.
# Function to check motion
def check_movement():
    global person_tracker, alert_active
    # Do not check if no person is in the area
    if not person_tracker.in_area or len(person_tracker.positions) < 2:
        return
    current_time = time.time()
    # Get data within a certain time window
    cutoff_time = current_time - motion_settings['time_window']
    valid_positions = [p for p in person_tracker.positions if p['timestamp'] >= cutoff_time]
    # Check if there is enough data
    if len(valid_positions) >= 2:
        # Calculate the amount of motion
        movement_amount = calculate_movement_specific_keypoints(valid_positions)
        person_tracker.last_movement = movement_amount
        # Trigger alert if below the threshold
        if movement_amount < motion_settings['movement_threshold']:
            alert_active = True
            print("No Motion Detected:", movement_amount, "<", motion_settings['movement_threshold'])
        else:
            # Clear alert if there is enough motion
            alert_active = False
            print("Motion Detected :", movement_amount, ">=", motion_settings['movement_threshold'])

To measure actual movement, focus on keypoints that move frequently during exercises, such as the wrists and knees. You can select different keypoints based on your use case.
Define the wrist and knee IDs as follows, based on the model specifications:
MOTION_CHECK_KEYPOINTS = [9, 10, 13, 14]  # leftWrist, rightWrist, leftKnee, rightKnee

Find valid keypoints among the wrists and knees, calculate the movement amount (Euclidean distance) between frames, and use it as the motion change amount. For example, a wrist that moves from (100, 200) to (103, 204) contributes sqrt(3² + 4²) = 5 pixels.
# Function to calculate motion for specific keypoints (both wrists and both knees)
import numpy as np

def calculate_movement_specific_keypoints(positions):
    # Get the oldest and newest positions
    oldest = positions[0]
    newest = positions[-1]
    # Get frame size
    w, h = oldest['frame_size']
    total_distance = 0
    valid_keypoint_count = 0
    # Check only specific keypoints (both wrists and both knees)
    for keypoint_idx in MOTION_CHECK_KEYPOINTS:
        # Find corresponding keypoints in the old and new frames
        old_kp = next((kp for kp in oldest['keypoints'] if kp['id'] == keypoint_idx), None)
        new_kp = next((kp for kp in newest['keypoints'] if kp['id'] == keypoint_idx), None)
        # If keypoints are detected in both frames
        if old_kp and new_kp:
            # Convert normalized coordinates to pixel coordinates
            x0 = int(old_kp['x'] * w)
            y0 = int(old_kp['y'] * h)
            x1 = int(new_kp['x'] * w)
            y1 = int(new_kp['y'] * h)
            # Squared distance, then square root for the Euclidean distance
            distance = (x1 - x0)**2 + (y1 - y0)**2
            total_distance += np.sqrt(distance)
            valid_keypoint_count += 1
    # Return average distance if there are valid keypoints, otherwise 0
    if valid_keypoint_count > 0:
        return total_distance / valid_keypoint_count
    else:
        return 0

4. Visualizing Processing Results as a Web Application

The processing results and other values are stored in a global variable named frame_buffer. Sending this value as an MJPEG stream to the browser displays the processing results in real time.
def generate_frames():
    global frame_buffer
    while True:
        # Wait until the frame buffer is available
        if frame_buffer is not None:
            with frame_lock:
                yield (b'--frame\r\n'
                       b'Content-Type: image/jpeg\r\n\r\n' + frame_buffer + b'\r\n')
        # Short wait time
        time.sleep(0.03)

# Video stream endpoint
@app.route('/video_feed')
def video_feed():
    return Response(generate_frames(), mimetype='multipart/x-mixed-replace; boundary=frame')

These are the implementation features of the injured person detection application.
For more information and related files, including static files for the web application, see this Sony Semiconductor Solutions repository on GitHub.
Running the Application

Run the application with python3 app.py, and then access http://localhost:5000 in your browser to display the application.
If you encounter any issues while reading the article, please feel free to comment. Also, please check the support site below. Please note that it may take some time to respond to comments.
Conclusion

We've created an application that detects injured individuals at the gym, and we're excited about the possibilities!
Here's what makes this special: the features we used—skeletal detection with PoseNet, targeting specific viewing areas, simple time-series processing, and web app delivery—can be applied to countless other scenarios.
Ready to get creative? Start thinking about your own use cases and build something amazing!