An automatic fall detection system could be a significant asset in safety management, especially for elderly care facilities and hospitals, but also for people with mobility needs living at home. Using AI cameras and edge processing provides a robust, non-intrusive, and privacy-conscious real-time solution that can be installed quickly almost anywhere.
This project employs computer vision skeletal keypoint detection technology integrated with the Raspberry Pi AI Camera to deliver a lightweight, edge-computing fall detection solution.
Project overview

- Goal: To monitor human posture in real time and detect the occurrence of falls.
- Technology used:
- Human keypoint detection: Uses the HigherHRNet model to identify human keypoints (head, knees, feet, and so on).
- Posture analysis: Applies geometric calculations to analyze features of human posture (for example, angles between detected keypoints, such as head to knee to foot).
- Edge computing: Combines Raspberry Pi and Raspberry Pi AI Camera to perform on-device real-time inference and image processing.
The following images show how the system can detect whether a person is standing or falling.
This system uses the HigherHRNet model to detect human keypoints in images and outputs human keypoints in COCO format. HigherHRNet is a high-resolution, efficient, and accurate network that can capture rich details of human postures.
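For reference, the COCO keypoint format used by HigherHRNet orders 17 keypoints as listed below. The constant names are illustrative, not identifiers from the sample application:

```python
# COCO-17 keypoint order produced by COCO-trained pose models such as HigherHRNet.
COCO_KEYPOINTS = [
    "nose", "left_eye", "right_eye", "left_ear", "right_ear",
    "left_shoulder", "right_shoulder", "left_elbow", "right_elbow",
    "left_wrist", "right_wrist", "left_hip", "right_hip",
    "left_knee", "right_knee", "left_ankle", "right_ankle",
]

# Index groups used later when averaging the head, knee, and foot points.
HEAD_INDICES = [0, 1, 2, 3, 4]   # nose, eyes, ears
KNEE_INDICES = [13, 14]          # left/right knee
FOOT_INDICES = [15, 16]          # left/right ankle, standing in for the feet
```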
The main steps are as follows:
- Capture real-time video streams using edge devices.
- Perform human detection on each frame image and extract the human keypoint coordinates.
Process keypoints based on the following rules to simplify calculations:
Head Points:
- Select the 5 head keypoints in COCO format (nose, eyes, and ears) that meet the validity condition (score > 0).
- Calculate the average of these 5 points to get the final head coordinates.
Knees and Feet Points:
- Average the valid left and right knee keypoints to obtain a single overall knee point.
- Average the valid left and right foot keypoints to obtain a single overall foot point.
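The averaging rules above can be sketched as follows. The `(x, y, score)` keypoint layout and the helper names are assumptions for illustration, not the sample application's actual code:

```python
def average_valid(keypoints, indices, score_threshold=0.0):
    """Average the (x, y) positions of the keypoints at `indices` whose
    confidence score exceeds `score_threshold`; return None if none are valid."""
    valid = [(keypoints[i][0], keypoints[i][1])
             for i in indices if keypoints[i][2] > score_threshold]
    if not valid:
        return None
    n = len(valid)
    return (sum(x for x, _ in valid) / n, sum(y for _, y in valid) / n)

# COCO indices: 0-4 head (nose, eyes, ears), 13-14 knees, 15-16 ankles.
def head_knee_foot(keypoints):
    head = average_valid(keypoints, range(0, 5))  # mean of the valid head points
    knee = average_valid(keypoints, (13, 14))     # overall knee point
    foot = average_valid(keypoints, (15, 16))     # overall foot point
    return head, knee, foot
```

Keypoints with a zero score are skipped, so a partially occluded head still yields a usable head coordinate from whichever of its five points were detected.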
Analyze human posture through geometric calculations:
- Calculate the angle between the line from the head to the knee and the line from the knee to the foot.
- Compare the angle against a pre-set threshold.
- If the angle exceeds the threshold, the posture is judged as a fall.
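One plausible reading of the angle rule above is the deviation between the head-to-knee vector and the knee-to-foot vector: for an upright posture the two segments are nearly collinear (angle near 0), while a folded posture yields a large angle. The function names and the degree convention here are illustrative assumptions; the sample application's exact threshold semantics may differ:

```python
import math

def segment_angle(head, knee, foot):
    """Angle in degrees between the head->knee vector and the knee->foot vector."""
    v1 = (knee[0] - head[0], knee[1] - head[1])
    v2 = (foot[0] - knee[0], foot[1] - knee[1])
    norm = math.hypot(*v1) * math.hypot(*v2)
    if norm == 0:
        return 0.0  # degenerate input: coincident points
    cos = max(-1.0, min(1.0, (v1[0] * v2[0] + v1[1] * v2[1]) / norm))
    return math.degrees(math.acos(cos))

def is_fall(angle_deg, threshold=45.0):
    """Judge a fall when the angle exceeds the pre-set threshold."""
    return angle_deg > threshold
```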
- Processing unit: Raspberry Pi (Raspberry Pi 4 Model B or newer recommended)
- Edge device module: Raspberry Pi AI Camera, installed on Raspberry Pi
- Network connection: Needs to be connected to the internet to download the model and dependencies.
- Python 3.11 (pre-installed on Raspberry Pi OS)
- OpenCV 4.6 for Python 3 (installed via APT)
- Munkres for Python 3 (installed via APT)
- Install dependencies:
apt install python3-opencv imx500-all python3-munkres

NOTE: If you have conflicting packages, make sure to uninstall them or use the right versions.
- Download the HigherHRNet model: If you have already installed imx500-all via APT, you can skip this step. Download the model file from Raspberry Pi Repo.
- Run the system: Ensure you are inside the application's source directory, and then run:
python main.py --model <path to model>

Main parameters:
- --model: Location of the RPK model. No need to change it if you installed imx500-all via APT.
- --detection-threshold: Threshold for the human detector, within the range [0, 1]. Typically does not need to be changed.
- --fall-threshold: Fall judgment threshold, within the range [0, 100]. The recommended value is 45. It usually does not need to be changed, but may need adjusting based on the viewing angle of the edge device.
- Model Performance Optimization:
- Since the models in the Raspberry Pi model zoo are not optimized for this scenario, performance may be insufficient. Consider training the model on a dataset that includes images of falling humans for better results.
We tested the system by taking several postures.
This sample application is available to download from Sony Semiconductor Solutions. Please download it from GitHub if you wish to use it.
When in trouble

If you encounter any issues while reading the article, please feel free to comment on this article. Also, please check the support site below. Please note that it may take some time to respond to comments.
If you have questions related to Raspberry Pi, please check and utilize the forum below.
Want to learn more

Experiment further with the Raspberry Pi AI Camera by following the Get Started guide on the AITRIOS developer site.
Code

Fall detection based on HigherHRNet