This project aims to develop a device that enhances the safety of visually impaired swimmers in unfamiliar indoor pools. The device will use machine learning to map the pool environment, detect obstacles in real time, and provide audio feedback to the swimmer.
To solve the problem of visually impaired swimmers navigating safely in unfamiliar indoor pools, I will build a device equipped with sensors (like an IMU and camera), utilizing machine learning to map the pool environment and detect obstacles. It will operate in two modes: Learn Mode (for mapping) and Swim Mode (for real-time guidance via audio feedback).
This solution differs from existing ones by combining pool mapping, real-time obstacle detection, and personalized audio feedback. It's more adaptable to different pool environments and swimmer preferences.
This device is useful because it enhances the safety and independence of visually impaired swimmers, allowing them to navigate unfamiliar pools confidently and enjoy a fulfilling swimming experience.
Detailed Solution and PSoC™ 6 AI Dev Kit Integration
My solution employs a multi-faceted approach to guiding visually impaired swimmers. The key features are:
Pool Mapping (Learn Mode): A sighted assistant, using the device attached to a boogie board, will traverse the pool's perimeter, allowing the camera, IMU, and radar sensors to capture visual, motion, and range data. This data will be used to train a machine learning model, creating a digital map of the pool's layout, including walls and lane lines.
Real-time Guidance (Swim Mode): The swimmer will hold the device (potentially integrated into a wearable). The camera will continuously capture images and feed them into the trained ML model for object detection, while the IMU and radar track the swimmer's movements and orientation.
Audio Feedback: Based on the ML model's analysis of the camera, IMU, and radar data, the device will provide real-time audio feedback to the swimmer, including cues about proximity to walls, lane boundaries, and potential obstacles. Different audio signals will indicate the direction and urgency of the required action.
Collision Detection: The impact detection feature will provide an additional layer of safety, triggering an immediate alert if the device (and thus the swimmer) collides with an object.
The PSoC™ 6 AI Dev Kit will be instrumental in my solution due to its powerful capabilities and versatility:
Machine Learning Acceleration: The kit's dedicated AI capabilities will enable efficient execution of the trained machine learning model for real-time object detection and classification in Swim Mode.
Sensor Integration: The kit's numerous peripherals and interfaces will facilitate seamless integration with the camera, IMU, and other sensors, ensuring accurate data collection and processing.
Low-Power Operation: The kit's low-power features will be crucial for extending battery life, allowing for longer swim sessions without frequent recharging.
Audio Processing: The kit's audio processing capabilities can be leveraged to generate and output the various audio cues required for swimmer guidance.
Customizable Design: The kit's flexibility will allow for tailoring the hardware and software to the specific requirements of the swimming aid, optimizing performance and user experience.
I am exploring the PSoC™ 6 AI Dev Kit as the foundation for this solution. Beyond hazard alerts and navigation, its combination of machine learning and sensor fusion could eventually be extended to give swimmers real-time feedback on their technique as well.
Cloud Connectivity with Avnet IoTConnect
In conjunction with the PSoC™ 6 AI Kit (which includes onboard sensors), DEEPCRAFT Studio, and ModusToolbox, I intend to utilize Avnet's IoTConnect platform. I am eager to explore how integrating the PSoC 6 AI Evaluation Kit with IoTConnect can facilitate cloud connectivity, data visualization, and remote monitoring of edge AI applications.
Training Data Collection
The training data will consist of sensor data captured in Learn Mode: camera images of walls and lane lines, radar range data, and accelerometer and gyroscope readings capturing the device's movement and orientation as it is guided along the pool's perimeter. This data will be used to train the machine learning model to recognize and classify these pool features in Swim Mode, enabling accurate, real-time obstacle detection and guidance for the swimmer.
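One way to picture a single Learn Mode record is the struct below. The exact field set, units, and label scheme are my assumptions for illustration, not a fixed format.

```c
#include <stdbool.h>
#include <stdint.h>

/* Hypothetical annotation labels for Learn Mode samples. */
typedef enum { LABEL_OPEN_WATER = 0, LABEL_WALL = 1, LABEL_LANE_LINE = 2 } pool_label_t;

/* One Learn Mode sample; the field set is an assumption for illustration. */
typedef struct {
    uint32_t timestamp_ms;   /* time since the start of the mapping pass */
    float    accel_g[3];     /* BMI270 accelerometer, in g */
    float    gyro_dps[3];    /* BMI270 gyroscope, in degrees/second */
    float    radar_range_m;  /* BGT60TR13C range to the nearest reflector */
    uint8_t  label;          /* pool_label_t annotation from the sighted assistant */
} learn_sample_t;

/* Basic sanity check before a sample is logged to the training set. */
bool sample_valid(const learn_sample_t *s)
{
    return s->radar_range_m >= 0.0f && s->label <= LABEL_LANE_LINE;
}
```

Logging samples in a fixed layout like this makes it straightforward to export the mapping pass for model training in DEEPCRAFT Studio.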
How ML Inference Will Be Used
- Determine proximity to obstacles: By analyzing the detected objects (walls, lane lines) and their distance from the swimmer, the device can assess how close the swimmer is to a potential collision.
- Generate audio feedback: Based on the proximity and type of obstacle, the device will trigger specific audio cues, such as beeps, tones, or voice prompts, to guide the swimmer.
- Guide the swimmer: The audio feedback will inform the swimmer of the necessary actions to take, such as changing direction, slowing down, or stopping, to avoid obstacles and navigate safely.
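The proximity-to-cue logic can be sketched as a simple banded mapping. The distance bands and cue names below are placeholders I have invented for illustration; real values would be tuned in pool trials.

```c
/* Audio cues in increasing order of urgency (names are illustrative). */
typedef enum {
    CUE_NONE,        /* clear water, stay silent */
    CUE_SOFT_TONE,   /* obstacle approaching, be aware */
    CUE_FAST_BEEP,   /* obstacle close, change direction */
    CUE_STOP_ALERT   /* collision imminent, stop now */
} audio_cue_t;

/* Map the estimated distance to the nearest obstacle onto an audio cue.
 * Band edges (in meters) are assumptions to be tuned in pool trials. */
audio_cue_t cue_for_distance(float distance_m)
{
    if (distance_m < 0.5f) return CUE_STOP_ALERT;
    if (distance_m < 1.5f) return CUE_FAST_BEEP;
    if (distance_m < 3.0f) return CUE_SOFT_TONE;
    return CUE_NONE;
}
```

A banded mapping like this keeps the audio channel uncluttered: the swimmer hears nothing in clear water and an unmistakable escalation as a wall approaches.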
Components and Tools for Development
The following components and tools will be used:
- PSoC 6 AI Evaluation Kit (CY8CKIT-062S2-AI)
- ModusToolbox software v3.2 or later
- DEEPCRAFT Studio - development platform for AI / machine learning on edge devices. NOTE: DEEPCRAFT Studio is only available for Windows OS
- DEEPCRAFT Ready Models - production-ready AI / Machine Learning models
- Avnet IoTConnect libraries and framework for Infineon's ModusToolbox
- PSoC™ 6 MCU – CY8C624ABZI-S2D44
- Murata LBEE5KL1YN module providing Wi-Fi and Bluetooth® functionality, based on AIROC CYW43439
- 512 Mbit external Quad SPI NOR flash that provides fast, expandable memory for data and code
- Two user LEDs, a user button, and a reset button for PSoC™ 6 MCU
On Board Sensors:
- 6-axis motion sensor (BMI270)
- Magnetometer (BMM350)
- High Performance digital MEMS microphone (IM72D128)
- Barometric pressure sensor (DPS368)
- RADAR sensor (BGT60TR13C)
Debugging code:
- KitProg3 onboard SWD programmer/debugger with USB-UART and USB-I2C bridge functionality. One mode selection button and one Status LED for KitProg3
Power:
- 1.8 V and 3.3 V operation of PSoC™ 6 MCU
Explore how to develop code with the PSoC 6 AI Kit using this excellent video by Clark Jarvis; watch the recording on YouTube.
- This webinar delves into the end-to-end machine learning (ML) model development process with the industry-leading PSOC™ 6 AI Kit and AI/ML software from Infineon, presented by Infineon's Clark Jarvis
- Explore available code examples (ML examples)
- Debug development code directly with the KitProg3 on-board debugger
- Use Avnet's IoTConnect software via the IoTConnect libraries within ModusToolbox
- Test the IoTConnect dashboards for easy visualization of DEEPCRAFT Ready Model output not already demonstrated in the webinars
By following the demonstrations starting at the 38-minute mark of the webinar, I gained the skills needed to begin implementing the AquaGuide project: using the PSoC 6 AI Evaluation Kit, ModusToolbox, and DEEPCRAFT Studio to develop and deploy machine learning models on edge devices, specifically for sensor data processing and real-time inference. This hands-on approach helped me understand the workflow and capabilities of the development tools, which I then applied to the specific requirements of a navigation aid for visually impaired swimmers. The subsequent sections will detail the implementation steps and how these skills were applied to build the AquaGuide device.
To be continued.