Developing an AI model for real-world applications is rarely a straightforward path. My current project, lifeAId, aims to bridge a critical gap in emergency response, particularly for vulnerable populations. At its heart, lifeAId is a discreet, wearable device designed to automatically detect medical emergencies and summon help, ensuring that precious moments are never lost.
My decision to embark on this project stems from a profound personal experience. When my father passed away at home, my mother was with him. In those terrifying moments after he collapsed, her instinct was to try and help him herself, administering CPR. However, despite how hard she tried, her efforts were not enough. Those initial four or five minutes, spent attempting CPR instead of immediately calling emergency services, were likely agonizingly critical. While I don't blame her for trying, I've often wondered if those lost minutes could have made a difference in saving his life. This heartbreaking scenario, where calls to emergency services are delayed due to panic, lack of training, or physical limitations, is unfortunately common, especially among the elderly living alone or without immediate, capable assistance. I realized there had to be a better way to ensure prompt medical intervention.
This is where lifeAId comes in. My solution is a dedicated medical device worn on the wrist, designed for continuous use. Unlike smartwatches, which offer a multitude of non-medical functions and require frequent charging, lifeAId is solely focused on emergency detection. This singular purpose allows it to consume significantly less power, meaning it can be worn for much longer periods without needing a charge – even during sleep, a time when many emergencies occur and smartwatches are typically removed. The final product would be a smaller, more comfortable design made specifically for this purpose alone, making it more likely that users, especially the elderly, will wear it constantly.
So, how does lifeAId work? For this experiment, the core of the device is a PSoC™ 6 AI Dev Kit, utilizing its onboard Inertial Measurement Unit (IMU) and, in future iterations, a heart rate monitor, temperature sensor, and skin humidity sensor. These sensors continuously collect mobility and physiological data from the wearer's wrist. The raw data is then fed into an AI model created using DeepCraft Studio. This is where my journey gets technically challenging, and occasionally, quite comical in its frustration.
The DeepCraft Studio: Training lifeAId's AI
Training an AI model to reliably detect falls, distinguish them from other movements (or the lack thereof, such as resting or sleeping), and flag potential unconsciousness from IMU data is a complex task. My journey through DeepCraft Studio has been a deep dive into the underrated, but absolutely critical, phase of data preparation and preprocessing. Here's a candid look at the hurdles I've faced and the lessons learned, often the hard way.
The First Dimensionality Puzzle
My first steps involved setting up my DeepCraft Studio project, connecting my IMU sensor, and getting ready to collect data. I quickly ran into my first major technical hurdle when trying to connect my IMU data to the labeling tools:
Error: "Label track input tensor must be of rank 1"
This was my first "facepalm" moment. It taught me a crucial lesson about data shapes in DeepCraft Studio. While the IMU outputs multi-dimensional data (X, Y, Z for acceleration, X, Y, Z for gyroscope), the "Data Track" component expects a simpler, single-stream input for its primary purpose of timeline annotation. The problem was that I was attempting to combine all IMU data into a single input stream for the Track. The solution was to pump distinct accelerometer data to one data track and distinct gyroscope data to a separate data track. This separation allowed the labeling tools to correctly interpret the input, finally fixing the problem. Who knew a tensor could be so picky about its data!?
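To make the shapes concrete, here's a minimal NumPy sketch (not DeepCraft Studio code, just my own illustration of the fix): the combined 6-channel recording gets split into two 3-channel streams, one per data track.

```python
import numpy as np

# Illustration only: a combined IMU stream with 6 channels per sample,
# accel X/Y/Z followed by gyro X/Y/Z.
n_samples = 500                      # e.g. 10 s of data at 50 Hz
imu = np.random.randn(n_samples, 6)  # stand-in for the recorded stream

# Feeding all 6 channels into one track was the mistake; splitting them
# into two 3-channel streams matches the "one track per sensor" fix.
accel = imu[:, 0:3]   # shape (500, 3) -> its own data track
gyro  = imu[:, 3:6]   # shape (500, 3) -> a separate data track

print(imu.shape, accel.shape, gyro.shape)  # (500, 6) (500, 3) (500, 3)
```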
With raw data now flowing (and not complaining about its rank), the next big challenge emerged during preprocessing – preparing the continuous sensor stream for model training.
Error: "The preprocessor output shape needs to be 2D or 3D. Add a window layer at the end of the preprocessor if your data is 1D."
This was the next "wall" I hit, and it felt like the preprocessor was telling me my data was too... linear. My raw IMU data, despite having multiple features, was essentially a continuous, "1D" stream to the model. Machine learning models for time-series data don't look at single moments; they need "windows" or "frames" of data to understand context and patterns over time. Think of it like trying to understand a movie by looking at one pixel at a time – you need the whole frame!
The solution was to introduce a "Sliding Window (data points)" layer into my preprocessing pipeline. Configuring it properly was key to turning my continuous stream into digestible chunks (there's a quick NumPy sketch of this further below):
Window Shape: I set this to [150, 3]. This means each window contains 150 samples, and each sample has 3 features (likely Accel X, Y, Z), giving the model 3 seconds of context at my IMU's 50 Hz sampling rate. It's like giving the AI a 3-second mini-movie clip to analyze.
Stride: I set this to 225. This translates to a 50% overlap between windows, meaning the window advances by 75 samples (75 samples * 3 features/sample = 225 data points). This overlap is vital for capturing events accurately and generating enough training examples. It's like ensuring no important action falls between the cracks of my mini-movie clips.
Buffer Multiplier: This was kept at the recommended 1.0. Because, you know, sometimes the simplest things just work.
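If you want to sanity-check what those settings actually do, here's a rough NumPy approximation of the windowing step (my own sketch, not DeepCraft's implementation), using the same 150-sample window and 75-sample advance:

```python
import numpy as np

SAMPLE_RATE_HZ = 50
WINDOW_SAMPLES = 150   # 150 samples / 50 Hz = 3 s of context per window
STRIDE_SAMPLES = 75    # 50% overlap; 75 samples * 3 features = 225 data points

def sliding_windows(stream: np.ndarray) -> np.ndarray:
    """Slice a continuous (n_samples, n_features) stream into overlapping windows."""
    n_samples, _ = stream.shape
    starts = range(0, n_samples - WINDOW_SAMPLES + 1, STRIDE_SAMPLES)
    return np.stack([stream[s:s + WINDOW_SAMPLES] for s in starts])

# Example: 30 s of 3-axis accelerometer data at 50 Hz.
accel = np.random.randn(30 * SAMPLE_RATE_HZ, 3)
windows = sliding_windows(accel)
print(windows.shape)  # (19, 150, 3): 19 "mini-movie clips" of 3 s each
```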
The Hidden Trap: Input Shape and Frequency Inference
Even with the Sliding Window configured, the error persisted. This led to a deeper understanding of DeepCraft Studio's data pipeline. I discovered that the top-level "Input Shape" and "Input Frequency" settings in the preprocessor dialog were locked and inferred from my collected data. Shockingly, despite intending to record 6 IMU features, it was showing "Input Shape: 1" and "Input Frequency: 1 Hz". Luckily, that was a clue!
This revealed a critical recording-time error: my .imsession files were being recorded with only 1 feature per sample. The solution involved going back to the data collection setup, ensuring my IMU sensor was configured to output all 6 channels, and re-collecting all my data (see The First Dimensionality Puzzle above). Only then could I create a new dataset where DeepCraft Studio correctly inferred the Input Shape as 6, along with the correct Input Frequency.
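A cheap way to catch this earlier is to sanity-check the feature count right after recording. I haven't dug into the .imsession format itself, so the sketch below assumes the recording can be exported or inspected as a plain CSV with one row per sample (a hypothetical path and layout; adjust to however you dump your data):

```python
import csv

EXPECTED_FEATURES = 6  # accel X/Y/Z + gyro X/Y/Z

def check_feature_count(csv_path: str) -> None:
    """Warn if recorded samples don't carry the expected number of IMU channels."""
    with open(csv_path, newline="") as f:
        reader = csv.reader(f)
        next(reader)                 # assumes a header row of channel names
        first_sample = next(reader)  # first actual sample
    if len(first_sample) != EXPECTED_FEATURES:
        raise ValueError(
            f"{csv_path}: {len(first_sample)} features per sample, "
            f"expected {EXPECTED_FEATURES} -- re-check the sensor configuration"
        )

# Hypothetical usage on an exported recording:
# check_feature_count("fall_recording_01.csv")
```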
With the preprocessing flow seemingly fixed, I finally moved to the training tab. But another error awaited, because of course it did:
Error: "set does not contain labels from all classes. Use redistribute sets tool in data tab to resolve"
This meant that not all my defined activity labels (classes) were represented across all the training, validation, and test sets. Reducing the data down to only my "fall data" across 7 recording sets, and then running the "Redistribute Sets" tool, cleared the error and let me start a cloud training job. It was a small victory, a brief moment of "I've got this!", but I knew it also meant I still had a long road ahead training for other modes and movement patterns (classes)! :-(
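Conceptually, what the "Redistribute Sets" tool enforces is simple: every class has to land in the training, validation, and test sets. Here's a rough Python sketch of that idea as a manual stratified split (my own approximation, not the tool's actual algorithm):

```python
import random
from collections import defaultdict

def stratified_split(recordings, labels, fractions=(0.7, 0.15, 0.15), seed=42):
    """Split recordings so every class appears in train/val/test, roughly per fractions."""
    by_class = defaultdict(list)
    for rec, label in zip(recordings, labels):
        by_class[label].append(rec)

    rng = random.Random(seed)
    train, val, test = [], [], []
    for label, group in by_class.items():
        rng.shuffle(group)
        n = len(group)
        n_train = max(1, int(n * fractions[0]))
        n_val = max(1, int(n * fractions[1]))
        # Rough sketch: with very small classes, one set may still end up empty,
        # which is exactly the situation the Studio error complains about.
        train += group[:n_train]
        val += group[n_train:n_train + n_val]
        test += group[n_train + n_val:]
    return train, val, test
```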
The Ultimate Truth: "Not Enough Data"
My initial cloud training job, however, failed. The error message was blunt, direct, and humbling:
Error: "There is not enough data in training set."
This was the final, and perhaps most important, lesson. While solving the class distribution problem for my limited "fall data," I had overlooked the sheer quantity and diversity required for a robust machine learning model. Even 7 sets of fall data, once windowed and split, didn't provide enough examples for the model to learn effectively. It turns out, AI models are like teenagers – they need a lot of input to learn anything useful.
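In hindsight, a quick back-of-the-envelope count would have warned me. Using purely illustrative numbers (say, 7 recordings of about a minute each), the sliding-window math looks like this:

```python
SAMPLE_RATE_HZ = 50
WINDOW_SAMPLES = 150   # 3 s windows
STRIDE_SAMPLES = 75    # 50% overlap

def windows_per_recording(duration_s: float) -> int:
    """How many training windows one recording yields after sliding-window slicing."""
    n_samples = int(duration_s * SAMPLE_RATE_HZ)
    if n_samples < WINDOW_SAMPLES:
        return 0
    return (n_samples - WINDOW_SAMPLES) // STRIDE_SAMPLES + 1

# Illustrative numbers only: 7 recordings of ~60 s each.
total = 7 * windows_per_recording(60)
print(total)  # 273 windows -- and only a fraction of those actually contain the fall
```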
Current Status & Key Learnings
My project is currently paused while I collect more data and diversify it with resting, walking, running, and sleeping recordings. The journey has been an intense learning curve, revealing that:
Data Quality and Quantity are Paramount: A model is only as good as the data it's trained on. No amount of clever model architecture or parameter tuning can compensate for insufficient or unrepresentative data. It's like trying to bake a cake with only half the ingredients – it just won't work.
Preprocessing is Complex but Essential: Understanding concepts like windowing, strides, and feature counts is non-negotiable for time-series sensor data.
The Debugging Loop is Real: Expect errors, embrace them as learning opportunities, give yourself more time than you think you will need, and be prepared to iterate between data collection, preprocessing, and training. It's a dance, and sometimes you step on your own feet.
DeepCraft Studio's Workflow is Specific: Learning the nuances is key and will determine your success. I plan to spend a lot of time with DeepCraft to learn its ins and outs.
My next step is to embark on an expanded dedicated data collection phase, gathering significantly more diverse data for all the classes needed for true unconsciousness detection (normal activities, falls with recovery, non-fall stillness, and more instances of fall-to-unconsciousness). Only then can I build a model that's truly functional and reliable for lifeAId.
Despite the challenges, I must say that Infineon has done a fantastic job with the PSoC 6 board, DeepCraft Studio, its models, templates, and profiles. The comprehensive ecosystem and well-designed tools make it a super powerful platform. It's genuinely a great way for hobbyists and professionals alike to get involved with AI at the edge, even if it occasionally teaches you tough love. I definitely plan to continue using this platform and learning it more to develop other exciting projects.
I want to extend a huge thank you to the Infineon developers, hackster.io, and all the individuals who have put out valuable information and resources. Your dedication and support have been incredibly helpful throughout this competition, making complex concepts accessible and guiding us through the intricacies of edge AI development.
Stay tuned for updates on the next phase of this project! I believe lifeAId has the potential to make a real difference, and I'm committed to overcoming these technical challenges to bring it to fruition.