Smart Signage Sentient
A portable, intelligent safety sign that notices you first—with bright, adaptive LEDs, clear voice alerts, and seamless ESPHome/Home Assistant integration.
The Moment Before a Slip
It’s raining outside. Inside a busy mall, a cleaner has just mopped the walkway. A small yellow “Wet Floor” sign stands quietly—easy to miss, easier to ignore. Someone’s texting, a kid darts past, a shopper glances down at a bag… and a heel slips.
Static signs don’t adapt to human attention. They can be invisible in noise and clutter. Some people can’t clearly distinguish the yellow-on-gloss contrast. Others assume the sign’s been there “forever” and stop noticing it entirely. Meanwhile, one slip can mean injuries, lost time, and angry calls.
What if the sign noticed you first? What if it reacted to your approach—lit up dynamically, spoke clearly, and escalated if you got too close?
Meet Smart Signage Sentient. In the story he’s the guard on duty—seeing, reacting, and adapting in real time. In the engineering docs, it’s a modular platform you can reuse, extend, and control.
- Problem: Passive signs rely on perfect human attention in imperfect places—malls, hospitals, warehouses, event venues. Distraction is normal; visibility isn’t guaranteed.
- Vision: A human-aware, active sign that engages sight and sound at the right moment, not just a printed warning.
- Approach: Build a platform, not a one-off gadget—profiles for different situations, modular hardware, event-driven firmware, and Home Assistant automation out of the box.
- Outcome: A portable device that can run standalone, or join a building’s automation system, and that you (or anyone) can customize without surgery on the code base.
This project was born for a Seeed Studio challenge—and honestly, the best part was discovering how far their platform could go. I was introduced to this challenge by Seeed Studio Ranger Salman Faris, and that spark set the whole journey in motion.
At first, I prototyped with my own ESP32-C3 board (single core, 4 MB flash). It worked, but I quickly realized I wanted more horsepower: my buggy code was hogging the CPU and causing OTA updates over Wi-Fi to fail, since both stacks ran on the same core. I also needed more flash to enable TTS and to store MP3 files in LittleFS. That’s when I moved to the Seeed Studio XIAO ESP32S3—with dual cores and larger flash, it opened up the headroom I needed for a proper platform, not just a proof of concept.
Here I have to thank another Seeed Studio Ranger, Abhinav Krishna N, who kindly provided the XIAO ESP32S3 and XIAO ESP32C6 boards. His support helped me hit the ground running and explore features much faster.
From there, the project grew beyond a talking wet-floor sign. With the S3’s power and Seeed’s 24 GHz mmWave radar, it evolved into a full smart signage platform built around the idea of profiles.
A profile is like a role the sign takes on for a specific situation. Each profile defines how the LEDs behave, how the voice sounds, and what messages are spoken. Here are a few examples:
Wet Floor Profile
- “Heads up! This floor is slicker than a buttered slide.”
- “Caution! Wet floor ahead — step like a cat.”
- “Don’t rush! I’d hate to see you moonwalk unintentionally.”
And if the sign itself tips over (a very real problem with wet floor boards):
- “Tip-over alert! Please stand me back up.”
- “I’ve collapsed — irony level: maximum.”
- “Warning! The warning sign needs help.”
Halloween Profile (Imagine the Sentient parked right next to the candy bowl…)
- “Only one piece, mortal — the spirits are watching!”
- “Take one… or face the curse of sticky fingers.”
- “Greedy hands awaken the skeleton dance — don’t test me.”
Gone for Lunch Profile
- “On a snack quest — will return victorious.”
- “Lunch in progress. Productivity resumes shortly.”
Construction Zone Profile
- “Warning! Hard hats, not hard heads.”
- “Noise and dust ahead — you’ve been warned.”
And these are just surface-level examples. In reality, profiles can go much deeper—combining LED patterns, audio cues, radar behavior, and even Home Assistant automation to suit the exact context.
The best part? You can add new profiles without touching the code—just drop them into the profile catalog, push your audio files with the provided script, and the sign takes on its new role instantly. No fuss, no mess.
Too lazy to record audio and push it to the device? No worries. With a single compilation flag you can enable Text-to-Speech (TTS). Instead of uploading audio files, just write the line directly in the profile catalog (in place of the file path):
"Watch your step, the floor is wet!"
…and the sign will generate the voice on the fly. TTS is disabled by default since it consumes a lot of flash memory and makes firmware updates slower.
Key Capabilities
Smart Signage Sentient isn’t just one trick — it’s a platform with multiple senses and outputs working together. Each component was added for a reason:
Radar (Seeed Studio 24 GHz mmWave)
- Detects people approaching, even in cluttered or low-light environments.
- More reliable than PIR (no false alarms from sunlight, HVAC, or shadows).
- Enables adaptive responses: escalate when someone gets too close.
- Uses a Kalman filter to smooth radar distance measurements, reducing noise and avoiding false triggers.
- Detection distance can be set per profile at runtime, and it’s remembered automatically across reboots and profile changes — making the behavior completely intuitive.
- Source code
Inertial Measurement Unit (MPU-6500)
- Real-world issue: in malls and offices, when a wet floor sign tips over, people usually ignore it and don’t set it back upright — leaving the hazard completely unmarked.
- The MPU-6500 solves this by instantly detecting tip-over or falls.
- On a fall, the sign reacts with special voice lines like “Help! I’ve fallen over…” and a bright LED blinking pattern you can’t miss.
- This way, the sign never goes invisible — and once someone sets it back up, it resumes duty automatically.
- Instead of the usual pitch-and-roll calculation, which needs a fixed axis parallel to Earth’s gravity and constrains the hardware design, I treat accelerometer data as 3D vectors, use the initial vector as reference, and track the angle against it — enabling fall detection in any direction.
- Source code
LED System
- Red light grabs attention — and when it blinks, it’s almost impossible to ignore. The sign uses this to ensure people always notice it.
- Supports multiple patterns — blinking pulses, smooth breathing (fade in/out), or combinations of both — with fully configurable duty cycle and cycle count (or continuous/infinite mode).
- Hardware-accelerated control ensures smooth lighting effects without burdening the main processor or memory.
- Adjustable brightness makes it effective both indoors and outdoors, and the setting is saved so it persists across reboots and profile changes.
- Each event and profile can have its own LED effect — for example, a unique pattern when starting, in error, active warning, or tip-over state.
- Source code
Audio System (MAX98357A)
- Flexible playback: supports both pre-recorded audio files and TTS. Only one can be enabled at a time today, but the two could work in parallel in the future by checking for a "say:" prefix or a ".mp3" suffix (e.g., audioSrc="/warning.mp3" or "say: warning, wet floor").
- High-quality output: clear audio without pops or clicks (thanks to arduino-audio-tools).
- Flexible playlist: play a single audio multiple or infinite times, or play a sequence of audios any number of times — with a unique, per-track configurable delay between each playback.
- Composable speech: the same logic can be reused in other projects that need to concatenate audio snippets (e.g., numbers, battery percentage, or dynamic status messages), especially in cases where TTS or continuous audio streaming isn’t practical.
- Source code
Demo 1: Setup and Warning
In this demo, Smart Signage Sentient is mounted on a wet floor sign.
- The device is powered on and confirms it’s ready.
- With a press of the green Start button, the system enters active mode.
- After a short delay, the radar begins monitoring the surroundings.
- As I approach, the sign detects my movement, triggering blinking LEDs and voice alerts.
- The closer I get, the faster the LED breathing frequency becomes, escalating the warning to grab attention.
Demo 2: Fall Detection
In this demo, Smart Signage Sentient is running in active mode.
- When tipped over, it instantly detects the fall.
- The sign announces the user-defined fallen messages repeatedly, while switching to a special blinking LED pattern.
- It keeps calling for help until restored.
- Once set upright again, the sign returns to its active state and confirms with a voice message.
Demo 3: Dashboard
This is where you give Smart Signage Sentient his marching orders. Set detection radius, LED brightness, speaker volume, and session duration, then start him with either the physical or virtual button. Profiles remember their own settings, so your tweaks persist across reboots and profile switches.
Browse the code:
- here (components in the dashboard defined in YAML),
- here (exposing them to my code using Python),
- here (registering their callbacks), and finally
- here (saving to NVS).
Demo 4: Build & Flash (Docker) via USB
I made a little script to make my life easier. One command, and it finds the board, spins up a clean Docker workspace, compiles, flashes the device, and streams logs—no toolchain juggling, no mess on the host. The video shows the flow; this is just my “press once, relax” button.
Code: Explore the script here
Demo 5: Build & Flash (Docker) via OTA
Relax even more — no USB cable needed. I use the same one-liner (./esphome_docker.sh run). The script checks for a serial port first; if none is found, it automatically targets smart-signage.local and pushes the update over Wi-Fi. The build happens in Docker, the firmware uploads OTA, the device reboots, and logs roll in.
Tip: OTA is super convenient but a bit slower. I use USB during rapid dev, then switch to OTA once things are stable.
Sample Profile Catalog
This file is a catalog of profiles. Each profile names the mode and lists what to do at key moments like start, detected, tip-over, recovered, and error (e.g., which LED effect to show and which audio to play — or a say: line for TTS).
profiles:
- name: WetFloor
events:
Ready:
audio: { playCnt: 1, playList: [ { src: /ready.mp3, delayMs: 100 } ] }
led: { pattern: blink, periodMs: 800, cnt: 1 }
Error:
audio: { playCnt: 1, playList: [ { src: /err1.mp3, delayMs: 0 } ] }
led: { pattern: blink, periodMs: 150, cnt: 0 }
Start:
audio: { playCnt: 1, playList: [ { src: /start.mp3, delayMs: 0 } ] }
led: { pattern: twinkle, periodMs: 120, cnt: 3 }
Stop:
audio: { playCnt: 1, playList: [ { src: /stop.mp3, delayMs: 0 } ] }
led: { pattern: blink, periodMs: 600, cnt: 1 }
UiUpdate:
audio: { playCnt: 1, playList: [ { src: /ui.mp3, delayMs: 0 } ] }
Detected:
audio:
playCnt: 0
playList:
- { src: /test/warning_1.mp3, delayMs: 2000 }
- { src: /test/warning_2.mp3, delayMs: 2000 }
- { src: /test/warning_3.mp3, delayMs: 2000 }
DetectedDistanceMax:
led: { pattern: twinkle, periodMs: 2000, cnt: 0 }
DetectedDistanceMin:
led: { pattern: twinkle, periodMs: 120, cnt: 0 }
Fell:
audio:
playCnt: 0
playList:
- { src: /test/fallen_1.mp3, delayMs: 5000 }
- { src: /test/fallen_2.mp3, delayMs: 5000 }
led: { pattern: blink, periodMs: 300, cnt: 0 }
Rose:
audio: { playCnt: 1, playList: [ { src: /ready_3.mp3, delayMs: 0 } ] }
led: { pattern: twinkle, periodMs: 1000, cnt: 2 }
SessionEnd:
audio: { playCnt: 1, playList: [ { src: /eod.mp3, delayMs: 0 } ] }
led: { pattern: twinkle, periodMs: 800, cnt: 2 }
- name: Halloween # next profile
events:
    # ... further events and profiles follow the same shape
Audio System, continued:
- MP3 playback for pre-recorded voice lines and effects.
- Optional TTS mode for quick updates without recording audio.
- Clear voice guidance reinforces the visual warnings, covering accessibility for people who may miss or misinterpret visual cues.
ESPHome + Home Assistant Integration
- Profiles and parameters (range, brightness, volume, session time) can be selected from Home Assistant.
- No custom coding needed — everything is exposed through ESPHome components.
- Allows integration with larger automation systems (e.g., turn on building lights when someone is detected).
Key Capabilities
Dynamic LED Effects
Smooth breathing, pulsing, and flashing patterns (ramp-up/hold/ramp-down/hold) that cut through visual noise without wasting CPU time.
Voice Alerts
Clear voice lines from local files; enable “say:...” text-to-speech when you have flash space to spare. Clean start/stop—no pops or clicks.
Human Detection & Distance
24 GHz radar tracks approach and distance; the moving distance is Kalman-filtered for responsiveness without jitter. The IMU adds tip-over detection and recovery prompts.
ESPHome External Component → Home Assistant
No heavy lifting: expose controls (profile, session time, volume, brightness, detection range) and automations right in your dashboard.
Portable Power
Rechargeable 1S Li-ion pack (~4000 mAh). Fast-charge module with external bi-color LED indicators wired to the panel.
Thoughtful UX Details
- Virtual Start button that the physical button maps into—so one code path handles both.
- Per-profile persistence in NVS—switching profiles restores that profile’s last-used settings.
- Knob override at Start (planned): assign the hardware knob to volume/brightness/range; it’s sampled once at session start to save power.
- Boot-time debug logs: partition table + LittleFS listing so you can trust what’s on the device.
What you can take from it (even if you don’t build it all)
- Reusable LED pattern engine, audio playback queue, radar smoothing, IMU fall logic, NVS profile storage, Docker build + LittleFS flasher scripts.
- Clean Active Object + FSM firmware pattern (no if/switch sprawl; compile-time safe, ETL-based).
[Image: Simple block diagram—ESP32S3 at center; arrows to Radar, IMU, LED driver, Audio DAC + Speaker, Battery/Charger, LittleFS/Storage, ESPHome/HA]
System Architecture
Hardware, at a glance. At the heart is a Seeed Studio XIAO ESP32S3. It talks to a 24 GHz mmWave radar for approach/distance sensing, an IMU for tip-over detection, a MAX98357A I²S DAC feeding a 4 Ω/3 W speaker for voice alerts, and a hardware-accelerated LED channel driven via LEDC and an NMOS stage (12 V decorative strips repurposed as “eyes”). Power comes from a 1S Li-ion pack (~4000 mAh) with a fast-charge module; the device boosts to 5 V for logic and to 12 V for LEDs. Audio assets live in LittleFS inside a custom partition; OTA stays intact.
Software, at a glance. The firmware is event-driven. Each subsystem (radar, IMU, LED, audio) runs as an Active Object with its own queue and a strict FSM. The Controller orchestrates sessions and profiles. UI handles (selects, numbers, start button) are defined in ESPHome YAML but injected into C++, so all behavior lives in code. No if/switch sprawl: transitions and actions are explicit in the FSMs. Compile-time safety is enforced with the ETL library.
Design principles. Separation of concerns (FSM logic vs RTOS/task shell), no heap (ETL), per-profile persistence, and minimal wake time (hardware timers and LEDC do the heavy lifting). The result is a platform you can repurpose module by module.
Meet the Modules: Radar
The radar reports three distances: moving, stationary, and a combined value that shifts with signal energy. For real-time safety, the moving distance is the most responsive—so it’s the primary signal. A lightweight Kalman filter smooths its noise without adding lag. During development, two libraries were evaluated; one had a register mix-up between moving and stationary ranges, which was reported upstream. The radar AO initializes hardware, applies profile thresholds, reads frames on a tick, filters, and posts clean EvtRadarData to the Controller.
What it buys you: responsive approach detection that can ramp LEDs and escalate audio as people get closer.
IMU
Rather than assuming a fixed axis for pitch/roll, the IMU captures an initial gravity vector when upright. Each sample computes the angle between the current vector and that reference. If the angle exceeds a threshold for N consecutive samples (debounced in ETL), a Fallen event fires; when it returns within bounds for N samples, an upright event fires. This works regardless of mounting orientation. Thresholds, sample counts, and intervals are constants—no magic numbers lurking in the code.
What it buys you: reliable tip-over detection that triggers local and remote alerts only when it truly matters.
LED Engine
The LED driver code programs LEDC fades and timers to compose a four-phase cycle: ramp-up, hold-high, ramp-down, hold-low. By adjusting durations you get square, triangle, or “sine-ish” patterns without burning CPU. When audio playback runs concurrently on some cores/versions, fade-end interrupts can contend, so the engine can switch to a software-timer callback during audio. Patterns are profile-defined and can dim or escalate on events (tip-over, battery-low, proximity).
What it buys you: bright, legible motion cues that cut through visual noise with near-zero runtime overhead.
Audio
Audio playback uses Arduino Audio Tools. A small FreeRTOS task feeds I²S DMA buffers cleanly—no pops on start/stop. Prompts can be file-based, and TTS can be enabled behind a compile flag using a say:... spec when you have flash to spare. Bluetooth A2DP is a future option; note that A2DP links expect continuous frames, so the feeder task would idle with zeros between prompts.
What it buys you: voice lines that people actually hear in a noisy space, with clean control from the Controller FSM.
UI
Dashboard controls (profile select, session time, detection range, volume, LED brightness) are defined in YAML but passed into C++ as handles. A physical GPIO button simply triggers the same virtual Start, so there’s one code path. Settings persist per profile in NVS: switch to “Wet Floor,” and its last-used levels return; switch to “Halloween,” and those return. A hardware knob can be assigned to a single parameter; to save power it’s sampled once at Start and takes ownership for that session.
Controller FSM
Role. The Controller is the conductor. It loads/stores per-profile settings, arms sensors and LEDs, starts/stops sessions, routes sensor/UI events, and decides when to speak, flash, warn, or end based on the profile catalog the user has defined in a YAML file.
Typical flow. Boot all the hardware and UI, then wait for a Start press. Once started, react to sensor inputs according to the user-defined parameters—how the LED and speaker should respond to each event. The Controller tracks the session period and ends it when required.
Why the FSM matters. Every transition is explicit. Guards prevent illegal jumps; actions are testable; and because the FSM is pure logic and the task shell is separate, you can unit-test behavior without the RTOS while keeping RTOS details in one place.
Code Reuse Guide — lift just the parts you need
Interfaces and AOs. Take the radar, IMU, LED, or audio AO headers and their FSMs as drop-in patterns: each AO exposes a queue, a post(...) API for events, and a straight-line task loop that dispatches to a pure FSM.
UI patterns. Keep YAML thin. Define the widgets, inject their handles into C++, and centralize behavior in code. Map any physical button to a virtual action so you don’t duplicate logic.
Per-profile persistence. Reuse the Storage + ProfileSettings layer to restore distinct settings on profile switches. It’s a tiny upgrade that makes devices feel “remembering” instead of “stateless.”
Utilities that save hours. The boot-time partition table + LittleFS listing prints exactly what’s on the device, which removes guesswork when you’re pushing assets or adjusting partitions.
Build helpers. Use the Docker wrapper to freeze your toolchain and the LittleFS script to pack/flash assets. Even if you never adopt the rest, these two scripts make embedded life calmer.
Filesystem & Partitions
Why LittleFS. Directories, resilience, and flash-friendly behavior make LittleFS the right choice for on-chip assets (MP3). SPIFFS is legacy; FAT is best on removable media like SD cards. Here, voice lines live in LittleFS and can be updated independently of firmware.
Custom partitioning. Default ESPHome layouts tend to allocate a lot to OTA A/B and too little to the FS. This project keeps OTA but carves a ≥2 MB LittleFS partition for audio. The device prints the partition table on boot so you can confirm offsets, sizes, and names at a glance.
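As an illustration, a layout in this spirit for an 8 MB flash part might look like the CSV below. The sizes and offsets are examples, not the project's actual table; note that LittleFS partitions conventionally reuse the `spiffs` subtype label in ESP-IDF partition CSVs.

```csv
# Name,     Type, SubType, Offset,   Size
nvs,        data, nvs,     0x9000,   0x5000
otadata,    data, ota,     0xe000,   0x2000
app0,       app,  ota_0,   0x10000,  0x2A0000
app1,       app,  ota_1,   0x2B0000, 0x2A0000
littlefs,   data, spiffs,  0x550000, 0x200000
```

Keeping both OTA slots while reserving 2 MB (0x200000) for assets is the trade-off described above: firmware updates stay safe, and voice lines still fit.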
Two-stage build gotcha
ESPHome first generates a PlatformIO project, then PlatformIO compiles it. Pointing to a partitions CSV by name isn’t enough—you must ensure the actual CSV is copied into the generated build. The YAML override used here forces that copy so PlatformIO truly builds with your custom table. After that, LittleFS mounts with the expected size and paths, and assets are exactly where you think they are.
In college, I once wrote a valid FAT32 file to an SD card using a tiny PIC16F877A with too little RAM for even a single page. The trick was understanding the spec and streaming carefully. The same mindset applies here: be explicit about layout, verify at boot, and you’ll never wonder where your files went.
Developer ergonomics. A helper script builds a LittleFS image from an assets/ folder and flashes it to the correct offset automatically. On reboot, the device lists top-level files so you can see your new prompts immediately.
I began on Windows with the Arduino IDE, moved to VS Code + PlatformIO (Arduino framework), and finally to an ESPHome external component for seamless Home Assistant integration. That last step exposed the classic embedded nemesis: mismatched cores and libraries.
- Arduino Core 2.x vs 3.x: Audio Tools (which I use for MP3/TTS) pulled WiFiClientSecure symbols that no longer exist under Arduino Core 3.x (the class was renamed to NetworkClientSecure). Even with local playback (no streaming), the header still gets included.
- Attempted fixes that didn’t stick: downgrading ESPHome’s Arduino core to 2.x (broke elsewhere), providing local stubs (compiled but failed at link), and partial component overrides.
- Two-stage build trap: ESPHome first generates a PlatformIO project, then PlatformIO compiles it. A fix that “works” in your source can still fail in the generated project if the override didn’t propagate.
- S3/PSRAM quirk: On ESP32-S3, an I²S/DMA buffer issue existed in Arduino Core 3.1.3 and was fixed in 3.2.2. Pinning to a good 3.x release + forcing the generated project to use it stabilized audio.
Takeaway: Pin versions intentionally, make overrides flow through the ESPHome → PlatformIO generation, and verify by reading the generated platformio.ini after build. Treat toolchains like code: controlled and documented.
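As a hedged sketch of that pinning (the exact keys depend on your ESPHome release, and the version value here is an assumption, not this project’s pin):

```yaml
# Pin the Arduino core in the ESPHome YAML so the generated
# PlatformIO project cannot silently drift to a broken release.
esp32:
  board: seeed_xiao_esp32s3
  framework:
    type: arduino
    version: 3.2.2   # assumed known-good 3.x core for S3 I2S/DMA
```

After a build, open the generated `platformio.ini` and confirm the pinned version actually appears there—that is the only proof the override propagated.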
I split runtime and behavior so each part stays testable and reusable:
Events: a single etl::variant<...> type lists every event in the system (compile-time checked, no heap).
Queue wrapper: transports Event via FreeRTOS Queue/MessageBuffer; it doesn’t know the FSM or RTOS details beyond that.
FSM (pure logic): owns state data and actions/guards; no RTOS calls inside.
Active Object (AO): the task shell. It blocks on the queue, then dispatches to the FSM. Only the AO touches timers, HALs, or other AOs.
Minimal shape (illustrative):
// events.h
using Event = etl::variant<
CmdSetup, CmdStart, CmdStop, CmdTeardown,
EvtUiProfileUpdate, EvtUiAudioVolUpdate, EvtUiLedBrightUpdate, EvtUiRangeCmUpdate, EvtUiStartPressed,
EvtRadarReady, EvtRadarError, EvtRadarData,
EvtImuFell, EvtImuRose,
EvtBatteryLow, EvtBatteryOk,
EvtTimerEnd, EvtAudioDone
>;
// queue.h
class Q {
public:
bool post(const Event& e, TickType_t wait = 0);
bool get(Event& e, TickType_t wait = portMAX_DELAY);
};
// fsm.h (pure)
class FSM {
public:
void handle(const Event& e); // etl::visit dispatcher
private:
State state_{State::Idle};
// guards/actions (no RTOS here)
};
// active_object.h
class ActiveObject {
public:
ActiveObject(Q& q, FSM& f) : q_(q), f_(f) {}
void taskLoop() {
for (;;) { Event e; if (q_.get(e)) f_.handle(e); }
}
private:
Q& q_;
FSM& f_;
};
Why this matters: You can unit-test FSM behavior on a desktop compiler, swap transports without touching logic, and avoid fragile cross-includes.
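To make the desktop-testability claim concrete, here is a hedged sketch with a slimmed-down event set; std::variant stands in for etl::variant so it compiles on a host, and the 150 cm trigger range is an illustrative assumption, not the project’s guard:

```cpp
#include <variant>  // std::variant stands in for etl::variant in this host sketch

// Hypothetical slimmed-down event set (the real one is much larger).
struct EvtRadarData { int distance_cm; };
struct EvtTimerEnd {};
using Event = std::variant<EvtRadarData, EvtTimerEnd>;

enum class State { Idle, Alerting };

// Pure-logic FSM: no RTOS calls, so it runs unchanged under a desktop compiler.
class FSM {
 public:
  void handle(const Event& e) {
    std::visit([this](const auto& ev) { on(ev); }, e);  // overload dispatch
  }
  State state() const { return state_; }

 private:
  void on(const EvtRadarData& ev) {
    // Assumed guard: someone closer than 150 cm starts an alert.
    if (state_ == State::Idle && ev.distance_cm < 150) state_ = State::Alerting;
  }
  void on(const EvtTimerEnd&) { state_ = State::Idle; }

  State state_{State::Idle};
};
```

A host-side unit test can then drive it directly—`handle(EvtRadarData{120})` should move Idle → Alerting, and `EvtTimerEnd{}` back to Idle—while swapping in etl::variant (with etl::visit) restores the no-heap guarantee on target.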
Profiles & Per-Profile Persistence (no magic numbers)
A “Profile” is a use-case bundle (e.g., Wet Floor, Event Control) that chooses the LED pattern, audio prompts, and sensor thresholds. Each profile remembers its last values across reboots and switches.
- Catalog flow: YAML → JSON (ArduinoJson v7) during build/init → ETL maps at runtime (no heap).
- On profile switch: load last-used values from NVS; if first time, apply defaults from the catalog.
- Constants: thresholds, debounce counts, and mapping ranges live in interface/constants.h. No magic numbers scattered in code.
// load once (ArduinoJson v7)
JsonDocument doc;  // v7: elastic document (StaticJsonDocument is deprecated)
DeserializationError err = deserializeJson(doc, catalog_json);
if (err) { /* handle parse error */ }
for (JsonObject p : doc["profiles"].as<JsonArray>()) {
  ProfileName name = p["name"].as<const char*>();
  profile_defaults_[name] = {
    .volume_pct = p["audio"]["volume"].as<uint8_t>(),
    .led_pct    = p["led"]["brightness"].as<uint8_t>(),
    .range_cm   = p["radar"]["range_cm"].as<uint32_t>(),
    .session_m  = p["session"]["min"].as<uint32_t>()
  };
}
// on switch
bool loadFromNVS(const ProfileName& name, Settings& out);
void saveToNVS(const ProfileName& name, const Settings& in);
This makes profiles first-class: switch context, and the device feels like it remembers how you used it last time.
Knob & Battery ADC (sample once, warn reliably)
To avoid legacy wrappers, the ADC layer binds directly to the ESP-IDF ADC oneshot driver.
Knob behavior (power-friendly): If assigned to a parameter (Volume/LED/Range), the knob is sampled only once when a session starts. That value takes ownership for the session; dashboard writes are ignored until the next session.
Battery behavior (debounce + hysteresis): Convert raw → mV using calibrated Vref and divider scale. Emit EvtBatteryLow only after N consecutive low samples; clear on EvtBatteryOk with hysteresis.
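The battery rule above can be sketched as a small host-testable class; the thresholds and debounce count below are illustrative assumptions, not the project’s values:

```cpp
#include <cstdint>

// Hedged sketch of debounce + hysteresis for battery-low detection.
class BatteryMonitor {
 public:
  // Feed one calibrated millivolt reading; returns true when the low/ok
  // state flips (i.e., when EvtBatteryLow or EvtBatteryOk should be posted).
  bool update(uint32_t mv) {
    if (!low_) {
      count_ = (mv < kLowMv) ? count_ + 1 : 0;
      if (count_ >= kDebounce) {  // N consecutive low samples -> EvtBatteryLow
        low_ = true;
        count_ = 0;
        return true;
      }
    } else if (mv > kOkMv) {  // hysteresis: must recover well above kLowMv
      low_ = false;           // -> EvtBatteryOk
      count_ = 0;
      return true;
    }
    return false;
  }
  bool low() const { return low_; }

 private:
  static constexpr uint32_t kLowMv = 3300;  // assumed low threshold (mV)
  static constexpr uint32_t kOkMv  = 3500;  // assumed recovery threshold (mV)
  static constexpr int kDebounce   = 3;     // assumed consecutive-sample count
  bool low_{false};
  int count_{0};
};
```

A single noisy dip never fires an event, and a reading between the two thresholds never clears one—exactly the chatter the debounce-plus-hysteresis pairing is meant to suppress.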
Dockerized ESPHome/PlatformIO toolchain
Builds happen inside a container, so host Python/packages don’t drift. Typical use:
./esphome_docker.sh compile # build firmware from YAML
./esphome_docker.sh run # build + upload (serial if present, else OTA)
./esphome_docker.sh logs # device logs
./esphome_docker.sh clean # wipe caches; force a clean rebuild
./esphome_docker.sh nuke # remove all tools and the environment
LittleFS assets: pack & flash in one step
Voice files live under data/ and are tracked with Git LFS to keep the .git directory from bloating.
The helper script packs them into a LittleFS image and flashes to the correct partition offset (read from your partitions CSV).
./lfs.sh build assets/ # produce LittleFS image
./lfs.sh push assets/ # build + flash to the right offset
./lfs.sh erase # erase FS (careful)
Debug on boot
The firmware prints the partition table and a top-level LittleFS listing at startup. If anything’s wrong with partitions or assets, you’ll see it immediately in logs.
Hardware Architecture
Modular prototype today, single PCB tomorrow. This build uses three boards connected with keyed, polarized connectors so nothing can be plugged in backwards.
- Main board: XIAO ESP32S3, I²S DAC (MAX9xxx series), sensor headers (radar, IMU).
- Power & LED board: 1S Li-ion input, DC-DC boost to 5 V (logic) and 12 V (LEDs), NMOS LED driver with a small series current-limit resistor.
- Charger board: TP4056-class charger/protection module for 1S packs.
Power path. Battery → Charger/Protection → Main Power Switch → DC-DC 5 V → Schottky diode → ESP32S3 (internal 3.3 V regulator → radar), DAC, IMU; Battery → DC-DC 12 V for the LEDs. Both USB and battery can be safely connected at the same time.
Why modular: fast iteration, easy swaps, and no soldering onto a loaned S3 dev board. The next revision will merge everything onto a single PCB with the same keyed connectors at the edges.
Charging & Status Indication
Cell configuration. Two matched 18650 cells in parallel (1S) for ~4000 mAh. 1S keeps the voltage simple for the ESP32S3 and the DC-DC converters.
Charger choice. A TP4056-class module (often sold as “4056/1S protection+charge”) set for a higher charge current than typical dev-board chargers. This reduces charge time for daily use. Note: these modules are for 1S only (parallel cells are fine; series 2S is not).
Panel LEDs you can actually see. The module’s tiny on-board LEDs are useless inside an enclosure, so their pads are wired to a front-panel bi-color LED (common anode or common cathode).
- Red = charging
- Green = full
Safety notes. Use genuine cells, add a fuse if available, match cells before paralleling, and never charge unattended. The protection board guards against over/under-voltage, but treat Li-ion with respect.
Enclosure & Mechanical
CAD & slice: Modeled in Fusion 360. Exported as OBJ for predictable scale handling. Sliced in Cura.
Ender-3 brought back to life: This build doubled as a printer rehab: Z-screw shimmed with a simple washer to eliminate binding, belts tensioned, rails cleaned and lubed, and a ball-bearing spool roller added so the feeder stopped skipping. The takeaway: you can fix your own printer. Basic hand tools and patience go a long way.
Assembly notes: Ribbon harnesses with keyed connectors keep the inside neat and serviceable. The speaker is a 4 Ω / 3 W unit mounted behind a grille; the radar and IMU sit on standoffs to avoid mechanical stress.
Attention to Detail
- Per-profile memory: when you switch profiles, the device restores that profile’s last volume/brightness/range/session time from NVS.
- One entry point for Start: the physical button triggers the same virtual Start as the dashboard, so there’s no duplicated logic.
- LED/audio coexist nicely: when audio plays, the LED engine can switch to a timer-based fade callback to avoid interrupt contention.
- Dual-core sanity: Wi-Fi tasks on one core, app AOs on the other during development for smooth OTA/logging.
- Style & readability: clang-format, consistent naming, and functions written so even a new engineer can follow the flow.
- Dependency injection: HALs for radar/IMU/LED/audio are swappable; modules don’t know each other exists.
- Logs: ASCII-only, short, and useful (partition table + LittleFS listing at boot).
Dependency Management
<TBD>
Build It Yourself
<design>
flashing steps
Future Work
- Bluetooth speaker mode (A2DP) with session-aware streaming.
- Faster charging with a higher-current, thermally-managed module.
- Richer profiles: crowd density cues, multi-language prompts, time-of-day behavior.
- Solar add-on for outdoor signs.
- Simple cloud dashboard for fleet health (battery, uptime, falls).
Credits & Community
- Seeed Studio for the XIAO ESP32S3 and radar module.
- Libraries: Arduino Audio Tools, FastIMU, ETL (Embedded Template Library), ArduinoJson v7.
- Thanks to Seeed Studio Ranger Abhinav for the loaned S3 board.
- Community contribution: identified a register-mapping bug in the radar library and opened a pull request with the fix.
- Identified a bug in an ESPHome script and reported it as an issue.