The venerable touchscreen has been with us for decades. That longevity has spawned a myriad of recent attempts to find new ways to interact with our stuff, particularly stuff with computing power. From AI to VR and even BCI (brain-computer interfaces), many companies are exploring options.
These are my experiments using sensors, a little machine learning, and some algorithms to automate my interaction with my Philips WiZ smart lights: so that I do not have to dig through a smartphone app; so that I do not have to get up and use the wall switch; so that I do not have to search for that darn mobile switch I left around here somewhere; so that no Internet access is needed; and so that the response is immediate.
I want my lights to give me the service I need, when I need it, without any input from me. I want my lights to know, and act accordingly.
WHAT
Automated Light Control
This first experiment uses a Person Sensor from Useful Sensors, a low-cost, easy-to-use hardware module that detects nearby people's faces, reports how many there are and where they are relative to the device, and performs facial recognition.
It achieves this using a highly efficient TinyML model that keeps all data inside the chip and provides inference results via I2C, thus ensuring the privacy of the face information. Its development guide provides straightforward instructions and examples for several languages and platforms.
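As an illustration, here is a minimal Arduino-style sketch of that I2C read, assuming the sensor's documented address (0x62) and the results layout described in the developer guide; the struct and field names below are my own paraphrase, not the official header.

```cpp
// Poll the Person Sensor over I2C and print how many faces it sees.
#include <Wire.h>

const uint8_t PERSON_SENSOR_I2C_ADDRESS = 0x62;  // documented address

struct __attribute__((packed)) PersonSensorFace {
  uint8_t box_confidence;
  uint8_t box_left, box_top, box_right, box_bottom;  // bounding box, 0-255 scale
  int8_t  id_confidence;
  int8_t  id;          // label of an enrolled face, negative if unknown
  uint8_t is_facing;   // 1 if the face is oriented toward the sensor
};

struct __attribute__((packed)) PersonSensorResults {
  uint8_t  reserved[2];
  uint16_t data_size;
  int8_t   num_faces;
  PersonSensorFace faces[4];
  uint16_t checksum;   // not verified in this sketch
};

void setup() {
  Serial.begin(115200);
  Wire.begin();
}

void loop() {
  PersonSensorResults results;
  // The sensor streams its latest inference as one fixed-size block.
  if (Wire.requestFrom((int)PERSON_SENSOR_I2C_ADDRESS, (int)sizeof(results)) == sizeof(results)) {
    Wire.readBytes(reinterpret_cast<uint8_t*>(&results), sizeof(results));
    Serial.printf("Faces seen: %d\n", results.num_faces);
    for (int i = 0; i < results.num_faces && i < 4; i++) {
      Serial.printf("  face %d -> id %d (confidence %d)\n",
                    i, results.faces[i].id, results.faces[i].id_confidence);
    }
  }
  delay(200);  // the sensor updates a few times per second; no need to poll faster
}
```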
Because the Person Sensor uses a CMOS camera, it is subject to ambient and back-light conditions. The TinyML model mitigates most of these effects, but I added a PIR motion sensor and fuse its output with the results from the Person Sensor to decide whether someone is actually present or the light is just playing games with the CMOS sensor.
This gives a way to decide whether the lights should be on or off that is immune to someone (or some pet) just passing by, and that still works when someone is actually there but sitting perfectly still (like reading), even in very low ambient lighting that would make a CMOS sensor difficult to use.
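To make the fusion idea concrete, here is one possible scheme, much simplified; the hold times, names, and exact rules are illustrative, not the project's actual code.

```cpp
// Presence fusion sketch: a confirmed face is "sticky" for a while, so a
// still reader keeps the lights on, while a PIR-only blip from a passer-by
// (or a pet) that never shows a face does not turn them on.
#include <Arduino.h>

const uint32_t FACE_HOLD_MS   = 5UL * 60UL * 1000UL;  // keep lights on 5 min after last face
const uint32_t MOTION_HOLD_MS = 30UL * 1000UL;        // motion alone only extends the timer

uint32_t lastFaceMs   = 0;
uint32_t lastMotionMs = 0;
bool     lightsOn     = false;

void updatePresence(bool faceDetected, bool pirMotion) {
  uint32_t now = millis();
  if (faceDetected) lastFaceMs = now;
  if (pirMotion)    lastMotionMs = now;

  bool faceRecent   = (now - lastFaceMs)   < FACE_HOLD_MS;
  bool motionRecent = (now - lastMotionMs) < MOTION_HOLD_MS;

  if (!lightsOn && faceDetected) {
    lightsOn = true;    // turn on only when a face is confirmed, rejecting CMOS artifacts
  } else if (lightsOn && !faceRecent && !motionRecent) {
    lightsOn = false;   // neither sensor has seen anything for a while
  }
  // otherwise keep the current state, e.g. someone reading, perfectly still,
  // occasionally re-detected by the face sensor or nudging the PIR.
}
```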
Once the lights are on, the next step is to determine the characteristics of the light provided, that is, its brightness and color temperature. The effects of light on human circadian rhythms, sleep, and mood are widely studied, so the system sets the smart bulbs to settings appropriate to the time of day.
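The shape of that mapping might look like the sketch below; the breakpoints and values are examples only, not the project's tuned profile.

```cpp
// Illustrative mapping from hour of day to bulb settings.
#include <stdint.h>

struct LightSetting {
  uint8_t  brightnessPct;   // 0-100 %
  uint16_t colorTempK;      // correlated color temperature in Kelvin
};

LightSetting settingForHour(int hour) {
  if (hour >= 6 && hour < 9)   return {60, 4000};  // morning: moderate, neutral
  if (hour >= 9 && hour < 17)  return {100, 5000}; // daytime: bright, cool
  if (hour >= 17 && hour < 21) return {70, 3000};  // evening: dimmer, warm
  return {30, 2200};                               // night: low, very warm
}
```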
Intentional Light Control
To maintain the objective of touchless, screenless, and offline user interaction, two methods of intentional light control are provided.
The first way to modify the automated settings is a personalized setting based on face recognition. The Person Sensor's model can be trained to hold up to 7 faces, each identified only by a label (the raw image data is inaccessible). The system then determines whether the face in front of the sensor is known and modifies the light settings accordingly.
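Since the sensor only hands back an integer label, the personalization itself lives on the accessory, roughly like the sketch below (the presets and the zero-means-no-override convention are made up for illustration; LightSetting is the same illustrative struct as above).

```cpp
// Per-person override keyed by the face label reported by the sensor.
#include <stdint.h>

struct LightSetting { uint8_t brightnessPct; uint16_t colorTempK; };

// Index = face label (up to 7 enrolled faces); {0, 0} means "no override".
const LightSetting personalPresets[7] = {
  {80, 2700},   // label 0: warm and fairly bright
  {100, 5000},  // label 1: full daylight
  {40, 2200},   // label 2: dim and cozy
};

LightSetting applyPersonalization(int faceId, LightSetting timeOfDayDefault) {
  if (faceId >= 0 && faceId < 7 && personalPresets[faceId].brightnessPct != 0) {
    return personalPresets[faceId];   // known face: use their preference
  }
  return timeOfDayDefault;            // unknown face: keep the automated setting
}
```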
The second method to modify the automated settings uses hand gestures, or rather hand movements that aim to be intuitive: up/down for brightness, left/right for temperature, circular for color, etc. For this, the system uses a gesture sensor based on Pixart's PAJ7620 (several breakouts are commercially available; I'm using one from DFRobot that comes with a good C++ library).
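The intent of each movement can be sketched as below; note that the Gesture enum and readGesture() are placeholders standing in for the DFRobot library's actual calls, and the step sizes are illustrative.

```cpp
// Map gesture events to light adjustments.
#include <Arduino.h>

enum class Gesture { None, Up, Down, Left, Right, Clockwise, CounterClockwise };

const int BRIGHTNESS_STEP = 10;   // percent per gesture
const int COLOR_TEMP_STEP = 250;  // Kelvin per gesture

void handleGesture(Gesture g, int &brightnessPct, int &colorTempK, int &hueDeg) {
  switch (g) {
    case Gesture::Up:    brightnessPct = min(100, brightnessPct + BRIGHTNESS_STEP); break;
    case Gesture::Down:  brightnessPct = max(10,  brightnessPct - BRIGHTNESS_STEP); break;
    case Gesture::Right: colorTempK    = min(6500, colorTempK + COLOR_TEMP_STEP);   break;
    case Gesture::Left:  colorTempK    = max(2200, colorTempK - COLOR_TEMP_STEP);   break;
    case Gesture::Clockwise:        hueDeg = (hueDeg + 30) % 360;  break;  // cycle color forward
    case Gesture::CounterClockwise: hueDeg = (hueDeg + 330) % 360; break;  // cycle color backward
    default: break;  // Gesture::None or anything unmapped
  }
}
```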
Out of the many options for smart, connected lights, the Philips WiZ products provide reliable quality and an easy interface using asynchronous UDP messages on the local network (i.e., no Internet required), as exposed by the pywizlight library on GitHub. I simply migrated the methods discovered in that library to C++ and Async UDP, specifically on the ESP32-Arduino framework, and hooked them into the automated and intentional methods already described.
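For flavor, here is a minimal sketch of that control path: WiZ bulbs accept JSON messages over UDP port 38899 on the LAN, using the same "setPilot" method pywizlight exposes. The bulb IP below is a placeholder (discovery is a separate step), and Wi-Fi is assumed to be already connected.

```cpp
// Send a WiZ "setPilot" command from an ESP32 using AsyncUDP.
#include <WiFi.h>
#include <AsyncUDP.h>

AsyncUDP udp;
const IPAddress BULB_IP(192, 168, 1, 50);   // placeholder: a discovered WiZ bulb
const uint16_t  WIZ_PORT = 38899;           // WiZ local-control UDP port

// Turn the bulb on with a given brightness (10-100 %) and color temperature (K).
void setPilot(uint8_t dimmingPct, uint16_t tempK) {
  char msg[128];
  snprintf(msg, sizeof(msg),
           "{\"method\":\"setPilot\",\"params\":{\"state\":true,"
           "\"dimming\":%u,\"temp\":%u}}",
           dimmingPct, tempK);
  udp.writeTo(reinterpret_cast<const uint8_t*>(msg), strlen(msg), BULB_IP, WIZ_PORT);
}
```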
HOW
The accessory is meant to operate as a stand-alone device. For now, it's a little box that can sit on its pedestal on a desk or hang, without the pedestal, on a wall or a screen. Whatever final shape these experiments produce, it could still operate as a stand-alone device, or it could be embedded in a desk lamp or similar.
For testing purposes, a user interface to define certain configuration parameters is necessary. The simplest way was to generate a one-page web client, reached via mDNS, that lists the required parameters. This web client was developed in JavaScript (plus HTML and CSS), with the files stored on the ESP32 and served to the web browser. The user (or rather, "tester") manual is also available on the GitHub repo (see "Code", below).
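The serving side boils down to something like the sketch below; the hostname and file names are placeholders, not necessarily what the project uses.

```cpp
// Serve the configuration page from flash, reachable at http://lightsense.local/
#include <WiFi.h>
#include <ESPmDNS.h>
#include <WebServer.h>
#include <LittleFS.h>

WebServer server(80);

void setupWebUi() {                    // called from setup(), after Wi-Fi is up
  LittleFS.begin();                    // web client files flashed alongside the firmware
  MDNS.begin("lightsense");            // advertise the hostname on the LAN
  MDNS.addService("http", "tcp", 80);

  // Hand the HTML/JS/CSS straight from flash to the browser.
  server.serveStatic("/", LittleFS, "/index.html");
  server.serveStatic("/app.js", LittleFS, "/app.js");
  server.serveStatic("/style.css", LittleFS, "/style.css");
  server.begin();
}
// ...and server.handleClient() runs in loop() to service requests.
```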
Besides the 3 sensors already described, the accessory provides the following elements:
- Button to enroll faces for the face recognition function.
- Button to reset all settings to factory defaults (including the WiFi credentials acquired during provisioning).
- Button to reboot the unit.
- Face & gesture detected LED.
- Battery status LEDs.
- RGB LED for various status and processes feedback.
The ESP32 Feather V2 from Adafruit includes an RGB NeoPixel, a programmable button, a reboot button, an addressable red LED, a battery-charging circuit with a yellow status LED, and a good breakout of GPIO and power pins.
Both the Person Sensor and the DFRobot gesture sensor have a green indicator LED. I added a button for Face Enrollment and a main power switch. The battery is a chunky 2500 mAh LiPo.
All components are soldered and joined using a protoboard and hot glue (it is a prototype / experiment, after all). The enclosure is designed using Fusion360 and 3D printed on a Prusa MK3S+ in PLA.
The code and its full documentation (including generous comments in the code, so that a future me, or a present you, can follow it) is available on GitHub. According to GitHub, it's 59.2% C++, 20.8% C, 14.5% JavaScript, 3.3% CSS, and 2.2% HTML, so out of my roughly 1,600 lines of code, about 80% run on the ESP32 and 20% in the web client. Interesting how heavy a screen-based GUI can be. And that's with barely any input validation; it's meant for testing, not a real end user.
The ESP32 code was edited & compiled using the Arduino IDE 2. The web client files (i.e., HTML, JS & CSS) were edited using Visual Studio Code.
Some of the (new for me) C++ elements used in this project include multicast DNS and a web server, lambda functions, ArduinoJson, finite state machines, vector containers, LittleFS, Wi-Fi disconnect callbacks, async UDP methods, ESP32 timer interrupts, and so many more... mmm, simply delicious.
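As one small taste, here is roughly how a lambda and a Wi-Fi disconnect callback fit together on the ESP32. This is a sketch, not the project's actual handler; the event name is from the Arduino-ESP32 2.x core, and older cores use the SYSTEM_EVENT_* names instead.

```cpp
// Register a lambda as a Wi-Fi event callback so the device reconnects on its own.
#include <WiFi.h>

void registerWifiWatchdog() {
  WiFi.onEvent(
    [](WiFiEvent_t event, WiFiEventInfo_t info) {
      // Lost the access point: try again rather than leaving the lights orphaned.
      WiFi.reconnect();
    },
    ARDUINO_EVENT_WIFI_STA_DISCONNECTED);
}
```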
NEXT EXPERIMENT
For the next iteration (already started), I'm swapping the PIR sensor for a 24 GHz mmWave radar sensor that is immune to ambient light and IR signatures. I'm also swapping the gesture sensor for Pixart's latest PAG7646J1 + PAG7661QN SoC, since it adds a hand-shape algorithm (on top of hand movements), to continue searching for combinations that create intuitive gestures. Seeed Studio has breakouts for both of these.
I'm adding an APDS9960 to measure the existing ambient light, both its intensity and its color, and fuse that into the auto on/off and auto-temperature procedures, respectively. A large capacitive surface could also become interesting for user interaction.
Since the user interface is meant only for the field tester, I might do away with the web GUI and use a CLI via Web Serial instead.