I started this project as a personal maker challenge after discovering xiaozhi-esp32. I was impressed by how easily and inexpensively it enables access to LLMs like DeepSeek, and I wanted to explore how far I could push this tiny device.
While xiaozhi-esp32 works well on its own, its built-in display is quite small. That inspired me to combine it with a projector, creating a large, lively avatar that can talk and respond in real time.
The result is LightPersona: a projected avatar that wakes up with the keyword "HiESP" and then engages in free conversation powered by an LLM. During interaction, the avatar animates while speaking, and subtitles are displayed on the projection screen.
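To make the flow concrete, here is a minimal sketch of that interaction loop. It is not the actual xiaozhi-esp32 API; the names (AvatarState, Interaction, and its handlers) are hypothetical, and it only illustrates how the wake word, the spoken LLM reply, and the projected animation and subtitle fit together.

```cpp
// Hypothetical sketch of LightPersona's interaction states (not real xiaozhi-esp32 code):
// the avatar idles until the wake word is heard, animates while the LLM reply
// is spoken, and mirrors the reply text as a subtitle on the projection.
#include <iostream>
#include <string>

enum class AvatarState { Idle, Listening, Speaking };

struct Interaction {
    AvatarState state = AvatarState::Idle;
    std::string subtitle;

    void onWakeWord() {                              // e.g. "HiESP" detected
        state = AvatarState::Listening;
        subtitle = "(listening...)";
    }
    void onReplyStarted(const std::string& text) {   // LLM reply starts playing
        state = AvatarState::Speaking;               // drives the talking animation
        subtitle = text;                             // shown as the projected subtitle
    }
    void onReplyFinished() {
        state = AvatarState::Listening;              // stay in free conversation
        subtitle.clear();
    }
};

int main() {
    Interaction ui;
    ui.onWakeWord();
    ui.onReplyStarted("Hello! Shall we practice some English today?");
    std::cout << "subtitle: " << ui.subtitle << "\n";
    ui.onReplyFinished();
    return 0;
}
```

In the real device the transitions would be driven by the wake-word detector and the speech playback callbacks rather than direct calls, but the state split (idle, listening, speaking) is the core of how the avatar and subtitles stay in sync.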
A potential use case is as a language learning partner. For example, practicing English or Chinese with a friendly projected character makes learning more immersive. Because the persona can be customized, the avatar's personality can be adjusted to match the situation, whether playful, serious, or supportive.
Technically, the project is built on AtomS3R + EchoBase together with xiaozhi-esp32. This environment makes voice interaction extremely simple and low-cost, while the projector brings scale and personality to the experience.
At the current stage, LightPersona is still a demo-level prototype, but the core interaction already works. Looking ahead, I plan to add more interactivity by integrating a Raspberry Pi camera to sense the projection environment and capture the conversation partner, making the experience more dynamic.
Whether it evolves into a dedicated language learning companion or a more general chatbot-style partner, LightPersona aims to explore how projected characters can become part of our everyday lives.