Have you ever had this experience?
At two o'clock in the morning, eyes sore from staring at the screen, you've been wrestling with a code error all day. You sigh, casually open the AI assistant, and send a message: "How on earth do I fix this bug?"
The AI replies: "Alright, let me help you analyze this issue. It may be caused by the following reasons: 1... 2... 3..."
Everything it says is correct, yet something just feels off.
You've been anxious all day, but it still enthusiastically lists outlines for you; you're too exhausted to read any text, yet it sends you a long paragraph. What you need is not more information, but a simple sentence: "Don't worry, let's take it slow."
This is not real communication—it is merely question and answer.
All mainstream AI assistants operate in a "preset mode". Whether you are happy, depressed, angry, or exhausted, they always speak in the same tone: enthusiastic, professional, and omnipotent.
But humans don't communicate this way. When two humans talk, they naturally take each other's emotional state into consideration:
- When your boss is in a bad mood, you will report your work more cautiously
- When your friend is feeling down, you will comfort them first instead of reasoning with them
- When you feel anxious yourself, you also hope the other person can slow down the pace
Why can't AI do the same?
This is exactly what we are building today — an emotion-aware AI companion.
It runs on a Seeed reComputer Industrial R2145, with emotion perception processed entirely locally, so your camera data never leaves the device, truly becoming your own personal steward.
Why use an AI Box?
Alright, AI needs to perceive emotions — but how do we make it happen? Wouldn't it be enough to just run an AI app directly on your computer? Why go out of your way to set up an AI Box? The answer is simple: privacy.
Modern AI assistants, whether ChatGPT, Claude, or Cursor, all require internet access to function. Your conversations are uploaded to cloud servers for processing. This means:
Your code, your documents, and all your chat history with AI — everything ends up on third-party servers. Do you really trust these platforms entirely? Does your company allow sending internal code to external servers? Have you actually read through these platforms’ data policies carefully?
A more practical issue: if you use AI on a public computer, a shared device, or over a company network, your AI usage activity can be fully monitored.
But if you run AI entirely on an independent dedicated device, everything changes completely:
The AI Box can only access its own files, keeping your work computer completely isolated from the AI assistant. You can safely use AI on any device without worrying about data leaks.
Solution
Taking all of the above into account, I decided to build an AI Box. It runs on a Seeed reComputer Industrial R2145, an industrial-grade edge computing device that is compact, stable, and low-power, and can run 24/7 without trouble. It uses OpenClaw as the AI agent framework and performs LLM inference through the MiniMax API.
The core function is emotion perception: it captures the user's facial expressions in real time via a USB camera, and recognizes seven basic emotions — happy, sad, angry, fearful, surprised, disgusted and neutral. Then it dynamically adjusts the Agent's response style based on the emotional state. When you feel anxious, it will explain more patiently and clearly; when you feel frustrated, it will express empathy first instead of outputting content all at once; when you are happy, it will communicate in a more lively and relaxed way.
Core Features
The core features of this AI Box are emotion perception plus emotional response.
It does not merely recognize emotions and tell you "I detect that you are feeling anxious" — that would be overly mechanical.
What it does is treat emotional state as contextual information and integrate it into daily conversations.
For example: When you look anxious and ask a code-related question, the Agent might reply:
"Don't worry, I've seen this issue many times. It's easy to fix. Let's check the error message first..."
Instead of replying like this:
"I have detected your current emotional state as anxious. I will switch my response mode to calm and patient style. After analysis, this problem may be..."
Perceive emotions, and embed them naturally into dialogue.
This is our core value: not a fancy emotion detection gimmick, but making AI truly understand people and possess human warmth.
Step 1: System Installation on reComputer
1.1 Download and Flash the Ubuntu Image
① The reComputer Industrial R2145 supports the Ubuntu system. First, download the Ubuntu image from the official Seeed website.
② Set the hardware to Boot mode, connect the device to your computer with a USB Type-C cable, and connect the power cable.
③ Open rpiboot and Raspberry Pi Imager, then proceed with image flashing.
1.2 First Boot and Configuration
After finishing the image flashing, set the hardware to non-Boot mode. Connect the device and your computer to the same network via an Ethernet cable, then use the following command to connect (password: 12345678).
ssh reComputer@raspberrypi.local
Step 2: Environment and OpenClaw Configuration on reComputer
2.1 Install Python and Dependencies
pip install "tensorflow>=1.7" opencv-contrib-python==3.3.0.9 fer
2.2 Install OpenClaw
OpenClaw Download:
curl -fsSL https://openclaw.ai/install.sh | bash
Verify the installation:
openclaw --version # confirm the CLI is available
openclaw doctor # check for config issues
openclaw gateway status # verify the Gateway is running
2.3 Configure the Agent Brain
Method 1: Use the API of an Existing Mainstream Large Model
- If you chose to run the beginner tutorial during installation, select the corresponding model at that point in the tutorial and enter your personal API key.
- If the installation is complete and you need to change the model, run the following command to reconfigure:
openclaw onboard
Method 2: Local Model Deployment (Ollama)
Install Ollama
curl -fsSL https://ollama.com/install.sh | sh
ollama --version
Connect OpenClaw to Ollama
- OpenClaw requires an API Key to identify the Ollama service. Since Ollama is a local service, this Key can be any string.
① Set environment variables
export OLLAMA_API_KEY="ollama-local"
② Configure via OpenClaw
openclaw config set models.providers.ollama.apiKey "ollama-local"
- Confirm that the Ollama service is running
① Check the Ollama Service
curl http://localhost:11434/api/tags
② If it is not running, you can start it:
ollama serve
- Check OpenClaw Models
① View currently available models in OpenClaw
openclaw models list
② If no Ollama model is available, you can add it:
ollama launch openclaw
ollama launch openclaw --model qwen3:8b
- Run the OpenClaw Onboarding Wizard
① Run the Configuration Wizard
openclaw onboard
② OpenClaw will automatically detect the local Ollama service. Make selections in the wizard as follows:
◆ I understand this is personal-by-default and shared/multi-user use requires lock-down. Continue?
│ ● Yes / ○ No
◆ Onboarding mode
│ ● QuickStart (Configure details later via openclaw configure.)
│ ○ Manual
③ It will then display the currently connected model. This document takes the ollama/qwen3:8b model as an example:
◇ Existing config detected ─────────╮
│ │
│ workspace: ~/.openclaw/workspace │
│ model: ollama/qwen3:8b │
│ gateway.mode: local │
│ gateway.port: 18789 │
│ gateway.bind: loopback │
│ │
├────────────────────────────────────╯
④ You can choose either Use existing values or Update values here, but do not select Reset:
◆ Config handling
│ ● Use existing values
│ ○ Update values
│ ○ Reset
⑤ Select Skip for now for Model provider:
◆ Model/auth provider
│ ...
│ ○ Cloudflare AI Gateway
│ ○ Custom Provider
│ ● Skip for now
⑥ Select All providers:
◆ Filter models by provider
│ ● All providers
│ ○ amazon-bedrock
│ ○ anthropic
│ ○ azure-openai-responses
...
⑦ In Default model, the connected Ollama model will appear at the top and serve as the default; just select it, for example:
◆ Default model
│ ● Keep current (ollama/qwen3:8b)
│ ○ Enter model manually
│ ○ amazon-bedrock/anthropic.claude-3-haiku-20240307-v1:0
│ ○ amazon-bedrock/anthropic.claude-3-5-haiku-20241022-v1:0
...
⑧ At this point, OpenClaw has successfully connected to the local Ollama model. You can configure the subsequent steps according to your own needs:
...
...
◆ How do you want to hatch your bot?
│ ○ Hatch in TUI (recommended)
│ ● Open the Web UI
│ ○ Do this later
- Start Using OpenClaw + Ollama
① After the wizard completes, OpenClaw will automatically open the Web UI
② Your bot can now run with the local Ollama model. If you want to switch models, you can run:
ollama launch openclaw
③ You may also specify the model directly:
ollama launch openclaw --model qwen3:8b
Step 3: Develop the Visual Recognition Algorithm
Emotion recognition can be implemented in several ways. Select the appropriate solution according to your resources and needs.
Method 1: Multimodal Large Model (most convenient, but resource-hungry and slow)
Use a multimodal model such as GPT-4V; these models can recognize emotions out of the box and require no local model.
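As a rough sketch of this approach (the openai package, the gpt-4o model name, and the prompt are illustrative choices on my part, not something this project ships with), a single camera frame can be sent to a vision-capable model for classification:

import base64

import cv2
from openai import OpenAI

client = OpenAI()  # expects OPENAI_API_KEY in the environment

def emotion_from_frame(frame) -> str:
    """Ask a vision-capable model to classify the emotion in one camera frame."""
    _, jpg = cv2.imencode(".jpg", frame)
    b64 = base64.b64encode(jpg.tobytes()).decode()
    resp = client.chat.completions.create(
        model="gpt-4o",  # any multimodal chat model works here
        messages=[{
            "role": "user",
            "content": [
                {"type": "text",
                 "text": "Classify the facial emotion in this image as one of: "
                         "happy, sad, angry, fearful, surprised, disgusted, "
                         "neutral. Reply with the single word only."},
                {"type": "image_url",
                 "image_url": {"url": f"data:image/jpeg;base64,{b64}"}},
            ],
        }],
    )
    return resp.choices[0].message.content.strip().lower()

Every frame classified this way costs an API call, which is why this method is convenient but slow and expensive compared with a local model.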
Method 2: Small Model (Most stable)
We can implement emotion recognition with OpenCV and FER, or use open-source/self-built small models such as FaceNet.
This project adopts the efficient OpenCV + FER scheme and encapsulates the inference code:
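A minimal sketch of that inference loop (the state-file path, the two-second polling interval, and the STYLE_MAP wording are my own assumptions; adapt them to your setup):

# emotion_detector.py - capture frames, classify emotion with FER,
# and write the latest state to a JSON file that OpenClaw can read.
import json
import time

import cv2
from fer import FER

STATE_FILE = "/home/seeed/.openclaw/emotion_state.json"

# Assumed mapping from detected emotion to a response-style hint;
# the keys are the seven classes FER outputs.
STYLE_MAP = {
    "happy": "lively and relaxed",
    "sad": "warm and encouraging",
    "angry": "calm and patient",
    "fear": "reassuring and steady",
    "surprise": "clear and structured",
    "disgust": "neutral and factual",
    "neutral": "friendly and concise",
}

detector = FER()           # uses OpenCV's Haar cascade face detector by default
cap = cv2.VideoCapture(0)  # first USB camera

while True:
    ok, frame = cap.read()
    if ok:
        emotion, score = detector.top_emotion(frame)  # e.g. ("angry", 0.81)
        if emotion is not None:
            state = {
                "emotion": emotion,
                "score": round(float(score), 2),
                "style": STYLE_MAP.get(emotion, "friendly"),
                "updated_at": int(time.time()),
            }
            with open(STATE_FILE, "w") as f:
                json.dump(state, f)
    time.sleep(2)          # throttle inference to spare the CPU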
To make the emotion recognition service run automatically when the reComputer starts up, configure a systemd user service, and modify the emotion recognition script so that it writes its results to a fixed file readable by OpenClaw. This allows OpenClaw to obtain emotion information in real time during subsequent conversations:
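As a reference, assuming the script above is saved as /home/seeed/emotion_detector.py (the path is illustrative), the user unit at ~/.config/systemd/user/emotion.service could look like this:

[Unit]
Description=Facial emotion recognition service

[Service]
ExecStart=/usr/bin/python3 /home/seeed/emotion_detector.py
Restart=always
RestartSec=5

[Install]
WantedBy=default.target

After starting it, systemctl --user status emotion.service should show the service as active. Note that user services only launch at boot when lingering is enabled for your login user, e.g. sudo loginctl enable-linger $USER.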
Enable and start the service:
mkdir -p ~/.config/systemd/user
systemctl --user daemon-reload
systemctl --user enable emotion.service
systemctl --user start emotion.service
Step 5: Configure Emotion Settings
- Configure OpenClaw SOUL.md
Edit OpenClaw's SOUL.md file to add emotion perception capabilities:
Emotional Awareness Mode
You have an emotional awareness AI companion mode. When the user starts the emotion detection service:
Read emotion state file:
/home/seeed/.openclaw/emotion_state.json
Before every reply, check the emotion and style fields in emotion_state.json, then adjust your tone based on the user's emotion.
- Restart OpenClaw to Apply the Configuration
Core Operations of OpenClaw
Terminal:
openclaw # start OpenClaw
openclaw onboard # reconfigure OpenClaw
Chat interface:
chat # enter the previous conversation
/new # create a new conversation
Step 6: Verification and Testing
- Reboot the Device
sudo reboot
After rebooting, the device should automatically run the emotion recognition service.
- Camera and Emotion Detection Verification
Make different facial expressions in front of the camera, and check the emotion update status via the file /home/seeed/.openclaw/emotion_state.json
cd /home/seeed/.openclaw
vim emotion_state.json
Expected result:
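The exact contents depend on your script; with the writer sketched in Step 3, a snapshot might look like:

{"emotion": "happy", "score": 0.92, "style": "lively and relaxed", "updated_at": 1718000000}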
- Full Conversation Test
Start an OpenClaw conversation:
openclaw
chat
Test Scenarios:
- Scenario 1: User appears anxious
User (frowning): This code keeps throwing errors, please help me check it.
Agent (anxiety detected): "Don't worry, let me take a look at this problem for you. It appears to be a type conversion error. Let me fix it for you: [...] It should work properly after this adjustment. Feel free to tell me if you still run into issues."
Emotion Update → User’s relaxed expression detected after several seconds
Agent: "I see the issue is resolved and you look relieved~ Is there anything else I can assist you with?"- Scenario 2: User looks sad
User (sighing): My coding skills seem to be getting worse lately.
Agent (sadness detected): "I understand how you feel. It's perfectly normal to hit a bottleneck sometimes. The fact that you can recognize this means you have high standards for code quality, and that in itself is progress. Would you like me to check exactly what you're unsatisfied with? Let's optimize it together."
System Architecture
After completing this project, I have been pondering one question: do AI assistants truly need "emotional intelligence"?
Nowadays, AI assistants all advertise how intelligent and powerful they are, yet few pay attention to user experience. Humans are emotional beings, and our way of communication largely depends on the other person’s emotional state. A truly good assistant should not merely answer questions, but understand your current state and adjust its communication style accordingly.
Current AI is very "intelligent", but sometimes rather "annoying". When you are exhausted, it keeps outputting nonstop; when you are in a hurry, it still lists outlines unhurriedly. This is not a flaw of AI itself, but a problem with design philosophy.
Future Plans
I am still a newcomer on the Hackster platform, and this is my first official practical project. Aside from the core advantage of privacy protection, this project still has great potential for expansion. Below are several key directions in my plan:
① Full edge-side deployment with zero API calls
Currently, the AI Box performs LLM inference via the MiniMax API. This means that although your emotion data is processed locally, inference still requires an internet connection to call an external service. Future versions plan to integrate fully edge-side large models, such as:
- Open-source models like Llama 3 / Mistral
- Quantized versions optimized for edge devices (e.g., Q4_K_M)
- Inference accelerated on the R2145's NPU
After implementing pure local inference, AI Box will truly become an independent AI device with zero dependencies and zero data uploads. It can work normally even in a completely isolated intranet environment.
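For instance, with Ollama already installed from Step 2, a quantized build can be pulled and run entirely on-device (the exact model tag below is illustrative):

ollama pull llama3:8b-instruct-q4_K_M
ollama run llama3:8b-instruct-q4_K_M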
② Voice Interaction — Integrate ReSpeaker device
The current version relies on text input and output, which is not natural enough for daily use. The next step is to integrate the Seeed ReSpeaker series USB microphone array to realize:
- Voice wake-up: Activate the AI with a keyword (such as "Hey V") without manual triggering
- Voice input: Talk to the AI directly without typing
- Voice output: AI responses are read aloud directly with multiple voice styles supported
- Noise suppression: Reduce ambient noise interference by leveraging the directivity of the microphone array
- Interruption function: Interrupt the AI’s reply at any time just like a real conversation
This will greatly enhance the interactive experience and turn AI Box into a genuine voice assistant.
③ Multimodal Perception
Beyond facial emotion recognition, more perception capabilities can be expanded:
- Posture detection: Identify whether the user is fatigued or in a daze
- Tone analysis: Analyze emotions from voice, not just facial expressions
- Environmental perception: Environmental factors such as lighting and room temperature also affect emotion judgment
④ Long-term Memory and Personalization
The current AI Box is stateless, with each conversation independent of others. Future updates will support:
- Remembering user preferences, habits, and frequently asked questions
- Building user portraits based on historical interactions
- Creating a truly personalized assistant
These features will be gradually implemented in subsequent versions. This project is just a starting point, aiming to create a personalized AI companion that truly belongs to and understands you.