We’re excited to share our latest build — CamThink NeoEyes NE101 Vision AI Camera, an ultra‑low‑power, modular smart camera built on the ESP32‑S3, designed to run on batteries for years (yep, literally years, depending on how you set it up and use it).
The secret? Event‑triggered wake‑up. Most of the time, the camera stays in deep sleep, sipping almost no power. When it detects a preset event (like motion), it “wakes up,” snaps a photo, and sends it over MQTT to the IoT platform of your choice. Then it quickly goes back to sleep.
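That wake→capture→publish→sleep loop is simple enough to sketch in plain Python. This is a simulation of the control flow, not the NE101’s actual firmware; the topic name and callback signatures are illustrative:

```python
import json

def wake_cycle(event, capture, publish):
    """Simulate one wake-up: if an event (e.g. "PIR") caused it,
    capture a frame and publish it over MQTT, then sleep again."""
    if event is not None:
        payload = json.dumps({"snapType": event, "image": capture()})
        publish("ne101/snapshot", payload)  # hand the frame to the broker
    return "deep-sleep"  # every cycle ends back in deep sleep
```

Only the few seconds inside `wake_cycle` cost real power; the rest of the time the device sits at its deep‑sleep floor.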
This workflow makes it a perfect fit for projects like crop monitoring, utility meter inspections, or any scenario where you need long‑term, low‑maintenance, event‑based image capture.
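To see why “years” is plausible, a back‑of‑the‑envelope duty‑cycle estimate helps. All current and timing figures below are my own illustrative assumptions, not measured NE101 numbers:

```python
def battery_life_days(capacity_mah, sleep_ma, active_ma,
                      active_s_per_event, events_per_day):
    """Average current = deep-sleep floor + active bursts amortized over a day."""
    avg_ma = sleep_ma + active_ma * active_s_per_event * events_per_day / 86400
    return capacity_mah / avg_ma / 24  # mAh / mA = hours; / 24 = days

# Assumed figures: 4x AA ~ 2500 mAh, ~0.02 mA deep sleep,
# ~120 mA for ~5 s per capture, one capture per day.
days = battery_life_days(2500, 0.02, 120, 5, 1)
```

With these assumptions the estimate lands well into multi‑year territory; raise the event rate or the sleep current and it drops fast, which is why the event‑triggered wake‑up strategy matters so much.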
NE100‑MB01 Development Board

At the heart of the CamThink NE101 is the NE100‑MB01 development board, built around an ESP32‑S3 module. It supports both event‑triggered and scheduled image capture while maintaining ultra‑low power consumption for long‑term battery operation. The NE101 is fully open‑source and modular, allowing you to swap camera modules (wide‑angle or telephoto), choose connectivity options (Wi‑Fi, Cat‑1, or long‑range Wi‑Fi HaLow), and add extra sensors such as PIR or radar, making it a flexible platform for rapidly testing ideas or building custom camera solutions.
The device is made up of modular parts — the front cover, camera module, mainboard, communication module, and battery — all linked by standardized interfaces. This means you can easily swap, upgrade, or tinker with components, making it perfect for hands‑on makers and developers.
Currently supported interchangeable camera modules include:
- OV5640, 60° FOV, 8 cm — Close‑range shooting
- OV5640, 60° FOV, 3 m — Standard distance shooting
- OV5640, 120° FOV, 15 cm — Wide‑angle close‑range shooting
- OV5640, 120° FOV, 4 m — Wide‑angle standard distance shooting
- 2 MP USB Module, 60° FOV, 9 m — Long‑range shooting
- 2 MP USB Module, 120° FOV, 5 m — Standard distance shooting
Beyond its modular design, the CamThink NE101 also offers excellent software ecosystem compatibility.
It can seamlessly integrate with AI application‑building platforms like Dify and Flowise, enabling advanced capabilities such as image understanding and Q&A reasoning. For smart home scenarios, it works well with platforms like Home Assistant, making it possible to implement home security features and sensor‑based automations. Additionally, it can connect to IoT platforms like ThingsBoard and BeaverIOT to achieve end‑to‑end data workflows — from device‑level image capture to AI‑powered inference. You can play around with these open‑source tools to unlock new creative ideas and build your own smart automations.
System Overview

Here’s an example of how the camera’s application logic works:
System Block Diagram:

```
[Power Module] → [Mainboard (ESP32-S3)] ←→ [Camera Module]
                          ↑↓                      ↑
[Comm. Module] ←→ [16-Pin Expansion I/F] ←→ [PIR Sensor]
       ↑↓
[MQTT Server]
```
Step-by-Step Guide

Step 1. Power Up

- Open the back cover and insert 4× AA batteries
- The front LED will blink when the device boots up
- The camera broadcasts a Wi-Fi AP: NE101_ABC123
- Connect your PC or phone, then open 192.168.1.1 in a browser
Tip: If using a mobile hotspot, enable 2.4GHz compatibility (on iOS: “Maximum Compatibility”).
Open the Web UI to customize it to your needs. You can adjust:
- Capture Modes – Choose between interval, scheduled, PIR-triggered, or manual button capture.
- MQTT Settings – Set your broker address and topic for data transmission.
- Network – Connect via Wi‑Fi, Cat‑1, or long‑range Wi‑Fi HaLow.
- Image Settings – Fine‑tune exposure, resolution, and LED behavior.
Here are two example use cases for your reference.
Use Case 1: Balcony Plant Time-Lapse

1. Mount the Camera: Pick a good spot and securely mount the camera so it can clearly capture your plants.
2. Power On & Connect: Insert the batteries and power on the device. On your phone or computer, connect to the camera’s Wi‑Fi hotspot (it will look like NE101_ABC123).
3. Configure Capture Settings: Open a browser and go to 192.168.1.1. In the web interface, navigate to Capture Settings.
4. Set Up a Schedule
- Check Enable Scheduled Capture.
- For Capture Mode, select Interval Capture.
- Set the Capture Interval. For slow‑growing plants, a low frequency is plenty; here we set it to 7d (one photo per week).
5. Start Recording: Save the settings. Your camera will now automatically wake up on schedule, take a picture, and upload it to your configured MQTT server. The perfect start to a long‑term photo diary!
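On the receiving side, a stdlib‑only handler is enough to turn each MQTT message into a dated frame for the time‑lapse. This assumes the NE101 payload layout shown in the sample later in this post; the output directory name is arbitrary:

```python
import base64
import json
from datetime import datetime
from pathlib import Path

def save_frame(mqtt_payload, out_dir="timelapse"):
    """Decode one NE101 MQTT message and write the JPEG to a dated file."""
    data = json.loads(mqtt_payload)
    image = data["values"]["image"]
    # the image field carries a data-URI prefix before the Base64 body
    b64 = image.split("base64,", 1)[-1]
    ts = datetime.fromtimestamp(data["ts"] / 1000)
    path = Path(out_dir) / ts.strftime("%Y%m%d_%H%M%S.jpg")
    path.parent.mkdir(parents=True, exist_ok=True)
    path.write_bytes(base64.b64decode(b64))
    return path
```

Wire this into whatever MQTT client you already use (e.g. as an on‑message callback), and the frames accumulate, ready for stitching into a video.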
Use Case 2: AI Water Meter Reading

In this example, we use the CamThink NE101 for scheduled image capture, BeaverIOT as the IoT platform, and an AI inference service for analysis. The workflow is built manually within BeaverIOT:
- Deploy NE101 to capture periodic water meter images
- Automate the workflow with BeaverIOT for local or remote processing.
- Create a workflow: receive MQTT payload → extract image → call AI inference → return results
- Visualize everything on a dashboard, showing both the raw images and AI‑processed outputs side by side.
It’s mostly about software setup: deploy the BeaverIOT platform, either locally or on your cloud server, following the official docs (BeaverIOT Docs). I’ll assume you already have a basic workflow running in BeaverIOT.

Here’s an example of the workflow output:
And here’s a record of it running:
Step 1: Configure the NE101
Go to the web configuration page and set the image capture interval (I used one capture every 7 days for this demo). Configure MQTT parameters and Wi‑Fi. Make sure the MQTT settings match the BeaverIOT workflow node you’ll create.
Step 2: Test the MQTT Connection
Use the single‑node test tool in BeaverIOT workflows to verify that the MQTT data from the camera is coming through.
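If you prefer to script that check, a minimal shape validator catches most wiring mistakes. The field names come from the sample payload below:

```python
import json

REQUIRED = ("devName", "devMac", "battery", "snapType", "image")

def looks_like_ne101(raw):
    """Return True if raw parses as JSON with the NE101 message shape."""
    try:
        msg = json.loads(raw)
    except (TypeError, json.JSONDecodeError):
        return False
    values = msg.get("values", {})
    return "ts" in msg and all(key in values for key in REQUIRED)
```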
Sample payload from the device:
```json
{
  "ts": 1740640441620,
  "values": {
    "devName": "NE101 Sensing Camera",
    "devMac": "D8:3B:DA:4D:10:2C",
    "battery": 84,
    "snapType": "Button",
    "localtime": "2025-02-27 15:14:01",
    "imageSize": 74371,
    "image": "data:image/jpeg;base64,..."
  }
}
```
Field details:

- `ts`: Timestamp (ms)
- `devName`: Device name
- `devMac`: Device MAC address
- `battery`: Battery level (%)
- `snapType`: Capture trigger (e.g., Button, Scheduled, PIR)
- `localtime`: Local time (string format)
- `imageSize`: Image size in bytes
- `image`: Base64‑encoded JPEG image data (prefixed with `data:image/jpeg;base64,`)
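One detail that trips people up when consuming the `image` field outside of BeaverIOT: the data‑URI prefix has to be stripped before Base64 decoding. For example:

```python
import base64

def decode_image(image_field):
    """Return raw JPEG bytes from the NE101 data-URI image field."""
    prefix = "data:image/jpeg;base64,"
    if image_field.startswith(prefix):
        image_field = image_field[len(prefix):]
    return base64.b64decode(image_field)
```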
Step 3: Create the first entity in the Entities section, and add an Entity node in the workflow to receive the raw data.
Step 4: Add a Code node in the workflow to parse the payload (MQTT data) and extract the Base64‑encoded image.
Code Node Example:

```python
import json

def main():
    # arg1 is the upstream node's output, injected by the Code node runtime
    try:
        if isinstance(arg1, str):
            data = json.loads(arg1)
        else:
            data = arg1
    except json.JSONDecodeError:
        return {"result": None}

    # the MQTT message may be nested under a "payload" key
    payload = data.get("payload", data)
    if isinstance(payload, str):
        try:
            payload_data = json.loads(payload)
        except json.JSONDecodeError:
            return {"result": None}
    else:
        payload_data = payload

    # extract the Base64-encoded image from the "values" object
    image_data = payload_data.get("values", {}).get("image")
    return {"result": image_data}
```
Step 5: Create a second entity in the Entities section to display the raw image, and add an Entity node in the workflow to receive the previous node’s output for subsequent AI inference.
Step 6: Send the Base64‑encoded image to the AI inference service (here, we use an internally wrapped request for image inference).
Step 7: Call the AI image synthesis service to overlay the inference results onto the original image, enabling visualized AI outputs.
Step 8: Create a third entity to display the AI‑processed image. This will serve as the final data node in the workflow.
Step 9: Create a visualization dashboard to display both the original image and the AI‑processed inference results.
The CamThink NE101 offers a rich set of interfaces, a flexible modular design, and ultra‑low‑power operation with excellent scalability. We look forward to seeing what you build with it!