The idea behind OpenClaw is fascinating: giving an AI agent access to all the resources of a computer so it can autonomously solve tasks. OpenClaw researches information, installs software, writes and executes code when necessary, and can even perform online tasks such as answering emails or creating accounts on forums and services to reach its goals.
In practice, it can do almost everything a person sitting in front of a computer could do. Now imagine that person tireless and persistent, able to split their attention across research, experimentation, analysis, improvement, and communication all at once.
But this raises an interesting question: could OpenClaw help create devices, or, even more interesting, transform the very machine it is running on into a new device?
The Installation

In the typical use case, the OpenClaw documentation recommends installing it on a Mac Mini, which costs about $600 in the United States. In reality, however, OpenClaw can run on many different platforms: from your own personal computer (which is not recommended) to an old Android smartphone, or even a hosting service.
My choice was a very affordable x86 mini PC called the LattePanda IOTA.
This model sells for around $170, although there are more and less expensive versions depending on the hardware configuration. It includes an Intel N150 quad-core processor and also an RP2040 coprocessor for real-time operations.
This means OpenClaw can control both the mini PC and an onboard microcontroller—essentially something like an embedded Arduino (technically a Raspberry Pi Pico).
In theory, you could simply connect standard hardware such as a relay, buzzer, or sensor and ask OpenClaw to:
- learn how the component works
- write the software required to control it
- verify whether the objective was achieved
But is that actually viable with the current state of OpenClaw?
Installing OpenClaw on the LattePanda IOTA

My LattePanda IOTA came with Windows. Although OpenClaw can run on Windows, I decided to replace it with Ubuntu for more flexibility.
First I flashed the Ubuntu image onto a USB drive using balenaEtcher. Then I connected the USB drive to the LattePanda and pressed F7 to change the boot device.
After the installation finished, I opened a terminal and ran:
curl -fsSL https://openclaw.ai/install.sh | bash

I also installed a firewall:
sudo apt install ufw -y
sudo ufw default deny incoming
sudo ufw default allow outgoing
sudo ufw allow OpenSSH
sudo ufw enable

And fail2ban to reduce intrusion attempts:
sudo apt install fail2ban -y
sudo systemctl enable --now fail2ban

At this point I launched the OpenClaw onboarding utility and reached the most important decision: which LLM to use and how usage would be billed.
LLMs, Tokens, and Costs

The horror stories around OpenClaw usually involve two things:
- security concerns (which can be mitigated with some attention)
- excessive token consumption (which can also be managed with proper configuration)
OpenClaw needs access to a capable LLM to operate, and the API costs of such models can be significant—sometimes high enough to make you reconsider using OpenClaw at all.
It is possible to install Ollama and run a small local model for free, but on modest hardware without a GPU, both performance and the “intelligence” of OpenClaw degrade noticeably.
My first attempt to control costs was using Google Gemini’s free tier. To obtain an API key you must visit:
https://aistudio.google.com/api-keys
It worked initially, but very quickly I started receiving “Resource Exhausted” errors.
My second attempt was adding $10 of credit to OpenRouter, which allows access to several free models with a daily usage limit. Unfortunately OpenClaw quickly exhausted the available quota, and the free models available through OpenRouter didn’t perform particularly well.
I experimented with spacing out heartbeats, reducing LLM calls, and minimizing context size, but the quota still lasted only a couple of days.
The best option I found so far is ChatGPT Go, a subscription plan available in more than 170 countries. It costs around $6 per month, provides better limits than the free tier, and most importantly allows configuring authentication through Codex OAuth instead of API keys.
It’s still important to space out heartbeats, reduce concurrency, avoid reasoning mode, and avoid unnecessary interactions—but none of this is particularly difficult, and with these adjustments the monthly quota can last the entire month.
Here is an example of the heartbeat configuration from my JSON config:
"heartbeat": {
  "every": "2h",
  "includeReasoning": false,
  "target": "last",
  "prompt": "Read HEARTBEAT.md if it exists ….",
  "ackMaxChars": 300
},
"maxConcurrent": 1,
"subagents": {
  "maxConcurrent": 2
}

When configuring Codex OAuth, it’s best to log in directly on the machine rather than through SSH, since the authentication flow opens a browser locally.
I also recommend installing the qmd skill, which reduces input tokens during long sessions.
Once the model was configured, I selected Telegram for messaging and enabled Brave Search by obtaining a free API key so OpenClaw could search the web.
I also installed a couple of additional skills:
- find skills (allows OpenClaw to search and install new skills)
- super memory (improves memory management)
Beyond the usual experimentation and using OpenClaw for research, content creation, or administrative assistance, what really interested me was exploring whether OpenClaw could transform the machine it runs on into another device.
A few years ago, that idea would have sounded like something out of a Philip K. Dick story, such as Autofac.
First I installed Python tools and enabled serial access so OpenClaw could use all the hardware resources. (I later realized that OpenClaw could have handled this setup without my intervention.)
sudo apt install python3-pip
pip3 install pyserial
sudo usermod -a -G dialout roni

Then I connected an LED between GND and GPIO pin GP1.
My request was simple: whenever the OpenClaw heartbeat runs, the LED should turn on.
This LattePanda runs Ubuntu, but the LED and most header-connected hardware are controlled by the RP2040 coprocessor. Programming it requires interacting with a REPL or mpremote—something that isn’t entirely straightforward.
OpenClaw managed to figure out how to control the LED and synchronize it with the heartbeat without any trouble.
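As a sketch of what this involves: from the Ubuntu side, the RP2040 can be driven by pushing MicroPython one-liners through mpremote. The pin number, serial port, and pulse duration below are my assumptions for illustration, not OpenClaw's actual code.

```python
import subprocess


def led_pulse_snippet(pin: int = 1, seconds: float = 0.5) -> str:
    """Build a MicroPython one-liner that pulses a GPIO pin once."""
    return (
        "from machine import Pin; import time; "
        f"p = Pin({pin}, Pin.OUT); p.on(); "
        f"time.sleep({seconds}); p.off()"
    )


def pulse_led(port: str = "/dev/ttyACM0") -> None:
    # mpremote executes the snippet on the RP2040 over USB serial;
    # a heartbeat hook only needs to call this function.
    subprocess.run(
        ["mpremote", "connect", port, "exec", led_pulse_snippet()],
        check=True,
    )
```

On this board the coprocessor typically enumerates as a USB serial device, but the exact port name may differ on your system.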
Then I asked something more complex: use the LED to display messages in Morse code. Combining its existing knowledge with a bit of web research, OpenClaw quickly had the LED blinking.
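The timing logic behind a Morse-blinking LED can be sketched as a pure function that turns a message into (LED state, duration) pairs, using the standard ITU proportions: dot = 1 unit, dash = 3, gap between symbols = 1, between letters = 3, between words = 7. This is my own minimal version (letters only, no digits or punctuation), not the code OpenClaw generated.

```python
# ITU Morse table, letters only for brevity.
MORSE = {
    "A": ".-", "B": "-...", "C": "-.-.", "D": "-..", "E": ".",
    "F": "..-.", "G": "--.", "H": "....", "I": "..", "J": ".---",
    "K": "-.-", "L": ".-..", "M": "--", "N": "-.", "O": "---",
    "P": ".--.", "Q": "--.-", "R": ".-.", "S": "...", "T": "-",
    "U": "..-", "V": "...-", "W": ".--", "X": "-..-", "Y": "-.--",
    "Z": "--..",
}


def morse_schedule(message: str, unit: float = 0.2):
    """Return a list of (led_state, duration) pairs for a message."""
    out = []
    for wi, word in enumerate(message.upper().split()):
        if wi:
            out.append((0, 7 * unit))      # gap between words
        for li, letter in enumerate(word):
            if li:
                out.append((0, 3 * unit))  # gap between letters
            for si, symbol in enumerate(MORSE[letter]):
                if si:
                    out.append((0, unit))  # gap between symbols
                out.append((1, unit if symbol == "." else 3 * unit))
    return out
```

Replaying the schedule on the RP2040 is then just a loop that sets the pin and sleeps for each pair.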
Encouraged by this, I raised the stakes.
I connected a 4-digit 7-segment display and asked it to show the local time. OpenClaw downloaded the necessary library, wrote the Python code, and created a routine that updated the display every minute.
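The display side reduces to splitting the local time into four digits and refreshing once a minute; the driver library depends on the specific display, and OpenClaw chose its own. A minimal sketch of just the digit logic:

```python
from datetime import datetime


def time_digits(now: datetime) -> list[int]:
    """Split the local time HH:MM into the four digits of the display."""
    return [now.hour // 10, now.hour % 10,
            now.minute // 10, now.minute % 10]
```

A once-a-minute cron job (or a sleep loop) feeds these four digits to whatever display driver is in use.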
Both the LED and the display were output devices. How would it handle capturing data from a less conventional source?
For the third experiment, I connected a MEMS methane gas sensor (CH4) and asked OpenClaw to build a system that monitors air quality, stores the measurements, analyzes the results, and sends reports through messaging.
The gas sensor had a single output pin. I told OpenClaw the sensor model and which pin it was connected to. It quickly noticed the sensor was connected to a digital pin and, citing the LattePanda IOTA specifications, asked me to move it to an analog input instead.
After that, it tested the incoming values, wrote code to store the measurements in a CSV file, and scheduled a cron job to compile the readings. It also sent a report in both text and graph format.
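The logging and reporting side of such a monitor can be sketched in a few lines. The file name, column names, and summary fields here are my assumptions; the actual ADC read happens on the RP2040 and the real code is OpenClaw's own.

```python
import csv
import statistics
from pathlib import Path


def log_reading(path: Path, timestamp: str, value: int) -> None:
    """Append one raw ADC reading to a CSV file, creating the header once."""
    is_new = not path.exists()
    with path.open("a", newline="") as f:
        writer = csv.writer(f)
        if is_new:
            writer.writerow(["timestamp", "raw_adc"])
        writer.writerow([timestamp, value])


def summarize(path: Path) -> dict:
    """Compute the basic statistics that a text report would contain."""
    with path.open() as f:
        values = [int(row["raw_adc"]) for row in csv.DictReader(f)]
    return {
        "count": len(values),
        "min": min(values),
        "max": max(values),
        "mean": round(statistics.mean(values), 1),
    }
```

A cron job would call `log_reading` every few minutes and `summarize` once a day before sending the report through messaging.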
These early projects are very simple, but there is still something captivating—and slightly unsettling—about the idea of a machine capable of creating other machines.
The work of a maker is varied, challenging, and complex. We research technical specifications, analyze examples, connect components, run experiments, write software, design circuits, create enclosures, document everything, and then start again.
Asking the machine itself how to connect components—and having it carry out the design, install libraries, write software, and run tests—is a huge shift.
It raises a deeper question about our own role as makers.
What should I upload to GitHub from this experience? A collection of prompts? The OpenClaw JSON config? The SOUL.MD file?
Will we still call ourselves makers when most of the making is delegated to machines?
What will we actually understand when our role becomes little more than physically connecting parts?
In which direction—and for what purpose—will it still make sense to keep building things?
I don’t have good answers. For now, as the air quality monitor that OpenClaw built seems to suggest, I’m going to open the window.








