An Easy Button for AI

The World's Easiest GPT-like Voice Assistant is powered by an LLM, runs 100% locally on a Raspberry Pi, and can be built in under an hour.

Nick Bild
Machine Learning & AI
It has never been easier to run your own LLM (📷: Nick Bild)

Machine learning has experienced a meteoric rise over the past decade or so. There has been a feeling in the air that this technology has finally arrived, with advancements in algorithms and hardware allowing it to live up to expectations for the first time. While these events have been unfolding for some time now, it was the release of OpenAI’s ChatGPT a little over a year ago that really caused the floodgates to open. The release of a highly accessible and powerful large language model (LLM) captured the interest of the general public in a way that no AI tool ever had before.

But after the initial honeymoon phase was over, LLMs like ChatGPT gave rise to many questions and concerns. These algorithms are notorious for the massive amount of compute resources and energy they consume, for example, leaving many to wonder how wise it is for these models to proliferate. Privacy concerns also came into the spotlight — how safe are our conversations with these chatbots, and do we really want to send sensitive information over the Internet to a cloud service provider? Questions about censorship also arose when it was discovered that many of the “guardrails” built into the models looked an awful lot like the biases of the developers being intentionally baked in.

The pace of progress has not let up for a second, however, and open source models have dramatically changed the landscape. What required massive computing resources and a team of experts in machine learning and computer science to operate a year ago can now be reproduced on a typical laptop by a technically inclined individual, resolving many of the concerns about cloud-based LLMs. In fact, it can now be so simple to set up an LLM for personal use that I recently built what I call The World's Easiest GPT-like Voice Assistant. In this project, I provide step-by-step instructions to build an LLM-based voice assistant that runs 100% locally on a Raspberry Pi.

To use the system, one presses a button and then speaks their request. A microphone captures the audio and forwards it to the Whisper automatic speech recognition software, which creates a transcript of the audio. That transcript is then fed into a TinyLlama-1.1B (1.1 billion parameter) LLM that has been packaged up as a llamafile. If you are not familiar with llamafiles yet, they are well worth checking out: each one is an entirely self-contained LLM chatbot executable that runs on multiple hardware architectures and operating systems. The response from the LLM is then forwarded to the free and open source eSpeak speech synthesizer, which produces audio that is played on a speaker connected to the Raspberry Pi.
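Since each stage of the pipeline is a command-line tool, the whole flow amounts to chaining three programs together. The following is a minimal Python sketch of that idea; the binary names, model file, and flags (`whisper`, `tinyllama.llamafile`, `espeak`, and their arguments) are illustrative assumptions, not the project's exact commands:

```python
import subprocess

# Assumed names and paths -- substitute whatever your installation uses.
WHISPER_MODEL = "tiny.en"
LLAMAFILE = "./tinyllama.llamafile"

def transcribe_cmd(wav_path):
    # Assumed Whisper invocation: speech in, plain-text transcript out.
    return ["whisper", wav_path, "--model", WHISPER_MODEL,
            "--output_format", "txt"]

def llm_cmd(prompt):
    # Assumed llamafile one-shot invocation: prompt in, completion out.
    return [LLAMAFILE, "-p", prompt, "-n", "128"]

def speak_cmd(text):
    # eSpeak reads the text argument and plays it on the default audio device.
    return ["espeak", text]

def assistant(wav_path):
    # 1. Speech -> text
    transcript = subprocess.run(transcribe_cmd(wav_path),
                                capture_output=True, text=True).stdout.strip()
    # 2. Text -> LLM response
    response = subprocess.run(llm_cmd(transcript),
                              capture_output=True, text=True).stdout.strip()
    # 3. Response -> speech
    subprocess.run(speak_cmd(response))
    return response
```

Each stage simply consumes the previous stage's standard output, which is why the whole assistant can run offline once the binaries and model file are on the Pi.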

After setup is complete, the Raspberry Pi can be completely disconnected from the Internet. Everything runs 100% locally, keeping your conversations private. There is a price to pay for this, however. A brief request can easily take 15 seconds of processing before the response is ready. And, of course, if you make a complicated request, or ask for a lengthy story to be generated, it will take longer. My testing was completed on a Pi 4 — if you have a Pi 5, the same instructions should work, and the system should be faster, but I have not tested it.

All you need is a Raspberry Pi, a USB microphone, and a speaker to set up your own LLM-based voice assistant. The software is all free and the build process should take under an hour. For bonus points, you could even try porting The World's Easiest GPT-like Voice Assistant to other, faster single board computers for better performance. Using llamafile, that should not be too challenging.

Nick Bild
R&D, creativity, and building the next big thing you never knew you wanted are my specialties.