Your Next AI Assistant Might Not Need an Internet Connection
Jansky is a Raspberry Pi AI assistant that runs 100% offline to keep your data secure without compromising on convenience.
We’ve all been burned by tech companies so many times that it is hard to trust them. No matter what assurances they give us about protecting our information, it seems inevitable that a data breach will eventually happen. So, in the age of modern AI when cloud-based platforms want all of our data to provide us with more personalized service, coughing it up seems like a big gamble. As the old saying goes: Fool me once, shame on you; fool me twice, shame on me.
Recent advances have made AI genuinely useful, though, and giving it up entirely isn't an appealing option. The natural solution is to run AI models locally, so that private data never has to leave the device. That is much easier said than done in practice. But it doesn't have to be overly complex, says Mayukh Bagchi. To prove it, he designed and built an offline AI assistant using highly accessible tools.
The device, named “Jansky,” is a desktop assistant that listens, thinks, and speaks without constantly sending recordings to the internet. At its core is a Raspberry Pi 5 with 8 GB of RAM mounted directly behind a 5-inch 800×480 touchscreen. The compact build keeps everything self-contained: a USB microphone captures voice commands, while a small USB speaker delivers spoken replies. Even the display draws power directly from the Pi’s GPIO pins, minimizing extra cables and giving the unit the look of a purpose-built appliance rather than a prototype.
A custom wake-word system constantly monitors audio for the phrase “Hey Jansky.” Once triggered, it records the user’s speech, transcribes it locally using Whisper.cpp speech-recognition software, and then hands the text to a language model running directly on the Pi through Ollama. The model — a compact Qwen 2.5 LLM with 1.5 billion parameters — handles everyday requests such as telling the time, reporting system temperature, or answering simple questions.
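The transcribe-then-generate handoff can be sketched in a few lines. This is not Jansky's actual code: the whisper.cpp binary and model paths are assumptions, and the Ollama model tag should be checked against `ollama list`. Ollama's local REST endpoint (`/api/generate`) and its `model`/`prompt`/`stream` fields are the library's real interface.

```python
import json
import subprocess
import urllib.request

OLLAMA_URL = "http://localhost:11434/api/generate"  # Ollama's default local endpoint
MODEL = "qwen2.5:1.5b"  # assumed tag for the 1.5B Qwen 2.5 model

def transcribe(wav_path: str) -> str:
    """Transcribe a recorded clip with whisper.cpp's CLI.

    The binary and model paths below are placeholders for wherever
    whisper.cpp was built and which GGML model was downloaded.
    """
    result = subprocess.run(
        ["./whisper-cli", "-m", "models/ggml-tiny.en.bin",
         "-f", wav_path, "--no-timestamps"],
        capture_output=True, text=True, check=True,
    )
    return result.stdout.strip()

def build_request(prompt: str) -> dict:
    """Payload for Ollama's /api/generate endpoint (non-streaming)."""
    return {"model": MODEL, "prompt": prompt, "stream": False}

def ask_llm(prompt: str) -> str:
    """Send the transcribed text to the local model and return its reply."""
    req = urllib.request.Request(
        OLLAMA_URL,
        data=json.dumps(build_request(prompt)).encode(),
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        return json.loads(resp.read())["response"]
```

Because everything talks to `localhost`, the loop works with the network cable unplugged, which is the whole point of the build.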
If the assistant needs to act, it calls specific software tools. One module checks weather forecasts, another retrieves news headlines, and another reads the Pi’s hardware statistics. The result is turned back into natural-sounding speech using a local neural voice engine called Piper and played through the speaker. A simple animated face on the screen changes expression as the assistant moves between idle, listening, thinking, and speaking states, giving the illusion of personality.
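A tool layer like the one described can be as simple as a keyword-to-function registry checked before the LLM is invoked. The keywords and handlers below are illustrative guesses, not Jansky's modules; the sysfs temperature path is the standard one on a Raspberry Pi.

```python
from datetime import datetime

def get_time(_: str) -> str:
    """Answer 'what time is it' locally, no model needed."""
    return datetime.now().strftime("It's %H:%M.")

def cpu_temp(_: str) -> str:
    """Read the Pi's SoC temperature; sysfs reports millidegrees Celsius."""
    try:
        with open("/sys/class/thermal/thermal_zone0/temp") as f:
            return f"CPU is at {int(f.read()) / 1000:.1f} degrees."
    except OSError:
        return "Temperature sensor unavailable."

# Hypothetical registry: the article names weather, news, and hardware-stats
# tools; only two cheap examples are sketched here.
TOOLS = {
    "time": get_time,
    "temperature": cpu_temp,
}

def dispatch(transcript: str):
    """Return a tool's reply if a keyword matches, else None (defer to the LLM)."""
    lowered = transcript.lower()
    for keyword, tool in TOOLS.items():
        if keyword in lowered:
            return tool(transcript)
    return None
```

Whatever text comes back, from a tool or the model, can then be piped to Piper's CLI (e.g. `echo "reply" | piper --model <voice>.onnx --output_file reply.wav`) and played through the USB speaker.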
Only when faced with complex questions does Jansky optionally reach out to a cloud AI service, and even that feature can be disabled. The majority of interaction — including voice recognition and responses — remains entirely offline.
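An opt-in escape hatch like this comes down to a single gate checked before any request leaves the device. The flag name and the prompt-length heuristic below are assumptions for illustration, not Jansky's actual routing logic.

```python
CLOUD_ENABLED = False  # one switch keeps the assistant fully offline

def route_query(prompt: str) -> str:
    """Decide whether a request stays on-device or may go to a cloud model."""
    if not CLOUD_ENABLED:
        return "local"
    # Even when enabled, only long, complex prompts leave the device.
    return "cloud" if len(prompt.split()) > 30 else "local"
```

With the flag left at its default, every code path resolves locally, so privacy is the out-of-the-box behavior rather than something the user has to configure.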
Bagchi suggests performance could improve further with a small neural-processing add-on board, but even in its current form the project demonstrates that modern AI doesn’t have to live exclusively in distant data centers. With inexpensive hardware and open software, a private, always-available assistant can now live right on a desk.
R&D, creativity, and building the next big thing you never knew you wanted are my specialties.