Home Is Where the Smart Is

Adrian Todorov built a private AI voice assistant at home with a Jetson Orin Nano and the Nomad orchestrator.

Nick Bild
14 days ago | Machine Learning & AI
NVIDIA Jetson Orin Nano (📷: NVIDIA)

Generative artificial intelligence (AI) tools are improving by the week, and with these advances, the jabs and skepticism of the earlier days are dying away. It seems like everyone wants to integrate these tools into their daily lives in one way or another now. One particularly popular application of the technology is in upgrading voice assistants. The limited understanding and awkward interactions that characterized past voice assistants can be swept away by using a large language model (LLM) to respond to our requests.

But the cutting-edge AI models required to power these applications tend to be major resource hogs. As such, for most people, the only way to harness them is via a cloud-based service. That creates a problem for anyone who is concerned about their privacy, however. Do you really want all of your conversations being sent over the internet to a black box somewhere in the cloud?

Feeling on edge about privacy

Adrian Todorov is an engineer with an interest in running an LLM voice assistant as part of his Home Assistant setup. But Todorov did not want to connect to any remote services to make this happen, so he had to come up with another solution. After a bit of research, he landed on a practical approach that is relatively inexpensive and simple to implement. And fortunately for us, he has written it up so that we can reproduce the setup in our own homes.

Todorov needed a hardware platform that could handle the AI workload without costing thousands of dollars, so he settled on the NVIDIA Jetson Orin Nano. Built on the NVIDIA Ampere architecture with 1,024 CUDA cores and 32 tensor cores, this little computer can perform up to 67 trillion operations per second. That is more than enough horsepower to run a wide range of models available via the Ollama local LLM hosting server.

Tying it all together

In order to tame the complexity and keep everything up, running, and playing nicely with Home Assistant, Todorov decided to use Nomad for orchestration. After installing Ollama on the Jetson and Open WebUI (a web-based LLM GUI) on another machine, he deployed both with Nomad to get the benefits of orchestration. Since both are available as Docker containers, deployment only required writing a pair of structured configuration files.
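To give a sense of what one of those configuration files looks like, here is a minimal Nomad job sketch for running the Ollama container. This is an illustrative example, not Todorov's exact configuration; the datacenter name, resource limits, and image tag are assumptions.

```hcl
# Hypothetical Nomad job for the Ollama container.
# Values here (datacenter, resources, image tag) are illustrative.
job "ollama" {
  datacenters = ["dc1"]
  type        = "service"

  group "ollama" {
    network {
      port "api" {
        static = 11434   # Ollama's default API port
      }
    }

    task "ollama" {
      driver = "docker"

      config {
        image = "ollama/ollama:latest"
        ports = ["api"]
      }

      resources {
        cpu    = 1000    # MHz
        memory = 4096    # MB
      }
    }
  }
}
```

A similar job file pointing at the Open WebUI container image would cover the second service, with Nomad handling scheduling and restarts for both.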

When all is said and done, both services are available on the local network. From there, they can be plugged into any other workflows or applications, like Home Assistant, without any reliance on remote, cloud-based services. Be sure to check out the full project write-up for all the details you need to build your own edge AI infrastructure.
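Once the services are up, other applications on the network can talk to Ollama over its HTTP API. Below is a minimal Python sketch of such a query using only the standard library; the `jetson.local` hostname is a hypothetical stand-in for wherever the Jetson sits on your network.

```python
import json
import urllib.request

# Hypothetical hostname for the Jetson on the local network;
# 11434 is Ollama's default API port.
OLLAMA_URL = "http://jetson.local:11434/api/generate"

def build_request(model: str, prompt: str) -> bytes:
    # Ollama's /api/generate endpoint takes a JSON body with the
    # model name, the prompt, and a streaming flag.
    return json.dumps(
        {"model": model, "prompt": prompt, "stream": False}
    ).encode()

def ask(model: str, prompt: str) -> str:
    # Send the prompt to the local Ollama server and return its reply.
    req = urllib.request.Request(
        OLLAMA_URL,
        data=build_request(model, prompt),
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        return json.loads(resp.read())["response"]
```

Home Assistant offers its own Ollama integration, but a small client like this shows how any local tool can use the same endpoint with no cloud dependency.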

Nick Bild
R&D, creativity, and building the next big thing you never knew you wanted are my specialties.