Forget pre-programmed paths. What if you could simply tell your robot where to go and what to do—just like instructing a person? Hiwonder JetAuto is not your typical wheeled robot; it's an open-source development platform that merges state-of-the-art AI comprehension with advanced autonomous navigation, turning high-level voice commands into real-world actions.
The Core Idea: Conversation as Control
The goal is intuitive: interact with your robot using natural language. Thanks to integrated support for major multimodal AI models—such as DeepSeek, Qwen, and 01.AI via cloud APIs—JetAuto can understand complex, context-rich commands. Combined with its six-microphone array for clear voice pickup, you can give instructions like “Go to the zoo area and tell me what animals you see” or “Find the red tool on the workbench.”
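To make the command-to-intent step concrete, here is a minimal sketch of the parsing stage. On the real robot this would be delegated to a cloud LLM (DeepSeek or Qwen via its API); the keyword matcher, place names, and task names below are stand-in assumptions so the pipeline shape is visible without network access.

```python
# Hypothetical stand-in for the cloud LLM call: map a spoken command
# to a structured intent {destination, task}. All labels are illustrative.
KNOWN_PLACES = {"zoo", "workbench", "kitchen"}          # assumed semantic labels
KNOWN_TASKS = {"tell": "describe_scene", "find": "locate_object"}

def parse_command(text: str) -> dict:
    """Extract a destination label and a task from a natural-language command."""
    words = text.lower().replace(",", "").split()
    place = next((w for w in words if w in KNOWN_PLACES), None)
    task = next((KNOWN_TASKS[w] for w in words if w in KNOWN_TASKS), None)
    return {"destination": place, "task": task}

intent = parse_command("Go to the zoo area and tell me what animals you see")
print(intent)  # {'destination': 'zoo', 'task': 'describe_scene'}
```

In practice the LLM would return this same structured intent as JSON, which is what makes the downstream navigation step language-agnostic.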
From Words to Waypoints: SLAM Meets Semantics
Understanding the command is only the first step. Executing it requires translating semantics into navigation. This is where JetAuto’s integrated SLAM (Simultaneous Localization and Mapping) system comes in. Using LiDAR and a 3D camera, it builds a map of its surroundings in real-time.
The breakthrough is layering AI-driven semantic understanding on top of this geometric map. The robot doesn't just see an obstacle; it can be trained to recognize it as a "table" or a "door." When you say “go to the zoo,” it correlates that verbal label with a specific area on its map, autonomously planning and following a path there.
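The label-to-waypoint correlation can be sketched as a small lookup layer over the SLAM map. The coordinates and label names below are invented for illustration; a real JetAuto setup would store these alongside the map, and the returned pose would populate a `geometry_msgs/PoseStamped` sent to Nav2's `NavigateToPose` action.

```python
import math

# Hypothetical semantic layer: verbal labels resolved to poses on the
# SLAM map. Coordinates are made up for illustration.
SEMANTIC_MAP = {
    "zoo": (3.2, 1.5, 0.0),              # (x, y, yaw) in the map frame
    "workbench": (-1.0, 2.4, math.pi / 2),
}

def pose_goal(label: str) -> dict:
    """Resolve a semantic label to a Nav2-style navigation goal."""
    if label not in SEMANTIC_MAP:
        raise KeyError(f"unknown place: {label}")
    x, y, yaw = SEMANTIC_MAP[label]
    # Planar yaw encoded as a quaternion, as a pose goal expects.
    return {
        "position": {"x": x, "y": y, "z": 0.0},
        "orientation": {"z": math.sin(yaw / 2), "w": math.cos(yaw / 2)},
    }
```

Keeping the semantic layer as a thin lookup means the planner underneath (Nav2 in ROS 2) needs no changes: language only ever selects a goal, never drives the wheels directly.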
Arrival at the destination doesn't mark the end of the task. Once in the target zone—like our “zoo”—JetAuto’s onboard vision, powered by visual AI models, kicks in. It doesn't just capture images; it analyzes the scene, identifying and classifying objects. It can then generate a verbal report via its AI “brain,” completing the interactive loop: “I can see a giraffe and an elephant.”
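The final report-generation step can be sketched as a simple templating function. The detected labels would come from the onboard vision model; everything below, including the function names, is an illustrative assumption showing only how detections become speech text.

```python
def _article(noun: str) -> str:
    """Pick 'a' or 'an' by first letter (a rough heuristic)."""
    return "an" if noun[0].lower() in "aeiou" else "a"

def scene_report(objects: list[str]) -> str:
    """Compose the verbal report spoken after scene analysis."""
    if not objects:
        return "I don't see anything I recognize here."
    phrases = [f"{_article(o)} {o}" for o in objects]
    if len(phrases) == 1:
        return f"I can see {phrases[0]}."
    return f"I can see {', '.join(phrases[:-1])} and {phrases[-1]}."

print(scene_report(["giraffe", "elephant"]))  # I can see a giraffe and an elephant.
```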
JetAuto packages these advanced capabilities into an accessible, ROS 2-based platform ideal for prototyping and learning. It demonstrates a practical implementation of embodied intelligence—where AI isn't just processing data, but is guiding a physical body through a dynamic environment. This fusion opens project possibilities in:
- Human-Robot Interaction (HRI): Creating more natural and intuitive control interfaces.
- Semantic Navigation: Researching how machines understand and label spaces.
- AI-Assisted Exploration: Deploying robots for search, inspection, or inventory tasks using high-level directives.
The platform is built for expansion. Out of the box, it supports advanced navigation, voice interaction, and visual recognition. Its expandable interface allows you to add robotic arms (like the JetArm) for manipulation tasks, turning it into a complete mobile manipulation system. Hiwonder provides comprehensive resources, including ROS 2 packages, SLAM configurations, and tutorials for AI model integration, helping you move from concept to functional prototype rapidly.
Ready to build a robot that truly listens, understands, and explores? JetAuto provides the hardware and software foundation to start experimenting with the next generation of autonomous, AI-driven robotics today.