This Compact Espressif ESP32-Powered Autonomous Robot Has a Machine Learning Brain Written in PHP

Streaming live video to a remote web server, this robot receives its commands from a PHP-based machine learning model.

Pseudonymous maker "bbtrash" has been building an autonomous robot, powered by an Espressif ESP32 microcontroller and a machine learning model written in an unusual choice of programming language: PHP.

"[It's] a small robot controlled by [a] machine learning script," bbtrash explains of the project. "[It] can go forward, left, right, stop, backward, all controlled by data from camera processed over ML. And there is a fun part: [the] ML code is written in PHP using [the Rubix ML] library. (Yes, I know Python, yes I [know] OpenCV, this is slow and basic but simple). [The] robot [is] sending images using [a] REST API to [the] server and gets move commands as a response."

Developed by Rasmus Lerdorf and launched in 1995, PHP — originally Personal Home Page, later PHP: Hypertext Preprocessor — is a language designed firmly for the web. In bbtrash's project, though, it serves as the robot's brains: a low-framerate video stream is transmitted from the robot to a web server, where the PHP application processes it to locate objects and transmit control commands to steer the robot towards its goal.
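The server-side decision step could look something like the following sketch — a toy nearest-neighbour classifier that maps a downsampled camera frame to a move command. This is purely illustrative (in Python for brevity, where the actual project uses PHP and Rubix ML); the frame sizes, labels, and training data are all hypothetical.

```python
# Illustrative sketch, not the maker's code: map a tiny grayscale frame
# (flattened pixel list) to a move command via nearest-neighbour lookup,
# in the spirit of the image-in, command-out pipeline described above.
from math import dist

# Hypothetical labelled frames: pixel patterns paired with the command
# the robot should execute when it sees something similar.
TRAINING = [
    ([0.9, 0.9, 0.1, 0.1], "forward"),  # target straight ahead
    ([0.9, 0.1, 0.9, 0.1], "left"),     # target on the left
    ([0.1, 0.9, 0.1, 0.9], "right"),    # target on the right
    ([0.1, 0.1, 0.1, 0.1], "stop"),     # nothing in view
]

def predict_command(frame):
    """Return the command of the training frame closest to the input."""
    return min(TRAINING, key=lambda sample: dist(sample[0], frame))[1]

print(predict_command([0.8, 0.2, 0.8, 0.2]))  # nearest to the "left" example
```

In the real project this prediction would run inside the PHP application, with the robot POSTing each frame over the REST API and executing whichever command comes back.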

The robot itself is built around an Espressif ESP32 microcontroller with a camera sensor — and plans for a collision sensor in the future, bbtrash adds — with two 360-degree servo motors driving tank-style tracks assembled around a 3D-printed housing. "I just started and [it's] really fun to play with it," the maker says.

"I can control [the] robot from internet and add training data for ML. When I have trained [the] ML model I can test it on [the] robot and see [its] behavior. ML training and analyzing images from camera is running on my laptop which communicates with [the] server which communicates with [the] robot. I'm preparing models which can drive robot autonomously, line following, object following and more. If I add [a] collision sensor I can train [a] new ML model after collision with new images automatically without human interaction."
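That record-then-train workflow can be sketched as below. Again this is a hedged illustration, not the project's code: the function names are invented, the "frames" are stand-in strings, and the trainer is a stub where a real pipeline would fit a classifier (Rubix ML, in the PHP original).

```python
# Illustrative sketch of the collect/train/drive loop the maker describes.
dataset = []  # (frame, command) pairs gathered while driving manually

def record(frame, command):
    """Manual-control phase: log each frame alongside the command
    the human driver issued, building up training data."""
    dataset.append((frame, command))

def train(samples):
    """Stub trainer: memorise the samples. A real pipeline would fit
    a classifier here instead of a dictionary lookup."""
    lookup = dict(samples)
    return lambda frame: lookup.get(frame, "stop")  # safe default

# Phase 1: drive by hand and record training data.
record("frame_a", "forward")
record("frame_b", "left")

# Phase 2: train, then let the model answer the robot's requests.
model = train(dataset)
print(model("frame_b"))  # a frame seen in training -> "left"
print(model("frame_c"))  # an unseen frame -> safe default "stop"
```

The automatic-retraining idea bbtrash mentions would slot in here too: after a collision, the offending frames get appended to the dataset with corrected labels and the model is retrained, no human labelling required.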

More information on the project is available on bbtrash's Reddit post; source code has not been published as it is "a little ugly," the maker admits, but is available on request.

Gareth Halfacree
Freelance journalist, technical author, hacker, tinkerer, erstwhile sysadmin. For hire.