Today we are building a Company Assistant Robot that can deliver documents and food to employees, autonomously map and navigate the whole office, with a chat system that connects all the employees and the manager, and voice support to make it even more intelligent.
Gathering the tools
To build the car I simply followed the official documentation link.
Attaching the frame
Connecting Servo Shield to Raspberry Pi.
I attached the power bank to the back and the Kinect sensor on top with tape, and here is the final image.
Building from the Ground Up :
When I first started working on the project I used the prebuilt Raspbian image for the Donkey car, only to learn after installing ROS that most ROS packages are not compatible with Raspbian and target Ubuntu only. So I had to start all over again with Ubuntu MATE and the ROS Kinetic distro, then install all the Donkey car required libraries (TensorFlow, Keras, ...) on the Raspberry Pi from scratch.
Note: I am writing this tutorial for anybody with no prior knowledge of ROS, the Raspberry Pi, TensorFlow, Keras, etc., so if you already know how to deal with them, feel free to skip to the Kinect driver setup in point 2. Also, if you have problems installing any of the packages, especially ROS, the Kinect driver, and TensorFlow, follow the steps very carefully. You may find it's not the ordinary way of installing them, but I am adding some tips and direct links to the versions I tested, to get past the errors I faced while installing; they took me a couple of weeks of banging my head against the wall to figure out, so it should be straightforward for you. Also, feel free to comment if you face another error not mentioned here; I may be able to help you with it. With this noted, let's start:
1- Get the Raspberry Pi working
Update: an easier way is to get an Ubuntu MATE image with the ROS distro pre-installed. I didn't test it, but found it on the internet after finishing the project.
- installing Ubuntu Mate
This is a straightforward process, like installing any OS on the Raspberry Pi.
Go to this website and download the Ubuntu MATE image for the Raspberry Pi: http://ubuntu-mate.org/
To copy it to the SD card:
Mac users: use Etcher.
Windows: extract it with 7zip and write it to the SD card with Win32DiskImager.
Linux:
Install tools to extract the image:
sudo apt-get install gddrescue xz-utils
Extract it:
unxz ubuntu-mate-16.04.2-desktop-armhf-raspberry-pi.img.xz
Copy it to the SD card; you need to replace /dev/sdx with your card's device path:
sudo ddrescue -D --force ubuntu-mate-16.04.2-desktop-armhf-raspberry-pi.img /dev/sdx
You’ll be prompted with the “System Configuration” screen. Complete the setup, and let's jump to installing the ROS distro.
Enable Camera and ssh Interfaces :
In the MATE terminal write:
sudo raspi-config
then enter your password, choose Interfacing, and enable both the Camera and the SSH server as in the images.
If you have an HDMI screen you can type all the terminal commands with a USB keyboard and mouse, or you can now use the SSH server from your desktop Linux PC.
If you are on Windows you can use PuTTY to log in to the SSH server. On Mac or Linux, run the following command, where pi is the user name and 192.16.... is your Raspberry Pi's IP:
ssh pi@<your_raspberry_pi_ip>
You can get the IP in many ways; for example, over HDMI run ifconfig as in the picture. Your Raspberry Pi's IP is the address next to the wlan0 interface.
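As an alternative to reading the address off ifconfig, a short Python snippet can make a best-effort guess at the Pi's LAN IP. This is just a convenience sketch, not part of the original setup:

```python
import socket

def local_ip():
    """Best-effort guess of this machine's LAN IP (the address ifconfig
    shows next to wlan0 when connected over Wi-Fi)."""
    s = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    try:
        # A UDP "connect" sends no packets; it only selects the outgoing interface.
        s.connect(("8.8.8.8", 80))
        return s.getsockname()[0]
    except OSError:
        return "127.0.0.1"  # no network route; fall back to loopback
    finally:
        s.close()

print(local_ip())
```

Run it on the Pi itself (e.g. over the HDMI console) and use the printed address for the SSH login.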
- Installing ROS :
- Reference : ROS Kinetic install Guide : http://wiki.ros.org/kinetic/Installation/Ubuntu
Start Installing :
- Configure your Ubuntu repositories to allow "restricted," "universe," and "multiverse." You can follow the Ubuntu guide
- Setup your sources.list
sudo sh -c 'echo "deb http://packages.ros.org/ros/ubuntu $(lsb_release -sc) main" > /etc/apt/sources.list.d/ros-latest.list'
- Set up your keys
sudo apt-key adv --keyserver hkp://ha.pool.sks-keyservers.net:80 --recv-key 421C365BD9FF1F717815A3895523BAEEB01FA116
Note: also check the mirror links in the reference for other servers based on your location; the keyserver may not work (it happened to me, at least).
sudo apt-get update
sudo apt-get install ros-kinetic-desktop
sudo rosdep init
rosdep update
echo "source /opt/ros/kinetic/setup.bash" >> ~/.bashrc
source ~/.bashrc
sudo apt-get install python-rosinstall python-rosinstall-generator python-wstool build-essential
Test the installation by typing roscore in the MATE terminal.
You should get a result like this, showing that roscore is running on the Raspberry Pi.
- Installing TensorFlow and Keras
For Python 3.5:
wget https://github.com/lhelontra/tensorflow-on-arm/releases/download/v1.8.0/tensorflow-1.8.0-cp35-none-linux_armv7l.whl
sudo pip3 install tensorflow-1.8.0-cp35-none-linux_armv7l.whl
sudo pip3 uninstall mock
sudo pip3 install mock
For Python 2.7:
wget https://github.com/lhelontra/tensorflow-on-arm/releases/download/v1.8.0/tensorflow-1.8.0-cp27-none-linux_armv7l.whl
sudo pip install tensorflow-1.8.0-cp27-none-linux_armv7l.whl
sudo pip uninstall mock
sudo pip install mock
sudo apt-get install python3-numpy
sudo apt-get install libblas-dev
sudo apt-get install liblapack-dev
sudo apt-get install python3-dev
sudo apt-get install libatlas-base-dev
sudo apt-get install gfortran
sudo apt-get install python3-setuptools
sudo apt-get install python3-scipy
sudo apt-get update
sudo apt-get install python3-h5py
sudo pip3 install keras
Test the TensorFlow and Keras installation:
python -c 'import tensorflow as tf; print(tf.__version__)'   # for Python 2
python3 -c 'import tensorflow as tf; print(tf.__version__)'  # for Python 3
python -c 'import keras; print(keras.__version__)'   # for Python 2
python3 -c 'import keras; print(keras.__version__)'  # for Python 3
2- Kinect setup to work with the Raspberry Pi, and ROS integration
You will need an adapter to run the Kinect 360 from an ordinary USB port.
this is the one i am using
It runs on 12 volts, unlike the normal 5 volt USB from PCs and the Raspberry Pi, so to keep this project portable we have to modify/damage the adapter splitter. (For testing, and not having a 12 volt battery, I didn't modify the adapter cable; I used it with a long extension cable to a mains outlet.)
- you will need Voltage regulator 12V 1A (e.g. NTE966).
- 0.33μF and 0.1μF capacitors.
Here is the link and the final result if you wish to proceed this way with a 12 volt battery.
To make the Kinect work with the Raspberry Pi we'll need to install a library called freenect, which makes it very easy to access both the regular and depth images from the Kinect 360. To do this, type:
sudo apt-get install freenect
Now to test it, connect the Kinect sensor to the Raspberry Pi and an HDMI cable to the screen (or use VNC Viewer; google it, it's easy to use) and run the freenect demo viewer:
freenect-glview
You should see the camera image like here.
Now that the Kinect sensor is working on the Raspberry Pi, it's time to integrate it with ROS.
3- Mapping and Navigation
- Adding ROS Package for Kinect sensor so ROS can get depth data
sudo apt-get install -y ros-kinetic-freenect-camera ros-kinetic-freenect-launch
sudo apt-get install -y ros-kinetic-freenect-stack ros-kinetic-libfreenect
- Adding ROS Packages for mapping and navigation
sudo apt-get install ros-kinetic-rtabmap-ros
now we are ready to test our sensor
We cannot run RViz or any GPU-intensive task on the Raspberry Pi; it would become very slow, and later the Donkey car will also be doing image processing on the camera feed and running a neural network, which the Raspberry Pi 3 can't handle all at once. We also need the laptop screen to show the map and our current location. So we will set up the laptop as the master, running all the GPU-intensive ROS processing, while the Raspberry Pi is a slave that reads the Kinect sensor's data and sends it to the laptop for processing.
On the master laptop, use its IP address as the value of the ROS_IP variable, and likewise for ROS_MASTER_URI:
export ROS_IP="192.168.1.4"
export ROS_MASTER_URI="http://192.168.1.4:11311"
If you want to keep these values for future sessions, you can append the exports to .bashrc in your home dir:
echo 'export ROS_IP="192.168.1.4"' >> ~/.bashrc
echo 'export ROS_MASTER_URI="http://192.168.1.4:11311"' >> ~/.bashrc
For the RPi as a slave, set the master's IP in ROS_MASTER_URI and the Raspberry Pi's own IP address as ROS_IP:
export ROS_IP="192.168.1.9"
export ROS_MASTER_URI="http://192.168.1.4:11311"
echo 'export ROS_IP="192.168.1.9"' >> ~/.bashrc
echo 'export ROS_MASTER_URI="http://192.168.1.4:11311"' >> ~/.bashrc
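Before launching any nodes, it's worth checking from the Pi that the master's roscore port is actually reachable over the network. Here is a hedged sketch of such a check; the URI mirrors the example IPs used above, so adjust it to your own network:

```python
import socket
from urllib.parse import urlparse

def master_reachable(master_uri, timeout=2.0):
    """Return True if a plain TCP connection to the ROS master's XML-RPC
    port (default 11311) succeeds -- a quick network sanity check."""
    parsed = urlparse(master_uri)
    host, port = parsed.hostname, parsed.port or 11311
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False

print(master_reachable("http://192.168.1.4:11311"))
```

If this prints False on the Pi, fix the Wi-Fi/IP setup before debugging ROS itself; a wrong ROS_MASTER_URI shows up as nodes silently failing to register.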
Run the following on the master laptop now:
roscore
run the following nodes on the raspberry pi :
roslaunch freenect_launch freenect.launch depth_registration:=true
If you get the following error, you may need to add the device id to the command; you will find the device id printed just before the error line:
roslaunch freenect_launch freenect.launch depth_registration:=true device_id:=A00366A10048040A
[ INFO] [1540635582.110116333]: No matching device found.... waiting for devices.
Reason: [ERROR] Unable to open specified kinect
If you reach a line saying: Opened "Xbox NUI Camera", this means the Kinect is detected and working.
Notice: I am switching between both the master and the slave from two tabs on the master laptop.
Now, on the master laptop, write:
roslaunch rtabmap_ros rtabmap.launch rtabmap_args:="--delete_db_on_start"
You should see something like this, and mapping starts. We can then use the map for navigation and for localizing the Donkey car; here is the result of mapping the room:
Notice: I am switching the 3D view to a more 2D one.
This is a GIF of the mapping process (it may take some time to load).
the map will be saved here
and you can run it anytime using this tool:
From all of this, we need to capture the room's top view as a sequence of PNG files to train our model, with the map displayed and the car's position updating inside it.
For this we use a Python script with pyautogui that takes a screenshot of the map region every time the car moves.
Just an idea :
An extra step to make this project even more practical would be to map different floors and add a pressure sensor to the Raspberry Pi to detect the altitude; based on it, the car could use a different map while navigating a multi-floor company. (I am still trying to figure out how to make the elevator detect the car and open automatically, and get the car's request for which floor it wants to reach.)
it's just interesting.
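The pressure-to-floor idea can be sketched numerically: the international barometric formula converts pressure to altitude, and dividing the altitude difference by the floor height picks which map to load. The sea-level pressure and the 3 m floor height here are illustrative assumptions, not measured values:

```python
def altitude_m(pressure_hpa, sea_level_hpa=1013.25):
    """International barometric formula: pressure (hPa) -> altitude (m)."""
    return 44330.0 * (1.0 - (pressure_hpa / sea_level_hpa) ** (1.0 / 5.255))

def floor_from_pressure(pressure_hpa, ground_hpa, floor_height_m=3.0):
    """Pick the floor (and hence which saved map to load) from the difference
    between the current reading and a reading taken on the ground floor."""
    relative = altitude_m(pressure_hpa) - altitude_m(ground_hpa)
    return round(relative / floor_height_m)

# pressure drops by roughly 0.12 hPa per metre near sea level,
# so one 3 m floor is only about a 0.36 hPa difference
print(floor_from_pressure(1013.25, 1013.25))  # ground floor -> 0
```

Because one floor is such a small pressure difference, readings would need smoothing (e.g. a moving average) before being trusted to switch maps.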
import pyautogui
im = pyautogui.screenshot(region=(600, 150, 600, 600))
im.save("/home/wwe/Desktop/screenshot.png")
But how do we know that the car is moving? We need to modify the Donkey car files, specifically web.py.
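One hedged way to hook this in: web.py handles the throttle/steering values on every update, so a small gate function can decide when it's time to save the next map screenshot. The function name, threshold, and the integration shown in the comment are all hypothetical; the actual screenshot call is the pyautogui one from the snippet above:

```python
import time

def should_capture(throttle, last_capture_time, min_interval=1.0, threshold=0.05):
    """Return True when the car is actually moving (|throttle| above a small
    dead-zone) and at least min_interval seconds have passed since the last shot."""
    moving = abs(throttle) > threshold
    due = (time.time() - last_capture_time) >= min_interval
    return moving and due

# Hypothetical integration inside web.py's update loop:
#   if should_capture(throttle, last_shot):
#       pyautogui.screenshot(region=(600, 150, 600, 600)).save(
#           "/home/wwe/Desktop/map_%d.png" % frame_counter)
#       last_shot = time.time()
```

The dead-zone keeps joystick jitter from triggering screenshots, and the interval keeps the PNG sequence from exploding in size while driving.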
Let's start by finding the directory where the Donkey car package is installed for Python 3:
python -m site
i found it installed in :
After replacing it with the file supplied in the code section, generate the drive script, config, and folder structure for your car:
Step 2: Back to Our Donkey Car and the Neural Network
donkey createcar ~/mycar
Now we are building the neural network with the camera feed, mapping, and localization data.
- Install donkeycar on Linux
Install dependencies, setup virtualenv
sudo apt-get install virtualenv build-essential python3-dev gfortran libhdf5-dev
virtualenv env -p python3
source env/bin/activate
pip install tensorflow==1.8.0
- Install donkey source and create your local working dir:
git clone https://github.com/wroscoe/donkey donkeycar
cd donkeycar
pip install -e .
Calibrate your car
nano ~/mycar/config.py
donkey calibrate --channel <your_steering_channel>
This step takes some time to get right, but the result is rewarding.
Start the Engine
cd ~/mycar
python3 manage.py drive
If you get an error like this, check that you typed the IP, user name, and password correctly, and that an SSH server is installed and active on your machine:
File "/usr/local/lib/python3.5/dist-packages/paramiko/client.py", line 362, in connect
    raise NoValidConnectionsError(errors)
To install SSH on your machine:
sudo apt update
sudo apt install openssh-server
Then check the SSH status; you should see "Active", like this:
Also, SSH from the Raspberry Pi to your local machine once to save the authentication fingerprint; you should see a result like this:
Access your car from the web browser. I am using Chrome; I also tested a generic USB joystick, and it drove the car very smoothly, better than keyboard/touch.
Now try driving, and watch the map screenshots being added to your desktop.
Now we have the mapping image and the camera feed. Finally, we need to train a model on this data; we will feed the map image to the neural network so it can process it along with the camera images. Drive the car 10-20 times and train the model as on the official documentation page.
There were two options for feeding the new images to the neural network:
1- The lazy option I went with: combine both images into one using a Photoshop batch action, and keep the same training and driving scripts.
2- The standard one: modify the scripts. But I am too lazy :D
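The Photoshop batch step in option 1 can also be scripted. Here is a minimal sketch, assuming both images are already loaded as NumPy arrays (H x W x 3); the shorter image is padded with black rows so the heights match before stacking:

```python
import numpy as np

def combine_side_by_side(camera, map_img):
    """Stack the camera frame and the map screenshot horizontally, padding
    the shorter image with black rows so both have the same height."""
    h = max(camera.shape[0], map_img.shape[0])

    def pad(img):
        extra = h - img.shape[0]
        return np.pad(img, ((0, extra), (0, 0), (0, 0)), mode="constant")

    return np.hstack([pad(camera), pad(map_img)])

cam = np.zeros((120, 160, 3), dtype=np.uint8)  # fake camera frame
mp = np.ones((100, 100, 3), dtype=np.uint8)    # fake map screenshot
print(combine_side_by_side(cam, mp).shape)     # (120, 260, 3)
```

Running this over the saved tub images and map screenshots would produce the same combined frames as the batch action, without leaving Python.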
Training the model
- In the same terminal you can now run the training script on the latest tub by passing the path to that tub as an argument. You can optionally pass path masks, such as ./data/tub_?_17-08-28, to gather multiple tubs. For example:
python ~/mycar/manage.py train --tub <tub folder names comma separated> --model ./models/mypilot
Optionally you can pass no arguments for the tub, and then all tubs will be used in the default data dir.
python ~/mycar/manage.py train --model ~/mycar/models/mypilot
- Now you can use rsync again to move your pilot back to your car.
rsync -r ~/mycar/models/ pi@<your_ip_address>:~/mycar/models/
- Now you can start your car again and pass it your model to drive.
python manage.py drive --model ~/mycar/models/mypilot
Step 3: Employee Face Recognition
OpenCV is used to recognize employees' faces; I followed this tutorial:
https://www.pyimagesearch.com/2018/09/24/opencv-face-recognition/
Step 4: Self-Hosted Chat System
sudo snap install rocketchat-server
This will take a couple of minutes. Wait about 2 minutes after everything has completed.
- Then, access http://<server ip>:3000 to reach your Rocket.Chat server! Create the first user, which will become the server's administrator. Have fun!
This is the chat interface, available on multiple mobile and desktop platforms.
"The extra features are not yet implemented/tested, so as to finish the project before the deadline."
Step 5: Follow Me (Not Implemented Yet)
( Future additional Features updates)
Make the Donkey car follow you.
Step 6: All by Voice Control (Not Implemented Yet)
( Future additional Features updates)
Adding Alexa support to navigate by voice command.