One of Walabot's great features is its ability to detect pipes embedded in walls. The current version of the Walabot can detect pipes, but it does not differentiate water pipes from electrical wires. Differentiating these two types of conduit matters in construction work: finding an electrical wire inside a wall makes it possible to add an electrical branch not foreseen in the original design of the house, and the same applies to a bypass for a new water line. The ability to differentiate conduits is therefore an added value for Walabot in construction work. On the other hand, searching for such plumbing manually can be time-consuming and laborious. This project features a machine-learning-based system that allows the Walabot to move along a wall autonomously, mapping the scanned area in search of water pipes and electrical conduits. The resulting map is projected onto the wall, aligned with it, using a color laser projector: water pipes are shown in green and electrical conduits in red.
The system learns from examples using machine learning. The more diverse the set of examples, the more reliable the system becomes across the variety of plumbing found in walls. Trained models can also be distributed per plumbing type, so the user can choose the identification model that best suits the wall structures in question.
General Components
The system consists of five interacting physical subsystems:
- Raspberry Pi 3 - signal processing and data control;
- Walabot - radio-frequency transceiver based on ToF;
- Raspberry camera - optical sensor (calibration of the geometric coordinates of the robot on the wall);
- Robot - actuator (wall scanning);
- Laser - result presenter (detection map projection).
The program was developed in C++ and consists of 5 classes. The system employs the OpenCV computer vision library, specifically its homography, machine learning, and image processing modules.
The following software classes were developed:
- WalabotSys
- CameraSys
- RobotSys
- LaserSys
- AFeatures
The following is a description of the operation of these classes:
WalabotSys and AFeatures classes - machine learning with SVM on Walabot signals
The Raspberry Pi performs most of the signal processing and controls the entire flow of information between subsystems.
The machine learning part runs on the Raspberry Pi, uses a Support Vector Machine (SVM), and works as follows:
The signals from the Walabot antennas are used in two distinct phases. In the first phase, the system acquires examples of varied plumbing, and classification is supervised by a human.
At this stage, the operator places the Walabot over known plumbing and indicates, by means of a touch key, which class the example of Walabot antenna signals belongs to.
The second phase corresponds to using the trained product: identification is made by the system according to the trained model, based on the Walabot antenna signals.
To enhance differentiation, only the 50 pairs of antennas that are closest to one another are selected. This ensures that the reflected signals that best differentiate the class being identified are used to classify the plumbing. The rationale comes from the geometry of reflection: reflections captured by closely spaced pairs describe near objects better, while reflections captured by more distant pairs describe distant objects better.
To reduce the size of the feature space, mitigate the problem of dimensionality in high-dimensional spaces, and increase the generalization capacity of the classifier, statistical descriptors were derived from each antenna-pair signal, namely: energy mean, energy deviation, energy skewness, and energy kurtosis.
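As an illustration, these four descriptors are standard statistical moments of the per-sample energies. The following is a minimal sketch of such a computation; the function name and the use of squared samples as "energy" are assumptions, since the source does not show the AFeatures internals:

#include <cmath>
#include <vector>

// Minimal sketch: reduce one antenna-pair signal to the four descriptors
// (energy mean, deviation, skewness, kurtosis). Function name and the use
// of squared samples as "energy" are assumptions.
std::vector<double> energyDescriptors(const std::vector<double>& signal)
{
    const double n = static_cast<double>(signal.size());
    std::vector<double> energy(signal.size());
    for (size_t i = 0; i < signal.size(); ++i)
        energy[i] = signal[i] * signal[i];          // instantaneous energy

    double mean = 0.0;
    for (double e : energy) mean += e;
    mean /= n;

    double m2 = 0.0, m3 = 0.0, m4 = 0.0;            // central moments
    for (double e : energy) {
        const double d = e - mean;
        m2 += d * d; m3 += d * d * d; m4 += d * d * d * d;
    }
    m2 /= n; m3 /= n; m4 /= n;

    const double sd = std::sqrt(m2) + 1e-12;        // avoid division by zero
    return { mean, sd,
             m3 / (sd * sd * sd),                   // skewness
             m4 / (sd * sd * sd * sd) };            // kurtosis
}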
The SVM maps the input features (the signal descriptors) into higher-dimensional spaces. Mapping into a higher-dimensional classification space simplifies class separation and can increase generalization capacity by widening the margin between classes. This allows non-stochastic training with low parameterization requirements, advantages that make SVMs a strong choice relative to neural networks for certain applications. In this system, SVMs were used successfully: they did not require re-parameterization between trainings and presented a reasonable average error on the test data set.
Three classes are trained (this can easily be adapted for more classes); a training sketch follows the list:
- Class 1: background;
- Class 2: water pipe;
- Class 3: electrical cable.
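The following minimal sketch shows such a training step with OpenCV's ml module; the RBF kernel choice and the helper name are assumptions, as the source does not state the exact parameterization:

#include <opencv2/core.hpp>
#include <opencv2/ml.hpp>

// Minimal sketch: train a 3-class SVM on rows of descriptors.
// samples: one CV_32F row per example (4 descriptors per selected pair)
// labels:  1 = background, 2 = water pipe, 3 = electrical cable (CV_32S)
cv::Ptr<cv::ml::SVM> trainPlumbingSvm(const cv::Mat& samples, const cv::Mat& labels)
{
    cv::Ptr<cv::ml::SVM> svm = cv::ml::SVM::create();
    svm->setType(cv::ml::SVM::C_SVC);       // multi-class classification
    svm->setKernel(cv::ml::SVM::RBF);       // kernel choice is an assumption
    // trainAuto cross-validates C and gamma, so little manual tuning is needed
    svm->trainAuto(cv::ml::TrainData::create(samples, cv::ml::ROW_SAMPLE, labels));
    return svm;
}

// Classifying one new feature row later:
//   float cls = svm->predict(featureRow);  // returns 1, 2 or 3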
Note: the Walabot has been configured to work with short-distance signals; the antenna signals are numerically described by the AFeatures class only in the range from 2 cm to 15 cm, which increases the capacity to describe and discriminate objects located at these depths.
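For reference, a minimal configuration sketch using the Walabot C API is shown below. Only the 2 cm to 15 cm range comes from the text; the resolution value and the configuration file path are assumptions:

#include <WalabotAPI.h>

// Minimal sketch: configure the Walabot for short-range scanning and read
// the raw antenna-pair signals. Resolution and config path are assumptions.
void setupWalabot()
{
    Walabot_Initialize("/etc/walabotsdk.conf");      // config path may differ
    Walabot_ConnectAny();
    Walabot_SetProfile(PROF_SHORT_RANGE_IMAGING);    // short-distance profile
    Walabot_SetArenaR(2.0, 15.0, 0.5);               // depth 2 cm..15 cm (res assumed)
    Walabot_Start();
}

void readAntennaSignals()
{
    Walabot_Trigger();                               // acquire one frame
    AntennaPair* pairs; int numPairs;
    Walabot_GetAntennaPairs(&pairs, &numPairs);
    for (int i = 0; i < numPairs; ++i) {
        double *signal, *timeAxis; int numSamples;
        Walabot_GetSignal(pairs[i].txAntenna, pairs[i].rxAntenna,
                          &signal, &timeAxis, &numSamples);
        // signal[0..numSamples-1] is what AFeatures reduces to descriptors
    }
}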
CameraSys class
The camera is in the same enclosure as the Raspberry Pi and is placed in front of the wall, so that its field of view covers the region of the wall to be scanned.
The Raspberry camera is used, through homography, to automatically calibrate the vertical robot, whose geometry is modeled as two triangles.
Calibrating the robot requires three spatial values: the distance between the left motor and the right motor (the base shared by the two triangles), the distance between the gondola that carries the Walabot and the left motor (left hypotenuse), and the distance between the gondola and the right motor (right hypotenuse). The kinematic equations are written in C++ as follows:
// x2go and y2go are the new target coordinates on the wall
// Base  - distance between the two motors (the triangle base)
// STEPS - motor steps per revolution; RADIUS - spool radius
b2 = Base - x2go;                        // horizontal distance to the right motor
newPL = sqrt(x2go*x2go + y2go*y2go);     // new left arm length (left hypotenuse)
newPR = sqrt(b2*b2 + y2go*y2go);         // new right arm length (right hypotenuse)
deltaPL = newPL - oldPL;                 // difference from the old lengths
deltaPR = newPR - oldPR;
if (deltaPL < 0)                         // encode the left motor direction
{
    deltaPL *= -1;
    Ldir = 0;
}
else
    Ldir = 1;
if (deltaPR < 0)                         // encode the right motor direction
{
    deltaPR *= -1;
    Rdir = 1;
}
else
    Rdir = 0;
Lsteps = deltaPL * STEPS / (2 * PI * RADIUS); // arm length change -> motor steps
Rsteps = deltaPR * STEPS / (2 * PI * RADIUS);
oldPL = newPL;                           // remember lengths for the next segment
oldPR = newPR;
sendCommand(Ldir, Lsteps, Rdir, Rsteps); // send the segment to the robot
The camera system works as follows: the camera sees a chessboard pattern, placed on the wall, which establishes the geometric reference of the wall plane. The relation between the chessboard corners as seen by the camera and their known layout allows a geometric transformation (homography) to be established that maps points observed by the camera onto the chessboard plane. In the next step, the system uses the Hough transform to detect marks, circles positioned in alignment with the left and right motors and with the Walabot transporter gondola (see figures):
The circular marks detected by the camera are mapped by the homography onto the same plane as the chessboard pattern, which coincides with the plane of the wall. The distances between these marks on the wall plane therefore correspond to the actual distances between the marks, and the system automatically obtains the measurements the robot needs to carry the gondola along the wall.
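A condensed sketch of this calibration step with OpenCV follows; the chessboard pattern size, square size, and Hough circle parameters are assumptions:

#include <opencv2/imgproc.hpp>
#include <opencv2/calib3d.hpp>
#include <vector>

// Minimal sketch: map camera pixels to wall coordinates via the chessboard,
// then locate the circular marks. Pattern size, square size and Hough
// parameters are assumptions.
cv::Mat wallHomography(const cv::Mat& gray)
{
    const cv::Size pattern(9, 6);                    // inner corners (assumed)
    std::vector<cv::Point2f> corners;
    cv::findChessboardCorners(gray, pattern, corners);

    std::vector<cv::Point2f> wallPts;                // known grid, 25 mm squares (assumed)
    for (int y = 0; y < pattern.height; ++y)
        for (int x = 0; x < pattern.width; ++x)
            wallPts.emplace_back(x * 25.0f, y * 25.0f);

    return cv::findHomography(corners, wallPts);     // camera plane -> wall plane
}

std::vector<cv::Point2f> findMarksOnWall(const cv::Mat& gray, const cv::Mat& H)
{
    std::vector<cv::Vec3f> circles;
    cv::HoughCircles(gray, circles, cv::HOUGH_GRADIENT, // circular mark detection
                     1, 50, 100, 30, 5, 40);            // parameters are assumptions
    std::vector<cv::Point2f> centers, onWall;
    for (const cv::Vec3f& c : circles)
        centers.emplace_back(c[0], c[1]);
    if (!centers.empty())
        cv::perspectiveTransform(centers, onWall, H);   // marks in wall coordinates
    return onWall;                                      // distances now in real units
}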
RobotSys class
The robot carries the Walabot along the wall in a patterned motion, so that each point of the wall can be labeled with the class of pipe that exists behind it.
It works as follows:
The position of the gondola is synchronized with the Walabot trigger: at each new position, the Walabot passes its signals (through the statistical descriptors) to the SVM, they are classified, and the result is placed in a matrix mapped onto the real scanning space. This makes the class identified at each scanning location known (see the loop sketched after the list below). The dimensions of the scanning space are defined by:
Width: base of the robot triangles.
Height: distance from that base to the gondola.
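The scan-and-classify loop can be summarized as in the sketch below; the helper names and the step size stand in for the RobotSys and WalabotSys/AFeatures calls and are assumptions:

#include <opencv2/core.hpp>
#include <opencv2/ml.hpp>

void moveGondolaTo(float x, float y);   // RobotSys positioning (assumed signature)
cv::Mat acquireDescriptors();           // WalabotSys trigger + AFeatures row (assumed)

// Minimal sketch of the scan loop: one class label per grid cell.
void scanWall(cv::Ptr<cv::ml::SVM> svm, float width, float height, float step)
{
    const int cols = static_cast<int>(width / step);
    const int rows = static_cast<int>(height / step);
    cv::Mat classMap(rows, cols, CV_8U, cv::Scalar(0));

    for (int r = 0; r < rows; ++r)
        for (int c = 0; c < cols; ++c) {
            moveGondolaTo(c * step, r * step);          // position the Walabot
            cv::Mat feat = acquireDescriptors();        // one CV_32F feature row
            classMap.at<uchar>(r, c) =
                static_cast<uchar>(svm->predict(feat)); // 1, 2 or 3
        }
    // classMap is later converted into line segments for the laser
}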
There are offset values to limit the robot's movements to a more restricted area, which can be useful on walls with protrusions that would block the robot.
The motors' movement is controlled by two ST drivers on a shield (X-NUCLEO-IHM02A1), built around two L6470 chips. The shield is controlled by an ST mbed board (NUCLEO-F401RE).
The mbed is connected to the Raspberry Pi via the USB port. The Raspberry Pi sends the motion pattern, segment by segment, according to the kinematic equations, indicating the number of steps and the direction each motor must perform to complete each segment of the motion pattern.
The control protocol for the motors is:
M <dirLeftMotor> # <stepsLeftMotor> # <dirRightMotor> # <stepsRightMotor> E
When a displacement is completed, the robot sends stop information back to the Raspberry Pi.
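A minimal sketch of the Raspberry Pi side of this exchange follows; the serial device path and the blocking read for the stop message are assumptions, and termios port setup is omitted for brevity:

#include <cstdio>
#include <fcntl.h>
#include <unistd.h>

// Minimal sketch: send one motion segment to the mbed over USB-serial and
// wait for the stop reply. Device path and reply handling are assumptions.
void sendCommand(int dirL, long stepsL, int dirR, long stepsR)
{
    int fd = open("/dev/ttyACM0", O_RDWR | O_NOCTTY); // path is an assumption
    if (fd < 0) return;

    char cmd[64];
    int n = snprintf(cmd, sizeof(cmd), "M %d # %ld # %d # %ld E",
                     dirL, stepsL, dirR, stepsR);     // protocol frame from the text
    write(fd, cmd, n);

    char reply[16] = {0};
    read(fd, reply, sizeof(reply) - 1);               // blocks until the robot stops
    close(fd);
}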
The same mbed also carries a touch keyboard shield, based on the Freescale MPR121 chip. With these keys, you can control the system's operations right next to the wall:
- Key 1 acquires a new class 1 training example;
- Key 2 acquires a new class 2 training example;
- Key 3 acquires a new class 3 training example;
- Key 4 activates the laser calibration rectangle;
- Key 5 is for stopping the system;
- Key 6 deactivates the laser calibration rectangle;
- Key 7 trains the SVM with the examples inserted with the [1,2,3] keys;
- Key 8 calibrates the geometric references viewed by the camera;
- Key 9 starts the scanning and sorting process.
Note: the robot's ABS parts were inspired by or reused from these designs (thank you, guys!):
http://www.thingiverse.com/thing:1303724
and
http://www.thingiverse.com/thing:569308
LaserSys class
The laser is the element that lets you see the detected cables and pipes directly on the wall. The same could be done with a video projector connected to the HDMI port of the Raspberry Pi.
It works as follows:
The SVM detections, performed as the Walabot scans the wall, are marked on matrices scaled to the actual wall geometry via the camera homography. The information placed in these matrices drives a laser projector through the ILDA protocol and an ILDA interface, connected to the Raspberry Pi via USB through a gateway (HELIOS Laser DAC).
The program invokes ILDA actions through the HELIOS API and the libusb library.
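With the open-source HELIOS DAC SDK, sending one frame of laser points looks roughly like the sketch below; the points-per-second rate and the example point are assumptions:

#include "HeliosDac.h"   // open-source HELIOS Laser DAC SDK (built on libusb)

// Minimal sketch: push one ILDA-style frame of points to the laser through
// the HELIOS DAC. Coordinates are 12-bit (0..4095); pps value is assumed.
void projectFrame(HeliosDac& dac, HeliosPoint* points, int numPoints)
{
    while (dac.GetStatus(0) != 1) { }                // wait until the DAC is ready
    dac.WriteFrame(0, 30000,                         // device 0, 30 kpps (assumed)
                   HELIOS_FLAGS_DEFAULT, points, numPoints);
}

// Usage sketch:
//   HeliosDac dac;
//   dac.OpenDevices();                              // enumerate DACs via libusb
//   HeliosPoint p{ 2048, 2048, 0, 255, 0, 255 };    // x, y, r, g, b, intensity
//   projectFrame(dac, &p, 1);
//   dac.CloseDevices();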
The point-by-point detections made by the SVM are transformed into straight line segments by the probabilistic Hough transform. The endpoints of these segments are used to draw the lines on the wall with the laser: electrical cables are colored red and water pipes green.
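A sketch of this point-to-segment step using OpenCV's probabilistic Hough transform follows; the thresholds are assumptions and, as noted in the results below, need careful tuning:

#include <opencv2/imgproc.hpp>
#include <vector>

// Minimal sketch: turn the per-class detection matrix into line segments.
// classMap holds 1 (background), 2 (water) or 3 (electrical); the Hough
// parameters are assumptions that must be tuned per setup.
std::vector<cv::Vec4i> classSegments(const cv::Mat& classMap, int classId)
{
    cv::Mat mask = (classMap == classId);    // 255 where this class was detected
    std::vector<cv::Vec4i> segments;         // each segment: x1, y1, x2, y2
    cv::HoughLinesP(mask, segments,
                    1, CV_PI / 180,          // 1 px and 1 degree resolution
                    10,                      // accumulator threshold (assumed)
                    5, 3);                   // min length, max gap (assumed)
    return segments;                         // endpoints drawn on the wall by the laser
}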
Laser Calibration
The laser projector used was the CS-1000RGB Mk2, which has a knob for easily modifying the projection scale. By placing the laser parallel to the wall and aligning the laser's calibration rectangle with the circular marks, correct alignment of the laser with the Walabot scanning area is achieved.
The laser rectangle is scaled according to the scanning area detected by the camera while viewing the circular markers. This alignment makes the laser-drawn detections match the positions of the plumbing inside the wall.
The tests were carried out on a wooden wall with the pipes placed 10 cm from the wall surface (relative to the Walabot).
Confusion table:
The system shows good plumbing detection and differentiation capabilities. The point-wise error rate is around 30%; however, after the set of points was transformed into straight segments (with a correctly parameterized Hough transform), the detection accuracy for water pipes and electrical cables was 100%.
The biggest disadvantage of the system is the scanning time. The Raspberry Pi aggravates this factor, since acquiring the antenna signals is noticeably slow, which forces the transport system to move in short steps at low speed and to wait 3 seconds at each position.
With the same approach, it was possible to identify other objects (not inside a wall) at longer distances, but the training requires more examples and the mean energy descriptor has to be eliminated, leaving only the energy deviation, energy skewness, and energy kurtosis descriptors.
Future work
It would be interesting to test the system on faster platforms and to try more signal descriptors, in order to achieve a higher point-wise hit rate. Machine learning techniques such as deep learning, given good computational support, could bring more efficient performance, allowing classification of a wider range of objects in shorter times.
The laser ILDA frame also needs to be optimized to produce better light marks.
Instructions
Install the Walabot API for Raspberry Pi.
To get the maximum current from the USB ports, put the following in /boot/config.txt:
max_usb_current=1
safe_mode_gpio=4
Compile the whole program with the following command:
g++ -std=gnu++14 -Wno-write-strings -Wno-sign-compare -O3 -Wall $(pkg-config --libs --cflags opencv) -o W2B *.cpp -lWalabotAPI -lwiringPi -lusb -lpthread
On the mbed side (the robot server), you have to import the X_NUCLEO_IHM02A1 library.
Note 1: be careful when manipulating the laser; it is very dangerous to the eyes and skin.
Note 2: Run the program using sudo.
How to use the system to train the SVM to recognize pipes/wires inside the wall:
- Run the program;
- Put the Walabot on a wall where there are no pipes/wires;
- Press key 1 on the touch keypad;
- Take several examples along that wall;
- Put the Walabot on a wall where there are water pipes;
- Press key 2 on the touch keypad;
- Take several examples along that wall;
- Put the Walabot on a wall where there are electrical wires;
- Press key 3 on the touch keypad;
- Take several examples along that wall;
- Press key 7 to train the SVM with these examples.
How to use the system to identify the tubes inside the wall (after training):
- Put the Walabot in the robot gondola;
- Move the polar robot close to the wall;
- The circular markers must almost touch the wall;
- Mount the Raspberry camera in front of the wall;
- Mount the laser in front of the wall;
- Run the program;
- Align the laser with the calibration rectangle by pressing key 4;
- Press key 6 to turn off the calibration rectangle;
- Press key 8 to perform the camera calibration;
- Press key 9 to start the scanning;
- Wait; at the end, the laser will mark the wires and the water pipes on the wall.