Traffic light control hasn't changed much in recent decades. Since IT is changing everything, it's time to transform traditional traffic control systems into smart ones that adapt and react to their environment.
I was experimenting with Walabot, trying to make something with cats, because... Well, because cats are fun. But then this happened:
He wasn't the biggest fan of the cat + Walabot topic. Anyway, I had to go shopping to buy some food. On the way to the grocery store I had to wait a long time for the green light at a couple of pedestrian crossings. There were hardly any cars on the roads, but I kept getting red lights. Traffic light efficiency can be terrible sometimes. That was the moment I realized something needed to be done about this issue, and you're reading the end result.
To handle this situation we need two things:
- Knowing the number of participants in the traffic situation and their positions.
- Algorithms that control the traffic lights better, based on that data.
This doesn't seem to be too difficult. Let's do some research to figure out the details!
Traditional traffic lights with fixed timings cannot react to their constantly changing environment, so they never update their timings. The main reason is obvious: they have no sensors, so they know nothing about their surroundings. Here the question "What would be the best sensor?" is not trivial. Cameras work in most cases, but we have to take these factors into consideration:
- We might not want to put a bunch of cameras everywhere. You know, privacy and stuff.
- Images of people (e.g. ads in the background) might confuse the image recognition system.
- Cameras only produce 2D images.
- They depend on external light and need their own normal/IR light source at night. The resulting images will differ a lot, so quite complex image processing algorithms will be required. That raises costs and development time, and it might not even work properly in the end.
- A camera-based traffic control system can be quite easy to break. You just need to damage the lens, or use some paint, and it can't see anymore. Repairs can take days.
That said, cameras are definitely a good choice for recognizing cars, but for pedestrians I've got a better idea: Walabot, a 3D sensor that uses microwaves for imaging. It can do amazing things like seeing through walls, detecting your breathing from a distance, seeing in the dark and so on. I don't want to write pages about its other capabilities, because others have already done a great job of that, several times. I'll focus on the parts that are relevant to my project.
The Walabot emits microwaves and builds its image from the returning signals. It produces weaker microwaves than your smartphone, so don't worry, it's not a portable microwave oven. It can see about 10 meters in air. This revolutionary piece of tech avoids the camera issues listed above:
- The images are very different from camera images. Individual people cannot be distinguished from one another, so no privacy issues here.
- Walabot can't see printed images, so ads cannot confuse it.
- 3D imaging!
- Changes in light don't affect the imaging. It produces very similar images 24/7.
- The imaging side is just a sheet of plastic, with no lenses or anything like that. Painting this plastic with regular (microwave-transparent) paint shouldn't cause any serious change in its operation.
This is how Walabot sees things:
Walabot seems to be the perfect sensor for detecting pedestrians. Its biggest advantage is that the detection range can be changed very easily. If you only want to detect objects within a 2 meter range, you just change one parameter. Try that with a 2D camera!
Unfortunately, the 10 m maximum range means we cannot use it to detect cars; we would need a longer range. For that purpose cameras might be better anyway. Cars follow the rules of traffic, so their behavior is much easier to predict and handle. In this project, however, I'll treat car detection as a black box and emulate it. Detecting cars with cameras would require neural networks or other difficult stuff, so I'll leave that for now. Maybe we won't need it at all: in a couple of years all cars will be connected to the internet - especially self-driving cars - so counting them with cameras won't be necessary. Self-driving cars, smart cities, controlled traffic flow, IoT... lots of things are going to change in the coming decades.
Making decisions based on the acquired data is a difficult task, and not necessarily an IT job (more of a traffic engineering one), but I'll create a very basic smart traffic controller. Let's say both the pedestrians and the cars have a default duration for the green light. Normally this is how time passes:
remainingTime = prevRemainingTime - timePassed
My approach is:
remainingTimePed = prevRemainingTime - (numberOfCars / numberOfPeds) * timePassed
remainingTimeCar = prevRemainingTime - (numberOfPeds / numberOfCars) * timePassed
With my approach, the more people waiting on one side, the more green-light time that side gets, while the other side gets less. I'll go into the details in the next part.
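The idea above can be sketched as a small Python function. Note that the zero-participant guard is my own assumption; the original formulas don't say what happens when one side is empty:

```python
def tick(prev_remaining, waiting_other, waiting_own, time_passed):
    """Advance one side's green-light countdown by one step.

    The countdown runs faster when more participants are waiting on the
    other side (waiting_other) than on this side (waiting_own), matching
    remainingTime = prev - (other / own) * timePassed.

    Falling back to normal speed when this side is empty is an
    assumption, added to avoid division by zero.
    """
    if waiting_own == 0:
        factor = 1.0  # assumption: nobody here, clock runs normally
    else:
        factor = waiting_other / waiting_own
    return prev_remaining - factor * time_passed

# Example: pedestrians' green with 10 cars vs 2 pedestrians waiting
# drains 5x faster than real time.
ped_remaining = tick(15, 10, 2, 1)   # 15 - (10/2)*1 = 10.0
car_remaining = tick(60, 2, 10, 1)   # 60 - (2/10)*1 = 59.8
```

The ratio alone is enough to bias the lights; there is no need for absolute thresholds, which is what makes the rule so simple.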
<spoiler>The results are much better with my controller.</spoiler>
It will probably work, but will it be better than traditional traffic control? It should be, but we need proof. I created an Excel worksheet that can properly compare my system with a traditional one. I used VBA to create this document, so you might need to enable macros in Excel. It doesn't cover all possible cases, but it is good for modeling situations. You have to give it 4 parameters at the beginning:
- By default, how long the green light lasts for cars (in seconds)
- By default, how long the green light lasts for pedestrians
- Car tailgating time: the time between two consecutive cars
- Pedestrian tailgating time: the time between two consecutive pedestrians
You need to set how many cases you'd like to simulate (the number of rows), then fill in the number-of-pedestrians and number-of-cars columns in each row. These are initial conditions; nobody else arrives at the crossroads after the start in this simulation.
- Car's default time for green light: 60 s
- Car's tailgating time: 2 s
- Pedestrian's default time for green light: 15 s
- Pedestrian's tailgating time: 1 s
The waiting times in this table indicate the worst-case scenarios: they show when the last car or pedestrian left the crossroads in that simulation. Magenta means the controller got worse results, cyan means it got better results than the other, and gray means they both got the same results. In most cases the controller on the right - my controller - shows clear improvements: there's a lot more cyan on the right than magenta. I used some simplifications, for example there's no waiting in a red-red situation, but that wouldn't change the final results much.
Cars' waiting times are about the same with both the left (traditional) and right (smart) controllers. Being a pedestrian sucks if you're using the left one: their waiting times on the left are about twice as long as on the right.
As expected, the right controller's advantage comes from one thing: no idle time, when no one can cross the road while people are waiting to cross. This happens when there are 0 people on one side; green-light time for 0 people is a complete waste. It means mine always produces the same or better results than the traditional one.
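The idle-time effect can be made concrete with a toy Python model. This is not the VBA worksheet itself, just a simplified single-cycle sketch using the default parameters above: `traditional` always plays out the full car green before pedestrians go, while `smart` skips a phase that nobody is waiting for.

```python
CAR_GREEN = 60  # default green time for cars (s)
CAR_GAP = 2     # car tailgating time (s)
PED_GAP = 1     # pedestrian tailgating time (s)

def traditional(cars, peds):
    """Worst-case clearance time with a fixed plan: cars get their full
    green first, pedestrians wait it out. Returns the time at which the
    last participant leaves the crossroads."""
    last_car = cars * CAR_GAP
    last_ped = (CAR_GREEN + peds * PED_GAP) if peds else 0
    return max(last_car, last_ped)

def smart(cars, peds):
    """Same plan, except an empty side's green phase is skipped, which
    is exactly the idle time the text identifies as the waste."""
    if cars == 0:
        return peds * PED_GAP
    if peds == 0:
        return cars * CAR_GAP
    return traditional(cars, peds)

# 5 pedestrians, no cars: traditional still burns the 60 s car green.
print(traditional(0, 5))  # 65
print(smart(0, 5))        # 5
```

Even this crude model shows the "same or better" property: with both sides occupied the two controllers agree, and whenever one side is empty the smart one wins outright.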
Analyzing the final results in this diagram shows that my controller is participant-neutral. It doesn't care whether you're a car or a pedestrian; the only thing that matters is their ratio. There's hardly any difference between cars' and pedestrians' waiting times, while the traditional traffic controller clearly prefers cars. Looking at the diagram, I feel like I should call my controller Equality. It should be noted that I used 40 people in some simulations, and I don't think Walabot can distinguish that many people at the moment, but it's not a deal breaker.
If these numbers haven't convinced you, feel free to use your own data and compare the results.
Ok, this thing should work. Let's continue by building some traffic lights for modeling. The calculating and processing part, the "master", runs on a regular PC, but it's quite difficult to control LEDs with a PC, so I use an Arduino for that. This device is the "slave": it does nothing but what the master tells it to do, in their common language. This part is basically a modification of the well-known traffic lights project for beginners.
As in real life, the traffic lights are off by default. In that state the yellow lights blink continuously and that's all they do. The Arduino is in standby mode, ready to receive commands from the master over serial. Each command is a character, followed in some cases by a number. Here's what each command means:
- s – queries the state of the Arduino. The master uses it to determine whether the connected device is running the traffic control sketch. The response is always "OK".
- d – the master queries the number of cars on the road. As car detection is not part of this project, I used a potentiometer to produce a 0-10 value that is reported to the master as the number of cars. The 0-10 value is calculated by calling analogRead and dividing the result by 100.
- p[0-3] – the master can change the pedestrians' lights with these state values. 0 means red, 1 means green, 2 means green flashing and 3 means disabled, so its LEDs are dark.
- c[0-4] – the master can change the vehicles' traffic lights with these state values. 0 means red, 1 means red+yellow, 2 means green, 3 means yellow, 4 means disabled (the yellow is blinking and everything else is off).
The Arduino cannot change its state without the master, except when a timeout occurs: if the master doesn't communicate with the Arduino for 7.5 seconds, it falls back to the disabled state, as if p3 and c4 commands had been received.
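For illustration, the command grammar above can be modeled in Python. The real slave is an Arduino sketch, so `handle_command` and the `lights` dict are hypothetical names, and the car count here is just a stored number standing in for the potentiometer reading:

```python
def handle_command(cmd, lights):
    """Interpret one master->slave command: a letter, optionally
    followed by a digit. `lights` holds the current 'ped' and 'car'
    light states plus a stand-in car count under 'cars'.
    Returns the slave's reply string, or None when there is no reply.
    """
    if cmd == "s":                          # sanity check, always "OK"
        return "OK"
    if cmd == "d":                          # car count; the real slave
        return str(lights.get("cars", 0))   # reads a potentiometer here
    if cmd[0:1] == "p" and cmd[1:].isdigit() and 0 <= int(cmd[1:]) <= 3:
        lights["ped"] = int(cmd[1:])        # 0 red, 1 green, 2 flashing, 3 off
        return None
    if cmd[0:1] == "c" and cmd[1:].isdigit() and 0 <= int(cmd[1:]) <= 4:
        lights["car"] = int(cmd[1:])        # 0 red, 1 red+yellow, 2 green,
        return None                         # 3 yellow, 4 off/blinking yellow
    raise ValueError(f"unknown command: {cmd!r}")

# Example session: check the slave, read the car count, set the lights.
state = {"cars": 7}
print(handle_command("s", state))   # OK
print(handle_command("d", state))   # 7
handle_command("p2", state)         # pedestrian light: green flashing
handle_command("c4", state)         # car light: disabled
```

Keeping the protocol this dumb is what makes the 7.5 s timeout safe: the slave can always fall back to blinking yellow on its own, no matter what the master was doing.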
Basically that’s it. There’s nothing difficult here.
Ok, here comes the fun part! Walabot offers a couple of modes: it can see into walls, detect breathing from a distance and a bunch of other cool stuff, so it's a very versatile device. Demo projects are provided in C++, C# and Python. Most of the examples are in Python, so that's what I used in my project. I have much more experience with C# and C++ (I had zero experience with Python), but at least I learned a new language along the way. You can find their Python examples here. My project is built around their rawImage demo example.
Their example proved to be a good skeleton for my application, and I also learned a lot from it. I used their Walabot Configuration panel to configure the Walabot. I didn't even need to touch this part, apart from adding one new line, "horizon", which I'll talk about later.
I created two new panels: the Serial Configuration panel and the Traffic Lights panel. On startup my code checks for open serial ports; after that you just select the slave's port and baud rate, and that's all for this panel.
The Traffic Lights panel is just graphical feedback for the states of the traffic lights. When the process is running, you'll see the number of pedestrians found in the Walabot's image, the number of vehicles received from the Arduino, and the car/pedestrian light states.
That's all for the surface, let's go deeper!
The default configuration should work in most cases, but you might need to increase the threshold if you see strange blobs on the screen when there shouldn't be any contact. The Walabot has a 10 m maximum range. I recommend reading the documentation on how the Walabot works; if you want to change the parameters of what Walabot sees, you really need to take a look at that part of the documentation. Keep in mind that the more Walabot sees, the slower it gets: a longer range means a lower frame rate. The frame rate of my app with a 2 m "R res" is around 5 FPS. By default the MTI function (used to detect movement) is disabled, but it might come in handy in some cases, so I left it in my app.
So we click the Start button, and after calibration the connections to the Arduino and the Walabot are established. The traffic control system comes to life: the traffic lights change their colors properly and you can also see the processed raw image from the Walabot.
Using the GetRawImageSlice() Walabot API call, I get the rawImage matrix scaled to the 0-255 range, where 0 is the threshold value and 255 is the highest value in the raw data. This means weak signals can appear just as strong as very strong signals in the images. That's great in some cases, but you have to be careful to keep the threshold high enough, because noise can cause strange things: a low threshold can easily produce meaningless but strong contacts, and the program won't know it was just noise.
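That scaling can be illustrated with a small hypothetical helper (not part of the Walabot API): the threshold maps to 0, the frame's own maximum maps to 255, which is exactly why a weak frame's peak looks as bright as a strong one's.

```python
def scale_to_byte(raw, threshold, raw_max):
    """Map one raw signal value to 0-255 as described in the text:
    threshold -> 0, the frame's maximum -> 255, values below the
    threshold clamp to 0. Hypothetical helper for illustration."""
    if raw_max <= threshold:
        return 0  # nothing in the frame rises above the threshold
    v = (raw - threshold) / (raw_max - threshold) * 255
    return max(0, min(255, int(v)))

# A frame whose strongest return is barely above the threshold still
# renders its peak at full brightness - hence the noise warning above.
print(scale_to_byte(10, 5, 10))   # 255
print(scale_to_byte(7.5, 5, 10))  # 127
```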
At this point everything is working and there should be zero contacts. Now place some objects in front of the Walabot and you'll see what it sees: blobs representing objects in space. Counting these blobs (people) is not very difficult, but it's not trivial either. Think of the blobs as hills.
My approach follows this tactic:
- 1. Find the highest hill.
- 2. If that hill is lower than the threshold, return the number of hills counted so far.
- 3. Count the hill.
- 4. Demolish the hill.
- 5. Go to step 1.
I implemented a recursive function based on this idea and it works very well; it can find even very small hills. Each white dot you see on the resulting image is the peak of a hill, and the demolition of that hill started from that point. You'll see as many hills/blobs/contacts as there are humans standing in front of the Walabot.
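The steps above can be sketched in Python. This version works on a plain 2D list and uses an iterative flood fill in place of the original recursion; treating "demolish" as clearing the 4-connected region above the threshold is my assumption about what the recursive function does:

```python
def count_hills(img, threshold):
    """Count blobs ("hills") in a 2D intensity image: find the highest
    remaining peak, stop when it drops below the threshold, otherwise
    count it, demolish the whole hill, and repeat."""
    grid = [row[:] for row in img]  # work on a copy, keep the input intact
    rows, cols = len(grid), len(grid[0])
    hills = 0
    while True:
        # Step 1: find the highest remaining point and its coordinates.
        peak, pr, pc = max(
            (grid[r][c], r, c) for r in range(rows) for c in range(cols)
        )
        if peak < threshold:
            return hills            # step 2: nothing left above threshold
        hills += 1                  # step 3: count this hill
        stack = [(pr, pc)]          # step 4: demolish it (flood fill)
        while stack:
            r, c = stack.pop()
            if 0 <= r < rows and 0 <= c < cols and grid[r][c] >= threshold:
                grid[r][c] = 0
                stack.extend([(r - 1, c), (r + 1, c), (r, c - 1), (r, c + 1)])
        # Step 5: loop back to find the next highest peak.

# Two separate blobs above a threshold of 5 -> two "people".
img = [
    [0, 9, 0, 0, 0],
    [0, 8, 0, 0, 7],
    [0, 0, 0, 0, 6],
]
print(count_hills(img, 5))  # 2
```

Demolishing from the peak downward is what keeps two nearby hills distinct: each flood fill only eats the connected region it started in, so the next iteration's maximum is guaranteed to sit on a different hill.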
I mentioned the "Horizon" parameter earlier. It isn't an essential parameter, but it lets you see far while making sure that people in the distance aren't counted as people waiting to cross the road. It can come in handy in some cases. 0 means you want the app to count everything in its range.
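The horizon check amounts to a one-line filter; `within_horizon` is a hypothetical helper name, with 0 disabling the filter as described:

```python
def within_horizon(distance_m, horizon_m):
    """True when a detected contact should count as a waiting
    pedestrian. A horizon of 0 means "count everything in range"."""
    return horizon_m == 0 or distance_m <= horizon_m

# With a 2 m horizon, a contact 3 m away is visible but not counted.
print(within_horizon(3.0, 2.0))  # False
print(within_horizon(3.0, 0))    # True
```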
Sorry about the subtitle + audio problem in the video. I made the original version a couple of weeks ago - without audio - and, long story short: data loss (raw videos). After that I had to work with the processed version, and I had no way of removing the subtitles, but I wanted to give you audio.
The imaging is not perfect: 2 people can easily merge into one on the Walabot's image. Luckily this isn't necessarily a big deal, but it does lower the system's effectiveness.
As you can see, I used some very happy power banks as pedestrians. Naturally it also works with real people, but it was easier and more fun to demonstrate the system with the power banks, and I couldn't take good recordings of the traffic lights with big, real people in the frame.
I can say that my project is a huge success. It's something that can really have a positive impact on the quality of our everyday life. It's not just the 20+ seconds saved on average at each traffic light; I'm sure it will decrease people's stress levels, too. Waiting for no reason is always irritating. By the way, waiting about 3 minutes for a green light every working day adds up to more than 10 hours a year per person.
In this project I simulated a very simple crossroads: one road with a pedestrian crossing. In more complex cases the difference could be much bigger, and there the drivers would also enjoy the benefits of this system, not just the pedestrians. In the end, my traffic controller didn't do anything special; it just increased the effectiveness and utilization of the crossroads.
This system would also help a lot in handling and controlling traffic flow in smart cities. Effective utilization of the roads can drastically decrease the time you spend in traffic every day.
Traditional traffic controllers could also improve a lot if they got rid of their idle times, and Walabot can help them, too. I think Walabot can be a crucial part of smart cities' traffic control. It would be great if my project came to life one day in a real-world deployment!
If you are still here then you made it to the end and you are a hero of this project! Congratulations!
Thank you for reading and have a nice day!