Hi - I live next to a road. It's a minor rural road but it seemed to me that each year it was getting busier with increasing commuting traffic and that much of that traffic was travelling above the speed limit. But I couldn't prove it. So I decided to build my own traffic camera and do my own monitoring.
My house is on the edge of a village and so sits within the 30 mph speed limit... just. About 50 metres up the road the speed limit increases to 60 mph. There are speed limit signposts and another sign that flashes up "30" in bright LEDs if an approaching vehicle is doing over 30 mph, but while many people observe the speed limit, plenty don't, and some just ignore it altogether. It's not always easy to judge passing vehicle speeds accurately, but I reckoned most drivers were using the road in front of my house as an acceleration or deceleration zone. But again, I couldn't prove it.
Local authorities usually carry out traffic and speed surveys by laying two rubber strips across the road. These are filled with air and a sensor at the side of the road detects the change in air pressure as tyres compress the air space. Two strips a known distance apart allow you to work out the speed.
This wasn't an option for me. I intended to build something semi-permanent that monitored traffic for months and I didn't want to risk causing any nuisance or danger to road users. So the requirement was to build a reliable, long-term means of measuring traffic volumes and speeds passing in front of my house.
An additional requirement would be to determine the direction of travel which might prove interesting. Capturing photos of passing vehicles was a bonus but not essential.
To cut the story short, I ended up building a solution with a Raspberry Pi and camera. I found Greg's website, which had tackled a similar challenge and his code formed the basis of this project. I modified it for my circumstances and let it run. After a year, I'm now able to see daily traffic fluctuations, how many drivers are observing the speed limit or not and count how many vehicles are passing by each day. Most recently, it has verified the massive drop in traffic due to the Coronavirus outbreak and the gradual creeping increase in traffic despite the lockdown remaining in place.
I'm writing this over a year since I completed the project, which isn't ideal; I should have written it while building the project. I know better now but the lockdown has given me time to write it up. However, I did keep some notes during the build and my aim now is to convert those notes and photos into something more easily readable that might help others to build a similar project.
The nature of the project is quite technical and was a bit daunting for me, but don't let that put you off. While I've dabbled with Pis and Arduinos a bit, I wouldn't consider myself a master; this was my first proper project with the Pi. I've written code in the past, but wouldn't consider myself a modern software developer, as I haven't kept my coding skills up to date. There's nothing particularly difficult in the coding, although it might be hard to completely understand why I'm doing certain things, even with the comments. So I've added a section to this site explaining the purpose of each part.
Most of my time was spent understanding the existing code base and then modifying it for my purposes, which proved the most interesting part. I had some challenging problems to solve along the way, as you'll see. You will need to change some of the settings in the code for your particular circumstances.

Background Research
So the basic requirement was to build some sort of camera which can count traffic and work out the speed and direction of the vehicles passing by.
To make matters more complicated, the front of my house is not parallel to the side of the road, and the road is approximately 10 metres away. There's also a pavement (sidewalk) and a grassy area between my house and the road, so there will be some pedestrians passing between vehicles and camera, but we're not talking busy here; maybe the odd dog walker or runner, especially on nice days.
I quickly focused on a camera solution using computer vision to do the vehicle recognition. This decision was aided by a TV programme I caught on the BBC called The Big Life Fix (https://www.bbc.co.uk/programmes/b09g5hwf), where computer vision had been employed to help someone with poor sight. I hadn't realised how sophisticated vision algorithms had become, so this was a spur to look further.
Also instrumental in this solution was coming across a web page by Greg Barbu and his carspeed detector (https://gregtinkers.wordpress.com/2016/03/25/car-speed-detector/). Greg's code was the starting point for my code and I've made quite a few modifications as I'll explain later. I highly recommend reading Greg's page as it explains the fundamentals of the maths being used for the calculation and the main operations performed in the program.
There's a link from Greg's page to someone else who has developed another solution that's worth a look; it appeared after I developed mine. https://www.lucas-trashbin.ch/projects/ard-rasp/car-speed-detection-with-a-raspberry-pi-and-a-picamera/
Greg lives in Ohio, at about 40 degrees north, and I got the impression that his main concern was the speed of a few individuals; he wasn't concerned with the volume of traffic, nor was he trying to record data at night. I live in Scotland, at about 56 degrees north, and during the winter it gets dark at about 4 pm, before peak commuter time. There are no street lights, so if I was to get any reliable long-term data, I needed a solution that would work in the dark. My initial thought was to solve this problem with an infra-red camera, and indeed I purchased the Pi IR camera, although I reckon I could just have used the normal Pi camera, as I'll explain. This isn't a thermal heat-sensing infra-red sensor (those are very expensive), so you need a source of infra-red light to illuminate your subject.
The challenge was to illuminate the piece of road with IR light from 10 metres away. I already had an IP security camera with two IR lights built in, so I tested the Pi's IR camera to see if it could "see" in the dark using the IR light from the IP camera. The result was far too dark to be usable. This could be because the Pi camera was sensitive to a different band of IR than that emitted by my IP camera, but either way it wasn't promising. I considered purchasing large IR flood illuminators, but they're not exactly subtle, use a fair chunk of power, and would have added more wiring, more expense and another thing to go wrong. At the time I wasn't confident the solution would work, so I didn't want to incur expense for something I couldn't reuse. They would also have attracted attention, as well as night-flying insects (many are attracted to infra-red light), which would have created even more false detections.
Then I realised I didn't really need to see in the dark anyway. Handily it's a legal requirement in this country that when you drive at night, you must switch on your vehicle lights. Modern cars tend to leak light sideways a bit in order to give the driver a wider field of view. If I could detect the movement of the front headlights would that work?
Greg's code uses the excellent OpenCV software. The algorithms look for blobs of motion and during the day will put a box around the whole vehicle. But at night time it doesn't see most of the vehicle, just the lights at the front and back of the car and so it shows two boxes for each vehicle. I only wanted to track one of them. If you know it's night time then you can adjust the OpenCV parameters to detect the smaller headlights and still work out the vehicle speed. How do you know it's gone dark? Well, you've got a camera that can measure overall light levels and so you can adjust settings for detection based on that light level.
There's also the issue of two vehicles passing in front of the camera at the same time. This does occur, but not very often, and when it does the program can time out waiting for a car to pass, or you can get negative speeds and ignore them. This will lead to under-reporting of traffic, but better that than reporting too many vehicles and jeopardising the credibility of the data.
I realised quite early on that I'd have to put the camera outside. Mounting the camera indoors would have resulted in reflections in the windows causing false detections. Remember that my house is not parallel to the road, and the camera has to be mounted parallel to the road for the maths to work, so an indoor camera would have been badly affected by reflections. That gave me two additional problems to solve: 1) how to run power and data to the Pi, and 2) how to protect the Pi from the rain, snow and wind of a Scottish winter and the heat of the summer sun, as the camera faces south.
To solve the first problem I went straight for Power over Ethernet (PoE). I already had a Netgear PoE ethernet switch so this was the obvious solution and a PoE enabled Raspberry Pi hat was available. This meant I only needed to run a single ethernet cable from the switch to the Pi and I'd have data and power sorted.
I decided to attach it to a fence post in front of my house. I could easily run an Ethernet cable to it, and it provided a sturdy mount for shake-free images.
As for weatherproofing, I found a company called naturebytes.org who make a case for the Pi. It's aimed at wildlife projects and is a great way of introducing kids to the whole Pi thing, and getting some interest in nature too. It looked the ideal solution and has proven up to the job, although I have supplemented it with a rough wooden surround to keep the worst weather at bay and provide additional sun shade for the camera. The case can be purchased on its own or as part of a kit including the Pi, camera and other parts, such as a battery and PIR sensor, that I didn't need. The instructions can be downloaded and are very well written. Even so, I had to make a number of modifications, which I'll explain in the next section.
To be honest, I wasn't confident the Pi would survive that long. I was putting electronics outside in a non-airtight box and trying to keep it dry during the wet winter and cool in the summer. Yet as I write this, it's been 16 months without any faults. There's been the odd interruption due to power cuts to the house leading to the program not auto-restarting, but that's a software issue. The hardware has worked without a hitch; a credit to the quality of the Raspberry Pi, the PoE hat and the Naturebytes case.

Assembly
The case required a few modifications. The Naturebytes project has been designed to be powered by an internal battery pack and kept very well sealed. Clearly that's ok for the short-term but not for my requirement.
I was concerned about the unit overheating during the summer. During some testing I'd monitored the internal temperature when there was a lot of passing traffic and it was getting quite toasty inside - it's a busy little CPU doing all that computer vision clever stuff. Add in some summer sun and without additional cooling the Pi would shut down to stop it cooking itself.
There was a hole towards the top of the back of the case, fitted with a rubber grommet. This provided a good way to ventilate the space inside; heat rises, after all. I placed some gauze over the hole to reduce the number of bugs that could crawl inside, and then put a small 5-volt fan on top of the gauze. More on the fan later, when I explain how it's controlled.
The case comes with an internal mounting card onto which the camera and Pi are attached: the camera goes on the front and the Pi on the back. The Naturebytes instructions show the top of the Pi facing the front of the case, but I wanted to maximise the airflow around the PoE hat and Pi CPU, so I turned it over so that the top of the Pi faces the back of the case, as you can see in the photo.
The next problem was that the screws provided for attaching the Pi to the mounting board weren't long enough for the extra height of the PoE hat. So I shortened the provided 15 mm spacers to between 5 and 8 mm, which gave enough clearance over the Pi camera nuts while still allowing the screws holding the PoE hat and the Pi to engage. Be very careful the camera nuts don't short any of the circuitry on the back of the Pi!
Since I'd used the cable entry for the fan hole, I had to drill a new hole for the LAN cable. By making it the same size as the original I could reuse the grommet that came with the case to help reduce any moisture / bugs getting in.
Since the fan was set to blow warm air out of the case, I needed somewhere for air to get in, so I drilled four small holes along the bottom of the case. These also serve as drainage holes in the event of any water getting into the case. Again, I covered the holes with gauze, attached with glue, to minimise bug entry.
The camera was mounted on an existing post which had its front face parallel to the front of my house and so not parallel to the road. I added a pivot mount I had left over from an old outdoor light to the base of the case to allow me to angle the camera so it faced perpendicularly across the road.
The case is bright green and whilst that might be fine in a wildlife situation it did look very obvious on the front of my house. Additionally, I wanted to improve the weather-proofing and sun shading on the case. So I surrounded the case with a wooden box with a hole on the front so the camera wasn't obscured. The box has no base which aids ventilation and is attached directly to the post rather than the case; so the box and case are mounted independently which means vibration from birds, squirrels or inquisitive children or even heavy rain won't blur the camera image.
Finally I stapled some damp proof course onto the top of the box to stop rain water soaking through into the box area and also painted it white to reflect the heat of the summer sun.
Subsequently I discovered that the camera's auto-exposure was badly affected by the brightness of the sky, resulting in very dark images of the vehicles passing by. The solution is the same one used by many landscape photographers: they use very expensive graduated filters; I used a piece of duct tape.

Setting Up the Pi
Before you can run a line of code you'll need to get Raspbian installed on your Pi. Greg documented his build on Jessie; I used Stretch (version 9), and the current latest version is Buster, which is required if you're running a Pi 4. Greg used a Raspberry Pi 2; I used a Raspberry Pi 3 Model B+, which has a faster processor. There are plenty of places online that cover installing Raspbian, so I won't create another.

OpenCV
When I built my Pi I used OpenCV 3 for Stretch and followed the excellent instructions provided by Adrian Rosebrock: https://www.pyimagesearch.com/2017/09/04/raspbian-stretch-install-opencv-3-python-on-your-raspberry-pi/. If you want to get into computer vision, this is definitely a good place to start. I was focused on just getting my speed camera working, so I wasn't going to delve into the details of computer vision; I just wanted something that works. The OpenCV install and compile took several attempts, but with some persistence I got it working. There is now an OpenCV 4 with an easier install process: https://www.pyimagesearch.com/2019/09/16/install-opencv-4-on-raspberry-pi-4-and-raspbian-buster/
So, in summary, I was using Python 3.5.3 with OpenCV 3.4.5 and Raspbian Stretch on a Pi 3 B+.

Program Overview
As I've mentioned previously, it's a good idea to review Greg's web page where he explains his code in detail. I'll do the same for mine and concentrate on where I made changes to Greg's original code.
But first I thought it would be a good idea to give an overview of how the program works before getting into the details.
The basic principle is that the program will detect objects moving within a defined frame. We don't detect movement across the whole image but just a defined area that we know traffic will pass through. We want this area to be as long as possible in the orientation of motion of the vehicles i.e. a wide area. Height is less important but you want to make sure it will pick up the lights of the vehicles. By limiting the area that detects motion, we are saving the Pi from having to do unnecessary work and reducing the chance of false detections from birds in the sky for example. If it detects multiple objects then it selects the largest one. In the image below you can see two green vertical bars that mark the left and right extent of the monitored area. I've pushed the width of the monitored area quite wide to increase the number of frames captured whilst also being able to see the vehicle enter and leave the monitored area.
Using the field-of-view angle of the Pi camera (62.2 degrees for the Pi camera v2), the distance between the road and the camera, and a bit of trigonometry, it's possible to work out the feet per pixel at the distance the road is from the camera.
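As a hedged illustration of that trigonometry (the 10-metre distance and the 1024-pixel frame width below are example values of my own; only the 62.2-degree field of view comes from the camera specification):

```python
import math

def feet_per_pixel(distance_ft, fov_deg=62.2, image_width_px=1024):
    """Width of one pixel, in feet, at a given distance from the camera.

    The full frame spans 2 * d * tan(fov/2) feet at distance d, and that
    span is divided evenly across the image width in pixels.
    """
    frame_width_ft = 2 * distance_ft * math.tan(math.radians(fov_deg / 2))
    return frame_width_ft / image_width_px

# At 10 m (about 32.8 ft) this works out to roughly 0.039 ft per pixel.
ft_px = feet_per_pixel(32.8)
```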
As the vehicle moves across the image, OpenCV generates a sequence of frames with a set of bounding boxes around objects it "sees" as moving.
The code works out the largest box and assumes that is the vehicle to be tracked. We take the x coordinate of the left-hand side of each box. For a vehicle travelling right to left, the x coordinate is at the front of the vehicle; for a vehicle travelling left to right, the x coordinate is at the back of the bounding box, which normally coincides with the back of the vehicle, but not if the vehicle has only just entered the monitored area, as the rear part of the vehicle is still outside it. By working out the movement of the box's coordinates in pixels, we can determine the distance travelled by the vehicle between frames. The program also records the time between frames, so we can work out the speed of the vehicle.
I've modified the calculation to allow for two lanes of traffic with each lane having its own distance to the camera.
When Greg wrote his program he found the Pi 2 was struggling to capture enough frames to determine the speed. He solved this by reducing the load on the Pi's processor by switching off the refresh of the display of the camera image whilst a vehicle is passing. This has the odd effect of cars entering the image from one side, disappearing and then re-appearing just before they leave the image. Sometimes when they're going really fast you don't even see the vehicle enter and leave although the image and data are still recorded. This might not have been necessary on the Pi3 with its faster processor but I didn't change it as it seemed a good way of saving the Pi from over-heating and also improves the chance of capturing more frames and resulting in a more accurate speed. Not being able to see the image of the moving vehicle isn't a great problem; the unit spends most of the time not being monitored by a human and just logs data.
Greg's program saves an image of each vehicle with the speed displayed on it, and you could also save the data to a CSV file. My road sees approximately 2,500 vehicles pass each day. That's a lot of image files taking up storage space, so I defined a speed limit above which images are recorded. I also have passing pedestrians, so there's a minimum speed below which data is not recorded. Sometimes the program gets confused if two vehicles cross in front of the camera at the same time; this usually results in either a negative speed or nothing being detected in the buffer zone. So there's a time limit within which the vehicle must complete the transit, or the program resets.
- Modifications for Night Vision
The trick to measuring speeds at night is to track the front lights of the vehicle. Since these are a smaller object than what you would be tracking during the day, you have to modify the "minimum area" parameter. This controls the size of the smallest box OpenCV will put around an object. During the day you want it quite large to avoid picking up birds etc. But at night (assuming you have no street lighting) nothing else is visible and so you can lower the minimum area to something the size of a headlight.
The other parameter to experiment with was "threshold". This is a number which OpenCV uses to detect an object by differences in the luminance of the pixels. So a low threshold will detect more objects and a high threshold will detect fewer objects. For night we just want to detect the bright lights of a headlight on a car which has a very high contrast against black and so we want to set a high threshold level. Ideally we want a value that detects front white light and not red rear lights, but I found that there's too much variation in vehicle lights and weather conditions to achieve this.
As a vehicle approaches the camera it will detect the front lights first, and then, as it passes, the rear lights come into view. For any frame, if the rear lights appear bigger than the receding front lights before the front lights leave the monitored area, the program switches to tracking the rear lights instead. When this occurs the speed calculation for that frame becomes negative; it's as if the car has suddenly stopped and reversed in a fraction of a second. So if we get a run of normal speed values followed by a negative value, we can be pretty sure it's picked up the tail lights. When that happens the program stops tracking, ignores the last negative speed and records whatever has been logged up to that point. This doesn't always happen; it depends on the relative brightness of the front and rear lights, the size of the light clusters and the length of the vehicle.
Greg's code has narrow strips of pixels at both ends of the monitored area - labelled as buffer zones in the above illustration. When the front of the moving box (shown in red in the illustration) is detected in these strips, the program stops tracking and carries out the steps for saving the image and recording the data.
As it gets dark, the camera has to increase the length of time the sensor is exposed to light for each image, so as to capture enough light to make an image. Although the Pi camera has no physical shutter, it has an effective electronic shutter that switches the sensor on for a period of time. Exposure times get longer in low light, resulting in fewer frames being captured in the time a vehicle takes to pass. In the dark, it's possible for the moving box around the vehicle lights to jump across the buffer-zone strip between frames, especially if the vehicle is moving quickly. Consequently the data is never saved and we lose any record of that vehicle.
The solution is to increase the width of the strips at each end of the monitored area. But by how much? Do we want to go from narrow to wide in one step? And when do we make the change?
The approach I took was to take a light meter-reading every minute and then map that reading to values for the width of the buffer zones ("adjusted save buffers" as I call them in the program), the threshold and the minimum area. After a lot of testing of various algorithms and values I ended up using a reciprocal function curve for the save buffer, a square root curve for the minimum area and a simple stepped value for the threshold.
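A sketch of the three mappings follows. The curve shapes (reciprocal, square root, stepped) are the ones described above, but every constant here is a placeholder of my own choosing; the real values have to be tuned by experiment for your camera and site.

```python
import math

def get_save_buffer(light_level):
    # Reciprocal curve: the darker it gets, the wider the save buffer.
    buffer_px = int((100.0 / max(light_level, 1)) * 40)   # placeholder scaling
    return max(buffer_px, 10)                             # never below a minimum width

def get_min_area(light_level):
    # Square-root curve: a smaller minimum blob size as the light falls,
    # so headlights alone are big enough to track at night.
    return int(math.sqrt(max(light_level, 1)) * 25)       # placeholder scaling

def get_threshold(light_level):
    # Simple step: a high threshold at night (bright lights on black),
    # a lower one in daylight.
    return 130 if light_level < 30 else 25                # placeholder step point
```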
- Other Changes
Rather than just taking the speed for the last frame, I decided to calculate the mean of all of the speeds recorded for each frame. However, I saw a consistent issue with traffic moving left to right. The first speed reading was always much higher than subsequent readings. This obviously affected the mean speed being calculated. It took me a long time to identify the cause.
As a vehicle enters from the left hand side into the monitored area, the front of the vehicles enters first, but the whole of the vehicle is not usually fully inside the area; the bounding box will therefore not be around the entire length of the vehicle. The program was calculating the change in x coordinate for each frame as:
abs_chg = x + w - initial_x
But the problem is that the value in "initial_x" is the left-hand corner of the bounding box of the first frame, which is not the front of the vehicle but some point part way between the front and the back. So I changed this as follows:
abs_chg = (x + w) - (initial_x + initial_w)
initial_w is the width of the bounding box on the first frame and will normally be shorter than the full length of the vehicle. This change resolved the problem.

Usage
Point the Picamera at the road. Ensure it is perpendicular to the direction of traffic.
Before you run carspeed.py, modify the constants L2R_DISTANCE and R2L_DISTANCE to the distances from the front of the Pi camera lens to the middle of the Left to Right lane and Right to Left lane.
Modify the MIN_SPEED_IMAGE to the minimum speed in mph to capture an image. I usually set this quite high as I don't want to fill up the Pi's storage with images of cars that I'll never look at.
Modify the MIN_SPEED_SAVE to the minimum speed in mph to capture the data in the CSV file and pass to the MQTT broker. I usually set this to about 10 mph so I don't record pedestrians.
Modify MAX_SPEED_SAVE to the max speed to record. Sometimes when two vehicles cross it can lead to crazy speed numbers.
You may also need to adjust the vflip and hflip settings to match the orientation of your camera sensor.
Run from a terminal with: python3 carspeed.py -ulx 64 -uly 321 -lrx 960 -lry 444. Greg's code requires the user to draw the rectangular monitoring area, but I've bypassed this and instead pass in the coordinates of the monitored area. The advantage of this approach is that it produces the same consistent monitoring area each time the program is run. It also means the program can be started by a script without human intervention, which is handy if you want it to auto-start, e.g. after a power outage.
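A minimal sketch of how those four coordinates can be parsed with argparse (the flag names match the command line above; everything else here is illustrative):

```python
import argparse

parser = argparse.ArgumentParser(description="car speed monitor")
parser.add_argument("-ulx", type=int, required=True, help="upper-left x of monitored area")
parser.add_argument("-uly", type=int, required=True, help="upper-left y of monitored area")
parser.add_argument("-lrx", type=int, required=True, help="lower-right x of monitored area")
parser.add_argument("-lry", type=int, required=True, help="lower-right y of monitored area")

# Parsing the example command line from above:
args = parser.parse_args(["-ulx", "64", "-uly", "321", "-lrx", "960", "-lry", "444"])
```

Because the area is fixed by the script's arguments, a one-line shell script or a systemd unit can restart the monitor unattended after a power cut.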
As cars pass through the monitored area, images will be written to disk with the speed in the image and the time, mean speed, direction, frame count and speed standard deviation are recorded.
Exit with a press of the ‘q’ key.
I've recorded a short video showing traffic passing the camera and the command window displaying the captured data.

The Code
Start with importing the necessary packages:
cv2 is the OpenCV library. I'm using MQTT to pass the data onto another Pi, but you don't need to do that. numpy is used for the mean and standard deviation calculations, and argparse for parsing the parameters passed in.
Next, a few methods and functions copied straight from Greg's code:
- prompt_on_image: simply formats and displays a message on the image.
- get_speed: returns the speed based on the number of pixels traversed in a given time (substitute 3.6 for the 0.681818 value if you are working with metres and kph rather than feet and mph.)
- secs_diff: returns the number of seconds between two times.
- record_speed: writes a line to the CSV file.
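For readers without Greg's page to hand, minimal versions of these helpers might look like the following. These are reconstructions from the descriptions above, not Greg's verbatim code:

```python
def get_speed(pixels, ftperpixel, secs):
    """Speed in mph from pixels traversed in secs.

    0.681818 converts ft/s to mph; substitute 3.6 if you are working
    in metres and km/h instead.
    """
    if secs > 0:
        return ((pixels * ftperpixel) / secs) * 0.681818
    return 0.0

def secs_diff(end_time, start_time):
    """Number of seconds between two datetime values."""
    return (end_time - start_time).total_seconds()

def record_speed(csv_path, line):
    """Append one line to the CSV log."""
    with open(csv_path, "a") as f:
        f.write(line + "\n")
```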
Some new functions added next:
- my_map: is a simple mapping function which I use to map an input range of values to an output range e.g. 0 - 256 mapped to 1 to 10.
- measure_light: uses the OpenCV histogram function to take an exposure meter reading of the image.
The next few functions were all required for tracking at night:
- get_save_buffer: returns a number, determined from the input light level, which is used to define the width of the save buffers at either end of the monitored area.
- get_min_area: returns the minimum size of box we want to allow and is based on the input light level.
- get_threshold: returns the threshold based on input light level.
- store_image: will save a copy of the last image in the frame buffer to a jpg file.
- store_traffic_data: writes the time, mean speed, direction of the vehicle, number of data frames captured and the standard deviation to a local CSV file. The data is also sent to an MQTT server running on another Pi; you don't have to do that.
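A hedged sketch of store_traffic_data. The CSV column order follows the description above; the MQTT publish is shown commented out because the broker details are site-specific:

```python
import csv
import datetime

def store_traffic_data(csv_path, mean_speed, direction, frames, sd):
    """Append one vehicle record to the local CSV file."""
    now = datetime.datetime.now()
    with open(csv_path, "a", newline="") as f:
        csv.writer(f).writerow([
            now.strftime("%Y-%m-%d %H:%M:%S"),
            round(mean_speed, 1),
            direction,
            frames,
            round(sd, 2),
        ])
    # Optionally forward the same record to an MQTT broker on another Pi:
    # publish.single("traffic/data", payload=..., hostname="my-broker")
```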
Some useful constants. You will need to change L2R_DISTANCE and R2L_DISTANCE. They could be the same value if you are monitoring a single lane.
THRESHOLD and MIN_AREA are initial or default values.
The enumerated values make the program easier to read. The first three monitor the current state of the tracking process WAITING, TRACKING, SAVING. The next two define the direction of movement on the image. The values assigned are not significant. TOO_CLOSE defines the minimum time in seconds between two following vehicles. If we detect another vehicle closer than 0.4s we ignore it. MIN_SAVE_BUFFER defines the minimum width of pixels for the buffers where the final frame is detected and saving of data is initiated. This can be increased if you find that vehicles are being tracked but the frames frequently fall either side of the save buffer.
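Put together, the constants described above might look like this. TOO_CLOSE is the 0.4 s value from the text; the distances, threshold and area defaults are placeholders that you must change for your own site:

```python
# Distances from the camera lens to the centre of each lane, in feet
L2R_DISTANCE = 34.0          # left-to-right lane (placeholder)
R2L_DISTANCE = 30.0          # right-to-left lane (placeholder)

THRESHOLD = 25               # initial/default luminance-difference threshold
MIN_AREA = 175               # initial/default minimum blob area, in pixels

TOO_CLOSE = 0.4              # s: ignore a following vehicle closer than this
MIN_SAVE_BUFFER = 2          # minimum save-buffer width, in pixels

# Tracking states; the actual values are not significant
WAITING, TRACKING, SAVING = 0, 1, 2
# Directions of travel; again, the values are arbitrary labels
LEFT_TO_RIGHT, RIGHT_TO_LEFT = 1, 2
```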
Now capture the passed in arguments that define the monitored area.
Next, determine the frame width and feet per pixel, and display them on screen.
Next we need to initialise a bunch of variables
And initialise the camera.
We’ll want to see that the program is processing, so a window is created and moved to the upper left corner of the display.
Later on, I will be passing the captured data to another Pi that's running an MQTT server. This ensures the data isn't just stored on the local client. Server name and password obviously have been removed from the attached code.
Create the CSV file if required and put in some column headers.
Calculate the dimensions of the monitored area from the input parameters and print all to the command window.
Finally, we come to the core of the program. Using capture_continuous, the program repeatedly grabs a frame and operates on it. capture_continuous is used so that the Pi camera doesn't go through the initialisation process required when capturing one frame at a time. The image is cropped to the monitored area. Using the logic discussed on the pyimagesearch site, the image is converted to grayscale and blurred. The first time through, the program saves the image as base_image. base_image is then compared with each current image to see what has changed. At this point, any differences between the captured image and base_image are represented by blobs of white in the threshold image.
Note that for the very first iteration we get a light exposure reading and set the minimum area, threshold and save zone buffers accordingly.
Next the program looks for the largest white blob in the threshold image using findContours. We ignore small white blobs, as they can happen at random or may represent a leaf or other small object traveling through the monitored area. The process of grabbing an image and looking for motion continues until motion is detected.
The first time motion is detected, the state changes from WAITING to TRACKING and the initial values of the area-in-motion are recorded. I want to calculate the mean speed once all the frames have been captured so I'm storing the calculated speeds in an array.
I'm also counting the number of frames. The more frames we capture the more confident we can be about the calculated speed, but if we only have one frame then we can't calculate the mean.
car_gap is the time difference between the current vehicle being tracked and the last one. I worked out the minimum time gap between two cars that would allow the rear lights of the leading car and the headlights of the following car to be detected sequentially and treated as separate vehicles: at 30 mph, 40 mph and 50 mph, for my camera setup, it came to 0.76 s, 0.56 s and 0.44 s. I reasoned that a gap of less than 0.4 seconds (TOO_CLOSE) is either a false reading (e.g. rear lights being detected) or a car being driven by an insane idiot. I can't cater for every maniac out there.
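If you want to reproduce that arithmetic for your own setup, the calculation is simply a distance divided by a speed. The 33.5 ft figure below is my back-calculated assumption for the effective travel distance in my geometry; it reproduces my 30 mph figure and comes close to the other two:

```python
def min_gap_seconds(speed_mph, gap_ft=33.5):
    """Time for a vehicle at speed_mph to cover gap_ft feet."""
    ft_per_sec = speed_mph * 5280 / 3600      # miles/hour -> feet/second
    return gap_ft / ft_per_sec
```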
If the camera gets bumped, or the lighting in the monitored area changes dramatically, the camera thinks there is motion but usually doesn't detect movement in the save-zone buffers. So if nothing is saved after a set time period, the program resets. Greg uses a value of 15 seconds, but I've gone for 3 seconds because my traffic is faster and busier and I don't want to miss the next vehicle because I'm counting to 15. Three seconds also helps to eliminate pedestrians and slow cyclists from the data.
With a state of TRACKING, the second and subsequent images with motion are processed to see how far the area-in-motion has changed. The calculation of change in position depends upon the direction of movement. From right-to-left, the x value of the box bounding the area-in-motion represents the front of the car as it passes through the monitored area. But for motion from left-to-right, the x value won’t change until the entire car has entered the monitored area. The bounding box grows wider as more and more of the car enters the monitored area, until finally the bounding box encloses the rear of the car. Thus the front of the car is x+w where w is the width of the bounding box enclosing the area-in-motion.
Once we have the current position of the front of the car, we calculate the absolute change in pixels from our initial x position. The time interval between the current frame and the initial frame provides the seconds that have elapsed. From time and distance, the speed is calculated.
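A sketch of the direction-aware position and speed logic just described. The calibration constants (field of view, image width) are placeholders; in practice they come from measuring the actual scene in front of the camera.

```python
# Direction-aware speed calculation. FIELD_OF_VIEW_FT and IMAGE_WIDTH_PX are
# placeholder calibration values, not this camera's actual measurements.
FIELD_OF_VIEW_FT = 32.0        # assumed real-world width of the monitored area
IMAGE_WIDTH_PX = 640           # assumed pixel width of the cropped image
FT_PER_PIXEL = FIELD_OF_VIEW_FT / IMAGE_WIDTH_PX
FPS_TO_MPH = 0.681818          # feet per second -> miles per hour

LEFT_TO_RIGHT, RIGHT_TO_LEFT = 1, 2

def front_edge(x, w, direction):
    """Leading edge of the vehicle: x+w going left-to-right, x going right-to-left."""
    return x + w if direction == LEFT_TO_RIGHT else x

def speed_mph(initial_front_px, x, w, direction, seconds):
    """Speed from pixel displacement of the front edge over elapsed seconds."""
    pixels = abs(front_edge(x, w, direction) - initial_front_px)
    return pixels * FT_PER_PIXEL / seconds * FPS_TO_MPH
```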
If the calculated speed is negative then we assume something is wrong (e.g. we've picked up a tail light), abandon the current frame and force it to save what we've got so far.
This process continues until the area-in-motion’s bounding box reaches the opposite end of the monitored area. We then calculate the mean and standard deviation of the speed array as long as we have more than two values.
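The end-of-track summary boils down to a minimal sketch like this, using the standard library's statistics module:

```python
# Only report a mean speed when there are more than two measurements;
# with fewer samples the estimate isn't trustworthy.
from statistics import mean, stdev

def summarise(speeds):
    """Return (mean, standard deviation) of the tracked speeds, or None."""
    if len(speeds) <= 2:
        return None
    return round(mean(speeds), 1), round(stdev(speeds), 1)
```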
At that point, the date and time are written to the image, the speed is displayed centered on the image and the image is written to disk.
If the speed is above the required limit but below the max limit we store the data. The state is changed to SAVING. The program will continue to see motion as the car exits the monitored area, but since the state is not WAITING or TRACKING, the motion will be ignored.
The next "else" statement is for when no motion is detected. Normally this is fine, as we've already saved and recorded all the data on the previous iteration. However, if the current status is still TRACKING then the vehicle did not pass through the SAVING state and must not have had any frame captured within the save buffer. I see this occasionally with very fast vehicles in poor light, where bad luck determines that no frame falls within the save buffer. In that case I can still record the data calculated from previous iterations, as long as we have one or more data items.
Whether we missed a car or not, we reset the status and wait for the next car.
The image window is only updated if no motion is detected. If the state is WAITING, the date, time and status are added to the current image. The base_image is adjusted slightly to account for lighting changes in the monitored area. These changes result from passing clouds, the changing angle of shadows, blowing leaves, etc. Also, every 60 seconds we take a new exposure reading and adjust the MIN area, threshold and save zone buffer accordingly.
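The background adjustment is essentially a running average of the scene. A numpy-only sketch is below; the blend weight is illustrative, and OpenCV's cv2.accumulateWeighted does the same job in the real pipeline.

```python
# Blend a small fraction of the current frame into the background so slow
# lighting changes (clouds, shadows) don't register as motion. The 0.04
# weight is illustrative, not the project's tuned value.
import numpy as np

def update_base(base, frame, alpha=0.04):
    """Running average of the scene: base drifts slowly toward frame."""
    return ((1.0 - alpha) * base.astype(np.float32)
            + alpha * frame.astype(np.float32)).astype(base.dtype)
```

A small alpha means a car passing through for a fraction of a second barely dents the background, while a cloud that parks itself overhead is absorbed within a minute or so.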
The keyboard is checked for the press of “q” indicating the program should be terminated.
I wasn't sure how hot it was going to get inside that little green plastic box. I put an extra heat sink on the Pi's CPU, but the PoE hat on top would limit air flow. I'd drilled some extra holes in the base for air inflow, and the outflow was controlled by a fan. Instructions for controlling the fan can be found here: https://www.instructables.com/id/PWM-Regulated-Fan-Based-on-CPU-Temperature-for-Ras/ This means that the fan isn't running all the time; only when it needs to be. To be honest I've not monitored the temperature since it started running, but the Pi still works.

Results
Each time a vehicle is detected and its speed measured, the data is written to a local CSV file and also sent to an MQTT server running on another Pi. Through various other steps I then end up with the data on a network share where I can open it in Excel. I had been using the ThingSpeak web service for stats analysis but discovered that data was going missing, so I reverted to using local files backed up in the cloud.
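The per-detection logging boils down to appending a row to a CSV. The column layout below is an assumption, not the project's exact schema; the MQTT publish (via a client library such as paho-mqtt) would sit alongside the same write.

```python
# Append one detection to a local CSV file. Columns (timestamp, speed,
# direction) are an assumed layout, not the project's exact schema; an MQTT
# publish of the same row would go alongside this write.
import csv
from datetime import datetime

def log_detection(path, speed_mph, direction, when=None):
    """Append a single timestamped detection row."""
    when = datetime.now() if when is None else when
    with open(path, "a", newline="") as f:
        csv.writer(f).writerow(
            [when.isoformat(timespec="seconds"), f"{speed_mph:.1f}", direction])
```

Appending in open/write/close cycles like this means a power cut loses at most the row being written, which matters for a logger meant to run unattended for months.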
I'm not a statistics expert but I can usually manage to do what I need in Excel. It's a bit cumbersome managing thousands of rows of data, but with a fairly chunky desktop it seems to cope. By way of example, I had the camera running from March 2019 to March 2020 and ended up with two files containing a total of over 840,000 rows.
Looking at data just for 2019, there was an average of 2,557 vehicles detected per day, with a maximum of 3,855 on 20th June. The busiest day was typically a Tuesday and the quietest was normally a Sunday.
The chart below shows the average number of vehicles per day for each week (calculated from total number of vehicles in each week and divided by the number of days in that week). The two dips were due to system failures where I had a power cut but the program had not restarted. There's a very gradual trendline showing a slight decline in traffic, but not by much and that's probably due to the outages.
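The weekly averaging is: group the daily totals by ISO week, then divide each week's total by the number of days actually recorded in it, so weeks shortened by outages aren't unfairly deflated. A sketch:

```python
# Group daily vehicle counts by ISO week and average over the days actually
# recorded in each week (partial weeks at outages aren't deflated).
from collections import defaultdict
from datetime import date

def weekly_daily_average(daily_counts):
    """daily_counts: dict mapping date -> vehicle count for that day."""
    totals = defaultdict(int)
    days = defaultdict(int)
    for d, n in daily_counts.items():
        week = d.isocalendar()[:2]          # (ISO year, ISO week number)
        totals[week] += n
        days[week] += 1
    return {week: totals[week] / days[week] for week in totals}
```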
But the original question was about speeding, especially at rush hour. I divided the recorded speeds into four categories and worked out what percentage of vehicles fell into each:

Speed (mph)    % of vehicles
11 - 30        39
31 - 40        51
41 - 50        9
over 50        1
So 61% of all vehicles are going over the 30 mph speed limit and an average of 275 vehicles a day are at least 10 mph over the 30 mph limit.
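The banding behind those figures can be reproduced in a few lines; the band edges follow the categories above, and any sample speeds are made up.

```python
# Assign each recorded speed to one of the four bands used above and work
# out the percentage in each band. Band edges follow the article's table;
# sample data is made up.
from collections import Counter

def speed_band(mph):
    if mph <= 30:
        return "11-30"
    if mph <= 40:
        return "31-40"
    if mph <= 50:
        return "41-50"
    return "over 50"

def band_percentages(speeds):
    counts = Counter(speed_band(s) for s in speeds)
    return {band: 100.0 * n / len(speeds) for band, n in counts.items()}
```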
But is it a rush hour thing? I divided the speeds into one hour time slots and then created some charts. The chart below shows the average number of vehicles at speeds of 40 to 50 mph and over 50 mph for each of the hour slots. Peak speeding occurs between 7am and 8am, with a second peak between 5pm and 6pm. So yes, most speeding occurs at rush hour.
If I were a police officer suitably equipped with a much more expensive speed camera, then I reckon a couple of hours between 7am and 9am one weekday morning would haul in about 46 vehicles going in excess of 10 mph above the speed limit.
So that was 2019, a "normal" year. What about 2020, which by most people's standards has been far from normal? I'm writing this in May, and in Scotland we're still in lockdown. There has been noticeably reduced traffic volume, although lately I've noticed an increase in flow. How much has it changed, and have folks slowed down?
Between weeks 10 and 13 of 2020 I saw a 69% drop in vehicles. It's since increased at about 5% a week and it's now about 50% of normal volumes.
For the whole year so far, the average number of vehicles is 1,751 and the maximum has dropped to 3,079 on 14th January. No idea why that date was a busy one.
The average number of vehicles per day dropped from its usual 2,500 to a low of 810 in week 13 and has climbed consistently every week since; at week 21 it's at 1,184, so just under 50% of normal.
There's been a slight change to the busiest days of the week during lockdown. Whilst Tuesday remains the busiest day for the year as a whole so far, if I look just at lockdown weeks then Wednesday and Friday are now the busiest.
And have they slowed down or sped up? There are fewer vehicles, but those on the road are still speeding at the same times as before; I'm not seeing a disproportionate increase in speeding.
So the answer to my first question about the traffic increasing each year is not yet verified and will take a few more years of recording to prove. Although the Coronavirus has stuffed up this year's data, I think it's a fair statement that there is a lot of traffic for a C class minor rural road. Under normal conditions it sees 2.5 times what you would expect for a typical rural road. The Dept of Transport has published Road Traffic Estimates for Great Britain - see link below. On page 20 it shows that the average rural minor road has 1,000 vehicles per day. In 2019 my road had an average of 2,500 a day.
Whilst the 2020 Coronavirus outbreak has had a major impact on traffic volumes, it is not yet clear whether we will see a return to normal volume once the outbreak is gone, which may be a long time away. Even then, will people return to normal commuting methods or will many work from home? Will the jobs still be there for people to go to anyway? Too many unknowns.
One thing that is certain to lead to an increase in traffic is the opening of a new bridge at the end of my road. The bridge had to be closed a couple of years ago due to failures in the structure. It crosses a dual carriageway trunk road that provides access to Edinburgh. So currently traffic coming from Edinburgh (Westbound) is unaffected, but cars wanting to travel into Edinburgh (Eastbound) must take a lengthy diversion. I think this explains why we see a narrow peak of traffic in the morning (most traffic going into Edinburgh) and a broader peak in the evening when that same traffic is going home. Folks use an alternative route in the morning but not on the way home. This is confirmed by the chart below, which shows the totals by week, marked in green for traffic travelling Eastbound into Edinburgh and yellow for Westbound out of Edinburgh.
Consequently, once the bridge re-opens I expect the morning rush hour to increase in volume and the green bars to be approximately the same height as the yellow. This will mean an increase from about 17,000 to an estimated 20,000 vehicles per week. The bridge was due to open a few weeks ago but has been delayed due to the Coronavirus outbreak.
What about speeding? Over 60% of all traffic is speeding. 10% is exceeding the speed limit by 10 mph or more. And the charts clearly indicate that the problem is worst at rush hour times.
Edinburgh Council are intending to reduce the speed limit on the road after the 30 mph zone to 40 mph; it's currently a 60 mph limit. So maybe there will be less accelerating and braking going on in front of my house. Although I suspect someone who is doing 50 mph in a 30 mph zone is not going to be bothered by a 40 mph sign.
I will continue to log and monitor the data.