Note: The project is part of a larger implementation.
Throughout history, the process of counting people in various spaces has often proven to be essential for a range of practical purposes, including shaping new regulations, conducting social studies, and implementing improvements.
One might trace back the origins of population counting to census taking in ancient civilizations, where such activities were pivotal in tax collection, planning military strategies, and drafting policies. But the need for people-counting extends far beyond governance. Social scientists, urban planners, and even business strategists often rely on accurate population counts to study patterns, anticipate future needs, and create value.
In the modern era, the emergence of the COVID-19 pandemic has underscored the significance of counting people in enclosed spaces. Regulatory bodies worldwide had to enforce capacity limits to curb the spread of the virus, and this could not be achieved without a reliable mechanism to count individuals in various venues accurately.
This is where technology plays a transformative role. With the advent of artificial intelligence (AI), we now have the means to automate and optimize people-counting processes. Innovative solutions, such as AI sensors, can precisely enumerate the number of individuals in a given area, providing data that can be instrumental in enhancing public health strategies, refining business operations, or advancing academic research.
So, let's explore how to harness this cutting-edge technology and utilize it effectively for capacity control, whether it's for enforcing public health regulations, planning city infrastructure, or optimizing retail spaces. Our AI sensor provides an opportunity to transform the way we perceive and manage physical spaces in a world where counting people has never been more crucial.
Software Preparation
Whether you're a seasoned programmer or just beginning your journey in the world of coding, setting up your development environment correctly is a critical first step. This guide is designed to help you prepare your Windows, Linux, or Intel Mac for Python programming by ensuring you have Python and all necessary dependencies installed. This will create a robust foundation for your coding adventures, paving the way for smooth project execution and minimizing potential roadblocks down the line.
To get started, we will focus on two primary tasks: installing Python if it's not already present on your computer, and setting up the necessary dependencies. Let's dive in and get your system ready for Python development.
For Windows, Linux, Intel Mac
1. Make sure Python is already installed on the computer. If not, visit the official Python downloads page (https://www.python.org/downloads/) to download and install the latest version of Python.
2. Install the following dependency:
pip3 install libusb1
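If you want to confirm the dependency is available before moving on, note that the libusb1 package is exposed as the usb1 module used by the capture script below, so a quick import check should suffice:
python3 -c "import usb1; print('usb1 imported successfully')"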
For M1/M2 Mac
- Install Homebrew
/bin/bash -c "$(curl -fsSL https://raw.githubusercontent.com/Homebrew/install/HEAD/install.sh)"
- Install conda
brew install conda
- Download libusb
wget https://conda.anaconda.org/conda-forge/osx-arm64/libusb-1.0.26-h1c322ee_100.tar.bz2
- Install libusb
conda install libusb-1.0.26-h1c322ee_100.tar.bz2
Step One: Collect Image Data
- Step 1. Connect the SenseCAP A1101 to your PC using a USB Type-C cable
- Step 2. Double click the boot button to enter boot mode
Create a new file on your PC and name it "capture_images_script.py".
Copy and paste the following code into "capture_images_script.py":
import os
import usb1
from PIL import Image
from io import BytesIO
import argparse
import time
import cv2
import numpy as np
from threading import Thread

WEBUSB_JPEG_MAGIC = 0x2B2D2B2D
WEBUSB_TEXT_MAGIC = 0x0F100E12

VendorId = 0x2886  # seeed studio
ProductId = [0x8060, 0x8061]


class Receive_Mess():
    '''Receives frames streamed over WebUSB from a SenseCAP A1101 and saves/displays them.'''

    def __init__(self, arg, device_id):
        self.showimg = not arg.unshow
        self.saveimg = not arg.unsave
        self.interval = arg.interval
        self.img_number = 0
        self.ProductId = []
        os.makedirs("./save_img", exist_ok=True)
        self.expect_size = 0
        self.buff = bytearray()
        self.device_id = device_id
        self.context = usb1.USBContext()
        self.get_rlease_device(device_id, False)
        self.disconnect()
        self.pre_time = time.time() * 1000
        time.time_ns()

    def start(self):
        while True:
            if not self.connect():
                continue
            self.read_data()
            del self.handle
            self.disconnect()

    def read_data(self):
        # Device not present, or user is not allowed to access device.
        with self.handle.claimInterface(2):
            # Do stuff with endpoints on claimed interface.
            self.handle.setInterfaceAltSetting(2, 0)
            self.handle.controlRead(0x01 << 5, request=0x22, value=0x01, index=2, length=2048, timeout=1000)
            # Build a list of transfer objects and submit them to prime the pump.
            transfer_list = []
            for _ in range(1):
                transfer = self.handle.getTransfer()
                transfer.setBulk(usb1.ENDPOINT_IN | 2, 2048, callback=self.processReceivedData, timeout=1000)
                transfer.submit()
                transfer_list.append(transfer)
            # Loop as long as there is at least one submitted transfer.
            while any(x.isSubmitted() for x in transfer_list):
                # reading data
                self.context.handleEvents()

    def pare_data(self, data: bytearray):
        # An 8-byte packet carrying a magic number announces the size of the next frame.
        if len(data) == 8 and int.from_bytes(bytes(data[:4]), 'big') == WEBUSB_JPEG_MAGIC:
            self.expect_size = int.from_bytes(bytes(data[4:]), 'big')
            self.buff = bytearray()
        elif len(data) == 8 and int.from_bytes(bytes(data[:4]), 'big') == WEBUSB_TEXT_MAGIC:
            self.expect_size = int.from_bytes(bytes(data[4:]), 'big')
            self.buff = bytearray()
        else:
            self.buff = self.buff + data

        if self.expect_size == len(self.buff):
            try:
                Image.open(BytesIO(self.buff))
            except:
                self.buff = bytearray()
                return
            if self.saveimg and ((time.time() * 1000 - self.pre_time) > self.interval):
                with open(f'./save_img/{time.time()}.jpg', 'wb') as f:
                    f.write(bytes(self.buff))
                self.img_number += 1
                print(f'\rNumber of saved pictures on device {self.device_id}: {self.img_number}', end='')
                self.pre_time = time.time() * 1000
            if self.showimg:
                self.show_byte()
            self.buff = bytearray()

    def show_byte(self):
        try:
            img = Image.open(BytesIO(self.buff))
            img = np.array(img)
            cv2.imshow('img', cv2.cvtColor(img, cv2.COLOR_RGB2BGR))
            cv2.waitKey(1)
        except:
            return

    def processReceivedData(self, transfer):
        if transfer.getStatus() != usb1.TRANSFER_COMPLETED:
            # transfer.close()
            return
        data = transfer.getBuffer()[:transfer.getActualLength()]
        # Process data...
        self.pare_data(data)
        # Resubmit transfer once data is processed.
        transfer.submit()

    def connect(self):
        '''Get open devices'''
        self.handle = self.get_rlease_device(self.device_id, get=True)
        if self.handle is None:
            print('\rPlease plug in the device!')
            return False
        with self.handle.claimInterface(2):
            self.handle.setInterfaceAltSetting(2, 0)
            self.handle.controlRead(0x01 << 5, request=0x22, value=0x01, index=2, length=2048, timeout=1000)
            print('device is connected')
        return True

    def disconnect(self):
        try:
            print('Resetting device...')
            with usb1.USBContext() as context:
                handle = context.getByVendorIDAndProductID(VendorId, self.ProductId[self.device_id],
                                                           skip_on_error=False).open()
                handle.controlRead(0x01 << 5, request=0x22, value=0x00, index=2, length=2048, timeout=1000)
                handle.close()
                print('Device has been reset!')
            return True
        except:
            return False

    def get_rlease_device(self, did, get=True):
        '''Turn the device on or off'''
        tmp = 0
        print('*' * 50)
        print('looking for device!')
        for device in self.context.getDeviceIterator(skip_on_error=True):
            product_id = device.getProductID()
            vendor_id = device.getVendorID()
            device_addr = device.getDeviceAddress()
            bus = '->'.join(str(x) for x in ['Bus %03i' % (device.getBusNumber(),)] + device.getPortNumberList())
            if vendor_id == VendorId and product_id in ProductId and tmp == did:
                self.ProductId.append(product_id)
                print('\r' + f'\033[4;31mID {vendor_id:04x}:{product_id:04x} {bus} Device {device_addr} \033[0m',
                      end='')
                if get:
                    return device.open()
                else:
                    device.close()
                    print(
                        '\r' + f'\033[4;31mID {vendor_id:04x}:{product_id:04x} {bus} Device {device_addr} CLOSED\033[0m',
                        flush=True)
            elif vendor_id == VendorId and product_id in ProductId:
                self.ProductId.append(product_id)
                print(f'\033[0;31mID {vendor_id:04x}:{product_id:04x} {bus} Device {device_addr}\033[0m')
                tmp = tmp + 1
            else:
                print(f'ID {vendor_id:04x}:{product_id:04x} {bus} Device {device_addr}')


def implement(arg, device):
    rr = Receive_Mess(arg, device)
    time.sleep(1)
    rr.start()


if __name__ == '__main__':
    opt = argparse.ArgumentParser()
    opt.add_argument('--unsave', action='store_true', help='do not save pictures')
    opt.add_argument('--unshow', action='store_true', help='do not show pictures')
    opt.add_argument('--device-num', type=int, default=1, help='number of devices that need to be connected')
    opt.add_argument('--interval', type=int, default=300, help='minimum time interval (ms) for saving pictures')
    arg = opt.parse_args()
    if arg.device_num == 1:
        implement(arg, 0)
    elif arg.device_num <= 0:
        raise ValueError('The number of devices must be at least one!')
    else:
        pro_ls = []
        for i in range(arg.device_num):
            pro_ls.append(Thread(target=implement, args=(arg, i,)))
        for i in pro_ls:
            i.start()
Run the Python script to start capturing images of the area in question.
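For example, to capture from a single sensor with a live preview while saving a frame at most every 500 ms, an invocation along these lines should work (the flags correspond to the argparse options defined at the bottom of the script):
python3 capture_images_script.py --device-num 1 --interval 500
Captured frames are written to the ./save_img directory created next to the script.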
The presented code leverages the usb1 library to interface with USB devices. At the heart of this program lies the Receive_Mess class, which takes charge of initiating a connection to the device, collecting incoming data, and further processing it.
In terms of image handling, the program draws on three significant libraries. The PIL (Pillow) library provides the necessary tools for dealing with images at a fundamental level. The OpenCV cv2 library is used for displaying the images on the screen, and numpy is brought into play for manipulation of image arrays.
At the execution level, the primary script instantiates an object of the Receive_Mess class and sets it into action. It offers a variety of customizable options such as enabling or disabling image display, setting the number of devices to connect with, and determining the minimum interval for saving images.
A key feature of this program is its capacity to handle multiple devices simultaneously. Depending on the number of devices specified, it can spin off several instances of Receive_Mess and execute them concurrently using separate threads.
In essence, this robust code is designed to interact with USB devices effectively, process the data received, and furnish an array of options to display and save images acquired from the connected devices.
Step Two: Image Processing and People Counting
Image processing serves as a pivotal component in our people counting process. This is where Roboflow, a cloud-based computer vision platform, comes into play. Roboflow aids in facilitating image processing and the training of people detection models.
One might wonder, why is image processing so fundamental in people counting? Here are some reasons, illustrated with examples:
- Accuracy: Image processing allows for precise people counting, even in crowded spaces. For example, in a shopping mall during peak hours, accurately determining the number of customers can be a challenging task. With effective image processing and AI models, each person can be identified and counted individually, ensuring accurate numbers.
- Real-time analysis: Image processing enables real-time people counting, which is crucial in many scenarios. For instance, in a conference hall or a concert, it's necessary to monitor the number of attendees in real-time to ensure safety and compliance with capacity regulations.
- Anonymity and privacy: In comparison to manual counting or surveillance, image processing in people counting respects privacy as it does not require personal identifiable information. This is especially important in places like hospitals or financial institutions where privacy is paramount.
- Scalability: Image processing allows the people counting system to be scalable and adaptable. For instance, the system can be adapted to count people in different scenarios - whether it's a small retail shop, a large sports stadium, or an outdoor event like a parade.
By utilizing Roboflow, we leverage sophisticated image processing techniques and AI training capabilities to achieve accurate and efficient people counting, ensuring we deliver reliable and scalable solutions for all our users.
Harnessing the power of Roboflow, our system employs state-of-the-art image processing techniques and AI training capabilities. This empowers us to achieve precise and efficient people counting, thereby ensuring the delivery of reliable and scalable solutions for our users.
To start utilizing these benefits yourself, let's dive into the initial steps:
- Set up your Roboflow account: Your journey begins with creating a Roboflow account. Visit the Roboflow website and follow the simple signup process. Once completed, log into your newly created platform.
- Import your images: With your account set up, you're now ready to bring in the images captured by the SenseCAP A1101 sensor. Roboflow offers a flexible image uploading process. If your images are stored locally on your computer, you can directly upload them into the platform. Alternatively, if you're using cloud services such as Amazon S3 or Google Cloud Storage, you can easily import your images from there. (A minimal scripted upload sketch follows this list.)
- Use Roboflow's labeling tools to mark and delimit the people present in each image. This will help train the people detection model to correctly recognize and count individuals.
- Once all the images are labeled, Roboflow will process the data and generate a training set. This dataset will include the original images along with their corresponding labels.
- Use Roboflow's algorithms to train a person detection model. During training, the model will learn to accurately identify and count people in the images.
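As a concrete starting point, here is a minimal sketch using Roboflow's Python package (pip3 install roboflow). The workspace name, project name, and API key are placeholders you would replace with your own, and the version number assumes you have already labeled the images and trained a model version in the Roboflow UI; this is one scripted route that pairs well with the capture script above, not the only way to work with the platform.
from roboflow import Roboflow
import glob

# Placeholder credentials and project names - replace with your own.
rf = Roboflow(api_key="YOUR_ROBOFLOW_API_KEY")
project = rf.workspace("your-workspace").project("people-counting")

# Upload every frame captured by capture_images_script.py.
for image_path in glob.glob("./save_img/*.jpg"):
    project.upload(image_path)

# After labeling and training a version in the Roboflow UI,
# the hosted model can be used to count people in a new frame.
model = project.version(1).model
prediction = model.predict("./save_img/example.jpg", confidence=40).json()
people_count = len(prediction["predictions"])
print(f"People detected: {people_count}")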
Step Three: Visualization and Capacity Control
Once we have processed the images and counted the number of people present using the SenseCAP A1101 sensor and Roboflow, it is important to visualize and control the capacity in real time. There are several options to accomplish this, such as using the SenseCAP Mate application or creating our own custom capacity control system.
- Using the SenseCAP Mate application:
The SenseCAP Mate app is a tool designed to work in conjunction with the SenseCAP A1101 sensor. It provides an intuitive, easy-to-use interface that displays real-time people counts and allows you to set capacity limits. You can use this app to view updated capacity and receive notifications when the set limit is reached or exceeded. The SenseCAP Mate application is a convenient, out-of-the-box solution for capacity control.
- Creating a customized capacity control system:
If you prefer to create your own capacity control system, you can use the Python programming language and libraries such as OpenCV and Flask to develop a custom interface. Here's an example of how you could implement a basic capacity control system using Flask:
from flask import Flask, render_template
import requests

app = Flask(__name__)

# Endpoint to obtain the headcount
@app.route('/count')
def get_people_count():
    # URL to obtain the head count from the SenseCAP A1101 sensor
    response = requests.get('http://localhost:5000/api/count')
    count = response.json()['count']
    return render_template('count.html', count=count)

# Endpoint to set the capacity limit
@app.route('/set_limit/<int:limit>')
def set_capacity_limit(limit):
    # URL to set the capacity limit on the SenseCAP A1101 sensor
    requests.post('http://localhost:5000/api/set_limit', json={'limit': limit})
    return 'Capacity limit updated successfully'

if __name__ == '__main__':
    app.run(debug=True)
This code is a Flask application that provides two endpoints to interface with a sensor called SenseCAP A1101. Here is the explanation of the code:
- First, the necessary modules are imported: Flask to create the web application and render_template to render HTML templates. We also import the requests module to make HTTP requests.
- An instance of the Flask application is created with the name app.
- Then, a route function (@app.route('/count')) is defined and called when the URL /count is accessed. This function is used to get the people count from the sensor.
- Within the get_people_count() function, an HTTP GET request is made to the URL http://localhost:5000/api/count to get the people count from the SenseCAP A1101 sensor. The response is stored in the response variable.
- Next, the people count value is extracted from the JSON response using response.json()['count'] and stored in the variable count.
- Finally, the HTML template count.html is rendered, passing the people count value as a parameter, using render_template('count.html', count=count). The count.html template must be previously created in the application's template directory.
- Another route function (@app.route('/set_limit/<int:limit>')) is defined and called when the URL /set_limit/<limit> is accessed. This function is used to set the capacity limit on the sensor.
- Within the set_capacity_limit(limit) function, an HTTP POST request is made to the URL http://localhost:5000/api/set_limit, passing the capacity limit as a JSON object in the request body using json={'limit': limit}.
- Finally, a string is returned indicating that the capacity limit has been successfully updated.
Finally, the file is checked for direct execution (if __name__ == '__main__') and the Flask application is started in debug mode using app.run(debug=True).
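For completeness, a minimal count.html that would satisfy this example could look like the following (a hypothetical template; place it in the templates/ directory next to the script):
<!DOCTYPE html>
<html>
  <head><title>Capacity monitor</title></head>
  <body>
    <!-- The count value is supplied by render_template('count.html', count=count) -->
    <h1>People currently inside: {{ count }}</h1>
  </body>
</html>
Assuming the application is saved as app.py, running python3 app.py starts the server in debug mode, after which http://localhost:5000/count shows the current count and http://localhost:5000/set_limit/50 would set a limit of 50, provided the sensor-side API URLs in the code are adjusted to match your actual setup.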
Results
Counting people in specific environments can yield a wealth of useful information, even outside the context of managing a pandemic. The ability to accurately count people can provide vital insights for various stakeholders, including urban planners, event organizers, retailers, and social scientists, to name a few.
For instance, city officials can use these numbers to better plan urban infrastructure, optimize public transportation, and allocate resources more efficiently. By understanding population distribution and flow, they can anticipate congestion issues and design cities that are both sustainable and livable.
In the retail industry, people counting can provide valuable data on customer behavior, peak shopping hours, and store performance. It aids in staffing decisions, layout changes, and promotional strategies, ultimately improving the shopping experience and boosting sales.
Event organizers can use people counting for capacity control, ensuring safety and compliance with regulations. For concerts, conferences, or sporting events, an accurate count can inform logistics, security measures, and emergency planning.
For social scientists, the data gathered can support research on human behavior, social dynamics, and demographic trends. It can also help to monitor and assess the impact of specific interventions or policy changes.
On the implementation side, the Flask example above ties these pieces together: the /count endpoint retrieves the headcount from the SenseCAP A1101 sensor and displays it in an HTML template, while the /set_limit/<int:limit> endpoint lets an operator adjust the capacity limit with a simple POST request to the sensor.
In conclusion, the importance of counting people in various environments extends beyond health crises. The data it provides is invaluable for decision-making processes, strategic planning, and overall performance optimization across various fields.