Nowadays most advertising is targeted: whether you are browsing the web, on Facebook, Twitter, or almost any other social network, you see ads matched to your interests. BUT that is not the case on the streets. When you go out you are shown a wide range of advertisements, from things you will never use to things that might be useful but expensive, and everything in between. Street advertisements are not really targeted, and they lack the statistics and impact they could be generating. And sometimes, just when a rotating screen finally shows an ad you like, it changes and you are lost in a sea of ads again! That is quite a shame in the age of IoT, image manipulation, and computer vision; I think we can do better and give a much nicer experience to both businesses and consumers.
I will build an experiential advertising platform. Imagine walking through the streets, minding your own business, when you glance at a spot where an ad should be and, within seconds, find yourself looking at an incredible product you would like to own, an amazing offer, or a service you really need right now!
It will help advertisers by extending ad targeting to digital out-of-home spaces, and by measuring ad consumption to help them understand their audience better. Publishers will have a seamless way to capture the attention of their consumers and to better monetize their existing infrastructure. It will be an intelligent digital-out-of-home ad platform that transforms traditional digital outdoor advertising screens into targeted ad spots.
Existing out-of-home ad solutions do not do this: they base their targeting on data models that seem to work but are backed by little evidence. And it is quite different from what Google and others do, because it requires hardware outside the user's possession.
We will base the system on this connection diagram:
The first step is to put your Theta V camera in Developer mode. This is a simple but somewhat slow process: it usually takes 2 or 3 days for the engineers in charge of Ricoh Theta to unlock Developer mode for your camera.
Read the following documentation by Jesse Casman to set your camera into Developer mode:
While your Developer mode is being approved, you can start downloading the latest version of Android Studio at the following link:
To have the best experience with your Ricoh Theta V camera, we recommend downloading ALL the software that Ricoh Theta offers (it is FREE):
To control the camera using the Ricoh Theta API, you will need software installed that can execute programs written in Python. This can be done with Terminal, CMD, or an IDE such as Anaconda; in this tutorial we will use Anaconda to run the programs:
The following libraries have to be installed in Anaconda.
To install all the packages, you have to run the following commands in the Anaconda Prompt.
conda install -c anaconda pip
pip install pygame
conda install -c conda-forge pytest-shutil
conda install -c conda-forge opencv
(Note: pickle ships with the Python standard library, so it does not need to be installed separately.)
Once all the packages are installed, the programs in the "Python Scripts" folder should run without errors. All the files have to be run from the "Python Scripts" folder, since it also contains the deep learning models used to identify the gender and age of the subjects viewed by the camera.
It is necessary at least once to run the "Save Pickle.py" program.
After running it for the first time, the other programs should run without problems.
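The exact contents of "Save Pickle.py" are in the project folder, but its role is simply to create the pickle file that the other scripts read. A minimal sketch of that idea (the file name `check.pkl`, the helper names, and the default value are my assumptions, not necessarily the project's exact ones):

```python
import pickle

# Shared state file read by the other scripts.
# "check" encodes the detected audience: 0 = mostly men,
# 1 = mostly women, 2 = mixed (alternate ads).
PICKLE_FILE = "check.pkl"

def save_check(value, path=PICKLE_FILE):
    # Serialize the current "check" value to disk.
    with open(path, "wb") as f:
        pickle.dump(value, f)

def load_check(path=PICKLE_FILE):
    # Read the last "check" value written by the camera script.
    with open(path, "rb") as f:
        return pickle.load(f)

if __name__ == "__main__":
    # Default to 2 (mixed audience) until the camera reports real counts.
    save_check(2)
```

Running it once creates the file, so the detection and player scripts can then open it without a "file not found" error.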
In order to run the project without problems, we need to connect the laptop to the camera via Wi-Fi.
The process to activate the camera's access point (AP) is the following:
1.- Press the power button on the camera to turn the power on.
2.- Press the wireless button to turn the wireless function on.
- The wireless lamp should light in red.
- The camera's SSID should now appear in the Wi-Fi settings of your laptop (or smartphone; once a smartphone is connected via Wi-Fi, you can also use it to shoot remote photos and view them).
Select the SSID of the camera from the network list and enter the password:
The serial number printed on the base of the camera is the same as the SSID and password.
- The SSID is "THETA" plus the numbers in the (B) section (in this case "THETA001017"). The password is the numbers in the (A) section (in this case "00001017").
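As a small sanity check, the serial-number rule above can be expressed in code (a hypothetical helper, not part of the project scripts):

```python
def theta_credentials(serial):
    """Derive the camera's AP credentials from the serial number
    printed on its base: the password is the full serial (section A),
    and the SSID is "THETA" plus its last six digits (section B)."""
    return "THETA" + serial[-6:], serial

# theta_credentials("00001017") -> ("THETA001017", "00001017")
```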
Connect your Laptop to the camera.
In order for the product to work as a whole, we need to run the following Python programs in parallel in two different kernels; this is easily done with Anaconda.
This program connects to the Theta API, takes a 360° photo, and processes it with DL models to extract gender and age data:
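The Theta API is Ricoh's implementation of the Open Spherical Camera (OSC) protocol: commands are POSTed as JSON to the camera at its AP address. A sketch of how the script might build and send a `camera.takePicture` command (the helper names are mine, and error handling is simplified):

```python
import json
import urllib.request

# Default address of the THETA when the laptop joins its AP.
CAMERA_URL = "http://192.168.1.1/osc/commands/execute"

def osc_command(name, parameters=None):
    """Build the JSON payload for an OSC command, e.g. camera.takePicture
    or camera.delete (used later to free the camera's memory)."""
    payload = {"name": name}
    if parameters:
        payload["parameters"] = parameters
    return payload

def execute(command):
    """POST a command to the camera; requires the Wi-Fi connection above."""
    req = urllib.request.Request(
        CAMERA_URL,
        data=json.dumps(command).encode(),
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        return json.load(resp)

if __name__ == "__main__":
    print(execute(osc_command("camera.takePicture")))
```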
The process to obtain age and gender is done through the following models.
- It detects the faces inside the image using Haar cascades.
- It processes them using DL models to predict age and gender.
Once processing is complete, we delete the photo from the camera to save memory in the long run.
Then we count the people detected (by demographic), and we store the result in the variable "check" in a pickle file.
The "check" variable is updated every 20 seconds, while the ad rotation itself should change no faster than every 30 seconds, because each ad lasts approximately 30 seconds. This way the ad being played is always the most effective one for the audience detected at any given moment.
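The counting step can be sketched as a small function that maps the demographic counts to the "check" value and persists it (the threshold logic and names here are illustrative, not the script's exact code):

```python
import pickle

def compute_check(men, women):
    """Map detected face counts to the shared "check" value:
    0 = mostly men, 1 = mostly women, 2 = mixed or nobody detected."""
    if men > women:
        return 0
    if women > men:
        return 1
    return 2

def update_check(men, women, path="check.pkl"):
    # Persist the value so the ad-player script can read it each cycle.
    with open(path, "wb") as f:
        pickle.dump(compute_check(men, women), f)
```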
This program plays the ads, and it works like so:
If the "check" variable equals 0, it means that most of those around the camera are men. Thus the program will play only ads targeted to men.
If the "check" variable equals 1, most of the subjects around the camera are women. Thus the program will reproduce only ads for women.
If the "check" variable equals 2, then the program will play one ad for women and then one for men, in succession.
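Putting the three cases together, the player's selection logic can be sketched like this (the playlist names are placeholders; the real script uses pygame to actually render the ads):

```python
def next_ads(check, men_ads, women_ads):
    """Return the list of ads to play for the current "check" value."""
    if check == 0:          # mostly men around the camera
        return men_ads
    if check == 1:          # mostly women around the camera
        return women_ads
    # mixed audience: one ad for women, then one for men, in succession
    return women_ads[:1] + men_ads[:1]
```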
The following image is a screenshot of how the code works:
Each ad lasts approximately 30 seconds.
Now to see it in action!:
Original Install Guide: http://theta360.guide/plugin-user-guide/main/install/
Connect your THETA to your computer and click on the Install button.
The Ricoh Desktop Application will automatically start for plug-in installation.
As the THETA V can save multiple plug-ins to internal storage, you need to specify the active plug-in to launch when the camera is put into plug-in mode.
There are many ways to select the active plug-in.
Developers can use the open API of the THETA to set the plug-in. Here we will be using the official desktop app to specify the active plug-in.
Connect your desktop computer or laptop to the THETA with a USB cable. Under the file menu, select Plug-in management....
Select the plug-in that you want to use:
It uses OpenCV to process images and generates interesting filters for the camera, in order to experiment later with Haar cascades.
- Image Equalization (color).
- Binarization of image or Threshold (Red, Green and Blue).
Once activated, we can select the desired filter using the "Mode" button; the color of the Wi-Fi symbol indicates which effect will be applied to the image.
- Blue: Image Equalization (color).
- Green: Image Binarization or Threshold (Red, Green and Blue).
- Cyan: Grayscale.
- Magenta: Blur.
- Yellow: Erosion-Dilatation.
- White: Negative.
Once the filter is selected, we only have to press the shutter button to take the image and save it; the image will be saved in a folder called "Filtered Images".
Note: while the filter is being applied, the Wi-Fi symbol will flash. Do not press any other button until it stops blinking; this process takes 1-4 seconds depending on the filter.
The filters were made with the following code:
Image Equalization (color):
fileUrl = String.format("%s/%s_equalize.jpg", Constants.PLUGIN_DIRECTORY, dateTimeStr);
Mat rgbImage = new Mat(img.size(), img.type());
Imgproc.cvtColor(img, img, Imgproc.COLOR_RGB2YCrCb);
List<Mat> channels = new ArrayList<Mat>();
Core.split(img, channels);
Imgproc.equalizeHist(channels.get(0), channels.get(0));
Core.merge(channels, img);
Imgproc.cvtColor(img, img, Imgproc.COLOR_YCrCb2BGR);
- Binarization of image or Threshold (Red, Green and Blue):
fileUrl = String.format("%s/%s_threshold.jpg", Constants.PLUGIN_DIRECTORY, dateTimeStr);
Mat rgbImage = new Mat(img.size(), img.type());
Imgproc.cvtColor(img, img, Imgproc.COLOR_RGB2YCrCb);
Imgproc.threshold(img, img, 127.0, 255.0, Imgproc.THRESH_BINARY);
Imgproc.cvtColor(img, img, Imgproc.COLOR_YCrCb2RGB);
- Grayscale:
fileUrl = String.format("%s/%s_gray.jpg", Constants.PLUGIN_DIRECTORY, dateTimeStr);
Imgproc.cvtColor(img, img, Imgproc.COLOR_RGB2GRAY);
- Blur:
fileUrl = String.format("%s/%s_blur.jpg", Constants.PLUGIN_DIRECTORY, dateTimeStr);
Imgproc.cvtColor(img, img, Imgproc.COLOR_RGB2BGR);
Imgproc.blur(img, img, new Size(25,25));
- Erosion-Dilatation:
fileUrl = String.format("%s/%s_erodedilate.jpg", Constants.PLUGIN_DIRECTORY, dateTimeStr);
Imgproc.cvtColor(img, img, Imgproc.COLOR_RGB2GRAY);
Imgproc.threshold(img, img, 0, 255, Imgproc.THRESH_BINARY_INV+Imgproc.THRESH_OTSU);
- Negative:
fileUrl = String.format("%s/%s_negative.jpg", Constants.PLUGIN_DIRECTORY, dateTimeStr);
Core.bitwise_not(img,img);
In a future version of the project we would like to experiment with running Haar cascades and other DL algorithms directly on the camera, making the concept more accessible. For the moment it works well, and we can see how it could be improved and even deployed for commercial applications.
We want to test it more thoroughly, this time on a much bigger screen.
We have to find hardware that can run the Haar cascades and DL algorithms at a much lower price, instead of using a gaming laptop haha.
After that, we want to deploy it in a more commercial setting such as retail stores or urban centers.
Hopefully you all liked the project, it was quite a challenge to make.
I want to extend my thanks to the RICOH team and the contest organizers for their great support, and especially to Jesse Casman for all the follow-through and easy access he provided.