Museums, the houses of the muses: some people love them, while for others they are nothing more than boring collections of lifeless things. This is where our application, Ablativo, comes in, aiming to bring even the least interested people into the wonderful world of museums.
The Idea
Let's start with a short introduction to the idea.
PROBLEM
For most people, museums are boring, out of date, and still too far from the contemporary idea of interactivity. We want to overturn this preconception and bring in non-art lovers as well, providing them with a different way to learn new things.
SOLUTION
Make artworks come to life!
- The user connects to the Ablativo mobile application;
- When he/she approaches an artwork, the artwork starts a new conversation with the user via chat;
- The user can now choose which question to ask the statue (for now, predefined ones);
- The artwork, simulated by a bot, can answer questions or even make proposals to the user.
While the user visits the museum, the mobile application collects data about his/her personal experience. At the end of the visit, a machine learning algorithm combines these data with those provided by the environmental sensors and processes everything to produce a pleasant song: a final romantic gift to remember the emotions experienced during the visit.
BENEFITS
USERS:
- Visits will no longer be boring, thanks to the active interaction with the artworks;
- Users are not forced to read the entire description; they can ask only about the things that interest them;
- Experienced and less experienced users can take advantage of targeted visits;
- Users can review the questions asked and keep a memory of their visit in the form of music.
MUSEUM:
- Failure detection for the embedded devices inside the museum;
- The current environmental status of the museum, showing the telemetries collected by the embedded sensors;
- The number of visitors inside the museum using the mobile application;
- The number of likes that each artwork receives.
More about this section can be found in the Design document on GitHub, but first, let's see the service demo.
Structure overview
The following paragraphs briefly describe the execution flow of the service and give a high-level overview of the overall components.
1. The main flow of the mobile application starts with the interaction between the smartphone and the beacon sensors (one for each room) through the BLE protocol.
2. When the visitor enters an area covered by a beacon, the mobile device recognizes the id of the sensor and sends it to the back-end deployed on EC2.
3. The back-end searches the DynamoDB database for the data of the corresponding room, and a bot starts sending messages to the user's chat, offering predefined questions and answers.
4. During the visit, both the smartphone and the embedded devices retrieve values from their sensors. The environmental telemetries are stored inside the database, while the personal visitor telemetries are stored only inside the smartphone, for privacy reasons (and to reduce the final cost of the service). The sensors are managed by IoT Core.
5. At the end of the visit, all the data connected to the visitor are used to reconstruct his/her activity/emotions and are converted into musical notes, which are then used as input for a neural network that generates a melody.
6. At the same time, the dashboard for data analysis shows the curator the environmental telemetries and statistics about the number of visitors inside the museum. The interaction with the cloud services is managed by AWS Amplify, and in particular the authentication service by Cognito.
More about this section can be found in the Architecture document on GitHub.
NOTE: For a real deployment, we may use Amazon Timestream to reduce the cost of the telemetries.

The STM board and Mbed OS
Both the beacon and the environmental telemetries are implemented on a single board, the STM B-L475E-IOT01A1, running Mbed OS.
- The beacon uses the proximity perception of Bluetooth Low Energy technology to transmit a universally unique identifier (UUID), which is then read by a specific app or operating system. Once the signal is read, the app can perform various scheduled actions.
- The environmental telemetries collected by the board are sent to the cloud using the MQTT protocol, through the Wi-Fi module. The benefits of this procedure are twofold: aside from the obvious environmental check, these messages also act as heartbeats for a simple failure detector. The data collected are Temperature, Humidity, and Pressure.
Let's analyse the code
- For the sensors, we can use the drivers in the BSP_B-L475e-IOT01A library, which provides a very simple interface. Once initialized, we can extract the telemetries in the main loop by calling the corresponding function for each value.
/* Initialize sensors */
BSP_TSENSOR_Init();
BSP_HSENSOR_Init();
BSP_PSENSOR_Init();
/* Extract telemetries */
std::string temp = std::to_string(BSP_TSENSOR_ReadTemp());
std::string hum = std::to_string(BSP_HSENSOR_ReadHumidity());
std::string press = std::to_string(BSP_PSENSOR_ReadPressure());
- For the Wi-Fi connection, we first need to include the wifi-ism43362 library, which contains the drivers for the built-in component of the same name. Then we can create a new instance of the standard Mbed WiFiInterface (which inherits from NetworkInterface) and finally load the connection parameters, which can be found in mbed_app.json.
/* Get the default Wi-Fi interface and connect with the credentials from mbed_app.json */
WiFiInterface* network = WiFiInterface::get_default_instance();
network->connect(MBED_CONF_APP_WIFI_SSID, MBED_CONF_APP_WIFI_PASSWORD, NSAPI_SECURITY_WPA_WPA2);
- For the AWS connection, we need a TLSSocket, which implements a TLS stream over the existing Socket transport. The complete configuration first requires setting up all the certificates (in MQTT_server_setting.h), then opening the connection to the correct host and port (defined in the same file).
TLSSocket* socket = new TLSSocket;
socket->open(network);
socket->set_root_ca_cert(SSL_CA_PEM);
socket->set_client_cert_key(SSL_CLIENT_CERT_PEM, SSL_CLIENT_PRIVATE_KEY_PEM);
socket->connect(MQTT_SERVER_HOST_NAME, MQTT_SERVER_PORT);
- For the MQTT connection, there are a few different library options; however, the mbed-mqtt library seems to be the best at the moment. Once we have created the MQTTPacket_connectData instance and configured it with the client ID provided by AWS IoT Core, we only need to create an MQTTClient over the previously generated socket and connect.
MQTTPacket_connectData data = MQTTPacket_connectData_initializer;
data.cleansession = false;
data.MQTTVersion = 4; // 3 = 3.1 4 = 3.1.1
data.clientID.cstring = (char *)MQTT_CLIENT_ID;
mqttClient = new MQTTClient(socket);
mqttClient->connect(data);
- The initialization is complete; it is time for the main loop. At each step, we extract the telemetries and then send them through MQTT using the following few lines of code:
/* Compose message */
std::string telemetries = std::string("{\"deviceId\":") + std::to_string(DEVICE_ID) + ",\"temp\":" + temp + ",\"hum\":" + hum + ",\"press\":" + press + "}";
char* buf = (char*)telemetries.c_str();
/* Setup MQTT message instance */
MQTT::Message message;
message.retained = false;
message.dup = false;
message.payload = (void*)buf;
message.qos = MQTT::QOS0;
message.payloadlen = strlen(buf);
/* Publish */
mqttClient->publish(MQTT_TOPIC_PUB, message);
- For the beacon functionality, we have the TARGET_CORDIO_BLUENRG library and a file called beaconService.h containing the corresponding class. We can load the service by creating a new thread (before the loop) that initializes a new BLE instance and then builds the beacon on top.
/* make thread (inside main) */
Thread thread_ble;
thread_ble.start(callback(ble_thread));

/* ble_thread function */
void ble_thread() {
    printf("BLE running...\r\n\n");
    BLE &ble = BLE::Instance();
    ble.onEventsToProcess(schedule_ble_events);
    Beacon beacon(ble, event_queue);
    beacon.start();
}
It is time to download the code from the corresponding repo. However, before compiling, remember to:
1. Set up MQTT_server_setting.h with your credentials;
2. Set the correct museum and device ID in mbed_app.json (unique for each device);
3. Set "WiFi SSID" and "WiFi Password" in mbed_app.json;
4. Look at this great tutorial to understand how to set up AWS IoT Core.
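For reference, a minimal mbed_app.json might look like the following sketch. The Wi-Fi keys follow the standard Mbed config convention behind MBED_CONF_APP_WIFI_SSID, while the device-id and museum-id keys are assumptions, so check the repo for the exact names.
{
    "config": {
        "wifi-ssid":     { "value": "\"MuseumWiFi\"" },
        "wifi-password": { "value": "\"password123\"" },
        "device-id":     { "value": "1" },
        "museum-id":     { "value": "1" }
    }
}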
From IoT Core to DynamoDB
At this point, we can check the incoming messages from the AWS console.
However, we also need to store the telemetries. Thus, let's go to the DynamoDB page and make a new table, using the create table button.
For convenience, let's use the deviceId as primary key and the timestamp of the telemetry as sort key (this makes it easier to retrieve the telemetries for a given interval of time when generating the final song).
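With this key schema, fetching the telemetries of one device for a time window is a single query. Here is a minimal sketch with the AWS SDK for JavaScript; the table name and region are assumptions.
// Query the telemetries of device 1 within a time window (Node.js, AWS SDK v2)
const AWS = require('aws-sdk');
const db = new AWS.DynamoDB.DocumentClient({ region: 'eu-west-1' });

const params = {
  TableName: 'ablativo-telemetries', // assumed table name
  KeyConditionExpression: 'deviceId = :id AND #ts BETWEEN :from AND :to',
  ExpressionAttributeNames: { '#ts': 'timestamp' }, // "timestamp" is a DynamoDB reserved word
  ExpressionAttributeValues: { ':id': 1, ':from': 1588000000000, ':to': 1588003600000 }
};

db.query(params, (err, data) => {
  if (err) console.error(err);
  else console.log(data.Items); // all telemetries of the device in the interval
});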
AWS makes it super easy to connect the IoT devices with the database. Indeed, we can simply set up a new rule on the IoT Core console.
Let's go to Act -> Rules, click on the create button, and fill in the form as follows:
- For the name and description, use whatever you want;
- Since we only have incoming telemetries, we can simply use FROM '#' in the query statement, e.g. SELECT *, timestamp() AS timestamp FROM '#' (timestamp() returns the current timestamp);
- Add the action "insert a message into a DynamoDB table" and configure it as shown here.
The Web dashboard
The management and monitoring module; practically speaking, the admin console. It is implemented as a React + Material UI web application, in order to have a fully responsive tool accessible from everywhere, with great performance and a pleasant Material Design interface. It provides information about:
- The current environmental status of the museum, showing the telemetries collected by the embedded sensors;
- The number of visitors inside the museum using the mobile application;
- The number of likes that each artwork receives.
In future releases, it may include other functionalities, such as a calendar, fast Q&A personalization, and so on.
In this article, we assume the reader is already familiar with the technologies above, so we only introduce the AWS services.
For easy deployment, we decided to take advantage of the Amplify framework. To install the CLI, we can run:
npm install -g @aws-amplify/cli
and then configure it by:
amplify configure
At this point, in order to get our Amplify project started, we run the following command to initialize and configure the project:
amplify init
AUTHENTICATION
The Amplify Framework uses Cognito as its main authentication provider, so authentication can be easily implemented by following the few steps at this link in the documentation. The service also provides built-in forms and a great management console for the users.
Since only the museum curators can access the web application, the registration form is disabled and only the admin can add new users from the AWS console.
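As a minimal sketch of the Cognito integration (using the withAuthenticator helper from @aws-amplify/ui-react; the exact components used in the project may differ):
// Wrap the dashboard behind the Cognito sign-in form
import React from 'react';
import Amplify from 'aws-amplify';
import { withAuthenticator } from '@aws-amplify/ui-react';
import awsconfig from './aws-exports'; // generated by amplify init

Amplify.configure(awsconfig);

function App() {
  return <div>Ablativo admin console</div>; // placeholder content
}

// Every page is now accessible only after a successful Cognito login
export default withAuthenticator(App);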
FAILURE DETECTOR
AWS IoT Core constantly checks the status of the connected devices and publishes messages over the MQTT broker, so that we can easily detect failures.
The AWS Amplify PubSub category provides connectivity with cloud-based message-oriented middleware in order to create real-time interactive experiences. Thus, we can intercept the disconnections.
The configuration is very simple and is explained at this link. However, with the "educate" account we cannot create new IAM roles, so this feature is currently provided only as a PoW.
Every time a device loses the connection, a pop-up is shown on the Dashboard.
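Conceptually, the detector boils down to subscribing to the AWS IoT lifecycle topics through Amplify PubSub. A sketch under those assumptions (the endpoint, region, and pop-up helper are placeholders):
import Amplify, { PubSub } from 'aws-amplify';
import { AWSIoTProvider } from '@aws-amplify/pubsub';

// Register the IoT provider (placeholder endpoint and region)
Amplify.addPluggable(new AWSIoTProvider({
  aws_pubsub_region: 'eu-west-1',
  aws_pubsub_endpoint: 'wss://xxxxxxxxxxxx-ats.iot.eu-west-1.amazonaws.com/mqtt'
}));

// AWS IoT publishes a lifecycle event here whenever a client disconnects
PubSub.subscribe('$aws/events/presence/disconnected/+').subscribe({
  next: (msg) => showFailurePopup(msg.value.clientId), // hypothetical pop-up helper
  error: (err) => console.error(err)
});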
DATABASE INTERACTION
The API category with GraphQL provides a very convenient way to interact with the database, directly through React. Unfortunately, however, the Educate account does not provide access to these functionalities. Thus, for the proof of work, we decided to simply implement a REST API on the backend server deployed on EC2 (a sketch follows).
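For illustration, a dashboard component can then read the data with plain HTTP calls; a minimal sketch (the host and path are placeholders, not the project's actual endpoints):
// Fetch the telemetries of a device from the EC2 backend
const API_URL = 'http://ec2-xx-xx-xx-xx.compute.amazonaws.com/api'; // placeholder host

async function getTelemetries(deviceId) {
  const res = await fetch(`${API_URL}/telemetries/${deviceId}`);
  if (!res.ok) throw new Error(`Request failed: ${res.status}`);
  return res.json();
}

getTelemetries(1).then(console.log);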
The Mobile app
It is an open-source application available on our git repository and provided as an APK. It is a hybrid app developed with React Native, a cross-platform mobile development framework that allows programmers to create apps for both iOS and Android in one simple language, JavaScript. React Native runs on React, an open-source library for building UIs with JavaScript; through a set of components, this framework builds a mobile application with a native look and feel. Thanks to its rather gentle learning curve and well-balanced performance, React Native is the perfect compromise for our application.
Getting Started
First of all, let's assume the reader is familiar with the basic concepts of mobile development. We need the react-native-cli, Android Studio, and the latest available JDK configured (see how here). Then, we can download the Ablativo_Mobile project from Git and install the required packages with
npm install
To start the application
npx react-native run-android
That command will launch a Metro Bundler server, and if everything is set up correctly, we should see Ablativo running in our Android emulator.
Project structure
App.js is where all the magic begins; essentially, it dispatches events to send the user to the right place at the right time. If the user has already logged into the application, he/she is sent to the Application flow; otherwise, to the Authentication flow.
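Conceptually, it reduces to a conditional switch like the following sketch (component and hook names are illustrative, not the project's actual ones):
// App.js (sketch): route the user to the right flow based on the auth state
import React from 'react';
import ApplicationFlow from './flows/ApplicationFlow';       // hypothetical
import AuthenticationFlow from './flows/AuthenticationFlow'; // hypothetical
import useAuth from './hooks/useAuth';                       // hypothetical

export default function App() {
  const { isLoggedIn } = useAuth();
  return isLoggedIn ? <ApplicationFlow /> : <AuthenticationFlow />;
}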
As you can see, the structure is divided into two main flows: Application and Authentication. In the authentication flow, there is nothing more than what the name suggests; during registration, the user can choose his/her username and mentor. Things are different in the application flow, where the main actors are the Chat, Home, and Profile screens.
Profile: user details are available on this screen, along with a log of all the user's visits, where he/she can listen to the music generated during that particular visit.
Home: this screen shows the current room the user is in and handles the visit logic of the application. Let's see how it works.
Once we retrieve the roomID of the room the user is in through a REST call, we get the related values, such as the artworks, the room name, and so on. The user sees all the surrounding artworks and can leave positive or negative feedback on them; shaking the device, instead, gives positive feedback to the room (see the sketch below).
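The article does not name the shake-detection library; one way to sketch it is to watch for spikes in the acceleration magnitude with react-native-sensors (the threshold and the feedback helper are assumptions):
// Detect a shake as a spike in acceleration magnitude
import { accelerometer, setUpdateIntervalForType, SensorTypes } from 'react-native-sensors';

setUpdateIntervalForType(SensorTypes.accelerometer, 100); // sample every 100 ms

const SHAKE_THRESHOLD = 25; // m/s^2, to be tuned empirically (assumption)

const subscription = accelerometer.subscribe(({ x, y, z }) => {
  const magnitude = Math.sqrt(x * x + y * y + z * z);
  if (magnitude > SHAKE_THRESHOLD) {
    sendRoomFeedback(roomId, +1); // hypothetical REST helper
  }
});

// Remember to call subscription.unsubscribe() when leaving the screen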
When the user taps the "inizia visita" ("start visit") button, the smartphone starts collecting data from the sensors in order to generate the music (a deeper explanation follows later).
Beacon
We have talked about getting the roomID, but how does it work?
Before we start, note that you will not be able to use Bluetooth inside the emulator; you will need a device with at least Android 5.0 installed.
Then we need to define the type of beacon and a region to look for, composed of an identifier (the room name), the UUID of the beacon, and finally the major and minor values.
At this point, we take the closest beacon in the list; its minor value represents the specific device ID, which is used to retrieve the room details (room name, list of artworks, and so on) through a very simple REST call to our back end.
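The article does not name the beacon library either; with react-native-beacons-manager (a common choice), the ranging logic could be sketched as follows. The UUID and the fetchRoomDetails helper are placeholders.
// Range iBeacons and pick the closest one (Android flavour of the API)
import { DeviceEventEmitter } from 'react-native';
import Beacons from 'react-native-beacons-manager';

const REGION = 'Room';                               // region identifier
const UUID = 'xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx'; // beacon UUID (placeholder)

Beacons.detectIBeacons();
Beacons.startRangingBeaconsInRegion(REGION, UUID);

DeviceEventEmitter.addListener('beaconsDidRange', ({ beacons }) => {
  if (!beacons.length) return;
  // The closest beacon is the one with the smallest estimated distance
  const closest = beacons.reduce((a, b) => (a.distance < b.distance ? a : b));
  fetchRoomDetails(closest.minor); // hypothetical REST helper keyed on the device id
});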
Chat: in the project structure, we can see that three files are involved: chat.js, chatList.js, and finally chatWrapper.js. In this part, Ablativo has a really basic and intuitive UI that follows the classic chat design for mobile applications; accordingly, in the chat list we retrieve all the artworks in the current room, allowing the user to interact with them. Chat.js is the core of this section, built on the skeleton of React-Native-Gifted-Chat, a well-balanced library that provides the most basic functions with a linear and simple UI (a minimal sketch follows below).
In a chat, we have three main actors: User, Artwork, and finally Mentor. The Mentor's interaction in this phase is very limited. Indeed, it will mostly moderate the chat so that the user avoids always repeating the same questions.
Artworks and mentor have a predetermined list of questions and answers. The user can choose the questions each time (through quick replies) and the system will simulate the interaction of the artworks as if they were real.
On the backend side, all the messages are saved in a table so that they can be retrieved at any moment.
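A minimal Gifted Chat screen, just to give the idea of how chat.js is built (the message handling and quick replies are simplified; identifiers are illustrative):
import React, { useCallback, useState } from 'react';
import { GiftedChat } from 'react-native-gifted-chat';

export default function ArtworkChat() {
  const [messages, setMessages] = useState([]);

  const onSend = useCallback((newMessages = []) => {
    setMessages(prev => GiftedChat.append(prev, newMessages));
    // here the app would also POST the message to the backend (not shown)
  }, []);

  return (
    <GiftedChat
      messages={messages}
      onSend={onSend}
      user={{ _id: 1 }} // the visitor
      onQuickReply={(replies) => {
        // send the chosen predefined question to the bot (simplified)
        console.log(replies.map(r => r.title));
      }}
    />
  );
}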
When a user finishes their visit inside the museum, Ablativo generates a melody based on the parameters of the environmental sensors and user activity recognition. In this part, we deal with the technical aspects of creating a personalized melody for the user.
DATA COLLECTION
First, we extract the values from the sensors. We decided to use:
- 3 ambient sensors: Temperature, Humidity, and Pressure, with the STM board. These are not personal values, but they can affect the emotional condition of the users. See The STM board and Mbed OS section above for the implementation.
- 3 personal sensors: Accelerometer, Gyroscope, and Heart Rate Sensor. These values are different for every user because they depend on the activity they are doing. The first two are physically retrieved by the smartphone, while the heart rate sensor is currently simulated with random integers in the range 50-180 bpm:
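(A minimal equivalent of the simulation; the real code lives in the repo, this sketch only reproduces the stated 50-180 bpm range.)
// Simulate the heart rate with a random integer between 50 and 180 bpm
const simulateHeartRate = () => Math.floor(Math.random() * (180 - 50 + 1)) + 50;

console.log(simulateHeartRate()); // e.g. 124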
CONVERT DATA INTO MUSICAL NOTES
Now we have to convert the data into musical notes. This process has to produce a mapping between numerical data and musical notes, represented in MIDI piano style.
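The exact mapping is not shown here; as an illustration, one could linearly scale each reading into a MIDI pitch range, as in this hypothetical sketch:
// Scale a sensor reading into a MIDI pitch (C3-C5 here, chosen arbitrarily)
function valueToPitch(value, min, max, lowPitch = 48, highPitch = 72) {
  const normalized = (value - min) / (max - min); // 0..1
  return Math.round(lowPitch + normalized * (highPitch - lowPitch));
}

console.log(valueToPitch(22, 15, 30)); // e.g. 22 degC in a 15-30 range -> 59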
We used two libraries to reach this goal:
- NoteSequence: this is a JSON-like representation of the MIDI notes:
/*** EXAMPLE ***/
SEQUENCE = {
notes: [
{pitch: 60, startTime: 0.0, endTime: 0.5},
{pitch: 60, startTime: 0.5, endTime: 1.0},
{pitch: 67, startTime: 1.0, endTime: 1.5},
{pitch: 67, startTime: 1.5, endTime: 2.0}
],
totalTime: 2
};
- MidiWriterJS: a JavaScript library providing an API for generating expressive multi-track MIDI files, used in this case to generate the URI of the track:
// Save Music
const writer = new MidiWriter.Writer(track);
musicURI = writer.dataUri();
MUSIC GENERATION
In the world of machine learning, there are many ways to generate music. For our purposes, we chose these technologies:
- Magenta.js: an open-source JavaScript API for using the pre-trained Magenta models in the browser. It is built with TensorFlow.js, which allows for fast, GPU-accelerated inference. It allows us to manipulate the NoteSequence representation, which must be quantized to work with the RNN model:
const quantizedSequence = core.sequences.quantizeNoteSequence(SEQUENCE, 1)
- MusicRNN: a pre-trained model from the Magenta repositories. It takes a NoteSequence as input and continues it, staying as close as possible to the same style. There are different models to use; we chose MelodyRNN with the mono_rnn configuration. This configuration acts as a baseline for melody generation with an LSTM model. It uses basic one-hot encoding to represent extracted melodies as input to the LSTM. While basic_rnn is trained by transposing all inputs to a narrow range, mono_rnn can use the full 128 MIDI pitches:
// Continuing SEQUENCE with the RNN
const mrnn = require('@magenta/music/node/music_rnn');
const model = new mrnn.MusicRNN('https://storage.googleapis.com/magentadata/js/checkpoints/music_rnn/melody_rnn');

const rnn_steps = 60;        // number of quantized steps to generate
const rnn_temperature = 1.5; // higher values produce more random melodies

// initialize() returns a promise, so we wait for it before continuing the sequence
model.initialize()
  .then(() => model.continueSequence(quantizedSequence, rnn_steps, rnn_temperature));
Final evaluation
We divided it into two main sections: User experience and Technical aspects.
User experience
We first made a survey, sharing it with our families, friends, and friends of friends. From this feedback and these suggestions, we (hopefully) understood where to make some changes, and especially whether the final user would really like and actually use the final service.
Here are the metrics for this section:
- Accessibility: any type of user who knows how to use a chat must be able to use Ablativo.
- Simplicity: Ablativo must be intuitive: the museum visitor can easily sign in/sign up and choose his/her mentor; the visitor can easily interact with the mentor/statue, which must provide an extensive set of questions; the museum curator can easily understand and see the statistics of the metrics collected by the service.
- Usability: The response times of the chat must be short; otherwise, the user may stop using the application.
- Graphic interface: The overall interface must be user-friendly and pleasant.
- Privacy & Security: Ablativo does not store any sensitive data. The passwords are hashed, as per standard requirements.
Technical aspects (Beacon-Sensors board)
- Accessibility: indicates how easily the technology can be used by the user.
- Accuracy: the average error in calculating the distance from the statues.
- Precision: how the system behaves over time; how similar the various measurements are to each other (which does not necessarily mean that the system is accurate).
- Robustness: the system's ability to resist interference and noise from nearby sensors.
- Scalability: the scalability of a system ensures its normal operation even when the scope of the application grows.
- Cost: the cost of a system like this can depend on several factors. The most important ones include money, time, space, weight, and energy. The time factor is linked to installation and maintenance times.
- Security: the risk that data sent through the system could be intercepted or accessed by third parties.
- Failure detection: the system must be able to notify the museum if there is a fault.
Technical aspects (Mobile app)
- Accessibility: indicates how easily the technology can be used by the user.
- Complexity: the complexity attributable to the hardware and software needed for the system.
- Scalability: the scalability of a system ensures its normal operation even when the scope of the application grows.
- Cost: the cost of a system like this can depend on several factors. The most important ones include money, time, space, weight, and energy.
- Security: the risk that data sent through the system could be intercepted or accessed by third parties.
Technical aspects (Web dashboard)
- Accessibility: indicates how easily the technology can be used by the user.
- Complexity: the complexity attributable to the hardware and software needed for the system.
- Scalability: the scalability of a system ensures its normal operation even when the scope of the application grows.
- Cost: the cost of a system like this can depend on several factors. The most important ones include money, time, space, weight, and energy.
- Security: the risk that data sent through the system could be intercepted or accessed by third parties.
In this article, we only presented the metrics, but more about this section can be found in the Evaluation document on GitHub.