One thing I've learned from watching companies release successful IoT products is that it's best to keep things simple. Rachio sprinkler controllers, Nest thermostats and smoke detectors, Ring doorbells, Philips Hue light bulbs and various other products find one general thing to do, and do it well. In order to demonstrate some of the cool things you can do with Android Things and the Google Assistant, I decided to put together a pretty straightforward smart lamp. The base of this project is an Android Things board with a strip of programmable LEDs and a button. To add more detail and make it worthy of being a 'smart device,' we'll also add a speaker and a microphone to interface with the Google Assistant. This will allow us to turn on and off the lights, change their color or brightness and perform various more complicated tasks using voice operations.
3D Print

While we could just set everything up with hardware and leave it exposed, it's more fun to put together a casing. Having found a simple light on Thingiverse, I heavily modified it to fit my needs. It's not the world's best lamp (I need more LEDs in it. Then again, when doesn't a project need more LEDs? :-P), but it serves its purpose. This lamp consists of four parts:
The diffuser
The electronics casing
The connector for the diffuser to the casing
And a base cover
Everything is held together using M3 bolts after drilling out and tapping the holes in the models.
For the base Android Things device, I'm using a Raspberry Pi 3B. While the NXP Pico i.MX7D is also available, I happen to have multiple Pis lying around, so I went with that. You can start by going to the Android Things Developer Console and creating your new product. After your product is created in the console, you can open the drawer on the left of the screen and go into the tools section. This is where you'll find the Android Things setup script tool.
After you download the script, you can run it with administrator privileges to flash the SD card (assuming you're using a Pi). I used the standard default image to get rolling.
After the image installs, it's time to set up wireless. Take the SD card out of your computer and put it into the Raspberry Pi, then plug the Pi into your network via an ethernet cable. You will be prompted by the script to set up Wi-Fi.
After you enter your credentials, your device should be connected and ready to go for development.
Next you will want to connect all of the peripherals. The peripheral list that I'm using is as follows:
- Pimoroni Blinkt
- USB microphone
- AUX portable speaker
- LED Arcade button
The Blinkt is designed to sit on top of the Raspberry Pi like a hat, though we actually only need four of the pins. You will plug in the component as shown below, with the blue wire going to the Raspberry Pi pin 23, and the green going to pin 24.
The arcade button will be wired up so that the LED is powered by pin 12, and the button will be connected to the 3.3v connection on the Pi, as well as pin 22. It's important to note here that I'm using a normally closed pin on the arcade button, so there will be a steady flow of 3.3v to pin 22 unless the button is pressed. We will need to remember this when we write our code to read the button, as it is active when the signal is low. I also added a pull-up resistor to the button, as the signal was a bit jumpy without it and inconsistent with actual button presses.
The final two parts, the AUX connected speaker and the USB microphone, can be plugged directly into the Raspberry Pi via the onboard AUX and USB connections. I did use a USB extension cable for the microphone in order to bring it out of the casing, allowing it to pick up audio a little easier.
Android Things Assistant Code Setup

Once everything is wired together and placed into the 3D printed casing, it's time to program. Open Android Studio and create a new Android Things project with a minimum API version of 27 (Oreo 8.1). This can consist of a single Activity without any UI files.
As much as I hate to say it, setting up the Google Assistant on Android Things is not as easy as other platforms. We will need to use the Google Assistant service, rather than an easily installable SDK or library, to add any form of Assistant functionality. We can get the necessary files for enabling the Assistant by downloading the official sample and copying various files directly from there. Start by downloading the sample as a zip and opening it up. Next you'll find a folder named grpc, which contains the proto files necessary to use the Assistant service. Copy that into the root of your new Android Things project.
Once you have the folder copied, open the settings.gradle file and include the folder as a module in your project.
include ':app', ':grpc'
Next, open the top-level build.gradle file and include the dependency for the protobuf plugin in the dependencies node:
classpath "com.google.protobuf:protobuf-gradle-plugin:0.8.6"
After you add this dependency to the top-level gradle file, you will want to go into the app module's build.gradle file to add dependencies related to the peripherals we will be using under the dependencies node:
implementation 'com.nilhcem.androidthings:driver-blinkt:0.0.3'
implementation 'com.google.android.things.contrib:driver-button:1.0'
You will also want to add support for Google's OAuth2 library and support annotation.
implementation('com.google.auth:google-auth-library-oauth2-http:0.6.0') {
exclude group: 'org.apache.httpcomponents', module: 'httpclient'
}
implementation 'com.android.support:support-annotations:28.0.0'
Next, go into the AndroidManifest.xml file and add the required permissions for this project. These focus on internet usage, since the Google Assistant is an online feature, peripheral I/O for accessing the hardware peripherals on this device, and audio settings for recording and playing back audio from the Google Assistant. It's worth noting that after you install and run your app, you will need to restart in order to fully grant these permissions to the device.
<uses-permission android:name="android.permission.RECORD_AUDIO" />
<uses-permission android:name="android.permission.MODIFY_AUDIO_SETTINGS" />
<uses-permission android:name="android.permission.INTERNET" />
<uses-permission android:name="com.google.android.things.permission.USE_PERIPHERAL_IO" />
For your app to open automatically after the device reboots, you will also need to add an intent-filter to your Activity declaration in AndroidManifest.xml.
<activity android:name=".MainActivity">
    <intent-filter>
        <action android:name="android.intent.action.MAIN"/>
        <category android:name="android.intent.category.LAUNCHER"/>
    </intent-filter>
    <intent-filter>
        <action android:name="android.intent.action.MAIN"/>
        <category android:name="android.intent.category.HOME"/>
        <category android:name="android.intent.category.DEFAULT"/>
    </intent-filter>
</activity>
We're almost done with the initial setup copying :) At this point it's time to copy the EmbeddedAssistant.java and Credentials.java files into your project. The EmbeddedAssistant class will handle conversations with the Google Assistant, and the Credentials class, as the name implies, will retrieve the credentials needed for accessing the Assistant.
We'll come back to using the Google Assistant in a moment. The next thing you will want to do is create the credentials that will be used by your device. Start by going to the Actions Console and creating a new project for your lamp. Once the project is created, go into Device Registration on the left of the screen and select the blue REGISTER MODEL button.
You will be prompted to enter information about your device, and then download your OAuth2.0 credentials. On the last screen you will be asked which pre-defined traits you would like to enable. For this project, enable Brightness, ColorSpectrum, and OnOff.
After you have registered a model, enable the Google Assistant API from this screen.
Next you will need to enable an OAuth Consent screen. Enter in information related to your project and submit it via this form.
For the next step, you will need to install the Google-OAuthLib-Tool in a Python3 virtual environment (or you can install it directly on your machine if you're running Linux or OSX, which is what I did).
python3 -m venv env
env/bin/python -m pip install --upgrade pip setuptools
env/bin/pip install --upgrade google-auth-oauthlib[tool]
source env/bin/activate
Next, navigate into the root of your new lamp project and run the following command using your previously downloaded credentials.
google-oauthlib-tool --client-secrets /yourdownloadedcredentials.json \
--credentials app/src/main/res/raw/credentials.json \
--scope https://www.googleapis.com/auth/assistant-sdk-prototype \
--save
This will take you to an authorization screen where you must approve your application to use the Google Assistant.
After you approve the device, another set of credentials will be saved in the res/raw directory of your application.
Now that we have our credentials, it's time to update MainActivity.kt (you are using Kotlin, right?). The first thing to do is add all of our hardware constants and variables for the button and LED (we'll get to the strip of LEDs in the next section), as well as the values for using the assistant.
private val LED_GPIO = "BCM12"
private val BUTTON_GPIO = "BCM22"
// Audio constants.
private val PREF_CURRENT_VOLUME = "current_volume"
private val SAMPLE_RATE = 16000
private val DEFAULT_VOLUME = 100
// Assistant SDK constants.
private val DEVICE_MODEL_ID = "ptr-smart-lamp"
private val DEVICE_INSTANCE_ID = "sample-device"
private val LANGUAGE_CODE = "en-US"
// Hardware peripherals.
private lateinit var mButton: Button
private lateinit var mLed: Gpio
private lateinit var mEmbeddedAssistant: EmbeddedAssistant
You'll notice the DEVICE_MODEL_ID value is the model that we set up earlier in the Actions console. The DEVICE_INSTANCE_ID value will be needed in the next section, and would need to be unique to each device in a normal production environment.
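For a quick idea of how a stable per-device instance ID could be produced, here is a small sketch. On a real Android Things device you would back it with SharedPreferences; in this sketch a plain map stands in for the preference store, and the "device_instance_id" key name is made up for illustration:

```kotlin
import java.util.UUID

// Sketch: lazily create a stable instance ID on first run and reuse it
// afterwards. The map-backed store and the "device_instance_id" key are
// illustrative stand-ins for a SharedPreferences entry on the device.
fun getOrCreateInstanceId(store: MutableMap<String, String>): String =
    store.getOrPut("device_instance_id") { UUID.randomUUID().toString() }

fun main() {
    val store = mutableMapOf<String, String>()
    val first = getOrCreateInstanceId(store)
    // A second lookup returns the same ID instead of generating a new one.
    println(first == getOrCreateInstanceId(store))
}
```

For this tutorial's single prototype device, the hard-coded constant above is fine.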
Next, initialize your hardware peripherals in the onCreate() method. As mentioned in the previous section, your button is triggered when the electrical signal is low.
try {
    val pioManager = PeripheralManager.getInstance()
    mButton = Button(
        BUTTON_GPIO,
        Button.LogicState.PRESSED_WHEN_LOW
    )
    mButton.setOnButtonEventListener(this)
    mButton.setDebounceDelay(1000)

    mLed = pioManager.openGpio(LED_GPIO)
    mLed.setDirection(Gpio.DIRECTION_OUT_INITIALLY_LOW)
    mLed.value = false
} catch (e: IOException) {
    Log.e("Test", "error configuring peripherals:", e)
    return
}
You'll notice that we're also setting an OnButtonEventListener on the button. Its callback is invoked on either a rising or falling edge signal at the button. When the button is pressed, we will turn on the button's LED and start a conversation with the Google Assistant. When it is released, we will turn off the LED.
override fun onButtonEvent(button: Button?, pressed: Boolean) {
    Log.e("Test", "button event: " + pressed)
    try {
        mLed.value = pressed
    } catch (e: IOException) {
        Log.d("Test", "error toggling LED:", e)
    }
    if (pressed) {
        mEmbeddedAssistant.startConversation()
    }
}
Returning to onCreate(), you will need to retrieve the volume for the device (either one a user has set, or the default), as well as the credentials for your device to access the Google Assistant.
val preferences = PreferenceManager.getDefaultSharedPreferences(this)
val initVolume = preferences.getInt(PREF_CURRENT_VOLUME, DEFAULT_VOLUME)

var userCredentials: UserCredentials? = null
try {
    userCredentials = EmbeddedAssistant.generateCredentials(this, R.raw.credentials)
} catch (e: IOException) {
    Log.e("Test", "error getting user credentials", e)
} catch (e: JSONException) {
    Log.e("Test", "error getting user credentials", e)
}
Now that you have all of the required information for initializing the Google Assistant, let's create the EmbeddedAssistant object. The EmbeddedAssistant class uses a builder pattern to set all of its properties and callbacks. The majority of the required code is relatively straightforward: we set the credentials and IDs, a sample rate for audio sampling, the language, and the volume. We are also adding the first of two callbacks: the request callback. This callback has two methods: one is triggered when a conversation starts, and the other attempts to return the recognized speech to your application.
mEmbeddedAssistant = EmbeddedAssistant.Builder()
    .setCredentials(userCredentials)
    .setDeviceInstanceId(DEVICE_INSTANCE_ID)
    .setDeviceModelId(DEVICE_MODEL_ID)
    .setLanguageCode(LANGUAGE_CODE)
    .setAudioSampleRate(SAMPLE_RATE)
    .setAudioVolume(initVolume)
    .setRequestCallback(object : RequestCallback() {
        override fun onRequestStart() {
            Log.e("Test", "starting assistant request, enable microphones")
        }

        override fun onSpeechRecognition(results: List<SpeechRecognitionResult>) {
        }
    })
The largest part of this class that you will interact with is the ConversationCallback object. Whenever the Assistant sends data back to your device, one of the methods in this object will be called. After creating the ConversationCallback, you will need to call build() on the EmbeddedAssistant.Builder() object to create the EmbeddedAssistant.
.setConversationCallback(object : ConversationCallback() {
    override fun onError(throwable: Throwable) {
        Log.e("Test", "assist error: " + throwable.message, throwable)
    }

    override fun onVolumeChanged(percentage: Int) {
        Log.e("Test", "assistant volume changed: $percentage")
        val editor = PreferenceManager
            .getDefaultSharedPreferences(this@MainActivity)
            .edit()
        editor.putInt(PREF_CURRENT_VOLUME, percentage)
        editor.apply()
    }

    override fun onConversationFinished() {
        Log.e("Test", "assistant conversation finished")
    }

    override fun onAssistantResponse(response: String) {
        Log.e("Test", "response: " + response)
    }
}).build()
One thing to note is that the original implementation of this object has empty definitions for all of the callback methods, so you won't need to define each of them in your class. It is worth looking into the EmbeddedAssistant class to know what callback methods are available, as we will use additional methods later in this project. You can find an explanation of each callback method in the comments of the EmbeddedAssistant class.
public static abstract class ConversationCallback {
    /**
     * Called when the user's voice query ends and the response from the Assistant is about to
     * start.
     */
    public void onResponseStarted() {}

    /**
     * Called when the Assistant's response is complete.
     */
    public void onResponseFinished() {}

    /**
     * Called when audio is being played. This may be called multiple times during a single
     * response. The audio will play using the AudioTrack, although this method may be used
     * to provide auxiliary effects.
     *
     * @param audioSample The raw audio sample from the Assistant
     */
    public void onAudioSample(ByteBuffer audioSample) {}

    /**
     * Called when an error occurs during the response
     *
     * @param throwable A {@link Throwable} which contains information about the response error.
     */
    public void onError(Throwable throwable) {}

    /**
     * Called when the user requests to change the Assistant's volume.
     *
     * @param percentage The desired volume as a percentage of intensity, in the range 0 - 100.
     */
    public void onVolumeChanged(int percentage) {}

    /**
     * Called when the response contains a DeviceAction.
     *
     * @param intentName The name of the intent to execute.
     * @param parameters A JSONObject containing parameters related to this intent.
     */
    public void onDeviceAction(String intentName, JSONObject parameters) {}

    /**
     * Called when the response contains supplemental display text from the Assistant.
     *
     * @param response Supplemental display text.
     */
    public void onAssistantResponse(String response) {}

    /**
     * Called when the response contains HTML output from the Assistant.
     *
     * @param html HTML data showing a rich response
     */
    public void onAssistantDisplayOut(String html) {}

    /**
     * Called when the entire conversation is finished.
     */
    public void onConversationFinished() {}
}
To wrap up onCreate(), you will need to connect to the Google Assistant service.
mEmbeddedAssistant.connect()
The last thing you will need to do to add the Assistant to your device is update onDestroy() to properly tear down your hardware peripherals and the connection to the Assistant.
override fun onDestroy() {
    super.onDestroy()
    Log.e("Test", "destroying assistant demo")
    try {
        mLed.close()
    } catch (e: IOException) {
        Log.w("Test", "error closing LED", e)
    }
    try {
        mButton.close()
    } catch (e: IOException) {
        Log.w("Test", "error closing button", e)
    }
    mEmbeddedAssistant.destroy()
}
At this point, if everything has gone as expected, you should be able to get general information from your device.
Pre-Defined/Built-In Traits

As you may remember, we enabled a selection of pre-defined traits while registering our device model. It's time to follow up and use these traits. Return to your python3 virtual environment if you installed it, or your terminal if you installed the oauthlib tool directly. Run the following command on the credentials generated by Google.
google-oauthlib-tool --client-secrets path/to/credentials.json \
    --scope https://www.googleapis.com/auth/assistant-sdk-prototype \
    --save
You will also want to install the google-assistant-sdk:
pip install google-assistant-sdk
With the SDK installed, you can list the models that have traits in your project with the following command:
googlesamples-assistant-devicetool --project-id PROJECT_ID list --model
Finally, register your specific device. The DEVICE_INSTANCE_ID is the unique device String that we used in the previous section.
googlesamples-assistant-devicetool --project-id PROJECT_ID register-device \
    --model MODEL_ID --device DEVICE_INSTANCE_ID --client-type SERVICE
Now that the setup for actions is done, let's get back to coding. You will first need to add the blinkt object, from the library we imported earlier, to the declarations section of MainActivity.kt.
private val blinkt = Blinkt()
We will also add a new method to toggle the lights on and off. For this example we'll just set the color of the light to white.
private fun toggleLight(enabled: Boolean) {
    if (enabled) {
        blinkt.brightness = 1
        blinkt.write(
            intArrayOf(
                Color.WHITE, Color.WHITE, Color.WHITE, Color.WHITE,
                Color.WHITE, Color.WHITE, Color.WHITE, Color.WHITE
            )
        )
    } else {
        blinkt.brightness = 0
    }
    blinkt.show()
}
You may remember that we enabled multiple traits: OnOff, ColorSpectrum, and Brightness. The above method just assumes one color and maximum brightness. Using the same techniques we're discussing here for OnOff, you can update the color and brightness using the ColorSpectrum or Brightness trait schemas.
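To give a feel for what those handlers would involve, here is a small sketch of the value conversions. The parameter names and shapes (spectrumRGB as a 24-bit RGB integer, brightness as a 0-100 percentage) are my reading of the smart home trait schemas, so treat them as assumptions and verify them by logging the JSONObject your device actually receives in onDeviceAction():

```kotlin
// spectrumRGB arrives as a plain 24-bit RGB value; Android color ints carry
// an alpha channel in the top byte, so OR in full opacity before passing it
// to the Blinkt driver (which, as the Color.WHITE usage above suggests,
// takes standard Android color ints).
fun spectrumRgbToArgb(spectrumRgb: Int): Int =
    (0xFF shl 24) or (spectrumRgb and 0xFFFFFF)

// brightness arrives as a 0-100 percentage; scale it to a 0..1 fraction
// for a driver that expects one.
fun brightnessPercentToFraction(percent: Int): Float =
    percent.coerceIn(0, 100) / 100f

fun main() {
    println(spectrumRgbToArgb(0xFF0000))      // pure red with full alpha
    println(brightnessPercentToFraction(50))  // half brightness
}
```

In the onDeviceAction() handler shown below, these conversions would sit behind checks for the ColorAbsolute and BrightnessAbsolute command keys, just like the OnOff check.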
The final thing we'll do to use a pre-defined trait is update the ConversationCallback object with the onDeviceAction() method. We will check the intent (little i, not like an Android Intent :)) and compare it to the key used by the OnOff pre-defined trait. If it matches, we'll see if it contains the on parameter. We will then send that value to the toggle method we previously created.
override fun onDeviceAction(intentName: String, parameters: JSONObject?) {
    if (intentName == "action.devices.commands.OnOff") {
        try {
            val turnOn = parameters!!.getBoolean("on")
            toggleLight(turnOn)
        } catch (e: JSONException) {
            Log.e("Test", "Cannot get value of command", e)
        } catch (e: IOException) {
            Log.e("Test", "Cannot set value of LED", e)
        }
    }
}
At this point you should be able to tell your device to 'turn on', and it will turn on the light. Likewise, if you tell it to 'turn off', it will turn off the LEDs.
Custom Actions

Now that we know how to use the built-in traits, let's do something a little more interesting. The handful of existing traits are fine for general use, but some devices require custom actions. For example, let's say we were working with a new IoT stove/oven. You may want users to be able to say "set burner 4 to medium high" or "preheat oven to 375". Since these actions fall outside of the standard traits, you would need to create your own. For this sample, you will set the lamp to flash red a few times when told to go into 'emergency mode.' This example will keep it short and simple, but you can find information on adding parameters for color, speed, blinking pattern, or anything else you may want to support in the official documentation for parameters. You can start by creating a new JSON file named actions.json. You will need to add the following snippet in order to define your action, and what will be sent back to your device when that action is triggered.
{
  "manifest": {
    "displayName": "Emergency Light",
    "invocationName": "Emergency Light",
    "category": "PRODUCTIVITY"
  },
  "actions": [
    {
      "name": "com.example.emergencylight",
      "availability": {
        "deviceClasses": [
          {
            "assistantSdkDevice": {}
          }
        ]
      },
      "intent": {
        "name": "com.example.emergencylight",
        "trigger": {
          "queryPatterns": [
            "activate emergency mode"
          ]
        }
      },
      "fulfillment": {
        "staticFulfillment": {
          "templatedResponse": {
            "items": [
              {
                "simpleResponse": {
                  "textToSpeech": "Activating emergency mode"
                }
              },
              {
                "deviceExecution": {
                  "command": "com.example.emergencylight"
                }
              }
            ]
          }
        }
      }
    }
  ]
}
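If you do want to capture a spoken value, the Assistant SDK sample actions demonstrate parameterized query patterns. As a rough illustration (the $SchemaOrg_Number type and the pattern syntax here follow that sample, so double-check them against the current Actions SDK documentation before relying on them), an intent that captures a flash count might look like:

```json
"intent": {
  "name": "com.example.emergencylight",
  "parameters": [
    { "name": "number", "type": "SchemaOrg_Number" }
  ],
  "trigger": {
    "queryPatterns": [
      "activate emergency mode ($SchemaOrg_Number:number)? times"
    ]
  }
}
```

The captured value would then arrive in the JSONObject passed to onDeviceAction() alongside the command name.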
After you've created the above file, download the gactions command line tool from this link. This will be used to register your Action with Google. If you're on a Mac or Linux, make sure you make the file executable via chmod.
To finish setting up your new Action, you will need to register it via the tool. The Action can be added as a test item for 30 days with the following command:
./gactions test --action_package actions.json --project <<your-actions-console-project-name>>
If you get to the point of making a project that will be sold to customers, you will need to get your Action reviewed and approved for production. You can find information on that process here.
Once you're done getting everything set up, it's time to update your app. You may remember that earlier we turned the light on and off with a normal trait in the onDeviceAction method. Return to that method and update it so that it expects your new custom trait. You'll notice I also did a little refactoring to use a when statement instead of an if statement, as this is closer to best practices in Kotlin.
override fun onDeviceAction(intentName: String, parameters: JSONObject?) {
    when (intentName) {
        "action.devices.commands.OnOff" -> {
            try {
                val turnOn = parameters!!.getBoolean("on")
                toggleLight(turnOn)
            } catch (e: JSONException) {
                Log.e("Test", "Cannot get value of command", e)
            } catch (e: IOException) {
                Log.e("Test", "Cannot set value of LED", e)
            }
        }
        "com.example.emergencylight" -> {
            flashEmergency(4, 1000)
        }
    }
}
You will also want to update the toggleLight() method to have two versions: one that only accepts on/off as a parameter, and another that also expects a color.
private fun toggleLight(enabled: Boolean, color: Int) {
    if (enabled) {
        blinkt.brightness = 1
        blinkt.write(
            intArrayOf(
                color, color, color, color,
                color, color, color, color
            )
        )
    } else {
        blinkt.brightness = 0
    }
    blinkt.show()
}

private fun toggleLight(enabled: Boolean) {
    toggleLight(enabled, Color.WHITE)
}
Finally, add the flashEmergency() method, which will call toggleLight() repeatedly to turn the lights on and off in red.
private fun flashEmergency(count: Int, delay: Long) {
    val handler = Handler(Looper.getMainLooper())
    for (i in 1..count * 2) {
        handler.postDelayed({
            try {
                // Alternate: odd ticks turn the lights on in red, even ticks turn them off.
                toggleLight(i % 2 == 1, Color.RED)
            } catch (e: IOException) {
                Log.e("Test", "error toggling light", e)
            }
        }, i * delay)
    }
    // Make sure the lights end in the off state after the final flash.
    handler.postDelayed({ toggleLight(false) }, (count * 2 + 1) * delay)
}
More to Come

At this point you should be able to control your own Android Things devices directly with voice commands. Going forward, I'll expand on this project by adding support for conversations through DialogFlow in order to show how you can have more complex commands for your device, and how DialogFlow can connect to your Android Things device through Firebase Functions and Google Cloud IoT Core.