A long time ago (2006), in a kingdom far away (France), there lived a rabbit. It wasn't a normal rabbit: it had very special powers. It could wiggle its ears, it had RGB LEDs in its belly, and it could talk! Reading out the weather forecast, new emails, or just chit-chatting, it was the most famous rabbit of all. So it was given a special name: Nabaztag (Armenian for "hare").
But as with many fairy tales, the story took a twist and the rabbit died.
Nabaztag was one of the very first IoT devices on the market that humans could buy. But it was ahead of its time: the inventor went bankrupt and had to shut down the servers. The rabbit wasn't open source and relied heavily on its inventor's servers.
Since then, the rabbit sat in a corner of my room; it didn't talk, it didn't wiggle its ears, it didn't bring any joy. Playing with Raspberry Pis, I always had the idea to bring it back to life one day, but it was on the long list of projects.
But today, receiving my Google AIY Voice HAT, I'm going to change that!
And the rabbit happily lived ever after!
This is what every hacker likes: taking stuff apart. So this project starts the same way, to see if there is enough room to fit a Raspberry Pi and the Google AIY Voice HAT inside. The goal is to leave the motors for the ears intact, and see what else can survive. It turns out that the main PCB has all components soldered right to it, including the LEDs. The only separate PCBs are the RFID reader and a small WiFi antenna. So unplug all wires (don't cut them!) and disassemble the rabbit.
Save all parts; we will use some of them, and others might come in handy later on for other projects.
Wiring it up for the basics
Your rabbit is stripped, so it's time to connect the components! Get your Google AIY kit; you will need components 1, 2, 3, 6 and 7. Don't throw away the other parts, they might come in handy later on.
The Voice HAT seems to come in different versions: with and without soldered headers. Mine had headers at the Servo 0/1/2/3/4/5 and Driver 0/1/2/3 sections; others seem to have no headers at all. So you will probably need to solder your own headers on. In my first build I did this quick and dirty with straight headers, but with the default wire connectors the rabbit shell cannot be closed. So I've ordered angled headers and will add pictures once I've received and soldered them on.
Take your Raspberry Pi, click the spacers (nr 3) on the 2 corners opposite the GPIO header, and click the Voice HAT (nr 1) on top of it.
Take the microphone board (nr 2) and the 5-wire cable (nr 7), connect them together and to the Voice HAT. There is only one way of connecting them.
Take the speaker cable from the rabbit (yellow/orange), remove the connector (this time you can cut it) and insert the stripped wires in the screw-terminal. Yellow is + and Orange is -.
Take the 4-wire cable (nr 6) and eject the wires from the connector (pinch them with a small screwdriver). Take the button cable from the rabbit (gray/white) and remove the connector from this too. Now insert the 2 clips into the connector from the kit, where the button wires are the 2 at the bottom, replacing the black and white wires (order doesn't matter).
Booting your Raspberry Pi with a prepared Google Voice Kit SD-card
Download the Voice Kit SD image from the Google website, and write it to an SD-card. I've used Etcher for that.
To set up your wireless network, we take a small extra step before inserting the SD-card. Open the SD-card on Windows/Mac/whatever, and add a file named "wpa_supplicant.conf" with these contents (replace with your network credentials):
country=<Insert 2 letter ISO 3166-1 country code here>
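For completeness, a minimal full wpa_supplicant.conf for Raspbian typically looks like the fragment below; the ssid and psk values are placeholders you must replace with your own network credentials:

```
country=<Insert 2 letter ISO 3166-1 country code here>
ctrl_interface=DIR=/var/run/wpa_supplicant GROUP=netdev
update_config=1

network={
    ssid="YourNetworkName"
    psk="YourNetworkPassword"
}
```

On first boot, Raspbian picks this file up from the boot partition and moves it into place, joining your network automatically.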
Now it's time to insert the SD-card into your Raspberry Pi and boot it up for the first time.
If you have an HDMI-screen, you can proceed with the next steps from a Terminal window. Without a screen, use a tool like PuTTY to connect through SSH to your Raspberry Pi or use an extension in the Chrome browser.
*** Note to self (and others): don't run apt-get upgrade; in earlier versions it broke audio/microphone support, so I haven't done it since ***
Enabling VNC Server
As we will use the Raspberry Pi / Voice HAT without a screen (it doesn't fit inside the rabbit), we use VNC to bring its desktop to another computer. You can enable this on-screen or at the command line:
With a screen, select Menu > Preferences > Raspberry Pi Configuration > Interfaces. Ensure VNC is Enabled.
Or you can enable VNC Server through the (SSH)-terminal:
Run sudo raspi-config, navigate to 5 Interfacing Options, scroll down and select P3 VNC > Yes.
To use the VNC Server, download VNC Viewer and open it on your Windows/Mac/whatever computer. Just enter the Raspberry Pi's IP address and you will be connected to your very own Raspberry Pi.
To change the display resolution, click the Raspberry icon top-left, Preferences > Raspberry Pi Configuration. Click the Set Resolution button and pick a resolution that fits your screen. Reboot to apply the change.
Configuration of the Google Voice tools
When you have completed these steps, you can test your setup and connections (also see Google documentation).
On the desktop, click the "Check Audio", "Check WiFi" links to see if this is configured correctly.
The version of the Voice HAT I have needs an additional line for installation; the "Check Audio" script will tell you what to do. For me, I had to run this in a terminal window and then restart the Raspberry Pi:
echo "dtoverlay=googlevoicehat-soundcard" | sudo tee -a /boot/config.txt
Now we will connect to the Google Cloud Platform. You can follow the detailed steps here, to get your credentials.
Some more steps; the Python version on the Raspberry Pi is 3.7.3, and since Python 3.7 webbrowser.register() no longer accepts its fourth argument positionally, so the kit's call fails. We need a little manual fix;
Scroll down to line 75, and remove the ", -1" from the end of this line:
webbrowser.register('chromium-browser', None, webbrowser.Chrome('chromium-browser'), -1)
It now looks like this:
webbrowser.register('chromium-browser', None, webbrowser.Chrome('chromium-browser'))
Ctrl+X and Enter to close and save.
And some sad news, with a fix! When running the Google Assistant functionality, I got a "segmentation fault" error. This is due to the deprecated google-assistant-library: the Google Assistant Service no longer supports hotword detection like "Ok Google" and "Hey Google". If you don't mind and use the button instead, just go ahead. But if you do mind, run this command to downgrade the Google Assistant library from 1.0.1 back to 1.0.0:
pip3 install google-assistant-library==1.0.0
Once done, go ahead by starting the Google Assistant for the first time, it will prompt you for your Google login:
You will now be ready to use all the basics of the voice recognition. Play around by asking silly questions (hint: ask it to sing a song, beat-box, or make animal sounds).
Add our own components!
Use this map of the Voice HAT for adding your own components to the GPIO pins:
This video shows all steps I describe in the next paragraphs:
LED lights
As said, the RGB LEDs were soldered to the main PCB and were removed with it. But what would a rabbit be without shiny lights... In the base there is not much room for adjustments, so I have to place them somewhere else. Also, regular NeoPixels don't work with a Raspberry Pi, except when you jump through some hoops.
There are some newer types of LEDs with less strict timing constraints, and there is now a module called rpi_ws281x with a Python wrapper that makes them work with a Raspberry Pi. And as one of those hoops is disabling the on-board audio, you will be happy that the Voice HAT has its own audio controller. Meaning we can disable the on-board snd_bcm2835 controller!
So I've decided to add a LED ring of 32 lights, 11 cm wide. This exactly fits the top of the base. The ring contains SK6812 RGBW LEDs, so each pixel has three colors (red/green/blue) plus a white LED. This LED ring will replace the LED that came with the kit.
Wiring it up
The LED ring uses 3 wires: DATA-IN, PWR and GND. The ring works over multiple voltage ranges, but if you send 3.3V DATA, it expects 3.3V on PWR too (or 5V on both). As we need the hardware PWM pin for driving the LED ring, we can only use GPIO12 (Servo 4), which outputs 3.3V data, so take the PWR from somewhere else, for example from the I2C or SPI pins (you might need to solder something in there). In my build I'll need more 3.3V sources, so I'm going to add another solution for that.
Installation of the module
Once the LED ring is wired up, you can start installing modules for that. I'll use the rpi_ws281x module, which is written in C and there is a Python library that can communicate with it.
In the past you had to jump through hoops, but today there is a readily installable module using PIP. Just run:
sudo pip3 install rpi_ws281x==4.2.4
Since this library and the onboard Raspberry Pi audio both use PWM, they cannot be used together (although it worked for me, I get a strange-sounding voice response, which might be related to this). You will need to blacklist the Broadcom audio kernel module by running the following command:
echo "blacklist snd_bcm2835"| sudo tee -a /etc/modprobe.d/snd-blacklist.conf
If the audio device is still loading after blacklisting, you may also need to comment it out in the /etc/modules file.
Code to test
The library comes with example scripts for LED strips, such as the strandtest.py file. As I have the SK6812 type LEDs, I'll use the SK6812_strandtest.py file. Open it with the Thonny Python IDE (the standard Python editor on the Raspberry Pi).
Change the configuration to match your LED ring, for me I have to use:
# LED strip configuration:
LED_COUNT = 32        # Number of LED pixels.
LED_PIN = 12          # GPIO pin connected to the pixels (must support PWM!).
LED_FREQ_HZ = 800000  # LED signal frequency in hertz (usually 800khz)
LED_DMA = 10          # DMA channel to use for generating signal (try 10)
LED_BRIGHTNESS = 255  # Set to 0 for darkest and 255 for brightest
LED_INVERT = False    # True to invert the signal (when using NPN transistor level shift)
LED_CHANNEL = 0       # PWM channel (0 for GPIO12)
LED_STRIP = ws.SK6812_STRIP_GRBW  # Strip type and color ordering
Now run the file as sudo (more on that in the next paragraph) and see your LED ring changing colors. Whoohoo!
sudo python3 SK6812_strandtest.py
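For tinkering beyond the strandtest script, it helps to see how the rainbow animation actually works. The examples use a small wheel() helper that maps a position 0-255 onto a red-to-green-to-blue transition; the sketch below is my standalone adaptation for RGBW pixels (the white channel stays off, and you would pack the returned tuple with the library's Color() when writing it to a pixel):

```python
def wheel(pos):
    """Map 0-255 to an (r, g, b, w) colour on a red -> green -> blue wheel.

    Adapted from the rpi_ws281x strandtest examples; the white channel
    is kept at 0 so the rainbow stays saturated on SK6812 RGBW pixels.
    """
    pos = pos % 256  # wrap around so any integer position is valid
    if pos < 85:
        return (255 - pos * 3, pos * 3, 0, 0)  # red -> green
    if pos < 170:
        pos -= 85
        return (0, 255 - pos * 3, pos * 3, 0)  # green -> blue
    pos -= 170
    return (pos * 3, 0, 255 - pos * 3, 0)      # blue -> red
```

Feeding wheel((i * 256 // LED_COUNT + offset) & 255) to pixel i while incrementing offset each frame produces the rotating rainbow you see in the demo.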
Hook the LED ring into Google AIY
The code that comes with the Voice Kit image has a separate thread that auto-starts for controlling the LED: the status-led.service. It works as a FIFO (First In, First Out) pipe and keeps looping through the last commanded LED sequence, an iteration cycle. When a new command is received, it directly starts performing the new command as the iteration is adjusted. A great place to hook in our LED ring control.
As a starter: the rpi_ws281x module needs to run as sudo (the system user with all powers), because it writes directly to memory. By default, the status-led.service starts as a normal user, so we need to change that.
sudo nano /lib/systemd/system/status-led.service
This opens a text-editor. Now make a change to the 8th line, add the word sudo:
ExecStart=/bin/bash -c 'sudo /usr/bin/python3 -u src/led.py </tmp/status-led'
Press CTRL+X, Y, Enter to save.
Now we can start coding the LEDs!
Go to the folder /home/pi/voice-recognizer-raspi/src and open the file led.py
Like the 2 examples above, add the import and configuration part. See the full led.py code at the bottom of this project page.
Restart the status-led service and start the voice recognizer:
sudo systemctl daemon-reload
sudo systemctl restart status-led.service
Start clicking the button, and ask silly questions. You will see that the LED ring responds to your button presses and shows a rainbow in the resting state:
Motors
The ears are turned by 2 motors with 2 wires each. That means using PWR/GND for turning one direction, and GND/PWR for turning the other direction.
The Voice HAT cannot control this type of motor out-of-the-box, so I'll need to add an H-bridge. The one I used can drive both ears. It has 4 signal pins that you connect to GPIO, plus a PWR and a GND pin. The basic idea is that pins A-1A & A-1B control motor A, and pins B-1A & B-1B control motor B. If you set 1A HIGH and 1B LOW, the motor turns one way. If you set 1A LOW and 1B HIGH, the motor turns the other way. Both LOW means no movement; both HIGH is not recommended.
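That truth table is easy to get wrong when four GPIO writes are scattered through the code, so here is a tiny helper (the function name and direction encoding are my own) that translates a direction into the two pin levels for one motor, and by construction can never produce the not-recommended HIGH/HIGH state:

```python
def hbridge_levels(direction):
    """Return (level_1A, level_1B) for one motor on the H-bridge.

    direction: 1 = turn one way, -1 = turn the other way, 0 = stop.
    HIGH/HIGH (both pins on) is never produced.
    """
    if direction == 1:
        return (1, 0)   # 1A HIGH, 1B LOW: forward
    if direction == -1:
        return (0, 1)   # 1A LOW, 1B HIGH: backward
    return (0, 0)       # both LOW: no movement
```

The two values map directly onto GPIO.output() calls for the 1A and 1B pins of that motor.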
Start by connecting the pins to the VoiceHAT: GPIO 26/06/13/05 or Servo 0/1/2/3, and connect PWR/GND to the corresponding pins right next to GPIO26/Servo0.
As we will call GPIO pins, import the RPi.GPIO module. And set the PIN numbers for the servos:
import RPi.GPIO as GPIO

# Use BCM GPIO references instead of physical pin numbers
GPIO.setmode(GPIO.BCM)
# Define GPIO signals to use (Servo 0/1/2/3; which pin drives which
# direction depends on your wiring)
StepPinLeftForward, StepPinLeftBackward = 26, 6
StepPinRightForward, StepPinRightBackward = 13, 5
I've added 2 threads (left/right) for rotating the ears. This way I avoid giving 2 opposite commands and ending up with both pins HIGH.
def t_earLeft(StepForwardPin, StepBackwardPin, EncoderPin):
    global earLeft_millis, earLeft_run, earLeft_direction
    earLeft_run = 1
    earLeft_direction = 0
    earLeft_millis = int(round(time.time() * 1000)) - 1
    while earLeft_run == 1:
        if earLeft_direction == 1 and earLeft_millis >= int(round(time.time() * 1000)):
            GPIO.output(StepForwardPin, GPIO.HIGH)   # turn one way
            GPIO.output(StepBackwardPin, GPIO.LOW)
        elif earLeft_direction == -1 and earLeft_millis >= int(round(time.time() * 1000)):
            GPIO.output(StepForwardPin, GPIO.LOW)    # turn the other way
            GPIO.output(StepBackwardPin, GPIO.HIGH)
        else:
            GPIO.output(StepForwardPin, GPIO.LOW)    # stop: never both HIGH
            GPIO.output(StepBackwardPin, GPIO.LOW)
            earLeft_direction = 0
        time.sleep(0.01)  # avoid overflow
The threads are started in the main() function when the main.py file loads:
# Set ear-motor pins
thread_earLeft = threading.Thread(target=t_earLeft, args=(StepPinLeftForward, StepPinLeftBackward, EncoderPinLeft))
thread_earRight = threading.Thread(target=t_earRight, args=(StepPinRightForward, StepPinRightBackward, EncoderPinRight))
thread_earLeft.start()
thread_earRight.start()
The actual direction, and how long to keep turning in that direction, is triggered from the actions:
# Move Left Ear
earLeft_millis = int(round(time.time() * 1000)) + 1600
earLeft_direction = -1
# Move Right Ear
earRight_millis = earLeft_millis
earRight_direction = -1
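The millis arithmetic above encodes "turn for 1.6 seconds": the thread keeps driving the motor while the deadline is still in the future. Pulled out as pure helpers (the names are mine), the pattern looks like this:

```python
import time

def now_ms():
    """Current time in milliseconds, the unit used throughout main.py."""
    return int(round(time.time() * 1000))

def turn_deadline(duration_ms):
    """Deadline for the ear threads: keep turning while now_ms() is below it."""
    return now_ms() + duration_ms
```

Setting earLeft_millis = turn_deadline(1600) together with a direction of 1 or -1 is all an action has to do; the thread handles the rest.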
See the full main.py code at the bottom of this project page.
Ear encoders
The movement of the ears is registered by 2 encoders. There is a wheel with long teeth moving through an IR reader. Each tooth/gap triggers a HIGH/LOW on the reader, so we can see it turning. To know when the ears are UP, a (set of) teeth is missing, meaning we will see a longer LOW at that point. For reading we could use the normal GPIO pins with an interrupt, or use analog signals.
For reading the ear encoders and the volume control (more on that later), I'll use an ADC (Analog to Digital Converter) to see the analog data. I've used an ADS1115 ADC for that; it can read up to 4 channels with 16-bit precision. It will be connected to the I2C header we soldered on in a previous step.
At the startup of main.py, I want to let the ears go to a known base position. That way there is no doubt about where the ears are after a hiccup. The encoders can help us with that, as they tell us when the ears reach the top.
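"Ears up" therefore shows as a LOW reading that lasted noticeably longer than a normal tooth/gap transition. The detection boils down to one comparison; the helper below uses the thresholds from my main.py code (140 ms and an ADC value of 2000), but the function name and packaging are my own:

```python
def ear_is_up(gap_ms, adc_value, gap_threshold_ms=140, low_threshold=2000):
    """Detect the missing-teeth notch that marks the 'ears up' position.

    gap_ms:    milliseconds since the last HIGH/LOW transition.
    adc_value: current ADS1115 reading; below low_threshold counts as LOW.
    """
    return gap_ms > gap_threshold_ms and adc_value < low_threshold
```

Tune the thresholds for your own encoder wheel and motor speed.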
Adafruit made a module for talking to the ADC. Let's install it with their instructions:
sudo apt-get update
sudo apt-get install build-essential python-dev python-smbus python-pip
sudo pip install adafruit-ads1x15
Connect to these pins (see I2C section on the Voice HAT): V = PWR (3.3V), G = GND, SCL = SCL, SDA = SDA
To allow I2C connections, click the Raspberry icon top-left, Preferences > Raspberry Pi Configuration. Open the Interfaces tab, and behind I2C select Enable. In the terminal window, enter this to see if you have a correct connection:
i2cdetect -y 1
Result should look like:
pi@raspberrypi:~ $ i2cdetect -y 1
0 1 2 3 4 5 6 7 8 9 a b c d e f
00: -- -- -- -- -- -- -- -- -- -- -- -- --
10: -- -- -- -- -- -- -- -- -- -- -- -- -- -- -- --
20: -- -- -- -- -- -- -- -- -- -- -- -- -- -- -- --
30: -- -- -- -- -- -- -- -- -- -- -- -- -- -- -- --
40: -- -- -- -- -- -- -- -- 48 -- -- -- -- -- -- --
50: -- -- -- -- -- -- -- -- -- -- -- -- -- -- -- --
60: -- -- -- -- -- -- -- -- -- -- -- -- -- -- -- --
70: -- -- -- -- -- -- -- --
That means our ADC is available at I2C address 0x48.
Now connect the encoders to the ADC: Orange = to A0/A1 on the ADC, Yellow = PWR (3.3V), Green = GND, Blue = 100 Ω resistor to PWR (3.3V), and add a 10K Ω resistor between Green and Orange (once I receive my new encoders, I'll solder the resistors directly to the small PCBs; then I only have 3 wires to connect).
As we now have the module installed, we can use it in our code (the default code looks at address 0x48):
# Import the ADS1x15 module.
import Adafruit_ADS1x15
adc = Adafruit_ADS1x15.ADS1115()
GAIN = 1
For example, I could run this thread to constantly watch what happens at the encoders:
# initial state
millisLeft = int(round(time.time() * 1000))
valuesLeft = 0
millisRight = int(round(time.time() * 1000))
valuesRight = 0

def t_readEncoders():  # function wrapper and loop restored from the flattened snippet
    global millisLeft, millisRight, valuesLeft, valuesRight
    millisLeft = int(round(time.time() * 1000))
    millisRight = int(round(time.time() * 1000))
    while True:
        millisLeftTemp = int(round(time.time() * 1000))
        valuesLeftTemp = adc.read_adc(EncoderPinLeft, gain=GAIN)
        print("Left :", millisLeftTemp - millisLeft, valuesLeftTemp)
        if ((valuesLeftTemp > 2000 and valuesLeft < 2000) or (valuesLeftTemp < 2000 and valuesLeft > 2000)):
            if (((millisLeftTemp - millisLeft) > 140) and (valuesLeftTemp < 2000)):
                print("Left UP:", millisLeftTemp - millisLeft, valuesLeftTemp)
            millisLeft = millisLeftTemp
        valuesLeft = valuesLeftTemp
        millisRightTemp = int(round(time.time() * 1000))
        valuesRightTemp = adc.read_adc(EncoderPinRight, gain=GAIN)
        #print("Right:", millisRightTemp - millisRight, valuesRightTemp)
        if ((valuesRightTemp > 2000 and valuesRight < 2000) or (valuesRightTemp < 2000 and valuesRight > 2000)):
            if (((millisRightTemp - millisRight) > 140) and (valuesRightTemp < 2000)):
                print("Right UP:", millisRightTemp - millisRight, valuesRightTemp)
            millisRight = millisRightTemp
        valuesRight = valuesRightTemp
The detailed implementation still has to wait, because I fried one of the encoders by putting 5V on it instead of 3.3V... I've ordered new ones, but they are exotic enough to take some time for delivery.
See the full main.py code at the bottom of this project page.
Scroll wheel
At the backside of the rabbit (its tail?) there is a scroll wheel. This used to be the volume control. We can connect it to the ADC to read its values: Brown = GND, Orange = PWR (3.3V), Red = A3 on the ADC.
As we use the same setup as for the ear encoders, there is nothing to install for this.
In the header of main.py, import these 2 modules:
import math
import subprocess
And create a function that we will run as a thread:
valuesScrollWheel = 0

def _ReadScrollWheel():  # loop and indentation restored from the flattened snippet
    global valuesScrollWheel
    while True:
        valuesScrollWheelTemp = adc.read_adc(3, gain=GAIN)
        # only act on a significant change in the reading
        if ((valuesScrollWheelTemp - 500 > valuesScrollWheel) or (valuesScrollWheelTemp + 500 < valuesScrollWheel)):
            vol = math.floor(valuesScrollWheelTemp / 250)
            vol = max(0, min(100, vol))
            valuesScrollWheel = valuesScrollWheelTemp
            subprocess.call('amixer -q set Master %d%%' % vol, shell=True)
        time.sleep(0.1)  # avoid overflow
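The volume mapping in that thread is worth a closer look: one volume step per 250 ADC counts, clamped to the 0-100 range that amixer accepts. As a pure function (my naming) it can be tested without any hardware attached:

```python
import math

def adc_to_volume(adc_value):
    """Map a raw ADS1115 reading to an amixer volume percentage (0-100).

    Same arithmetic as the scroll-wheel thread: one step per 250 counts,
    clamped so out-of-range readings can never produce an invalid volume.
    """
    return max(0, min(100, math.floor(adc_value / 250)))
```

At gain 1 the ADS1115 tops out around 26000 counts for a 3.3V input, which is why the clamp to 100 matters at the top of the wheel's travel.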
Now start the thread in the main() function:
thread_scrollWheel = threading.Thread(target=_ReadScrollWheel)
thread_scrollWheel.start()
To Do: Proximity sensor
To trigger the Voice HAT, you can use a button (like the one from the kit that we used on the rabbit's head), or you can create your own triggers. It sounds cool to use a proximity and gesture sensor (APDS-9960) for this trigger function. Sparkfun has one, and I ordered a clone of it.
But later on I found out that there is no out-of-the-box Python implementation for the APDS-9960 yet. There is a module for a slightly older/simpler version of the chip, so I'll need to extend that. For now I've parked this idea on the To Do list, but it will be added later on for sure!
To Do: RFID
When taking apart the Nabaztag, I saved all components and didn't throw anything away (yet). So I still have the RFID reader, and this project gives some examples of how to hook it up over I2C, so maybe I'll connect it back in later on!