Things used in this project

Hardware components:
Nabaztag
No longer sold new, but you may find one on Craigslist or eBay.
×1
Google AIY Projects Kit
×1
Raspberry Pi 3 Model B
×1
Google AIY Voice HAT
Included as a free gift with The MagPi #57; you can also subscribe to be notified when they go on sale separately.
×1
SK6812 RGBW LED ring (32 LEDs)
×1
L9110S H-bridge Controller Board
×1
ADS1115 16-Bit Analog-to-Digital Converter
×1
APDS-9960 Proximity and Gesture Sensor
×1
Resistor 100 ohm
×2
Resistor 10k ohm
×2
Software apps and online services:
Google Voice Kit SD Image
Google Assistant SDK
Hand tools and fabrication machines:
Soldering iron (generic)

Schematics

Fritzing circuit diagram

Code

led.py (Python)
This is the code that lights up the status LED. I've extended it to drive the LED ring.
# Copyright 2017 Google Inc.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
#     http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.

'''Signal states on a LED'''

import itertools
import logging
import os
import threading
import time

import RPi.GPIO as GPIO

from neopixel import *
import math
# LED strip configuration:
LED_COUNT      = 32      # Number of LED pixels.
LED_PIN        = 12      # GPIO pin connected to the pixels (must support PWM!).
LED_FREQ_HZ    = 800000  # LED signal frequency in hertz (usually 800khz)
LED_DMA        = 5       # DMA channel to use for generating signal (try 5)
LED_BRIGHTNESS = 255     # Set to 0 for darkest and 255 for brightest
LED_INVERT     = False   # True to invert the signal (when using NPN transistor level shift)
LED_CHANNEL    = 0
LED_STRIP      = ws.SK6812_STRIP_GRBW    


logger = logging.getLogger('led')

CONFIG_DIR = os.getenv('XDG_CONFIG_HOME') or os.path.expanduser('~/.config')
CONFIG_FILES = [
    '/etc/status-led.ini',
    os.path.join(CONFIG_DIR, 'status-led.ini')
]


class LED:

    """Starts a background thread to show patterns with the LED."""

    def __init__(self, channel):
        self.animator = threading.Thread(target=self._animate)
        self.channel = channel
        self.iterator = None
        self.running = False
        self.state = None
        self.sleep = 0

        GPIO.setup(channel, GPIO.OUT)
        self.pwm = GPIO.PWM(channel, 100)

        self.iterator_Color = None
        # Create NeoPixel object with appropriate configuration.
        self.strip = Adafruit_NeoPixel(LED_COUNT, LED_PIN, LED_FREQ_HZ, LED_DMA, LED_INVERT, LED_BRIGHTNESS, LED_CHANNEL, LED_STRIP)
        # Initialize the library (must be called once before other functions).
        self.strip.begin()


    def start(self):
        self.pwm.start(0)  # off by default
        self.running = True

        # Led Ring = CLEAR
        for i in range(self.strip.numPixels()):
            self.strip.setPixelColor(i, Color(0, 0, 0, 0))
        self.strip.show()

        self.animator.start()

    def stop(self):
        self.running = False
        self.animator.join()
        self.pwm.stop()
        GPIO.output(self.channel, GPIO.LOW)

    def set_state(self, state):
        self.state = state

    def _animate(self):
        # TODO(ensonic): refactor or add justification
        # pylint: disable=too-many-branches
        while self.running:
            if self.state:

                if self.state == 'listening':
                    self.iterator = None
                    self.pwm.ChangeDutyCycle(100)
                    self.sleep = 0.05

                    # Led Ring = ORANGE scrolling
                    ledcolors = []
                    for i in range(self.strip.numPixels()):
                        ledcolors.append(Color(255, 50, 0, 0))
                        for j in range(self.strip.numPixels()-2):
                            ledcolors.append(Color(100, 20, 0, 0))
                    self.iterator_Color = itertools.cycle(ledcolors)

                elif self.state == 'power-off':
                    self.iterator = None
                    self.sleep = 0.0
                    self.pwm.ChangeDutyCycle(0)

                    # Led Ring = CLEAR
                    self.iterator_Color = None
                    for i in range(self.strip.numPixels()):
                        self.strip.setPixelColor(i, Color(0, 0, 0, 0))
                    self.strip.show()

                elif self.state == 'starting':
                    self.iterator = itertools.cycle(
                        itertools.chain(range(0, 100, 10), range(100, 0, -10)))
                    self.sleep = 0.2

                    # Led Ring = WHITE scrolling
                    """Movie theater light style chaser animation."""
                    ledcolors = []
                    for i in range(self.strip.numPixels()):
                        ledcolors.append(Color(0, 0, 0, 100))
                        for j in range(self.strip.numPixels()-2):
                            ledcolors.append(Color(0, 0, 0, 20))
                    self.iterator_Color = itertools.cycle(ledcolors)
                        

                elif self.state == 'thinking':
                    self.iterator = itertools.cycle(
                        itertools.chain(range(0, 100, 5), range(100, 0, -5)))
                    self.sleep = 0.05
                    # Led Ring = GREEN
                    self.iterator_Color = None
                    for i in range(self.strip.numPixels()):
                        self.strip.setPixelColor(i, Color(0, 255, 0, 0))
                    self.strip.show()

                elif self.state == 'stopping':
                    self.iterator = itertools.cycle(
                        itertools.chain(range(0, 100, 5), range(100, 0, -5)))
                    self.sleep = 0.05

                    # Led Ring = CLEAR
                    self.iterator_Color = None
                    for i in range(self.strip.numPixels()):
                        self.strip.setPixelColor(i, Color(0, 0, 0, 0))
                    self.strip.show()

                elif self.state == 'ready':
                    self.iterator = itertools.cycle(
                        itertools.chain([0] * 300, range(0, 30, 1), range(30, 0, -1))) # 3 times slower
                    self.sleep = 0.01 # 5 times faster (was 0.05)

                    """Rainbow movie theater light style chaser animation."""
                    ledcolors = []
                    for j in range(256):
                        for i in range(self.strip.numPixels()):
                            ledcolors.append(wheel(math.floor(((i * 255 / self.strip.numPixels()) + j)) & 255))
                    self.iterator_Color = itertools.cycle(ledcolors)
                        


                elif self.state == 'error':
                    self.iterator = itertools.cycle([0, 100] * 3 + [0, 0])
                    self.sleep = 0.25

                    """Movie theater light style chaser animation."""
                    ledcolors = []
                    for i in range(self.strip.numPixels()):
                        ledcolors.append(Color(255, 0, 0, 0))
                        for j in range(self.strip.numPixels()-2):
                            ledcolors.append(Color(100, 0, 0, 0))
                    self.iterator_Color = itertools.cycle(ledcolors)
                
                else:
                    logger.warning("unsupported state: %s", self.state)

                self.state = None
            if self.iterator or self.iterator_Color:
                if self.iterator:
                    self.pwm.ChangeDutyCycle(next(self.iterator))
                
                if self.iterator_Color:
                    for i in range(self.strip.numPixels()):
                        self.strip.setPixelColor(i, next(self.iterator_Color))
                    self.strip.show()

                time.sleep(self.sleep)
            else:
                time.sleep(1)

def wheel(pos):
    """Generate rainbow colors across 0-255 positions."""
    if pos < 85:
        return Color(pos * 3, 255 - pos * 3, 0)
    elif pos < 170:
        pos -= 85
        return Color(255 - pos * 3, 0, pos * 3)
    else:
        pos -= 170
        return Color(0, pos * 3, 255 - pos * 3)


def main():
    logging.basicConfig(
        level=logging.INFO,
        format="[%(asctime)s] %(levelname)s:%(name)s:%(message)s"
    )

    import configargparse
    parser = configargparse.ArgParser(
        default_config_files=CONFIG_FILES,
        description="Status LED daemon")
    parser.add_argument('-G', '--gpio-pin', default=25, type=int,
                        help='GPIO pin for the LED (default: 25)')
    args = parser.parse_args()

    led = None
    state_map = {
        "starting": "starting",
        "ready":    "ready",
        "listening": "listening",
        "thinking": "thinking",
        "stopping": "stopping",
        "power-off": "power-off",
        "error":    "error",
    }
    try:
        GPIO.setmode(GPIO.BCM)

        led = LED(args.gpio_pin)
        led.start()
        while True:
            try:
                state = input()
                if not state:
                    continue
                if state not in state_map:
                    logger.warning("unsupported state: %s, must be one of: %s",
                                   state, ",".join(state_map.keys()))
                    continue

                led.set_state(state_map[state])
            except EOFError:
                time.sleep(1)
    except KeyboardInterrupt:
        pass
    finally:
        led.stop()
        GPIO.cleanup()

if __name__ == '__main__':
    main()
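The scrolling animations above work by precomputing a long color list and cycling through it with itertools.cycle, consuming one color per pixel on every animation tick. Because the pattern repeats every NUM_PIXELS - 1 entries while a frame consumes NUM_PIXELS, the bright spot shifts by one position each frame. A minimal, hardware-free sketch of the same idea (plain RGB tuples stand in for the library's Color values; NUM_PIXELS is reduced for illustration):

```python
import itertools

NUM_PIXELS = 8  # a small ring for illustration; led.py uses 32

# Mirror the 'listening' pattern above: one bright color followed by
# (NUM_PIXELS - 2) dim copies, appended once per pixel.
colors = []
for _ in range(NUM_PIXELS):
    colors.append((255, 50, 0))       # bright orange
    for _ in range(NUM_PIXELS - 2):
        colors.append((100, 20, 0))   # dim orange

cycle = itertools.cycle(colors)

def tick():
    """Return the next frame: one color per pixel, advancing the cycle."""
    return [next(cycle) for _ in range(NUM_PIXELS)]

frame1 = tick()
frame2 = tick()
# The bright entry lands in a different position in each successive frame,
# which reads as a scrolling highlight when the frames are written to the ring.
```

In the real code, _animate plays the same role as tick(): it pulls one Color per pixel from iterator_Color, writes it with setPixelColor, and calls show() once per sleep interval.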
main.py (Python)
This is the code that runs the complete voice recognition tool. I've extended it with functionality for the ear motors, encoders, and scroll wheel.
#!/usr/bin/env python3
# Copyright 2017 Google Inc.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
#     http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.

"""Main recognizer loop: wait for a trigger then perform and handle
recognition."""

import logging
import os
import sys
import threading
import time

import configargparse
from googlesamples.assistant import auth_helpers

import audio
import action
import i18n
import speech
import tts


# START ADDED
# For the EarMotors:
import RPi.GPIO as GPIO

# Use BCM GPIO references instead of physical pin numbers
GPIO.setmode(GPIO.BCM)

# Define GPIO signals to use
StepPinLeftForward=26
StepPinLeftBackward=6
StepPinRightForward=13
StepPinRightBackward=5

# For the EarEncoders and ScrollWheel
import math
import subprocess

# Import the ADS1x15 module.
import Adafruit_ADS1x15
adc = Adafruit_ADS1x15.ADS1115()
GAIN = 1

# Define the ADC pins
EncoderPinRight=0
EncoderPinLeft=1
ScrollWheelPin=3
# END ADDED

# =============================================================================
#
# Hey, Makers!
#
# Are you looking for actor.add_keyword? Do you want to add a new command?
# You need to edit src/action.py. Check out the instructions at:
# https://aiyprojects.withgoogle.com/voice/#makers-guide-3-3--create-a-new-voice-command-or-action
#
# =============================================================================

logging.basicConfig(
    level=logging.INFO,
    format="[%(asctime)s] %(levelname)s:%(name)s:%(message)s"
)
logger = logging.getLogger('main')

CACHE_DIR = os.getenv('XDG_CACHE_HOME') or os.path.expanduser('~/.cache')
VR_CACHE_DIR = os.path.join(CACHE_DIR, 'voice-recognizer')

CONFIG_DIR = os.getenv('XDG_CONFIG_HOME') or os.path.expanduser('~/.config')
CONFIG_FILES = [
    '/etc/voice-recognizer.ini',
    os.path.join(CONFIG_DIR, 'voice-recognizer.ini')
]

# Legacy fallback: old locations of secrets/credentials.
OLD_CLIENT_SECRETS = os.path.expanduser('~/client_secrets.json')
OLD_SERVICE_CREDENTIALS = os.path.expanduser('~/credentials.json')

ASSISTANT_CREDENTIALS = os.path.join(VR_CACHE_DIR, 'assistant_credentials.json')
ASSISTANT_OAUTH_SCOPE = 'https://www.googleapis.com/auth/assistant-sdk-prototype'

PID_FILE = '/run/user/%d/voice-recognizer.pid' % os.getuid()


def try_to_get_credentials(client_secrets):
    """Try to get credentials, or print an error and quit on failure."""

    if os.path.exists(ASSISTANT_CREDENTIALS):
        return auth_helpers.load_credentials(
            ASSISTANT_CREDENTIALS, scopes=[ASSISTANT_OAUTH_SCOPE])

    if not os.path.exists(VR_CACHE_DIR):
        os.mkdir(VR_CACHE_DIR)

    if not os.path.exists(client_secrets) and os.path.exists(OLD_CLIENT_SECRETS):
        client_secrets = OLD_CLIENT_SECRETS

    if not os.path.exists(client_secrets):
        print('You need client secrets to use the Assistant API.')
        print('Follow these instructions:')
        print('    https://developers.google.com/api-client-library/python/auth/installed-app'
              '#creatingcred')
        print('and put the file at', client_secrets)
        sys.exit(1)

    if not os.getenv('DISPLAY') and not sys.stdout.isatty():
        print("""
To use the Assistant API, manually start the application from the dev terminal.
See the "Turn on the Assistant API" section of the Voice Recognizer
User's Guide for more info.""")
        sys.exit(1)

    credentials = auth_helpers.credentials_flow_interactive(
        client_secrets, scopes=[ASSISTANT_OAUTH_SCOPE])
    auth_helpers.save_credentials(ASSISTANT_CREDENTIALS, credentials)
    logging.info('OAuth credentials initialized: %s', ASSISTANT_CREDENTIALS)
    return credentials


def create_pid_file(file_name):
    with open(file_name, 'w') as pid_file:
        pid_file.write("%d" % os.getpid())


def main():
    parser = configargparse.ArgParser(
        default_config_files=CONFIG_FILES,
        description="Act on voice commands using Google's speech recognition")
    parser.add_argument('-I', '--input-device', default='default',
                        help='Name of the audio input device')
    parser.add_argument('-O', '--output-device', default='default',
                        help='Name of the audio output device')
    parser.add_argument('-T', '--trigger', default='gpio',
                        help='Trigger to use {\'clap\', \'gpio\'}')
    parser.add_argument('--cloud-speech', action='store_true',
                        help='Use the Cloud Speech API instead of the Assistant API')
    parser.add_argument('-L', '--language', default='en-US',
                        help='Language code to use for speech (default: en-US)')
    parser.add_argument('-l', '--led-fifo', default='/tmp/status-led',
                        help='Status led control fifo')
    parser.add_argument('-p', '--pid-file', default=PID_FILE,
                        help='File containing our process id for monitoring')
    parser.add_argument('--audio-logging', action='store_true',
                        help='Log all requests and responses to WAV files in /tmp')
    parser.add_argument('--assistant-secrets',
                        help='Path to client secrets for the Assistant API')
    parser.add_argument('--cloud-speech-secrets',
                        help='Path to service account credentials for the '
                        'Cloud Speech API')

    args = parser.parse_args()

    create_pid_file(args.pid_file)
    i18n.set_language_code(args.language, gettext_install=True)

    player = audio.Player(args.output_device)

    if args.cloud_speech:
        credentials_file = os.path.expanduser(args.cloud_speech_secrets)
        if not os.path.exists(credentials_file) and os.path.exists(OLD_SERVICE_CREDENTIALS):
            credentials_file = OLD_SERVICE_CREDENTIALS
        recognizer = speech.CloudSpeechRequest(credentials_file)
    else:
        credentials = try_to_get_credentials(
            os.path.expanduser(args.assistant_secrets))
        recognizer = speech.AssistantSpeechRequest(credentials)


    # START ADDED
    # Set ear-motor pins
    GPIO.setup(StepPinLeftForward, GPIO.OUT)
    GPIO.setup(StepPinLeftBackward, GPIO.OUT)
    GPIO.setup(StepPinRightForward, GPIO.OUT)
    GPIO.setup(StepPinRightBackward, GPIO.OUT)

    GPIO.output(StepPinLeftForward, GPIO.LOW)
    GPIO.output(StepPinLeftBackward, GPIO.LOW)
    GPIO.output(StepPinRightForward, GPIO.LOW)
    GPIO.output(StepPinRightBackward, GPIO.LOW)

    threading.Thread(target=t_earLeft, args=(StepPinLeftForward,StepPinLeftBackward,EncoderPinLeft)).start()
    threading.Thread(target=t_earRight, args=(StepPinRightForward,StepPinRightBackward,EncoderPinRight)).start()
    threading.Thread(target=_ReadScrollWheel).start()
    # END ADDED
    
    recorder = audio.Recorder(
        input_device=args.input_device, channels=1,
        bytes_per_sample=speech.AUDIO_SAMPLE_SIZE,
        sample_rate_hz=speech.AUDIO_SAMPLE_RATE_HZ)
    with recorder:
        do_recognition(args, recorder, recognizer, player)


def do_recognition(args, recorder, recognizer, player):
    """Configure and run the recognizer."""
    say = tts.create_say(player)

    actor = action.make_actor(say)

    if args.cloud_speech:
        action.add_commands_just_for_cloud_speech_api(actor, say)

    recognizer.add_phrases(actor)
    recognizer.set_audio_logging_enabled(args.audio_logging)

    if args.trigger == 'gpio':
        import triggers.gpio
        triggerer = triggers.gpio.GpioTrigger(channel=23)
        msg = 'Press the button on GPIO 23'
    elif args.trigger == 'clap':
        import triggers.clap
        triggerer = triggers.clap.ClapTrigger(recorder)
        msg = 'Clap your hands'
    else:
        logger.error("Unknown trigger '%s'", args.trigger)
        return

    mic_recognizer = SyncMicRecognizer(
        actor, recognizer, recorder, player, say, triggerer, led_fifo=args.led_fifo)

    with mic_recognizer:
        if sys.stdout.isatty():
            print(msg + ' then speak, or press Ctrl+C to quit...')

        # wait for KeyboardInterrupt
        while True:
            time.sleep(1)


class SyncMicRecognizer(object):

    """Detects triggers and runs recognition in a background thread.

    This is a context manager, so it will clean up the background thread if the
    main program is interrupted.
    """

    # pylint: disable=too-many-instance-attributes

    def __init__(self, actor, recognizer, recorder, player, say, triggerer, led_fifo):
        self.actor = actor
        self.player = player
        self.recognizer = recognizer
        self.recognizer.set_endpointer_cb(self.endpointer_cb)
        self.recorder = recorder
        self.say = say
        self.triggerer = triggerer
        self.triggerer.set_callback(self.recognize)

        self.running = False

        # START ADDED
        self.GotResponse = ''
        # END ADDED
                         
        if led_fifo and os.path.exists(led_fifo):
            self.led_fifo = led_fifo
        else:
            if led_fifo:
                logger.warning(
                    'File %s specified for --led-fifo does not exist.',
                    led_fifo)
            self.led_fifo = None
        self.recognizer_event = threading.Event()

    def __enter__(self):
        self.running = True
        threading.Thread(target=self._recognize).start()
        self.triggerer.start()
        self._status('ready')

    def __exit__(self, *args):
        self.running = False
        self.recognizer_event.set()

        self.recognizer.end_audio()

    def _status(self, status):
        if self.led_fifo:
            with open(self.led_fifo, 'w') as led:
                led.write(status + '\n')
        logger.info('%s...', status)

    def recognize(self):
        # START ADDED
        global earLeft_direction, earLeft_millis, earRight_millis, earRight_direction
        # END ADDED
        if self.recognizer_event.is_set():
            # Duplicate trigger (eg multiple button presses)
            return

        # START ADDED
        if self.GotResponse != 'recognizing':
            # Move Left Ear
            earLeft_millis = int(round(time.time() * 1000)) + 1600
            earLeft_direction = 1
            # Move Right Ear
            earRight_millis = earLeft_millis
            earRight_direction = 1
        # END ADDED

        self.recognizer.reset()
        self.recorder.add_processor(self.recognizer)
        self._status('listening')
        # Tell recognizer to run
        self.recognizer_event.set()

    def endpointer_cb(self):
        self._status('thinking')
        self.recorder.del_processor(self.recognizer)

    def _recognize(self):
        # START ADDED
        global earLeft_direction, earLeft_millis, earRight_millis, earRight_direction
        # END ADDED
        while self.running:
            self.recognizer_event.wait()
            if not self.running:
                break

            logger.info('recognizing...')

            # START ADDED
            self.GotResponse = 'recognizing'
            # END ADDED

            try:
                self._handle_result(self.recognizer.do_request())
            except speech.Error:
                logger.exception('Unexpected error')
                self._status('error')
                self.say(_('Unexpected error. Try again or check the logs.'))

            self.recognizer_event.clear()

            # START ADDED/ADJUSTED
            if self.GotResponse == 'no command recognized':
                # Move Left Ear
                earLeft_millis = int(round(time.time() * 1000)) + 1600
                earLeft_direction = -1
                # Move Right Ear
                earRight_millis = earLeft_millis
                earRight_direction = -1
                self.triggerer.start()
                self._status('ready')
                self.GotResponse = ''
            else:
                self.recognize()
            # END ADDED

    def _handle_result(self, result):
        if result.transcript and self.actor.handle(result.transcript):
            self._status('talking')
            logger.info('handled local command: %s', result.transcript)
        elif result.response_audio:
            self._status('talking')
            self._play_assistant_response(result.response_audio)
        elif result.transcript:
            logger.warning('%r was not handled', result.transcript)
            self.say(_("I don't know how to answer that."))
            self.GotResponse = 'no command recognized'
        else:
            logger.warning('no command recognized')
            #self.say(_("Could you try that again?"))
            # START ADDED
            self.GotResponse = 'no command recognized'
            # END ADDED

    def _play_assistant_response(self, audio_bytes):
        bytes_per_sample = speech.AUDIO_SAMPLE_SIZE
        sample_rate_hz = speech.AUDIO_SAMPLE_RATE_HZ
        logger.info('Playing %.4f seconds of audio...',
                    len(audio_bytes) / (bytes_per_sample * sample_rate_hz))
        self.player.play_bytes(audio_bytes, sample_width=bytes_per_sample,
                               sample_rate=sample_rate_hz)


                               
                               
                               
def _ReadScrollWheel():
    global valuesScrollWheel
    valuesScrollWheel = 0

    while True:
        valuesScrollWheelTemp = adc.read_adc(ScrollWheelPin, gain=GAIN)
        valuesScrollWheelTemp = math.floor(valuesScrollWheelTemp / 2500) * 10
        valuesScrollWheelTemp = max(0, min(100, valuesScrollWheelTemp))

        if (valuesScrollWheelTemp != valuesScrollWheel):
            valuesScrollWheel = valuesScrollWheelTemp
            subprocess.call('amixer -q set Master %d%%' % valuesScrollWheelTemp, shell=True)

        # Limit the ADC polling rate; half a second is responsive enough for volume control
        time.sleep(0.5)


def t_earLeft(StepForwardPin, StepBackwardPin, EncoderPin):
    global earLeft_millis, earLeft_run, earLeft_direction
    earLeft_run = 1
    earLeft_direction = 0
    earLeft_millis = int(round(time.time() * 1000)) - 1
        
    while earLeft_run == 1:

        if earLeft_direction == 1 and earLeft_millis >= int(round(time.time() * 1000)):
            GPIO.output(StepForwardPin, GPIO.LOW)
            GPIO.output(StepBackwardPin, GPIO.HIGH)
            
        elif earLeft_direction == -1 and earLeft_millis >= int(round(time.time() * 1000)):
            GPIO.output(StepBackwardPin, GPIO.LOW)
            GPIO.output(StepForwardPin, GPIO.HIGH)
            
        else:
            GPIO.output(StepForwardPin, GPIO.LOW)
            GPIO.output(StepBackwardPin, GPIO.LOW)
            earLeft_direction = 0

        # limit the polling rate
        time.sleep(0.1)

def t_earRight(StepForwardPin, StepBackwardPin, EncoderPin):
    global earRight_millis, earRight_run, earRight_direction
    earRight_run = 1
    earRight_direction = 0
    earRight_millis = int(round(time.time() * 1000)) - 1
        
    while earRight_run == 1:

        if earRight_direction == 1 and earRight_millis > int(round(time.time() * 1000)):
            GPIO.output(StepForwardPin, GPIO.LOW)
            GPIO.output(StepBackwardPin, GPIO.HIGH)
            
        elif earRight_direction == -1 and earRight_millis > int(round(time.time() * 1000)):
            GPIO.output(StepBackwardPin, GPIO.LOW)
            GPIO.output(StepForwardPin, GPIO.HIGH)
            
        else:
            GPIO.output(StepForwardPin, GPIO.LOW)
            GPIO.output(StepBackwardPin, GPIO.LOW)
            earRight_direction = 0
    
        # limit the polling rate
        time.sleep(0.1)


if __name__ == '__main__':
    main()
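The scroll-wheel thread above quantizes each raw ADS1115 reading into 10% volume steps before shelling out to amixer, and only issues the amixer call when the quantized value actually changes. The mapping is plain arithmetic and can be checked without hardware; adc_to_volume is a hypothetical helper name for this sketch, not part of the project code:

```python
import math

def adc_to_volume(raw):
    """Quantize a raw ADS1115 reading into a 0-100 volume, in steps of 10,
    using the same arithmetic as _ReadScrollWheel above."""
    vol = math.floor(raw / 2500) * 10
    return max(0, min(100, vol))

# Readings below 2500 stay at 0%; anything from 25000 up clamps to 100%.
print(adc_to_volume(0), adc_to_volume(12500), adc_to_volume(32767))  # → 0 50 100
```

Comparing the quantized value against the previous one before calling amixer keeps the subprocess call rate low even though the ADC is polled continuously.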

Credits

Bastiaan Slee

Tinkerer with RaspberryPi, Python, EmonCMS, meter reading, OpenTherm, Weather Station, IoT, cloud-server, Pebble Time, HTML dashboards

