Dolly Grip Bot is an Alexa-controlled camera dolly that positions your camera phone with your voice, letting you shoot hands-free for demo videos, time-lapse photography, and smooth pans. It uses a Lego Mindstorms EV3 set, any modern Lego train set, and an Alexa device. The dolly rides on train tracks for precise camera control. You can direct it to run in a continuous loop or go to a preset, color-coded position, and you can also pitch the camera up or down a specific number of degrees.
Build the Dolly

See the attached images to build the dolly using Lego EV3 and train parts.
Build the Alexa Skill

Check out the Alexa skill and EV3Dev code from the repo: https://github.com/MkFoster/dolly-grip-bot.git
Create an Alexa developer account if you don't already have one, then create a new, custom, Alexa-hosted skill. Under the Build tab, copy and paste the contents of model.json into the JSON Editor. Save and build your model.
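For orientation, an Alexa interaction model is just JSON that maps spoken phrases to intents. The real definitions live in model.json; the intent names and sample utterances below are only placeholders to show the shape of the file (the invocation name matches the "open dolly bot" example later in this guide):

    {
      "interactionModel": {
        "languageModel": {
          "invocationName": "dolly bot",
          "intents": [
            { "name": "AMAZON.StopIntent", "samples": [] },
            {
              "name": "MoveForwardIntent",
              "slots": [],
              "samples": ["move forward", "go forward"]
            }
          ]
        }
      }
    }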
Click the Code tab and create (or copy and paste) all of the files from the project's skill-NodeJS/Lambda folder. Save and deploy your code.
Set Up the EV3Dev Code

Follow the EV3Dev instructions for deploying the Python code to your EV3 module. You will need to create a dolly-grip-bot.ini file containing your GadgetSettings in the same folder as dolly-grip-bot.py. I recommend the Visual Studio Code EV3Dev extension for connecting to your EV3 module and uploading your code.
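The Alexa Gadgets Toolkit reads that .ini file to pair the EV3 with your Alexa device. As a rough template, it looks like the following; the amazonId and alexaGadgetSecret come from registering a gadget in the Alexa developer console, and the capability namespace shown here is only an assumption that must match whatever the skill code actually sends:

    [GadgetSettings]
    amazonId = YOUR_GADGET_AMAZON_ID
    alexaGadgetSecret = YOUR_GADGET_SECRET

    [GadgetCapabilities]
    Custom.Mindstorms.Gadget = 1.0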
Run It!

Right-click your dolly-grip-bot.py file and click "Run" to get things started. It will take a while before the gadget is ready for commands.
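Under the hood, dolly-grip-bot.py is an Alexa Gadgets Toolkit (AGT) gadget: on startup it pairs with your Alexa device over Bluetooth (which is why it takes a while) and then waits for custom directives sent by the skill. The sketch below is not the project's actual code, just a minimal illustration of that pattern, assuming a Custom.Mindstorms.Gadget namespace, made-up command names, and motors on ports A and B:

    #!/usr/bin/env python3
    # Minimal AGT gadget sketch (illustrative only; see dolly-grip-bot.py for the real logic).
    import json

    from agt import AlexaGadget
    from ev3dev2.motor import LargeMotor, MediumMotor, OUTPUT_A, OUTPUT_B, SpeedPercent


    class DollyGripGadget(AlexaGadget):
        def __init__(self):
            super().__init__()                 # loads dolly-grip-bot.ini for the gadget credentials
            self.drive = LargeMotor(OUTPUT_A)  # drives the dolly along the track (assumed port)
            self.tilt = MediumMotor(OUTPUT_B)  # pitches the camera up and down (assumed port)

        # AGT naming convention: a "control" directive in the Custom.Mindstorms.Gadget
        # namespace is delivered to on_custom_mindstorms_gadget_control().
        def on_custom_mindstorms_gadget_control(self, directive):
            payload = json.loads(directive.payload.decode("utf-8"))
            command = payload.get("command")
            if command == "move_forward":
                self.drive.on(SpeedPercent(30))   # run until told to stop
            elif command == "stop":
                self.drive.off()
            elif command == "pitch":
                self.tilt.on_for_degrees(SpeedPercent(20), payload.get("degrees", 0))


    if __name__ == "__main__":
        # main() registers the gadget, pairs over Bluetooth, and blocks waiting for directives.
        DollyGripGadget().main()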
Now invoke your skill and give it a try. For example, say "Alexa, open dolly bot," then "move forward," "stop," or "move to camera position blue" (assuming you have put down some colored plates for it to pick up).