One of my favorite projects I've built is the robotic Lego Land Rover Defender from last year (2022). As with any big project, I had a laundry list of updates and modifications I wanted to implement in version 2 before I was even done with version 1. So naturally, version 2 of the Lego Defender robot became my big project for 2023.
One of the major changes at the top of that list was the mechanical structure of the drive train. I needed to rebuild the gear box to incorporate the DC motor, because the 3D printer belt I had added on top of the out-of-the-box build caused the entire frame of the car to flex badly.
I knew before I even finished building version 1 that I wouldn't have the heart to ever tear it back apart, so unfortunately for my wallet, I purchased a second Lego Technic 42110 kit to start version 2 with. My lab bench very quickly started to look like a Lego garage...
The majority of the remaining upgrades on the list for version 2 shared a common theme: loading/installing more hardware onto the Lego Defender chassis to make it mobile while untethered. The main issue was finding a power supply that could run both the Kria KR260 and the DC motors.
Obviously it needed to be a battery. However, finding a 12V battery that could meet the KR260's 3A requirement and not be too heavy for the Lego Defender to "haul" around quickly proved to be a near impossible task. This led me down an interesting path: if I couldn't get the hardware onto the Lego Defender, that meant I had to figure out how to get it off and make it wireless...
While the Kria KR260 looked super cool on version 1 as a "roof rack", I ultimately figured out how to move it off the Lego Defender and make its connection to the DC motors and camera completely wireless. This freed up my battery requirements so I could size down the motor and camera power supplies to be a manageable size for the Lego Defender to haul.
So even though version 2 looks more "normal" than version 1, I'm happy with the technical upgrades.
Motor Upgrades
In a twist of fate, the morning after I completed version 1, I opened my Amazon account to see none other than a motor conversion kit with full alternative build instructions for the 42110 Lego Defender Technic kit in my "You May Like..." purchase suggestions.
This kit consists of: three DC motors in Lego enclosures; a motor controller box containing a rechargeable lithium-ion battery (charged via a provided micro-USB cable) and a Bluetooth motor drive controller; the spare Lego pieces required to build the motors into the Lego Defender following the provided alternate build instructions; a Bluetooth remote controller; and a link to a mobile application that can also connect to the Bluetooth motor controller box.
So this kit solved three issues for me: it provided the new mechanical structure of the motors built into the Lego chassis, made the control of the motors wireless, and the power system for it all was nicely packaged into it.
Obviously I had to go with the super motor option over the regular motors. The light kit was sold out at the time, but I guess that'll be something to save for version 3? (haha)
I spent a weekend with four paws worth of trouble helping me build the new kit. It somehow felt more tedious than the last time. My "help" was also convinced the larger gears were new cat toys for her...
The alternate build instructions were a huge time saver. I knew it would have taken me far too long to design the drive train build from scratch, but going through the alternate build guide really highlighted what a massive undertaking that would have been.
I initially removed the backseat to make storing the motor controller box easier:
But I found it was a pain to plug in the USB cable to recharge it so I moved it to the roof later on:
I used the Bluetooth remote controller to make sure I had built the hardware without any gross mechanical errors:
I figured it was a pretty safe assumption that the Bluetooth packet structure wasn't anything too complex, nor was it likely to be encrypted. So I took advantage of the mobile application and ran a Bluetooth trace on my phone while sending all of the commands to drive forward/backward and turn left/right, capturing a sample set of Bluetooth packets.
I was able to see a pattern in the Bluetooth packets and reverse engineer which command was which (drive forward, drive backward, stop, turn left, turn right).
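To give a feel for what this kind of reverse engineering yields, here is a minimal sketch of a command-packet builder. Every byte value, opcode name, and the additive checksum scheme below are placeholders I've assumed for illustration; the real values come from the captured trace:

```python
# Hypothetical sketch of a reverse-engineered command layout. The actual
# header, port opcodes, channel byte, and checksum scheme come from the
# captured Bluetooth trace, so treat every value here as a placeholder.
HEADER = bytes([0x55, 0x0F])    # placeholder frame header
MODULE_CHANNEL = bytes([0x01])  # placeholder channel byte

PORT_AB = {"fwd": 0x01, "bwd": 0x02, "stop": 0x00}  # drive motor opcodes (assumed)
PORT_CD = {"lft": 0x01, "rgt": 0x02, "stop": 0x00}  # steering opcodes (assumed)

def build_drive_command(ab: str, cd: str) -> bytes:
    """Assemble a full command packet: header, both port opcodes, channel,
    then a simple 8-bit additive checksum over everything before it."""
    body = HEADER + bytes([PORT_AB[ab], PORT_CD[cd]]) + MODULE_CHANNEL
    checksum = sum(body) & 0xFF  # assumed checksum scheme
    return body + bytes([checksum])
```

With a builder like this, `build_drive_command("fwd", "stop")` would produce the drive-forward packet, and comparing its bytes against the traced packets is how you confirm the guessed layout.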
I then happily discovered that adding Bluetooth to the Ubuntu image for the Kria KR260 was as easy as plugging in a Bluetooth dongle listed as Raspberry Pi compatible.
And I was ultimately able to write some Python functions to act as the remote controller. See my previous write-up here for the details of how I did this.
Since the Bluetooth functionality needs to run as a coroutine, and is therefore asynchronous to the rest of the code running in the Edge Impulse classifier script, the whole main function needs to be declared as a coroutine function using the asyncio library.
So the main function in my classify.py script goes from this:
def main(argv):
    # Edge Impulse main classifier function

if __name__ == "__main__":
    main(sys.argv[1:])
To this:
import asyncio

async def main(argv):
    # Edge Impulse main classifier function

if __name__ == "__main__":
    asyncio.run(main(sys.argv[1:]))
Then each Bluetooth command is called with the await keyword to make sure the Bluetooth traffic completes, verifying the Lego Defender successfully receives/executes each motor drive command:
await client.write_gatt_char(bt_uuid, header + portAB_command_stop + portCD_command_stop + module_channel + command_chksum_stop)
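To show how that awaited write fits into a coroutine, here is a runnable sketch. The `StubClient` class is a stand-in I wrote for bleak's `BleakClient` so the control flow runs without the actual hardware, and the UUID and packet bytes are placeholders:

```python
import asyncio

class StubClient:
    """Stand-in for bleak's BleakClient so this sketch runs without hardware."""
    def __init__(self):
        self.sent = []

    async def write_gatt_char(self, uuid, data):
        # A real BleakClient would transmit over BLE here; we just record it.
        self.sent.append((uuid, bytes(data)))

BT_UUID = "0000ffe1-0000-1000-8000-00805f9b34fb"  # placeholder characteristic UUID

async def send_stop(client):
    stop_packet = bytes([0x55, 0x00, 0x00, 0x01, 0x56])  # placeholder stop command
    # Awaiting the write ensures the packet is handed off before the code
    # continues -- which is why main() itself has to be a coroutine.
    await client.write_gatt_char(BT_UUID, stop_packet)

client = StubClient()
asyncio.run(send_stop(client))
```

Swapping `StubClient` for a connected `BleakClient` gives the real remote-controller behavior.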
Because each Bluetooth command causes the main function to wait until it completes, I found that I had to be careful to only send motor drive commands when absolutely necessary. Otherwise, there was too much delay between the ML algorithm detecting (or not detecting) the pumpkin obstacle and the Defender reacting appropriately (i.e., it would steer around the pumpkin after it had already hit it).
Luckily, I found that the motor controller box would keep executing the last Bluetooth command it received indefinitely. I was thus able to write the obstacle avoidance logic, after the bounding boxes are returned from the EI model, to only send a new command when a change of direction was required:
elif "bounding_boxes" in res["result"].keys():
    if len(res["result"]["bounding_boxes"]) == 0:
        if defender_direction != "straight":
            model_number = await client.write_gatt_char(bt_uuid, header + portAB_command_fwd + portCD_command_fwd + module_channel + command_chksum_fwd)
            defender_direction = "straight"
    else:
        for bb in res["result"]["bounding_boxes"]:
            img = cv2.rectangle(img, (bb['x'], bb['y']), (bb['x'] + bb['width'], bb['y'] + bb['height']), (255, 0, 0), 1)
            if bb['label'] == "pumpkin":
                if bb['x'] < 45:
                    if defender_direction != "right":
                        model_number = await client.write_gatt_char(bt_uuid, header + portAB_command_fwd + portCD_command_rgt + module_channel + command_chksum_fwdrgt)
                        defender_direction = "right"
                elif bb['x'] > 44:
                    if defender_direction != "left":
                        model_number = await client.write_gatt_char(bt_uuid, header + portAB_command_fwd + portCD_command_lft + module_channel + command_chksum_fwdlft)
                        defender_direction = "left"
            else:
                model_number = 0
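The change-of-direction gating in that logic can be distilled into a pure function, which makes it easy to test on the bench without the robot. This is my simplified restatement of the decision rule (same 45-pixel threshold), not the actual script:

```python
def next_direction(current: str, boxes: list) -> tuple:
    """Decide the Defender's heading from the model's bounding boxes.

    Returns (new_direction, send_command). A command is only sent when the
    desired heading differs from the one the motor controller is already
    executing, since the controller repeats its last command indefinitely.
    """
    desired = current
    if not boxes:
        # Nothing detected: resume driving straight ahead.
        desired = "straight"
    else:
        for bb in boxes:
            if bb["label"] == "pumpkin":
                # Pumpkin on the left half of the frame -> steer right,
                # otherwise steer left (45 px is the frame-center threshold).
                desired = "right" if bb["x"] < 45 else "left"
    return desired, desired != current
```

For example, `next_direction("left", [{"label": "pumpkin", "x": 80}])` returns `("left", False)`: the heading is unchanged, so no redundant Bluetooth traffic goes out.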
Camera Source
With the motor control now fully wireless, the next task was the camera. Previously I had been using a Logitech USB webcam, so I initially looked for wireless video transmitters for webcams. However, I found the software for this option got wildly complex very fast, and I didn't have the time/interest for that.
I then remembered my previous experience with setting up the first security cameras for my home and how I had to set up their RTSP feeds in order to view them while I was away and not on my local network (this was before all the fancy plug-n-play cameras from the leading tech manufacturers). So I did some shopping around to see if anyone still sold a security camera with a user-accessible RTSP feed.
A security camera was also ideal for the Lego Defender because it's compact, lightweight, and typically has low power draw.
The other aspect of this was modifying my Edge Impulse runner script to pull frames from an RTSP feed instead of a USB webcam. Again, I've detailed all of this in a separate write-up since this write-up would be a novel by now if I included all the details here (and probably still will be).
Initially I put the Lego Defender up on blocks and used the power cord with the Tapo security camera to do some initial prove-in on my code.
I then built a battery power supply for the Tapo using 9V batteries. I purchased 9V battery holders with flying leads and soldered two in parallel to meet the camera's 0.6A requirement. I then soldered that to a right-angle barrel jack pigtail that fit the Tapo camera, not forgetting my current limiting resistor. From the camera's power input requirements (9V, 0.6A), I calculated the resistor value and power rating as 15Ω/0.54W:
R = V/I = 9V / 0.6A = 15Ω
P = IV = 0.6A * 9V = 0.54W
I then simply set the videoCaptureDeviceId in the EI classifier.py script to the RTSP feed URL:
videoCaptureDeviceId = "rtsp://<username>:<password>@192.168.1.100/stream2"
camera = cv2.VideoCapture(videoCaptureDeviceId)
I did find that I needed to set the Tapo camera to use its lowest resolution stream, 720p, which is what the "stream2" at the end of the URL selects. The higher resolution streams had too much lag over WiFi for the Lego Defender to react to obstacles in time.
I also found that, to cut down on latency, I needed to set the frame buffer to zero so the EI model was always being fed the latest frame from the camera:
camera.set(cv2.CAP_PROP_BUFFERSIZE, 0)
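Putting those pieces together, the capture setup amounts to the URL plus the buffer setting. The `rtsp_url` helper below is hypothetical (the script just hardcodes the string), and the credentials/IP are placeholders:

```python
def rtsp_url(user: str, password: str, host: str, stream: str = "stream2") -> str:
    """Build a Tapo-style RTSP URL; "stream2" selects the lower-resolution
    feed. (Hypothetical helper -- the actual script hardcodes the string.)"""
    return f"rtsp://{user}:{password}@{host}/{stream}"

# With OpenCV installed, the feed then opens like any other capture source:
#   camera = cv2.VideoCapture(rtsp_url("user", "pass", "192.168.1.100"))
#   camera.set(cv2.CAP_PROP_BUFFERSIZE, 0)  # keep only the newest frame
```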
Ultimately, there was still some lag of a few seconds, but it was usable. I'm definitely going to need to rewrite this in C++ to optimize it any further, though.
Edge Impulse Acceleration
Unfortunately, I didn't get the Edge Impulse hardware acceleration fully implemented like I wanted quite yet. I made quite a bit of progress, though, in getting the C++ library of my pumpkin obstacle identifier ML model pulled into my Vitis workspace for the KR260 and the accelerated application project created with the OpenCV libraries. See my project write-up here for those details.
Making a clean break in the data pipeline between the regular C code and the accelerated kernel is still giving me some heartburn. But this will just be an excuse to make version 3 of the Lego Defender!
Unexpected Bonus Round of Lego Mechanical Issues
While proving in my new classifier.py script and discovering all the lovely things like needing to use the lower resolution video stream from the Tapo camera, the drive train of the Lego Defender started locking up on me.
After spending quite some time manually turning the wheels and watching each gear in the drive train try to turn, I was able to narrow it down to the fake Lego "motor" itself.
Part of what is supposed to make the Defender Lego Technic kit look cool is a third gear tied to the drive train that makes a set of six inline cylinders go up and down as the wheels turn. I eventually saw that this gear was the one stopping first, ultimately causing the rest to stall.
For some reason, the cylinders weren't simply going up and down; instead, they were blocking the shaft that was supposed to move them from turning. The particular Lego pieces used for the cylinders were very close in length to pieces used in the old gear box, so there is a chance I used pieces that were slightly too long.
At this point, I was beyond caring about cool factor and wanted functionality so I just removed the cylinders since they were just for show.
Final Design
I set up a makeshift track in the lab for the Lego Defender, and it was pretty awesome to see it actually driving and not just stuck up on blocks like version 1.
Just so I didn't run down a bunch of 9V batteries, I still used the power cable for the security camera for my initial test runs since it was so long:
Another added bonus of having the Kria KR260 board stationary on my desk was being able to plug it into a monitor to see the video stream from the classifier.py script and verify the EI model was detecting the obstacles appropriately.
I've attached my classifier.py script below. I obviously still have quite a bit of development before getting to a full-blown obstacle avoidance script, but I'm happy with the progress so far!