The most harmful kind of trash in our oceans today is tiny pieces of plastic that are difficult to pick up and easily ingested by wildlife.
The LitterBug can help pick up some of the smaller pieces of trash that people may overlook during a beach cleanup. It'll work as a guardian of its beach, regularly cleaning up the area and monitoring wildlife and protected areas such as turtle nests.
Packed with a solar panel, it can always be at work or on the lookout.
We'll be using a night vision Pi Camera, motion detector, GPS breakout and Hologram Nova to implement security features such as monitoring when unauthorized activity is occurring, identifying perpetrators, logging images and GPS location, as well as notifying a human about such activity.
Servos, motors, and other sensors will be needed to create a "scoop" and sieve to collect only trash and dump out sand. We're imagining it to work sort of how WALL-E picked up trash.
We were so excited when we received our Donkey Car Kit in the mail, we got to building it straight away.
The official Donkey Car site says you can build it in ~2 hours, but it probably won't take that long.
The docs for putting the hardware together will get you most of the way there. The only part it doesn't cover is how to remove the top part of the RC car (since there are many different RC cars you can use to construct a Donkey Car).
We found this video really helpful to correctly remove the top for the desert monster model.
A quick note: when removing the cables connected to the RC car receiver, channel 1 is usually throttle and channel 2 is steering. Keep these in mind when you are connecting them to the servo shield: you want to connect the throttle cable to channel 0 and the steering cable to channel 1.
Continuing on the official docs, you'll learn how to SSH into the Pi and configure the software needed for controlling the RC. While we were calibrating the steering and throttle, we stuck to lower maximum values for testing.
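For reference, the calibration boils down to editing a few PWM values in config.py. The variable names below follow the donkeycar config, but the numbers are hypothetical placeholders, not our actual values; yours will come out of your own calibration run.

```python
# config.py -- hypothetical calibration values; yours will differ.
# Channel assignments on the servo shield (see the wiring note above).
STEERING_CHANNEL = 1
THROTTLE_CHANNEL = 0

# Conservative PWM limits for first tests; widen these once calibrated.
STEERING_LEFT_PWM = 420
STEERING_RIGHT_PWM = 360
THROTTLE_FORWARD_PWM = 400   # kept low for safety during early testing
THROTTLE_STOPPED_PWM = 370
THROTTLE_REVERSE_PWM = 340
```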
Since we want to add several new sensors, we designed and printed a new roll cage. It has the motion sensor embedded at the top, a slightly different mount for the night vision Pi camera, and some more screw holes for the various other sensors.
And here's the final result for the first iteration!
After the calibration and this case update, we took the Donkey Car for a spin!
We've printed some custom risers to lower the Donkey Car platform.
We've also picked up some beefier off-road tires from the local hobby shop for the challenges of driving in sand.
Next, we connected the GPS, temp/humidity, and PIR motion sensors. Since more than one sensor uses i2c, we added a small breadboard and glued it to the roll bar. Check out the fritzing diagram or download it in the attached files for the full wiring.
After wiring and assembly, we have:
We've experimented with using an iPad and a cell phone to control the Donkey Car via web application joystick, tilt and key bindings. However, it's worth considering pairing a PS3 controller with the Raspberry Pi for the responsiveness of bluetooth control over your Donkey Car. Use this guide to set up your PS3 controller.
We found that the initial max throttle for the joystick option was too low to get the rover moving. We increased the max throttle in the config.py file to get it going.
#JOYSTICK
JOYSTICK_MAX_THROTTLE = 1.0 #increased to 1.0 from 0.25
After finding this method was the best control, we gave it a test run outdoors. If you want to drive your Donkey Car outside of your home wifi network, you can set up the Raspberry Pi as an access point on a standalone network. Use this tutorial to set it up.
Although we'll need to practice more to get really good training data, it was very easy to avoid obstacles and drive over trash. We even began catching trash without trying when LitterBug got caught in some fishing line!
Next, we're going to start designing a training course specifically for recognizing trash and driving over it since there won't be any tracks for LitterBug to follow in the wild.
We've partitioned the yard into a training circuit with two raised planter beds full of herbs and succulents.
We drove simple consistent loops around these obstacles to get a basic "track" without the lanes. After a little practice, we began recording.
After getting a couple laps in, we want to transfer the data from the Pi onto a bigger computer for training. The original instructions aren't quite right at the moment, so follow these steps to get training:
You'll want to copy everything in the /home/pi/mycar directory from your pi onto your local computer. This directory has the files you'll need to get training.
scp -r pi@<PI-IP-ADDRESS>:~/mycar ~/
You'll need to install the same version of TensorFlow on your local machine, as well as some other helpful libraries. If your default Python is version 3.5 or later, you can call pip as in the commands below; otherwise, use pip3.
sudo apt-get install virtualenv build-essential python3-dev gfortran libhdf5-dev
pip install tensorflow==1.8.0  # use pip3 if python3 is not default; add --upgrade if tf is already installed
Next, you'll want to clone the donkeycar repo to install donkeycar libraries. In your home directory, run:
git clone https://github.com/wroscoe/donkey donkeycar
cd donkeycar
Once in the donkeycar repo, you'll want to replace donkeycar/parts/keras.py with the older keras.py script we've included below. Then install the donkeycar library with:
pip install -e . #again, use pip3 if python3 is not default
You should be ready to go! To begin training, run:
python ~/mycar/manage.py train --tub ~/mycar/tub --model ~/mycar/models/mypilot
If you run into any import errors or module not found errors, pip install the missing libraries. We were able to get a pilot model trained on ~15,000 images, but we're going to record more data, retrain, then test it.
We gathered a good first batch of training sessions, looping around the garden obstacle course consistently in various lighting.
We trained in various lighting conditions and even used night vision to represent the scene in various ways, in an effort to create robust feature detectors for our autonomous driver.
After collecting close to 20,000 samples, we trained our first autopilot! Using the command above, we trained locally and recorded some of our preliminary results. We found that the LitterBug did really well on long stretches and steering across a small dip in one of the corners.
LitterBug learned to turn at each end, but sometimes the timing was off, causing it to crash mostly around the turns. We think that, from our training at high speeds, it learned to rely on momentum to carry it around the bends.
Here we consider the distribution of steering angle and throttle speed for a collection of 150,000 training samples we collected over a couple days of driving around the tracks.
When we look at example images from training, we find that LitterBug typically takes the hard turns with fencing or an AC unit in the field of view. We expect our autopilot learns to associate these visual markers with the action of taking a hard turn.
We see a similar pattern when we take a random sample of images where LitterBug is making a hard reverse.
You can imagine how some additional context might help to disambiguate when a wall or fence is in the field of view. Sampling the training distribution helps us intuit the landmarks our classifier might associate with different steering angle and throttle configurations.
Consider the following scatter plot of throttle speed versus steering angle for steering angles within ±0.1 (nearly straight). We add some transparency to the points to show the concentration of large positive throttle speeds when the steering angle is straight.
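Pulling out the near-straight samples for a plot like this is a simple filter over the tub records. Here's a minimal sketch, assuming the donkeycar tub format of one JSON file per frame with "user/angle" and "user/throttle" keys; the helper names are our own.

```python
import json

def load_records(paths):
    """Load (angle, throttle) pairs from donkeycar tub record JSON files.

    Assumes each record stores the driver inputs under "user/angle" and
    "user/throttle", as in the donkeycar tub format.
    """
    records = []
    for p in paths:
        with open(p) as f:
            r = json.load(f)
        records.append((r["user/angle"], r["user/throttle"]))
    return records

def near_straight(records, tol=0.1):
    """Keep samples whose steering angle is within +/- tol of straight."""
    return [(a, t) for a, t in records if abs(a) <= tol]
```

The resulting (angle, throttle) pairs can go straight into a scatter plot with an alpha value to reveal the density of points.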
So far, LitterBug has performed well for its first model but it needs more training if we are to complete a loop around our vaguely defined track.
After combining roughly 250K training images from runs on the off-road patio track, we trained a model with an order of magnitude more data than our earlier attempts. As a result, the autopilot can complete loops nearly as well as a human.
Notice how LitterBug reorients to move clockwise on the track, as it has learned from many laps over multiple training sessions. LitterBug demonstrates obstacle avoidance on a vaguely defined off-road track, needing only an occasional assist.
We found LitterBug comes to a stop in front of large obstacles and even attempts to wiggle out. The tires of the 1/16th-scale RC car do not lock when the throttle is cut, allowing for the occasional roll in reverse. By default, the neural network autopilot uses a ReLU activation for the throttle output. We experimented with training using a linear activation to allow the output of both positive and negative throttle values.
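The activation change matters because of how the two functions treat negative values. A quick illustration (plain Python, not the actual Keras layers):

```python
def relu(x):
    # ReLU clips negative values to zero, so a throttle head with a ReLU
    # output can never command reverse.
    return max(0.0, x)

def linear(x):
    # A linear output passes negative values through unchanged, allowing
    # the model to predict reverse throttle.
    return x
```

With ReLU, relu(-0.3) collapses to 0.0 (stopped), while linear(-0.3) preserves the reverse command, which is why the linear head makes backing out of obstacles learnable.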
After a week of training in the desert heat, the car's ECU finally gave out. We replaced it with a higher-end component, which gave us more control over the drive modes and made driving and reversing much easier than before.
At this point, we were ready to upgrade to a 1/10th scale platform designed to crawl off road.
We took the design from the donkey car creators and scaled it up to fit the new larger size of the 1/10th scale RC car. Since our 3D printer couldn't print such a long bottom, we opted for a cheap clipboard instead. We removed the hardware and drilled holes above the four supports of the new car.
After attaching the new board, we tested how the new RC car drives. Right away, we had much more power and control (including improved reverse!).
Since the board is much bigger, we were able to secure all the extra sensors we added onto it. We swapped the temperature sensor for an accelerometer to collect richer driving data. For the roll cage, we printed two U-shaped roll bars, which made them faster and easier to print and used less material.
With the new basic body of the LitterBug complete, we tested how well the autopilot model we trained earlier transfers to this bigger RC.
It does relatively well. It seems that the model has learned the very simple behavior to turn right when it sees an endpoint on the track. It runs into the ends since the dimensions and steering of this car are different. We can use this model as a good starting point to continue training for trash pickup.
To train the LitterBug to identify trash, we chose a variety of outdoor scenes and focused on finding trash in its path. The camera can't see trash very well until it is somewhat close to it. The behavior we are looking for is for LitterBug to roam around until it locks onto a piece of trash, scoops it, and continues looking.
A simple mechanism to pick up trash is to scoop it! Since we're training our LitterBug to drive towards trash, scooping it seems like a good starting point. After thinking through it, we went for a bulldozer design that could lift up and down. We adapted this tiny bulldozer and scaled it up to fit our RC car. Then we attached the hinges to a new bumper we printed.
To control the scoop, we added an extra RC servo to the servo hat and modified the donkey car repo to control it. All the files we edited will be linked down below. We basically programmed the right thumb joystick on the PS3 controller to map to the servo on the car and modified the data collection scripts so that they would also record the scoop movement.
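The core of the joystick-to-servo mapping is converting the thumbstick axis value in [-1, 1] into a PWM pulse for the extra channel on the servo hat. A minimal sketch of that conversion, with made-up pulse limits (the real limits come from calibrating your scoop servo):

```python
def joystick_to_pulse(axis, min_pulse=290, max_pulse=490):
    """Map a PS3 thumbstick axis value in [-1, 1] to a servo PWM pulse.

    The pulse range here is a hypothetical placeholder; 12-bit pulse
    values in this ballpark are typical for PCA9685-style servo hats.
    """
    axis = max(-1.0, min(1.0, axis))  # clamp noisy joystick readings
    return int(min_pulse + (axis + 1.0) / 2.0 * (max_pulse - min_pulse))
```

Recording the scoop then just means logging this channel's value alongside the steering and throttle inputs in each tub record.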
After putting this all together, we had to take the new LitterBug out for a spin!
The scoop works pretty well so far! The scoop defaults to staying slightly up, and we lower it when it's near trash. The scoop has some small mesh holes to dump out the sand and keep the trash.
We're looking into adding a solar panel to extend training sessions and bring LitterBug closer to energy-efficient operation while working outdoors.