Connecting DC motors to the IOIO through the TB6612FNG, a motor driver breakout board

This entry is part 3 of 10 in the series Android Robot

The IOIO has the capability to output Pulse Width Modulated (PWM) signals on its pins. You can vary the duty cycle of PWM signals to control the speed of DC motors. It’s recommended that you don’t connect the IOIO directly to motors, but use a motor driver breakout board in between.
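As a back-of-the-envelope illustration (plain Python, not IOIO code), the average voltage the motor sees is just the supply voltage scaled by the duty cycle, which is why varying the duty cycle between 0 and 1 varies the speed:

```python
# Sketch of how the PWM duty cycle scales the effective motor voltage.
# Not IOIO code: just the underlying arithmetic.

def average_voltage(supply_volts, duty_cycle):
    """Average voltage seen by the motor for a given PWM duty cycle (0..1)."""
    if not 0.0 <= duty_cycle <= 1.0:
        raise ValueError("duty cycle must be between 0 and 1")
    return supply_volts * duty_cycle

# A 5 V supply at 50% duty cycle averages out to 2.5 V:
print(average_voltage(5.0, 0.5))  # 2.5
# Full duty cycle passes the full supply voltage:
print(average_voltage(5.0, 1.0))  # 5.0
```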

The board I’m using to power the DC motors on my Android robot is the TB6612FNG.

And this is the circuit diagram for connecting the IOIO to the TB6612FNG, modified from the book Getting Started With IOIO, by Simon Monk.

IOIO to TB6612FNG motor driver board circuit diagram, modified from the book Getting Started with IOIO (2011 Simon Monk)

The only modifications I’ve made to Simon’s original circuit diagram are:

1. I’m using a 9V power supply instead of his 6V.
2. Consequently, I’ve connected Vm, the motor power supply pin, to the 5V output on the IOIO board instead of directly to the batteries, because the motors I’m using are 6V DC motors.

So, the following are the steps to do this:

1. Solder a header onto the side of the TB6612FNG board with pins Vm to GND.

Solder header onto motor driver board

2. Solder 7 pins onto the IOIO board on outputs 39 to 45. Do this so that the pins stick up from the top of the IOIO board and the TB6612FNG can later be threaded onto them.

Solder 7 pins on IOIO outputs 39-45.

3. Solder the TB6612FNG to the IOIO using the pins from step 2, so that the outputs 39 to 45 on the IOIO are connected to inputs PWMA to GND on the TB6612FNG.

Solder the TB6612FNG to the IOIO pins

4. Finally, wire all connections as shown in the circuit diagram shown earlier.

IOIO connected to DC motors on robot through the TB6612FNG

Note that I’ve not soldered on a switch as yet in the image above.
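For reference, the wiring from steps 2 and 3 can be summarised as a simple pin map (the names follow the TB6612FNG breakout’s silkscreen, and the numbers match the pin constants in the Java code later in this post; the GND pin is wired separately):

```python
# IOIO pin -> TB6612FNG input, as wired in steps 2 and 3 above.
IOIO_TO_TB6612 = {
    39: "PWMA",
    40: "AIN2",
    41: "AIN1",
    42: "STBY",
    43: "BIN1",
    44: "BIN2",
    45: "PWMB",
}

for ioio_pin, driver_pin in sorted(IOIO_TO_TB6612.items()):
    print("IOIO pin %d -> TB6612FNG %s" % (ioio_pin, driver_pin))
```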

Also, the PowerTech power supply I was using in my previous post turned out to be woefully inadequate for powering the motors, as it could supply a mere 500 mA. So I used a 9V alkaline battery instead.

Watch the video of the robot motors being driven from the phone here:

If your motors turn in the wrong direction, just reverse the motor output polarities on the pins A1/A2 and B1/B2 of the TB6612FNG.
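You can also reverse a motor in software by swapping the IN1/IN2 logic levels. A sketch of the TB6612FNG’s per-channel truth table (from its datasheet; which way “CW” actually turns depends on how the motor wires are connected):

```python
# TB6612FNG per-channel behaviour as a function of IN1/IN2
# (with PWM high and STBY high), per the datasheet truth table.
def motor_mode(in1, in2):
    if in1 and in2:
        return "short brake"
    if in1 and not in2:
        return "CW"
    if not in1 and in2:
        return "CCW"
    return "stop (coast)"

# Swapping the two inputs reverses the direction:
print(motor_mode(True, False))  # CW
print(motor_mode(False, True))  # CCW
```

In the Java loop() below, AIN1=false/AIN2=true corresponds to the CCW row; swapping the two writes gives the CW row.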

And here’s the Java code to run the motors from an Android app.

/**
 * Modified from code from the book Getting Started with IOIO (2011 Simon Monk)
 * by Jay Chakravarty 2012
 *
 * Simple program to connect to two motors and drive them forward 
 * using Pulse Width Modulation (PWM)
 */

package com.ioiobook.rover;

import ioio.lib.api.DigitalOutput;
import ioio.lib.api.PwmOutput;
import ioio.lib.api.exception.ConnectionLostException;
//import ioio.lib.util.AbstractIOIOActivity;

import ioio.lib.util.BaseIOIOLooper;
import ioio.lib.util.IOIOLooper;

import ioio.lib.util.android.IOIOActivity;

import android.os.Bundle;

public class MainActivity extends IOIOActivity {

    private static final int AIN1_PIN = 41;
    private static final int AIN2_PIN = 40;
    private static final int PWMA_PIN = 39;
    private static final int PWMB_PIN = 45;
    private static final int BIN2_PIN = 44;
    private static final int BIN1_PIN = 43;
    private static final int STBY_PIN = 42;
    private static final int LED_PIN = 0;

    private static final int PWM_FREQ = 100;

    private float left_ = 0;
    private float right_ = 0;

    //private RoverControlView roverControlView_;

    /**
     * Called when the activity is first created. Here we normally initialize
     * our GUI.
     */
    @Override
    public void onCreate(Bundle savedInstanceState) {
        super.onCreate(savedInstanceState);

        //roverControlView_ = new RoverControlView(this);
        //roverControlView_.setBackgroundColor(Color.WHITE);
        //setContentView(roverControlView_);
    }

    /**
     * This is the thread on which all the IOIO activity happens. It will be run
     * every time the application is resumed and aborted when it is paused. The
     * method setup() will be called right after a connection with the IOIO has
     * been established (which might happen several times!). Then, loop() will
     * be called repetitively until the IOIO gets disconnected.
     */
    class Looper extends BaseIOIOLooper {
        private DigitalOutput ain1_;
        private DigitalOutput ain2_;
        private PwmOutput pwma_;
        private PwmOutput pwmb_;
        private PwmOutput led_;
        private DigitalOutput bin2_;
        private DigitalOutput bin1_;
        private DigitalOutput stby_;

        /**
         * Called every time a connection with IOIO has been established.
         * Typically used to open pins.
         * 
         * @throws ConnectionLostException
         *             When IOIO connection is lost.
         * 
         * @see ioio.lib.util.AbstractIOIOActivity.IOIOThread#setup()
         */
        @Override
        protected void setup() throws ConnectionLostException {
            ain1_ = ioio_.openDigitalOutput(AIN1_PIN);
            ain2_ = ioio_.openDigitalOutput(AIN2_PIN);
            pwma_ = ioio_.openPwmOutput(PWMA_PIN, PWM_FREQ);
            pwmb_ = ioio_.openPwmOutput(PWMB_PIN, PWM_FREQ);
            //pwma_ = ioio.openPwmOutput(new DigitalOutput.Spec(PWMA_PIN, OPEN_DRAIN), PWM_FREQ);
            
            
            bin2_ = ioio_.openDigitalOutput(BIN2_PIN);
            bin1_ = ioio_.openDigitalOutput(BIN1_PIN);
            stby_ = ioio_.openDigitalOutput(STBY_PIN);
            stby_.write(true); // On
            led_  = ioio_.openPwmOutput(LED_PIN,PWM_FREQ);
        }

        /**
         * Called repetitively while the IOIO is connected.
         * 
         * @throws ConnectionLostException
         *             When IOIO connection is lost.
         * @throws InterruptedException
         * 
         * @see ioio.lib.util.AbstractIOIOActivity.IOIOThread#loop()
         */
        
        @Override
        public void loop() throws ConnectionLostException {
            led_.setDutyCycle(0.9f);

            // Set the PWM duty cycle for the 1st motor.
            // Note: this must not exceed 1.0.
            // Vary it between 0 and 1 to achieve different speeds.
            pwma_.setDutyCycle(1.0f);
            ain1_.write(false);
            ain2_.write(true);

            // Set the PWM duty cycle for the 2nd motor.
            // Note: this must not exceed 1.0.
            // Vary it between 0 and 1 to achieve different speeds.
            pwmb_.setDutyCycle(1.0f);
            bin1_.write(false);
            bin2_.write(true);

            try {
                Thread.sleep(10);
            } catch (InterruptedException e) {
                Thread.currentThread().interrupt(); // preserve the interrupt flag
            }
        }

    }

    /**
     * A method to create our IOIO thread.
     * 
     * @see ioio.lib.util.AbstractIOIOActivity#createIOIOThread()
     */
    @Override
    protected IOIOLooper createIOIOLooper() {
        return new Looper();
    }

}
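The left_ and right_ fields in the code above are never used; they’re placeholders for per-motor speeds. One common way to fill them, sketched below as hypothetical plain Python (not from the book), is to mix a forward speed and a turn rate into left/right values, with the sign deciding what to write to AIN1/AIN2 and BIN1/BIN2:

```python
# Hypothetical differential-drive mixing: convert (speed, turn) in [-1, 1]
# into per-motor values clamped to [-1, 1]. The magnitude would become the
# PWM duty cycle; the sign would select the direction pins.
def mix(speed, turn):
    left = max(-1.0, min(1.0, speed + turn))
    right = max(-1.0, min(1.0, speed - turn))
    return left, right

print(mix(0.8, 0.0))  # (0.8, 0.8): straight ahead
print(mix(0.5, 0.5))  # (1.0, 0.0): pivot right
```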

Power to the IOIO

This entry is part 2 of 10 in the series Android Robot

The IOIO board (unlike the Arduino) does not come with headers attached, so you’ve got to solder them on yourself. Use Arduino headers (again available from Little Bird Electronics) and solder them onto the board. I’ve used an 8-pin header on the VIN side of the board and a 6-pin header on the GND side; these, by the way, are the two sides across which you can apply 5-15V DC (with a minimum of 500 mA current). I’m using a PowerTech battery eliminator with a variable (3-12V) output voltage. In my experience, these often don’t supply the stated voltage, so make sure you check the voltage across the output. In this particular case, I found that the 3V setting actually gives a 5.75-6V output, which is what I’m using.

3 V DC Powertech power supply actually 5.75 - 6 V


Use the 3rd hand tool to solder the headers onto the IOIO board. First, thread the header pins through the top of the IOIO board and tape them in place (so you can hold the board upside-down without the headers falling off). Then, hold the IOIO board upside-down using the 3rd hand and solder the underside of the board around the pins, using a minimum amount of solder on each pin.

Use the 3rd hand to hold the IOIO in place while soldering the headers to it


Do this for both the headers. Finally, check that there is no solder-bridge across successive pins (using the continuity tester on your multimeter). Now make the connection from the positive terminal of the power supply (6V DC) to the VIN socket on the header and from the negative terminal of the power supply to any one of the three GND sockets on the header on the other side of the board. If all is well, your power light on the IOIO should glow red.
Note that I used Arduino jumper cables, which slot directly into the headers. You don’t have to solder them on; they are pretty tight in the headers (they won’t come out unless you pull on them) and are perfect for testing. After you’ve finished soldering the headers onto the IOIO board, you can cut off the remainder of the pins protruding from the underside of the board just below the solder joints, using a pair of scissors.

Power LED on IOIO glows red if your connections are right

Now, if you attach your phone through the USB cable to the IOIO, it should start charging from it.

Setting Up Your First IOIO Project In Eclipse

This entry is part 1 of 10 in the series Android Robot

If you’ve already been programming Android apps with Eclipse, as I have, you would’ve thought that downloading and running the HelloIOIO app would’ve been a cinch. As with everything in software, you would’ve been wrong. It is just impossible to get new apps involving new libs up and running without some fiddling around!

IOIO, if you don’t already know what it is, is a tiny PCB designed by Ytai Ben-Tsvi that enables you to use your Android device (tablet/phone) to control sensors and actuators, so you can make it actually do something useful. Like using your cutting board to control the display of your Android device while looking up recipes, so you don’t have to touch the screen with your messy hands!

Watch the video of the MIT Media Lab guys building this contraption here: http://player.vimeo.com/video/36862291

The board is sold in Australia by Little Bird Electronics for $55.

You can download the IOIO library and example code from here:
https://github.com/ytai/ioio/wiki/IOIO-Over-Bluetooth

Once you’ve unzipped the folder and imported all the projects into Eclipse, you will get the error “Project has no default.properties file! Edit the project properties to set one.”
This is because at some point, the Android Development Toolkit (ADT) for Eclipse had its properties file renamed from default.properties to project.properties, so older ADT versions still expect default.properties.
So, I renamed the project.properties file to default.properties in all the imported projects (including the libraries IOIOLib, IOIOLibAccessory and the IOIOLibBT).
After this, the library projects IOIOLib, IOIOLibAccessory and IOIOLibBT compiled fine, but my HelloIOIO project still would not compile. I found that the problem was with the IOIOLibAccessory_src and the IOIOLibBT_src libraries in the HelloIOIO folder. I deleted these folders from the HelloIOIO folder as the project did not seem to be using these particular libraries and that solved the problem (for now) and the HelloIOIO project compiled successfully.

If you run the project, you should get the following image on your emulator screen.

Those of you unfamiliar with the emulator should know that you need to give it 2-3 minutes to boot into the Android home screen; the app only shows up after you unlock the screen (by swiping at the bottom).

If you now connect your phone to your computer through USB, you should get the same screen you got on your emulator on your phone.

SURF Feature Descriptors on Python OpenCV

This entry is part 1 of 1 in the series Image Feature Descriptors

This is a test of SURF features (http://en.wikipedia.org/wiki/SURF, claimed to be faster than the original SIFT feature descriptors) on Python OpenCV.
Parameters used were: (0, 300, 3, 4)
0: (basic, 64-element feature descriptors)
hessian threshold: 300
number of octaves: 3
number of layers within each octave: 4
Time taken for execution: 0.108652 seconds on a 249×306 image (MacBook Air 2.13 GHz, Core 2 Duo, 4 GB RAM)


# SURF Feature Descriptor
#
# Programmed by Jay Chakravarty

import cv
from time import time

    
if __name__ == '__main__':
    cv_image1=cv.LoadImageM("mvg1.jpg")
    cv_image2=cv.LoadImageM("mvg2.jpg")
    cv_image1_grey = cv.CreateImage( (cv_image1.width, cv_image1.height), 8, 1 )
    cv.Zero(cv_image1_grey)
    cv_image2_grey = cv.CreateImage( (cv_image2.width, cv_image2.height), 8, 1 )
    cv.Zero(cv_image2_grey)

    #cv.ShowImage("mvg1", cv_image1)
    #cv.ShowImage("mvg2", cv_image2)  
    cv.CvtColor(cv_image1, cv_image1_grey, cv.CV_BGR2GRAY);
    #cv.ShowImage("mvg1_grey", cv_image1_grey)  
    cv.CvtColor(cv_image2, cv_image2_grey, cv.CV_BGR2GRAY);
    #cv.ShowImage("mvg2_grey", cv_image2_grey);
    
    #CvSeq *imageKeypoints = 0, *imageDescriptors = 0;
    
    # SURF descriptors
    #param1: extended: 0 means basic descriptors (64 elements each), 1 means extended descriptors (128 elements each)
    #param2: hessianThreshold: only features with hessian larger than that are extracted. good default value is 
    #~300-500 (can depend on the average local contrast and sharpness of the image). user can further filter out
    #some features based on their hessian values and other characteristics.
    #param3: nOctaves the number of octaves to be used for extraction. With each next octave the feature size is
    # doubled (3 by default)
    #param4: nOctaveLayers The number of layers within each octave (4 by default)
    
    tt = float(time())    

    #SURF for image1
    (keypoints1, descriptors1) = cv.ExtractSURF(cv_image1_grey, None, cv.CreateMemStorage(), (0, 300, 3, 4))
    
    tt = float(time()) - tt
    
    print("SURF time image 1 = %g seconds\n" % (tt))
    
    #print "num of keypoints: %d" % len(keypoints)
    #for ((x, y), laplacian, size, dir, hessian) in keypoints:
    #    print "x=%d y=%d laplacian=%d size=%d dir=%f hessian=%f" % (x, y, laplacian, size, dir, hessian)
        
    # draw circles around keypoints in image1
    for ((x, y), laplacian, size, dir, hessian) in keypoints1:
        cv.Circle(cv_image1, (x,y), int(size*1.2/9.*2), cv.Scalar(0,0,255), 1, 8, 0)


    cv.ShowImage("SURF_mvg1", cv_image1)
    
    
    tt = float(time())    
    
    #SURF for image2
    (keypoints2, descriptors2) = cv.ExtractSURF(cv_image2_grey, None, cv.CreateMemStorage(), (0, 300, 3, 4))
    
    tt = float(time()) - tt
    
    print("SURF time image 2 = %g seconds\n" % (tt))


    # draw circles around keypoints in image2
    for ((x, y), laplacian, size, dir, hessian) in keypoints2:
        cv.Circle(cv_image2, (x,y), int(size*1.2/9.*2), cv.Scalar(0,0,255), 1, 8, 0)
                
                
    cv.ShowImage("SURF_mvg2", cv_image2)




    cv.WaitKey(10000)  

Occupancy Grid

This entry is part 3 of 3 in the series Python Robotics Simulator

The easiest way for a robot to map its environment is with an occupancy grid. As the robot moves through its environment, the readings returned from its laser range finder (LRF) at different angles around it are marked off on a map. Each time a spot is marked again, its occupancy probability increases. This simulation shows the creation of an occupancy grid map. However, in the real world, odometry is not perfect and the robot’s position (as estimated by odometry) drifts from its real position, resulting in a skewed map. Hence the need for a loop-closing technique like SLAM (Simultaneous Localization & Mapping), which corrects for this error when the robot doubles back on itself and revisits a previously known location.
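A minimal sketch of the counting idea in plain Python (not the simulator’s actual code): each cell accumulates observations, and the occupancy probability is simply the fraction of times it was seen occupied:

```python
# Minimal counting-based occupancy grid (a sketch, not the simulator's code).
# Each cell stores a hit count and a visit count;
# occupancy probability = hits / visits.
class OccupancyGrid:
    def __init__(self, width, height):
        self.hits = [[0] * width for _ in range(height)]
        self.visits = [[0] * width for _ in range(height)]

    def mark_free(self, x, y):
        """A laser beam passed through this cell without hitting anything."""
        self.visits[y][x] += 1

    def mark_occupied(self, x, y):
        """A laser beam ended (hit an obstacle) in this cell."""
        self.visits[y][x] += 1
        self.hits[y][x] += 1

    def probability(self, x, y):
        if self.visits[y][x] == 0:
            return 0.5  # never observed: unknown
        return self.hits[y][x] / float(self.visits[y][x])

grid = OccupancyGrid(10, 10)
# A beam passes through (2, 3) twice and ends on a wall at (5, 3) twice:
for _ in range(2):
    grid.mark_free(2, 3)
    grid.mark_occupied(5, 3)
print(grid.probability(2, 3))  # 0.0 - repeatedly seen free
print(grid.probability(5, 3))  # 1.0 - repeatedly seen occupied
print(grid.probability(9, 9))  # 0.5 - never observed
```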

Occupancy Grid Simulation


See the simulation here:

Particle Filter Localization for a Simulated Mobile Robot

This entry is part 2 of 3 in the series Python Robotics Simulator

This simulation shows a robot (red) moving through a simulated environment and a Particle Filter (PF) trying to localize it in that environment. Initially, the particles (shown in different shades of grey indicating their probabilities) do not know the location of the robot and try to localize it from scratch. This is called global localization. After the robot moves about in its environment, the particles coalesce around the robot’s correct position. Thereafter, it is easier for the PF to track the position of the robot; this is called position tracking. Note that the environment is quite symmetric, which is why the PF has difficulty finding the robot in the beginning, but it gets there in the end.

Particle Filter Localization for a Simulated Mobile Robot


The measurements are simulated laser range finder readings (see previous post: Simulated Laser Range Finder Readings Using Python & OpenCV).
The code is written in Python using the OpenCV library (Python wrapper). It is adapted from PF code written by Juan Carlos Kuri Pinto and the Udacity team in Udacity’s online Autonomous Driving class (http://www.udacity-forums.com/cs373/questions/14131/unit-3-graphical-demo-of-robot-localization-based-on-particle-filters).
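The PF cycle itself is short. Here is a toy 1-D version in plain Python (a sketch of the predict/weight/resample loop, not the adapted Udacity code): a robot moves along a line and measures its distance to a single landmark.

```python
import math
import random

random.seed(0)

LANDMARK = 10.0  # position of the single landmark on the line

def likelihood(expected, measured, sigma=0.5):
    """Gaussian likelihood of a measurement given a particle's prediction."""
    return math.exp(-0.5 * ((measured - expected) / sigma) ** 2)

true_pos = 2.0
# Global localization: particles start spread uniformly over the whole line.
particles = [random.uniform(0.0, 20.0) for _ in range(500)]

for _ in range(20):
    # Predict: robot and particles move 0.5 units right (with motion noise).
    true_pos += 0.5
    particles = [p + 0.5 + random.gauss(0.0, 0.1) for p in particles]
    # Measure: noisy range to the landmark.
    z = abs(LANDMARK - true_pos) + random.gauss(0.0, 0.1)
    # Weight and resample: particles consistent with z tend to survive.
    weights = [likelihood(abs(LANDMARK - p), z) for p in particles]
    particles = random.choices(particles, weights=weights, k=len(particles))

estimate = sum(particles) / len(particles)
print(round(estimate, 1))  # should be close to the true position, 12.0
```

Because a range-only measurement is symmetric about the landmark, the particles initially form two mirrored clusters (the same symmetry effect seen in the simulation); the motion steps break the tie.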

See the video of the simulation here:

I’ll put the code up as soon as I’ve tidied it up a bit.

Setting up OpenCV 2.2 with Python on OS X

This post should be useful to Mac users who want to get OpenCV up and running with Python. I’ve recently migrated to an older (2010 refurbished) MacBook Air (from a 2011 MacBook Pro) which runs OS X 10.6.8 (the Pro ran 10.7.something). I transferred all my installed software from the Pro to the Air using Apple’s Migration Assistant software. However, after this, XCode (4.something) stopped working. I wasn’t able to install new stuff using MacPorts, so I decided to uninstall XCode, uninstall MacPorts, install HomeBrew (instead of MacPorts, to install the Unix tools that don’t ship with OS X) and install an older version of XCode (3.2.2). Python 2.6 was already installed and working fine. Here are the steps:

  • Uninstall MacPorts:
    • sudo port -f uninstall installed
    • sudo rm -rf /opt/local
      sudo rm -rf /Applications/DarwinPorts
      sudo rm -rf /Applications/MacPorts
      sudo rm -rf /Library/LaunchDaemons/org.macports.*
      sudo rm -rf /Library/Receipts/DarwinPorts*.pkg
      sudo rm -rf /Library/Receipts/MacPorts*.pkg
      sudo rm -rf /Library/StartupItems/DarwinPortsStartup
      sudo rm -rf /Library/Tcl/darwinports1.0
      sudo rm -rf /Library/Tcl/macports1.0
      sudo rm -rf ~/.macports
    • Remove /opt/local/bin & /opt/local/sbin from your PATH variable. To do this, open a new terminal window and type: open .profile. This will open the .profile file using TextEdit (assumed installed). Comment out (prefix with #) the line: export PATH=/opt/local/bin:/opt/local/sbin:$PATH
  • Install HomeBrew: Open a terminal window and type: /usr/bin/ruby -e "$(/usr/bin/curl -fksSL https://raw.github.com/mxcl/homebrew/master/Library/Contributions/install_homebrew.rb)"
  • Update HomeBrew: sudo brew update
  • Uninstall XCode 4.0 and install XCode 3.2.2:
    • Note: This step is not required if your XCode is running okay. You can diagnose this by:  sudo brew doctor
    • For example, I got the warning: Warning: You have no /usr/bin/cc.
      This means you probably can’t build *anything*. You need to install the Command Line Tools for Xcode.
    • So, uninstall XCode by: sudo /Developer/Library/uninstall-devtools --mode=all
    • Download XCode 3.2.2 (without the iOS SDK, it is 745 MB) and install it the regular way (you get a .dmg file)
    • Now, typing gcc in the command line tells me that I have the following C/C++ compiler installed: i686-apple-darwin10-gcc-4.2.1
  • Install OpenCV 2.2: sudo brew install opencv --build32 (Note: --build32 because mine was a 32-bit OS). OpenCV installs in /Library/Frameworks/OpenCV.framework
  • Add Python Path: Open a new terminal, type open .profile and add this line: export PYTHONPATH="/usr/local/lib/python2.6/site-packages:$PYTHONPATH"
  • Run OpenCV in Python:
    • Now, write the python program to do whatever you want. Import the OpenCV library by adding import cv at the top of your program.
    • Now, to run a sample Load & Display Image program in python using the OpenCV library:

import cv

im = cv.LoadImageM("image.jpg")
cv.NamedWindow("loaded_image", 1)
cv.ShowImage("loaded_image",im)
cv.WaitKey(2000)

    • Note that there seems to be some bug with OS X and OpenCV when closing GUI windows. So I’ve used the cv.WaitKey call above to automatically close the window after 2 seconds.
    • Sample face detection program (from http://recursive-design.com/blog/2010/12/14/face-detection-with-osx-and-python/ , remember to change the location of the Haar Cascade; mine was in /Library/Frameworks/OpenCV.framework/Versions/Current/Resources/) outputs: