Biped Robot

I’ve always wanted to make a walking robot.  I wanted to make something fairly rapidly and cheaply that I could try to get walking.

And so, 24 hours of hardware and software hacking later:


He’s only waving a small amount because otherwise he falls over 🙂  It took a day and a half to do, so overall I’m pretty pleased with it.  It uses 17 MG996R servos and a Chinese rtrobot 32-channel servo controller board.

Reverse Engineering Servo board

The controller board amazingly ships with incomplete instructions.  The result is that anyone trying to use this board will find that it just does not work, because the board completely ignores the documented commands until it has been initialized.

I downloaded the example software that they provide, which does work.  I ran the software through strace like:

$ strace  ./ServoController 2>&1 | tee dump.txt

Searching in dump.txt for ttyACM0 reveals the hidden initialization protocol.  They do:

open("/dev/ttyACM0", O_RDWR|O_NOCTTY|O_NONBLOCK) = 9
write(9, "~RT", 3)                      = 3
read(9, "RT", 2)                        = 2
read(9, "\27", 1)                       = 1
ioctl(9, TCSBRK, 1)                     = 0
write(9, "~OL", 3)                      = 3
write(9, "#1P1539\r\n", 9)              = 9

(The TCSBRK ioctl basically just blocks until nothing is left to be sent.)  Translating this into Python, we get:

import serial
from time import sleep

ser = serial.Serial('/dev/ttyACM0', 9600)
ser.write(b"~RT")          # handshake seen in the strace; the board echoes "RT" plus one status byte
sleep(0.1)
ser.read(3)                # consume the "RT" + status byte reply
ser.write(b"~OL")          # second init command seen in the strace
ser.write(b"#1P2000\r\n")  # move motor 1 to position 2000
sleep(1)
ser.write(b"#1P1000\r\n")  # move motor 1 to position 1000

(Looking at the strace over multiple runs, sometimes it writes “~OL” and sometimes “OL”.  I don’t know why, but it didn’t seem to make a difference.  That’s a capital letter O, by the way.)


I wanted to have a crude sensor measurement of which way up it is.  After all, how can it stand up if it doesn’t know where up is?  On other projects I’ve used an accelerometer + gyro + magnetometer and fused the data with a Kalman filter or similar.  But honestly it’s a huge amount of work to get right, especially if you want to calibrate them (the magnetometer in particular).  So I wanted to skip all that.

Two possible ideas:

  1. There’s a really quick hack that I’ve used before: simply place the robot underneath a ceiling light and use a photosensitive diode to detect the light (see my Self Balancing Robot).  Its resistance is at its lowest when the robot is upright 🙂  (More specifically, make a voltage divider with it and then measure the voltage with an Arduino.)  It’s extremely crude, but it’s dead cheap, insensitive to vibrational noise, and still surprisingly sensitive.  It’s also as fast as your ADC.
  2. Use an Android phone.
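For what it’s worth, the voltage-divider arithmetic behind option 1 can be sketched in a few lines.  The resistor values here are illustrative assumptions, not measured ones:

```python
# Voltage at the ADC pin for a divider with the light-sensitive
# resistor on the top leg and a fixed resistor on the bottom leg.
# A 10k fixed resistor and a 5k-100k LDR swing are assumed values.
def divider_voltage(vcc, r_fixed, r_ldr):
    return vcc * r_fixed / (r_fixed + r_ldr)

print(divider_voltage(5.0, 10_000, 5_000))    # upright, bright: ~3.33 V
print(divider_voltage(5.0, 10_000, 100_000))  # tilted, dim: ~0.45 V
```

The wide voltage swing is what makes such a crude sensor usable at all.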

I want to move quickly on this project, so I decided to give the second way a go.  Before dealing with vibration etc, I first wanted to know whether it could work, and what the latency would be if I just transmitted the Android fused orientation data across wifi (UDP) to my router, then to my laptop, which then talks via USB to the serial board which then finally moves the servo.

So, I transmitted the data, used the phone’s tilt to control two of the servos on the arm, and recorded with the same phone’s camera at the same time.  The result is:

I used a video editor (OpenShot) to load up the video, then measured the time between when the camera moved and when the arm moved.  I took 6 such measurements and found 6 or 7 frames each time – so between 200ms and 233ms at 30 fps.
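The frames-to-milliseconds conversion, assuming the camera records at 30 fps:

```python
FPS = 30  # assumed phone camera frame rate
for frames in (6, 7):
    print(f"{frames} frames = {1000 * frames / FPS:.0f} ms")
# 6 frames = 200 ms, 7 frames = 233 ms
```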

That is exactly what TowerKing says is the latency of the servo itself (under 0.2s), which means that I’m unable to measure any additional latency from the network setup.  That’s really promising!

I do wonder whether 200ms is going to be low enough latency (more expensive hobby servos go down to 100ms), but it should be enough.  I previously did quite extensive experimental tests on latency in the stabilization of a PID-controlled quadcopter in my own simulator, where a 200ms delay was found to be controllable, but not ideal; 50ms was far better.  But I have no idea how that lesson will transfer to robot stabilization.

But it is good enough for this quick and dirty project.  This was done in about 0.5 days, bringing the total so far up to 2 full days of work.

Cost and Time Breakdown so far

Parts cost:

Metal skeleton: $99 USD
17x MG996R servo motors: $49 USD
RT Robot 32ch servo control board: $25 USD
Delivery from China: $40 USD
USB cable: $2 USD
Android phone: $0 (used my own phone)
Total: $215 USD
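A quick sanity check that the parts list adds up (the phone is excluded, since I already owned it):

```python
# skeleton, servos, control board, delivery, cable
parts = [99, 49, 25, 40, 2]
print(sum(parts))  # 215
```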

For tools, I used nothing more than some screwdrivers, needle-nosed pliers, and a bench power supply, around $120 in total.  I could have gotten 17x MG995 servos for a total of $45, but I wanted the metal gears that the MG996R provides.

Time breakdown:

Mechanical build: 1 day
Reverse engineering servo board: 0.5 days
Hooking up to Android phone + writing some visualization code: 0.5 days
Blogging about it 🙂: 0.5 days
Total: 2.5 days

Future Plans – Q Learning

My plan is to hang him loosely upright by a piece of string, and then make a neural network in TensorFlow to control him and try to get him to stand fully upright, without having to deal with recovering from a collapsed, lying-down position.

Specifically, I want to get him to balance upright using Q learning.  One thing I’m worried about is the sheer amount of time required to physically run each test.  When you have a scenario where each test takes a long time compared to the compute power, this just screams out for Bayesian learning.  So…  Bayesian Q-parameter estimation?  Is there such a thing?  A 10-second Google search doesn’t find anything.  Or Bayesian policy network tuning?  I need to have a think about it 🙂
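For reference, the standard tabular Q-learning update I’d be starting from looks like this.  It’s a minimal sketch: the states and actions are placeholders, not the robot’s actual tilt/servo discretization, and the hyperparameters are assumed values:

```python
import random

ALPHA, GAMMA, EPSILON = 0.1, 0.99, 0.1  # assumed hyperparameters
Q = {}  # maps (state, action) -> estimated long-term reward

def choose_action(state, actions):
    """Epsilon-greedy: mostly exploit the current estimate, occasionally explore."""
    if random.random() < EPSILON:
        return random.choice(actions)
    return max(actions, key=lambda a: Q.get((state, a), 0.0))

def q_update(state, action, reward, next_state, actions):
    """The standard Q-learning Bellman update."""
    best_next = max(Q.get((next_state, a), 0.0) for a in actions)
    old = Q.get((state, action), 0.0)
    Q[(state, action)] = old + ALPHA * (reward + GAMMA * best_next - old)
```

The worry above is exactly that each `q_update` costs a real-world trial, which is why some Bayesian, sample-efficient variant is appealing.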

Nokia 6110 Part 2 – Display driver

This is a four-part series:

Talking to the LCD module

First, the pin mappings as shown in the previous diagram:

                     // Description  Pin on LCD display
                     // LCD Vcc .... Pin 1 +3.3V (up to 7.4 mA) Chip power supply
#define PIN_SCLK  2  // LCD SPIClk . Pin 2 Serial clock line of LCD                          // Was 2 on 1st prototype
#define PIN_SDIN  5  // LCD SPIDat . Pin 3 Serial data input of LCD                          // Was 5 on 1st prototype
#define PIN_DC    3  // LCD Dat/Com. Pin 4 (or sometimes labelled A0) command/data switch    // Was 3 on 1st prototype
                     // LCD CS  .... Pin 5 Active low chip select (connected to GND)
                     // LCD OSC .... Pin 6 External clock, connected to vdd
                     // LCD Gnd .... Pin 7 Ground for VDD
                     // LCD Vout ... Pin 8 Output of display-internal dc/dc converter - Left floating - NO WIRE FOR THIS.  If we added a wire, we could connect to gnd via a 100nF capacitor
#define PIN_RESET 4  // LCD RST .... Pin 9 Active low reset   // Was 4 on prototype
#define PIN_BACKLIGHT 6 // - Backlight controller. Optional.  It's connected to a transistor that should be connected to Vcc and gnd

#define LCD_C     LOW
#define LCD_D     HIGH

We need to initialize the LCD:

void LcdClear(void)
{
  for (int index = 0; index < LCD_X * LCD_Y / 8; index++)
    LcdWrite(LCD_D, 0x00);
}

void LcdInitialise(void)
{
 // pinMode(PIN_SCE, OUTPUT);
  pinMode(PIN_DC, OUTPUT);
  pinMode(PIN_SDIN, OUTPUT);
  pinMode(PIN_SCLK, OUTPUT);
  pinMode(PIN_RESET, OUTPUT);
  pinMode(PIN_BACKLIGHT, OUTPUT);
  digitalWrite(PIN_RESET, LOW); // This must be set low within 30ms of start up, so don't put any long-running code before this
  delay(30); // The reset pulse needs to be a minimum of 100ns long, with no maximum.  So technically we don't need this delay, since a digital write takes 1/8 MHz = 125ns.  However I'm making it 30ms for no real reason
  digitalWrite(PIN_RESET, HIGH);
  digitalWrite(PIN_BACKLIGHT, HIGH);
  LcdWrite(LCD_C, 0x21 );  // LCD Extended Commands.
  LcdWrite(LCD_C, 0x80 + 0x31 ); // Set LCD Vop (Contrast).  //0x80 + V_op    The LCD voltage is:  V_lcd = 3.06 + V_op * 0.06
  LcdWrite(LCD_C, 0x04 + 0x0 );  // Set Temp coefficient. //0 = Upper Limit.  1 = Typical Curve.  2 = Temperature coefficient of IC.  3 = Lower limit
  LcdWrite(LCD_C, 0x10 + 0x3 );  // LCD bias mode 1:48. //0x10 + bias mode.  A bias mode of 3 gives a "recommended mux rate" of 1:48
  LcdWrite(LCD_C, 0x20 );  // LCD Basic Commands
  LcdWrite(LCD_C, 0x0C );  // LCD in normal mode.
}

/* Write a column of 8 pixels in one go */
void LcdWrite(byte dc, byte data)
{
  digitalWrite(PIN_DC, dc);
  shiftOut(PIN_SDIN, PIN_SCLK, MSBFIRST, data);
}

/* gotoXY routine to position cursor
   x - range: 0 to 83
   y - range: 0 to 5 */
void gotoXY(int x, int y)
{
  LcdWrite( LCD_C, 0x80 | x);  // Column.
  LcdWrite( LCD_C, 0x40 | y);  // Row.
}
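As a sanity check on the contrast command above: the init sequence sends 0x80 + 0x31, i.e. V_op = 0x31 = 49, and the formula quoted in the comment gives a sensible drive voltage:

```python
# V_lcd = 3.06 + V_op * 0.06, per the comment in LcdInitialise
v_op = 0x31
v_lcd = 3.06 + v_op * 0.06
print(round(v_lcd, 2))  # 6.0 volts
```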

So this is pretty straightforward. We can’t write per pixel, but must write a column of 8 pixels at a time.

This causes a problem: we don’t have enough memory for a framebuffer (we have only 2KB of RAM in total!), so all drawing must be calculated on the fly, a whole column of 8 pixels at a time.
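Packing a column of 8 pixels into the byte the display expects can be sketched like this (Python is used here just to illustrate the bit layout; bit 0, the LSB, is the topmost pixel of the column):

```python
def pack_column(pixels):
    """pixels: 8 booleans, top to bottom; returns the byte the LCD wants.
    Bit 0 (the LSB) is the topmost pixel of the 8-pixel column."""
    byte = 0
    for bit, on in enumerate(pixels):
        if on:
            byte |= 1 << bit
    return byte

print(bin(pack_column([True] + [False] * 7)))  # 0b1: just the top pixel lit
```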

Snake game

The purpose of all of this is to create a game of snake.  In our game, we want the snake to be 3 pixels wide, with a 1 pixel gap between segments.  It must be offset by a pixel or two, though, because of the border.

The result is code like this:

/* x,y are in board coordinates, where x is between 0 and 19 and y is between 0 and 10, inclusive */
void update_square(const int x, const int y)
{
  /* Say x,y is board square 0,0, so we need to update columns 1,2,3,4
   * and rows 1,2,3,4 (column and row 0 are the border).
   * But we actually have to update rows 0,1,2,3,4,5,6,7, since that's how the
   * LCD display works. */
  int col_start = 4*x+1;
  int col_end = col_start+3; // Inclusive range
  for(int pixel_x = col_start; pixel_x <= col_end; ++pixel_x) {
    int pixelrow_y_start = y/2;
    int pixelrow_y_end = (y+1)/2; /* Inclusive.  We are updating either 1 or 2 lcd blocks, each with 2 squares */
    int current_y = pixelrow_y_start*2;
    for(int pixelrow_y = pixelrow_y_start; pixelrow_y <= pixelrow_y_end; ++pixelrow_y, current_y+=2) {
      /* pixel_x is between 0 and 83, and pixelrow_y is between 0 and 5, inclusive */
      int number = 0; /* The 8-bit pixels for this column */
      if(pixelrow_y == 0)
        number = 0b1; /* Top border */
      else
        number = get_image(x, current_y-1, (pixel_x-1)%4) >> 3;
      if( current_y < ARENA_HEIGHT) {
        number |= (get_image(x, current_y, (pixel_x-1)%4) << 1);
        number |= (get_image(x, current_y+1, (pixel_x-1)%4) << 5);
      }
      gotoXY(pixel_x, pixelrow_y);
      LcdWrite(LCD_D, number); /* Write the computed column of 8 pixels */
    }
  }
}

int get_image(int x, int y, int column)
{
  if( y >= ARENA_HEIGHT)
    return 0;
  int number = 0;
  if( y == ARENA_HEIGHT-1 )
    number = 0b0100000; /* Bottom border */

  if( snake_at(x, y) )  /* snake_at() is a placeholder name for the snake-occupancy test */
    return number | get_snake_image(x, y, column);
  if( food.x == x && food.y == y )
    return number | get_food_image(x, y, column);

  return number;
}

int get_food_image(int x, int y, int column)
{
  if(column == 0)
    return 0b0000;
  if(column == 1)
    return 0b0100;
  if(column == 2)
    return 0b1010;
  return 0b0100; /* column == 3 */
}

/* Column 0 and row 0 are the gap between snake segments */

/* Column is 0-3 and this returns a number between
 * 0b0000 and 0b1111 for the pixels for this column.
   The MSB (left most digit) is drawn below the LSB */
int get_snake_image(int x, int y, int column)
{
  if(column == 0) {
    /* The gap column is only drawn when this segment joins the segment
       to its left.  snake_connects_left() is a placeholder name for
       that adjacency test. */
    if( snake_connects_left(x, y) )
      return 0b1110;
    return 0;
  }
  /* Likewise, fill the gap row when this segment joins the segment
     above (snake_connects_up() is likewise a placeholder) */
  if( snake_connects_up(x, y) )
    return 0b1111;
  return 0b1110;
}
Pretty messy code, but necessary to save every possible byte of memory.
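The board-to-pixel column mapping that update_square relies on (each 4-pixel cell starting one pixel past the border) can be double-checked with a quick sketch:

```python
def cell_columns(x):
    """Pixel columns covered by board column x (pixel column 0 is the border)."""
    col_start = 4 * x + 1
    return list(range(col_start, col_start + 4))

print(cell_columns(0))   # [1, 2, 3, 4]
print(cell_columns(19))  # [77, 78, 79, 80] - still inside the 84-pixel width
```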

Now we can call update_square(x, y) whenever something changes in board coordinates x,y, and this will redraw the screen at that position.

So now we just need to implement the snake logic to create a snake and get it to play automatically. And do so in around 1 kilobyte of memory!