Practical Exercises

You may do these practical sessions in groups of two or work through them on your own.

The practicals will be held in 4412N.

The practicals require everyone to bring a laptop computer, and you should make sure that the correct software is installed on it before the practical sessions.

Information on what this software is, where to download it from, and how to install it can be found on the robot information page.

You should also read through the description of each practical exercise before class, and print out and read any additional material supplied.

[Exercise 1] [Exercise 2] [Exercise 3] [Exercise 4] [Exercise 5] [Exercise 6] [Exercise 7]

Exercise 1

With this exercise, we will start working with the Aibo robots.

Before class you need to:

  1. Download and install the OPEN-R development environment on your machine. There are instructions on the robot information page.
  2. Download the code that you will use as a starting point (the robot information page explains how to unpack this file if you are not sure how).

The code comes with a README that explains its structure, and it is reasonably well commented, so it should be comprehensible.

You should only need to modify code in the Behavior subdirectory. This contains one file, MyBehavior.cc, which controls the robot.

Exercise 1 is to first compile and run the code, and then write your own version of MyBehavior.cc that makes use of at least five of the pieces of behavior in BehaviorFunctions.cc.
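
As a rough illustration, here is a minimal sketch of the kind of structure such a MyBehavior.cc might end up with. The behavior names below are invented placeholders rather than the real functions in BehaviorFunctions.cc, and the framework entry point is reduced to main(), so treat this purely as a shape to aim for.

    #include <cstdio>

    // Stand-ins for the helpers in BehaviorFunctions.cc (names assumed).
    void StandUp()     { std::printf("stand up\n"); }
    void WagTail()     { std::printf("wag tail\n"); }
    void TurnHead()    { std::printf("turn head\n"); }
    void StepForward() { std::printf("step forward\n"); }
    void SitDown()     { std::printf("sit down\n"); }

    // In the real MyBehavior.cc this sequencing would live in the method the
    // framework calls on each cycle; here main() simply runs it once.
    int main() {
        void (*sequence[])() = { StandUp, WagTail, TurnHead, StepForward, SitDown };
        const int kSteps = sizeof(sequence) / sizeof(sequence[0]);
        for (int i = 0; i < kSteps; ++i) {
            sequence[i]();   // at least five distinct behaviors, run in order
        }
        return 0;
    }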


Exercise 2

Continue with what you were doing in Exercise 1.


Exercise 3

Exercise 3 is an exercise in dead-reckoning navigation and the use of odometry; it replicates the work of Borenstein. There are two parts:
  1. Write code that has the robot walk in a square, 3 feet on each side. Thus the robot should start by walking in a straight line for 3 feet, turn through 90 degrees clockwise, walk another 3 feet in a straight line, turn another 90 degrees clockwise, and so on, until it has turned through 360 degrees and walked back to its starting point (a minimal sketch of this loop appears after the list).

    The aim of the exercise is to minimise the distance between where the robot ends up and its starting point.

    Run 10 trials of your code, measuring the absolute distance between where the robot ends up and its starting point.

  2. Repeat, but have the robot walk in a square 5 feet on each side.
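
As referenced above, here is a minimal sketch of the square-walking loop. The two gait helpers are placeholders: the supplied code has its own walking and turning primitives, so the names and signatures here are assumptions.

    #include <cstdio>

    // Placeholders for the course's gait helpers -- names and signatures assumed.
    void WalkStraight(double feet)     { std::printf("walk %.1f ft\n", feet); }
    void TurnClockwise(double degrees) { std::printf("turn %.0f deg\n", degrees); }

    // Walk a square of the given side length purely by dead reckoning:
    // four straight legs, each followed by a 90-degree clockwise turn.
    void WalkSquare(double sideFeet) {
        for (int leg = 0; leg < 4; ++leg) {
            WalkStraight(sideFeet);
            TurnClockwise(90.0);
        }
    }

    int main() {
        WalkSquare(3.0);   // part 1 uses 3 ft sides; part 2 repeats with 5.0
        return 0;
    }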


Exercise 4

Continue what you were doing in Exercise 3.


Exercise 5

Exercise 5 aims to test some more complex aspects of dead-reckoning navigation, and then to look at how we can deal with them. Write code that has the robot look for the ball (you can assume that the ball will be visible without the robot having to turn), walk up to the ball, then turn around and return to its starting point.

Again, the aim is to get the robot back as close as possible to its starting point.

The thing that makes this harder than walking in a square is that you need to be able to walk to and from the ball without knowing, at compile time, how far the ball is from the robot.

If you look through the various .cc files in the Behavior directory of the code I gave you, you will find code that looks for the ball, identifies when the ball is seen and so on. What you need to concentrate on in this exercise is tracking how far the robot has walked, and turning as precisely as possible through 180 degrees.

You also have to start using the vision system.

Information from the vision system (the Perception module) is delivered to the Behavior module by the function ReceiveFromPerception. This updates some internal variables (such as see_ball) which should provide the information you need.
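
A minimal sketch of the step-counting idea follows. Every name in it is an assumption: the real code gets flags such as see_ball filled in by ReceiveFromPerception and has its own walking and turning primitives, and here the perception side is faked with a simulated distance so the sketch is self-contained.

    #include <cstdio>

    // Simulated stand-ins for perception and gait -- all names assumed.
    double distanceToBallFeet = 3.0;   // pretend perception estimate, just for the demo
    const double kStepFeet = 0.25;     // assumed distance covered by one step

    bool BallReached()         { return distanceToBallFeet < 0.5; }
    void StepForward()         { distanceToBallFeet -= kStepFeet; std::printf("step\n"); }
    void TurnDegrees(double d) { std::printf("turn %.0f deg\n", d); }

    int main() {
        // Walk toward the ball, counting steps so that the distance -- unknown
        // at compile time -- is known when it is time to walk back.
        int stepsOut = 0;
        while (!BallReached()) {
            StepForward();
            ++stepsOut;
        }

        TurnDegrees(180.0);            // turn around as precisely as possible

        for (int i = 0; i < stepsOut; ++i) {
            StepForward();             // retrace the same number of steps
        }
        return 0;
    }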


Exercise 6

Write code that uses the markers to navigate using a combination of dead reckoning and vision-based marker detection.

Go back to navigating round a 5 ft square, but now place a marker 2 feet outside each corner (it needs to be that far back so the robot can see it), and use the markers the robot sees to decide when to turn and which direction to head in.

You should be able to reuse the tricks from the code that walks to the ball: fix vision on the marker, turn the robot until the head and body are aligned, and walk straight to reach it, then measure the neck angle to decide when you are close enough (this is more accurate than using distance estimates).

ReceiveFromPerception writes information about markers in the array marker[]. The four markers you have are marker[0], marker[1], marker[4], marker[5].

You should then be able to walk around the square from any point within it, by first looking for one of the markers, walking to it, and then circumnavigating the square, stopping when you get back to the first marker.
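
A minimal sketch of that marker-to-marker loop is below. The Marker fields, thresholds, and helpers are assumptions standing in for what ReceiveFromPerception and the existing gait code actually provide; the neck angle is simulated so the demo runs on its own.

    #include <cstdio>

    // Assumed shape of a perceived marker; the real marker[] entries
    // filled in by ReceiveFromPerception will differ.
    struct Marker {
        bool   seen;
        double bearingDegrees;   // direction of the marker relative to the body
    };

    Marker marker[6];                // the exercise uses marker[0], [1], [4] and [5]
    double neckAngleDegrees = 0.0;   // stand-in for the head angle while tracking a marker

    void TurnToward(double bearing) { std::printf("turn to %.0f deg\n", bearing); }
    void WalkStraight()             { neckAngleDegrees += 5.0; std::printf("walk\n"); }
    bool CloseToMarker()            { return neckAngleDegrees > 40.0; }   // the neck-angle trick

    // Walk to one corner marker: face it, then walk until the head has to
    // tilt steeply to keep the marker in view.
    void GoToMarker(const Marker& m) {
        TurnToward(m.bearingDegrees);
        neckAngleDegrees = 0.0;
        while (!CloseToMarker()) {
            WalkStraight();
        }
    }

    int main() {
        const int order[4] = { 0, 1, 4, 5 };   // visit the corner markers in sequence
        for (int i = 0; i < 4; ++i) {
            marker[order[i]].seen = true;      // in practice, wait until perception reports it
            GoToMarker(marker[order[i]]);
        }
        return 0;
    }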

As before, measure the error in your final position (from the point marked as the first corner).


Exercise 7

You are now at a point where you can segue into the project part of the course.

Your project is to get the robot to do something cool. For now, that means dreaming up something you would like it to do, writing it down, and sending it to me so that we can discuss it.

If you don't have any ideas, I have a long list of suggestions.
