Practical Exercises

You will do these practical sessions in groups of two.

The first four practical sessions will be held in the classroom; the last three will be held in the Science Center, on the fourth floor in room 4102.

Every group will need to bring at least one laptop computer to the practicals, and should make sure that the correct software is installed on it before the practical sessions.

Information on what this software is, where to download it from, and how to install it, can be found on the robot information page.

Before each class you should also read through the description of the practical exercise, and print out and read any additional material supplied.

[Exercise 1] [Exercise 2] [Exercise 3] [Exercise 4] [Exercise 5] [Exercise 6] [Exercise 7]

Exercise 1

The first exercise is to write a program for the Lego robot, so that the robot can take part in a pursuit race around a course. The course is a black line on a white background.

The pursuit race will work like this:

  1. Place two robots on opposite sides of the course, facing in an anticlockwise direction.
  2. When switched on, the robots should follow the course in an anticlockwise direction.
  3. When one robot catches up with the other, touches its back, and then stops, it is declared the winner.
Hopefully we will have enough working robots to stage a small tournament.

This exercise will require some preparation.

The code you need will combine the bumper-handling code from my demo program and (obviously) the line-following code that comes packaged with BrickOS.
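
To give a feel for how the two pieces fit together, here is a minimal sketch of the pursuit-race control loop. The helpers (read_light(), bumper_pressed(), drive()) and the LINE_THRESHOLD value are hypothetical stand-ins for the corresponding BrickOS calls and a value you will calibrate, not the real API:

#include <cstdlib>

// Sketch of a pursuit-race controller: follow the line and, when the
// front bumper is pressed (we have caught the robot ahead), stop.
// read_light(), bumper_pressed(), drive() and LINE_THRESHOLD are
// hypothetical stand-ins for the real BrickOS calls and calibrated value.

const int LINE_THRESHOLD = 45;   // assumed reading separating black tape from white floor

int  read_light()     { return std::rand() % 100; }          // light sensor, 0..100
bool bumper_pressed() { return std::rand() % 500 == 0; }     // front touch sensor
void drive(int left, int right) { (void)left; (void)right; } // motor speeds

int main() {
    for (;;) {
        if (bumper_pressed()) {   // caught the other robot: stop and win
            drive(0, 0);
            break;
        }
        if (read_light() < LINE_THRESHOLD)
            drive(30, 60);        // sensor over the line: curve off it
        else
            drive(60, 30);        // sensor over white: curve back onto it
    }
    return 0;
}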

The code from this exercise must be handed in during the lecture the week after the practical class.


Exercise 2

In this exercise we will experiment with some flocking behaviors. We will program the robots with some simple rules describing how they should behave, and then (hopefully) see larger-scale behaviors emerge from the interaction.

We will try several different experiments, and they will require you to write a couple of different programs. However, there will be a common core to all the code.

As ever, this will work best if you write the code before you come to the practical.

The robot itself needs to be modified so that the light sensor "looks" forward along the robot chassis rather than down at the ground.

I will also be giving you a light-emitting brick, which you should connect to the third output port and set up as an active sensor.

The first program you will need has the following behaviors:

  1. If the robot hits an object, stop, back up, and turn away from it (this can be taken from the wall following code).
  2. If the robot can't detect a light source in front of it (i.e. a higher reading than the background), it should turn until it does.
  3. If the robot can detect a light source in front of it, it should move towards it.
These behaviors should be connected so that each one takes priority over every higher-numbered layer.

The second program adds some additional functionality:
  1. If the robot hits an object, stop, back up, and turn away from it (this can be taken from the wall following code).
  2. If the robot can't detect a light source in front of it (i.e. a higher reading than the background), it should turn until it does.
  3. If the robot detects a light source in front of it and the light is above a certain threshold value, it should stop, and back up.
  4. If the robot can detect a light source in front of it and the light is below the threshold value, it should move towards it.
Again, these behaviors should be connected so that each one takes priority over every higher-numbered layer.
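
Here is a minimal sketch of this arbitration for the second program (dropping layer 3 gives the first program). The sensor and motor helpers and the BACKGROUND and BRIGHT thresholds are hypothetical stand-ins for the real BrickOS calls and values you will calibrate on the day:

#include <cstdlib>

// Sketch of the arbitration for the second program.  Behavior 1 (collision)
// overrides 2 (search), which overrides 3 (too close) and 4 (approach).
// The helpers and the BACKGROUND/BRIGHT thresholds are hypothetical
// stand-ins for the real BrickOS calls and calibrated values.

const int BACKGROUND = 40;   // assumed ambient light reading
const int BRIGHT     = 80;   // assumed "too close to the light" threshold

int  read_light()     { return std::rand() % 100; }
bool bumper_pressed() { return std::rand() % 100 == 0; }
void drive(int left, int right) { (void)left; (void)right; }

int main() {
    for (;;) {
        int light = read_light();
        if (bumper_pressed())
            drive(-50, -30);      // 1: hit something - back up and turn away
        else if (light <= BACKGROUND)
            drive(40, -40);       // 2: no light source ahead - turn until there is
        else if (light >= BRIGHT)
            drive(-40, -40);      // 3: light above threshold - stop and back up
        else
            drive(60, 60);        // 4: light visible and below threshold - move towards it
    }
}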

The code from this exercise must be handed in during the lecture the week after the practical class.


Exercise 3

The third exercise will be Robot Sumo.

The Sumo course is as below:

The course will be built on a large piece of foamcore. About a third of this will be the sumo ring, marked by a rectangle of black electrical tape.

Each robot will have to follow a winding path to the ring from its start position. This path will be (almost) exactly the same for both robots.

The point at which the path meets the ring will be marked with a square of aluminium foil; this gives a light reading above that of the white foamcore and is typically very easy to detect.

Each robot will have to carry the same light source we used for the previous project so that both robots can "see" each other.
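
As a rough sketch of how the transition into the ring might be detected (assuming hypothetical helper names and threshold values rather than the real BrickOS calls), the path-following loop simply watches for a reading above a foil threshold:

#include <cstdlib>

// Sketch of the switch from path following to sumo mode.  The robot follows
// the tape as in Exercise 1 until the light reading jumps above
// FOIL_THRESHOLD (the aluminium square reflects more light than the white
// foamcore), then hands over to the sumo behavior.  All names and values
// are hypothetical stand-ins for the real BrickOS calls.

const int LINE_THRESHOLD = 45;   // assumed: below this we are over the black tape
const int FOIL_THRESHOLD = 85;   // assumed: above this we are over the foil square

int  read_light() { return std::rand() % 100; }
void drive(int left, int right) { (void)left; (void)right; }
void sumo_behavior() { /* push the opponent, stay inside the tape ring */ }

int main() {
    for (;;) {
        int light = read_light();
        if (light > FOIL_THRESHOLD)
            break;                // reached the foil: the ring starts here
        if (light < LINE_THRESHOLD)
            drive(30, 60);        // over the tape: steer off it
        else
            drive(60, 30);        // over foamcore: steer back to the tape
    }
    sumo_behavior();
    return 0;
}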

The rules of the sumo contest are as follows.

  1. Sumo is a contest between two robots.
  2. Both robots have to follow a line to reach the sumo ring.
  3. Robots have two minutes to reach the ring. If one robot fails to reach the ring within two minutes of the start of a contest, that robot loses the contest.
  4. If both robots fail to reach the ring within the two minute limit, the one that is closest to the ring when two minutes have passed will be deemed the winner.
  5. Once in the ring, each robot will try to push the other robot out of the ring. A robot that (in the judgement of the referee) does not attempt to push its opponent out of the ring will be deemed to have lost the contest.
  6. Any robot that completely (in the judgement of the referee) leaves the ring after entering it loses.
  7. If neither robot has managed to push the other out of the ring five minutes after the start of the contest (including the two minutes allowed for reaching the ring), then the contest will end. The winner will be the robot that (in the judgement of the referee) came closest to winning the contest.

The code from this exercise must be handed in during the lecture the week after the practical class.


Exercise 4

Exercise 4 will be to finish off the Sumo exercise.


Exercise 5

With this exercise, we will start working with the Aibo robots.

Before class you need to:

  1. Download and install the OPEN-R development environment on your machine. There are instructions on the robot information page.
  2. Download the code we used at RoboCup 2004 and check that it compiles okay. If it doesn't, there is a problem with your setup.

The code comes with a README that explains its structure, and the code itself is reasonably well commented, so it should be comprehensible.

You should only need to modify code in the Behavior subdirectory of metrobots-robocup-2004. This contains one file, MyTest.cc, which controls the robot.

Exercise 5 is to first compile the code and run it, and then write your own version of MyTest.cc, which makes use of at least five of the pieces of behavior in BehaviorFunctions.cc.
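
Purely as an illustration of the shape such a controller might take, here is a sketch of a MyTest.cc-style state machine that strings five behaviors together. The behavior names and the convention that each returns true when it has finished are assumptions made up for the example; use the actual functions in BehaviorFunctions.cc and the calling conventions described in the README.

#include <cstdio>

// Illustrative only: a state machine that strings five behaviors together.
// The behavior names below are hypothetical stand-ins for entries in
// BehaviorFunctions.cc; replace them with the real ones.

enum State { SEARCH, APPROACH, GRAB, TURN, KICK };

// Hypothetical behavior functions; assume each returns true when done.
bool SearchForBall() { return true; }
bool WalkToBall()    { return true; }
bool GrabBall()      { return true; }
bool TurnWithBall()  { return true; }
bool KickBall()      { return true; }

State state = SEARCH;

// One decision step: run the current behavior, advance when it completes.
void BehaviorStep() {
    switch (state) {
        case SEARCH:   if (SearchForBall()) state = APPROACH; break;
        case APPROACH: if (WalkToBall())    state = GRAB;     break;
        case GRAB:     if (GrabBall())      state = TURN;     break;
        case TURN:     if (TurnWithBall())  state = KICK;     break;
        case KICK:     if (KickBall())      state = SEARCH;   break;
    }
}

int main() {
    for (int i = 0; i < 5; ++i) {   // five steps walk through every behavior
        BehaviorStep();
        std::printf("step %d, state %d\n", i, state);
    }
    return 0;
}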


Exercise 6

Exercise 6 is an exercise in dead-reckoning navigation, and the use of odometry. The first parts replicate the work of Borenstein. There are three parts:
  1. Write code that has the robot walk in a square, 3 feet on each side. Thus the robot should start by walking in a straight line for 3 feet, turn through 90 degrees clockwise, walk another 3 feet in a straight line, turn another 90 degrees clockwise, and so on, until it has turned through 360 degrees and walked back to its starting point.

    The aim of the exercise is to minimise the distance between where the robot ends up and its starting point.

    Run 10 trials of your code, measuring the absolute distance between where the robot ends up and its starting point.

  2. Repeat but have the robot walk in a square 10 feet on each side.
  3. Write code that has the robot look for the ball (you can assume that the ball will be visible without the robot having to turn), walk up to the ball, then turn around and return to its starting point.

    Again the aim is to get the robot back as close as possible to its starting point.

If you look through the various .cc files in the Behavior directory of metrobots-robocup-2004, you will find code that looks for the ball, identifies when the ball is seen and so on. What you need to concentrate on in this exercise is tracking how far the robot has walked, and turning as precisely as possible through 180 degrees.
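
As a starting point, here is a minimal sketch of the structure of part 1. walkStraight() and turnClockwise() are hypothetical wrappers around whatever walking primitives the metrobots code actually provides; the interesting part is the four-sided loop and the dead-reckoned pose you update alongside it.

#include <cstdio>
#include <cmath>

// Sketch of the dead-reckoning square walk.  walkStraight() and
// turnClockwise() are hypothetical wrappers around the real walking
// primitives; the point is the structure: four sides, four 90-degree
// turns, and the odometry book-keeping.

const double PI        = 3.14159265358979;
const double SIDE_FEET = 3.0;              // 3 ft per side (10 ft for part 2)

// hypothetical motion primitives
void walkStraight(double feet)     { (void)feet;    /* drive the walk engine */ }
void turnClockwise(double degrees) { (void)degrees; /* turn in place */ }

int main() {
    double x = 0.0, y = 0.0, heading = 90.0;   // dead-reckoned pose; start facing "north"

    for (int side = 0; side < 4; ++side) {
        walkStraight(SIDE_FEET);
        x += SIDE_FEET * std::cos(heading * PI / 180.0);
        y += SIDE_FEET * std::sin(heading * PI / 180.0);
        turnClockwise(90.0);
        heading -= 90.0;                       // a clockwise turn reduces the heading
    }

    // With perfect motion this is exactly zero; the number you actually
    // measure over your 10 trials is the dead-reckoning error.
    std::printf("commanded end-point offset: %.3f ft\n", std::hypot(x, y));
    return 0;
}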


Exercise 7

Exercise 7 aims to test some more complex aspects of dead-reckoning navigation, and then look at how we can deal with them:
  1. Write code that has the robot look for the ball (you can assume that the ball will be visible without the robot having to turn), walk up to the ball, then turn around and return to its starting point.

    Again the aim is to get the robot back as close as possible to its starting point.

    The thing that makes this harder than walking in a square is that you need to be able to walk to and from the ball without knowing, at compile time, how far the ball is from the robot.

    If you look through the various .cc files in the Behavior directory of metrobots-robocup-2004, you will find code that looks for the ball, identifies when the ball is seen and so on. What you need to concentrate on in this exercise is tracking how far the robot has walked, and turning as precisely as possible through 180 degrees.

    You also have to start using the vision system.

    Information from the vision system (the Perception module) is delivered to the Behavior module by the function ReceiveFromPerception. This updates some internal variables (like see_ball) which should provide the information you need.

    Note that although you are using the pink ball, the code refers to the ball as orange; don't let this confuse you (it won't confuse the robot, because we will calibrate the vision so that the robot thinks the ball is orange :-)

  2. Write code that uses the markers to navigate, combining dead reckoning with vision-based marker detection.

    Go back to navigating round a 10ft square, but now place a marker 2 feet outside each corner (they need to be that far back so the robot can see them), and use the fact that the robot can see the markers to decide when to turn and what direction to head in.

    You should be able to use the same tricks as in the code that walks to the ball: fix vision on the marker, turn the robot until the head and body are aligned, and walk straight to reach the marker; then measure the neck angle to decide when you are close enough to it (this is more accurate than using distance estimates).

    ReceiveFromPerception writes information about markers in the array marker[]. The four markers you have are marker[0], marker[1], marker[4], marker[5].

    You should then be able to walk around the square from any point within it, by first looking for one of the markers, walking to it, and then circumnavigating the square, stopping when you get back to the first marker.

    As before, measure the error in your final position (its distance from the first corner marker).
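
For reference, here is a minimal sketch of one leg of this marker-based walk, assuming the pattern described above: keep the head fixed on the marker, turn the body until head and body are aligned, walk straight, and stop when the neck tilt says you are close. marker[] and ReceiveFromPerception are named in the handout, but the MarkerInfo fields (visible, bearing), the NECK_CLOSE value, and the motion helpers are assumptions; check the Perception and Behavior code for the real names and types.

#include <cstdio>

struct MarkerInfo { bool visible; double bearing; };  // assumed layout
MarkerInfo marker[6];                                 // indices 0, 1, 4, 5 in use

const double NECK_CLOSE = 60.0;  // assumed tilt (degrees) meaning "close enough"

// hypothetical motion primitives; replace with the real metrobots calls
double headNeckAngle()          { static double a = 0; return a += 5; }
void   trackMarkerWithHead(int) {}
void   turnInPlace(double)      {}
void   walkForward()            {}
void   stopWalking()            {}

// Walk to one marker: keep the head fixed on it, turn the body until head
// and body are aligned, walk straight, and stop when the neck tilt says we
// are close enough (as the handout suggests, more reliable than distance).
void walkToMarker(int id) {
    while (headNeckAngle() < NECK_CLOSE) {
        trackMarkerWithHead(id);
        if (marker[id].visible)
            turnInPlace(marker[id].bearing);   // align body with the marker
        walkForward();
    }
    stopWalking();
}

int main() {
    int corners[4] = {0, 1, 4, 5};             // the four markers from the handout
    for (int i = 0; i < 4; ++i) {
        walkToMarker(corners[i]);
        std::printf("reached marker %d\n", corners[i]);
    }
    return 0;
}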
