Practical Exercises
Practical sessions will all be held in the Science Center at the
GC. This is on the fourth floor, room 4102.
Although the Science Center is equipped with desktop computers, I
anticipate that the projects will work best if every group brings at
least one laptop with it. Groups should make sure that the correct
software is installed on the laptop before the practical sessions.
Information on what this software is, where to download it from, and
how to install it can be found on the robot information page.
You should also read through the description of the practical exercise and
print out and read any additional material supplied before the
class.
[Exercise 1]
[Exercise 2]
[Exercise 3]
[Exercise 4]
[Exercise 5]
[Exercise 6]
Exercise 1
The first exercise is to construct a basic Lego Mindstorms robot, get
an initial program running on it, and then gain experience modifying
that program.
You will be given a set of robot parts when you come to the lab.
The lab has the following steps:
- Following the
plans (which work best in colour), construct the robot.
- Compile the basic rover
program, and run it.
- Modify the program so that the robot follows the black line on
the course provided but stops when it hits an obstacle.
- Optimise the program to make the robot follow the course as fast as
possible.
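The line-following-plus-stop logic of steps 3 and 4 can be prototyped off-robot. The sketch below isolates the control decision from the hardware; the sensor values, threshold, and motor-command struct are illustrative assumptions, not BrickOS API:

```cpp
// One control step for "follow the black line, stop on an obstacle".
// On the RCX the inputs would come from the light and touch sensors.

const int LINE_THRESHOLD = 50;   // assumed raw reading: below = black tape

struct MotorCmd { int left; int right; };  // -100..100 per motor

MotorCmd line_follow_step(int light, bool bumped) {
    if (bumped)
        return {0, 0};                // hit an obstacle: stop
    if (light < LINE_THRESHOLD)
        return {30, 60};              // over the tape: curve off it
    return {60, 30};                  // off the tape: curve back on
}
```

Tuning the two speed pairs is essentially step 4: the closer together they are, the straighter and faster the robot runs, at the cost of losing the line on sharp bends.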
If we have enough complete working robots by the end of the lab, we
will see who can go round the course the fastest.
Exercise 2
The second exercise is to write a program for the robot you built in
Exercise 1, so that the robot can take part in a pursuit race
on the line-following course we used in Exercise 1.
The pursuit race will work like this:
- Place two robots on opposite sides of the course, facing in an
anticlockwise direction.
- When switched on, the robots should follow the course in an
anticlockwise direction.
- If one robot touches the back of the other robot (having caught
it up) within a 3-minute period, and then stops, that robot is
declared the winner. Otherwise the robots draw.
Hopefully we will have enough working robots to stage a small tournament.
This exercise will require some preparation.
The code you need will combine elements of the wall following code
from Exercise 1 (in its use of the bumpers) and (obviously) of the
line following code that comes packaged with BrickOS.
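One way to picture the combination: the race loop is the Exercise 1 follower, except that a front-bumper hit now means "caught the leader" and latches the robot into a stopped, winning state. A minimal sketch with simulated inputs (the names and threshold are assumptions, not BrickOS calls):

```cpp
struct Drive { int left; int right; };

struct PursuitRacer {
    bool won = false;   // latched once we tag the robot ahead

    // One step of the race loop.
    Drive step(int light, bool front_bump) {
        if (front_bump) won = true;
        if (won) return {0, 0};          // tagged the leader: stop for good
        // otherwise the plain Exercise 1 line follower
        return (light < 50) ? Drive{30, 60} : Drive{60, 30};
    }
};
```

The latch matters: the rules require the winner to stop, so one bumper hit has to override line following on every subsequent cycle, not just the cycle it happens on.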
The code from this exercise must be handed in during the lecture the
week after the practical class.
Exercise 3
This exercise is intended to experiment with some flocking
behaviors. We will program the robots with some simple rules
describing how they should behave, and then (hopefully) see larger
scale behaviors emerge from the interaction.
We will try several different experiments, and they will require you
to write a couple of different programs. However, there will be a
common core to all the code.
As ever, this will work best if you write the code before you come to the
practical.
The robot itself needs to be modified so that the light sensor "looks" forward along the robot chassis rather than down at the ground.
I will also be giving you a light emitting brick which you connect to
the third output port, and set up as an active sensor.
The first program you will need has the following behaviors:
- If the robot hits an object, stop, back up, and turn away from it (this
can be taken from the wall following code).
- If the robot can't detect a light source in front of it (i.e. a
higher reading than the background), it should turn until it does.
- If the robot can detect a light source in front of it, it should
move towards it.
These behaviors should be connected so that each one takes priority
over every higher numbered layer.
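The "lower number beats higher number" wiring is just first-match arbitration: test the behaviors in priority order and act on the first one that fires. A sketch with the sensors simulated (the background level is an assumed calibration value):

```cpp
#include <string>

const int BACKGROUND = 40;   // assumed ambient light reading

// Returns the action selected by the highest-priority behavior that fires.
std::string arbitrate(bool bumped, int light) {
    if (bumped)              return "back-up-and-turn";  // behavior 1
    if (light <= BACKGROUND) return "search-turn";       // behavior 2
    return "approach";                                   // behavior 3
}
```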
The second program adds some additional functionality:
- If the robot hits an object, stop, back up, and turn away from it (this
can be taken from the wall following code).
- If the robot can't detect a light source in front of it (i.e. a
higher reading than the background), it should turn until it does.
- If the robot detects a light source in front of it and the light is above
a certain threshold value, it should stop, and back up.
- If the robot can detect a light source in front of it and the
light is below the threshold value, it should move towards it.
Again these behaviors should be connected so that each one takes
priority over every higher numbered layer.
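In this scheme the second program differs from the first by a single extra test between "search" and "approach"; the threshold is again an assumed calibration value:

```cpp
#include <string>

const int BACKGROUND = 40;   // assumed ambient light reading
const int TOO_CLOSE  = 85;   // assumed "close enough, back off" threshold

std::string arbitrate2(bool bumped, int light) {
    if (bumped)              return "back-up-and-turn";  // behavior 1
    if (light <= BACKGROUND) return "search-turn";       // behavior 2
    if (light >= TOO_CLOSE)  return "stop-and-back-up";  // behavior 3
    return "approach";                                   // behavior 4
}
```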
The code from this exercise must be handed in during the lecture the
week after the practical class.
Exercise 4
Exercise 4 is to get set up for programming the AIBOs.
There are two parts to the exercise:
- Download and install the OPEN-R development environment on your machine. There are instructions on the robot information page.
- Download the code we used at RoboCup 2004 and check that it
compiles okay. If it doesn't, there is a problem with your setup.
If you have time on your hands after doing this, start looking through the code, figuring out how you will modify it to handle Exercise 5.
The code comes with a README that explains the structure of the code,
and the code is reasonably well commented, so it should be comprehensible.
You should only need to modify code in the Behavior subdirectory of
metrobots-robocup-2004. My suggestion is that you create a new
"role" along the lines of the test roles MyTest.cc and LocTest.cc, and
modify Behavior.cc appropriately (that will save you having to mess
too much with the existing code and the role-switching).
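To make the "new role" suggestion concrete, here is a hypothetical skeleton; the base-class name and methods below are invented for illustration, so check MyTest.cc and Behavior.cc for the real interface before copying anything:

```cpp
#include <string>

// Stand-in for whatever base class the real roles derive from.
struct Role {
    virtual ~Role() {}
    virtual std::string name() const = 0;
    virtual void step() = 0;     // one behavior cycle
};

// Your new role lives in its own file, the way MyTest.cc does.
struct SquareWalkRole : Role {
    int cycles = 0;
    std::string name() const override { return "SquareWalk"; }
    void step() override {
        ++cycles;                // walk/turn state machine goes here
    }
};
```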
Exercise 5
Exercise 5 is an exercise in dead-reckoning navigation, and the use of
odometry. There are two parts:
- Write code that has the robot walk in a square, 3 feet on each
side. Thus the robot should start by walking in a straight line for 3
feet, turn through 90 degrees clockwise, walk another 3 feet in a
straight line, turn another 90 degrees clockwise, and so on, until it
has turned through 360 degrees and walked back to its starting point.
The aim of the exercise is to minimise the distance that the robot ends
up from its starting point.
- Write code that has the robot look for the ball (you can assume
that the ball will be visible without the robot having to turn), walk
up to the ball, then turn around and return to its starting point.
Again the aim is to get the robot back as close as possible to its
starting point.
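Part 1 can be reasoned about off-robot by dead-reckoning the commanded path and watching how a small systematic error compounds over the four legs. Units are feet and degrees; the error arguments are made up for experimentation:

```cpp
#include <cmath>

const double PI = 3.14159265358979323846;

struct Pose { double x, y, heading_deg; };

// Walk dist along the current heading, then turn clockwise by turn_deg.
Pose leg(Pose p, double dist, double turn_deg) {
    double rad = p.heading_deg * PI / 180.0;
    p.x += dist * std::cos(rad);
    p.y += dist * std::sin(rad);
    p.heading_deg -= turn_deg;          // clockwise turn
    return p;
}

// Distance from the start after a 3-foot square with per-leg errors.
double square_error(double dist_err, double turn_err_deg) {
    Pose p{0.0, 0.0, 0.0};
    for (int i = 0; i < 4; ++i)
        p = leg(p, 3.0 + dist_err, 90.0 + turn_err_deg);
    return std::hypot(p.x, p.y);
}
```

With zero error the model returns exactly to the start; even a couple of degrees of bias per turn leaves a noticeable offset, which is why turn calibration dominates this exercise.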
If you look through the various .cc files in the Behavior directory of
metrobots-robocup-2004, you will find code that looks for the ball,
identifies when the ball is seen and so on. What you need to
concentrate on in this exercise is tracking how far the robot has
walked, and turning as precisely as possible through 180 degrees.
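For part 2, the tracking reduces to simple bookkeeping: accumulate distance on the way out, and after the 180-degree turn keep walking until the inbound total matches. A minimal model of that bookkeeping (units arbitrary):

```cpp
struct ReturnTrip {
    double outbound = 0;   // accumulated while walking to the ball
    double inbound  = 0;   // accumulated while walking back

    void walk_out(double step)  { outbound += step; }
    void walk_back(double step) { inbound  += step; }

    // After turning 180 degrees, keep walking while this is positive.
    double remaining() const { return outbound - inbound; }
};
```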
Exercise 6
This is a two-week exercise, and (you may believe me by now when I
say this) you'll need the full two weeks to get the robots to
do this :-)
There are three parts to the exercise:
- First, modify the code you wrote for the previous exercise to
have the robot walk six feet on each side of the square, and to do
this first clockwise and then counterclockwise.
The idea is that between all the groups, we'll be able to replicate
the Borenstein experiment that we talked about in class.
- Now, do the same experiment using localization. You should find that
localization works well enough when the robot is stationary, or is
moving slowly (the motion model is not too robust), but is not great
when the robot moves quickly or when you try to turn to a precise
angle (the angle quantisation is rather coarse).
As a result, the best way to approach the problem, I think, is to have
the robot figure out where it starts, and thus the location of the
four points it needs to go to. Then move towards them based on the x
and y coordinates you get from localization, ignoring the angular
component (so do the turn as before). Note also that the resolution of
the x and y measurements is +/- 150mm.
To see examples of the interface with localization, look at the code
in Behavior/Movearound.cc
- Finally we'll place the ball at an arbitrary location, and you have to
get the robot to walk to it and then return to the start point; again
this is a repeat of an exercise from the last practical, but this time
using localization.
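The reason part 1 runs the square in both directions follows Borenstein's bidirectional square-path idea: a systematic turn error pushes the clockwise and counterclockwise end points to opposite sides of the start, so comparing the two runs separates systematic error from random noise. A toy model (feet and degrees, error values invented):

```cpp
#include <cmath>

const double PI = 3.14159265358979323846;

struct Pt { double x, y; };

// End point after a 6-foot square with a systematic per-turn error,
// dir = +1 for counterclockwise, -1 for clockwise.
Pt square_end(double turn_err_deg, int dir) {
    double x = 0, y = 0, heading = 0;
    for (int i = 0; i < 4; ++i) {
        x += 6.0 * std::cos(heading * PI / 180.0);
        y += 6.0 * std::sin(heading * PI / 180.0);
        heading += dir * (90.0 + turn_err_deg);
    }
    return {x, y};
}
```

With the same turn bias, the two runs end mirrored about the x axis; averaging them cancels the systematic part, which is the effect the pooled class data should show.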
If I manage to get a particle filtering version of localization
running, you can use that instead of the existing version. In that
event, I'll post the new code here.
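The suggested strategy of navigating on localization's x and y while ignoring its angle can be sketched as a waypoint loop: compute your own bearing to the next corner from the position estimates, do the turn open-loop as before, and declare arrival once you are inside the position resolution. Everything below is a hypothetical illustration, not code from Behavior/Movearound.cc:

```cpp
#include <cmath>

const double ARRIVE_MM = 150.0;  // matches the stated +/- 150mm resolution
const double PI = 3.14159265358979323846;

struct XY { double x, y; };

// True once the localization estimate is within resolution of the target.
bool arrived(XY est, XY target) {
    return std::hypot(target.x - est.x, target.y - est.y) <= ARRIVE_MM;
}

// Bearing (degrees) from the current estimate to the target, computed
// from positions only -- avoiding the coarse angle estimate entirely.
double bearing_deg(XY est, XY target) {
    return std::atan2(target.y - est.y, target.x - est.x) * 180.0 / PI;
}
```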