The objective of the AM36 2007 course was to build an animal-like robot, using Lego Mindstorms NXT, for the RobotsAtPlay Festival 2007. The specific requirements were: no wheels for propulsion, some form of interactivity, and completion within two weeks.
Designing the Robot
The following drawings were made for the initial design of the robot. The head and body were implemented in the basic style sketched here, while several changes were made to the design of the legs. Additionally, some of the planned sensors were not used, and the activities linked to those sensors were not implemented.
Head design and a first draft of possible design for legs.
Body seen from top. Approximate proportions, and sensor placement.
The following sections are the design document containing the original plans for interaction, software, use of sensors etc.
The Lego Crocodile is an interactive Lego model, resembling a real crocodile in appearance and behaviour. The model is capable of moving around, either in response to activity in its proximity or of its own accord. It will generally remain passive, seeking out a position in the sun like other reptiles. Occasionally it will become active, either seeking a better position in the sun or scavenging for food in its vicinity. The model can also be activated by other animals or humans entering its proximity. Depending on several factors, including the time since it last ate, the crocodile will attempt either to engage the disturbance or to move to another position and resume sunbathing.
To make the robot interactive, it processes sensory input simulating sound, sight and touch. The robot also has additional behaviour patterns, which are driven by a schedule. Finally, a Bluetooth remote controller can activate several of the robot's functions.
The robot will actively try to place itself in a lighted area, seeking sunlight like other reptiles. It will stay in the light for a period of time, unless disturbed, and occasionally seek out a better spot. The robot can be “stirred” by something entering its field of vision, and will then react according to several variables: either moving towards the object in its vision, or backing away to find a new place to rest. The robot will emit sounds warning intruders of its mood. In addition to vision, the robot will use hearing to react to objects coming close to it. If stirred by its hearing, the robot will move around, trying to get the disturbance into its field of vision. If it is hungry, the robot will seek out food thrown to it and try to consume it.
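The behaviour selection described above can be sketched as a simple decision function. This is purely illustrative: all names (`Behaviour`, `chooseBehaviour`, the hunger and light thresholds) are made up for this sketch and are not the actual NXT code.

```java
// Hypothetical sketch of the crocodile's behaviour selection.
// Names and threshold values are illustrative, not from the real robot.
enum Behaviour { SUNBATHE, SEEK_LIGHT, ENGAGE, RETREAT, SEEK_FOOD }

class CrocodileBrain {
    // Pick the next behaviour from the sensory situation and internal state.
    static Behaviour chooseBehaviour(boolean disturbed,
                                     long minutesSinceEating, int lightLevel) {
        if (disturbed) {
            // A hungry crocodile engages the disturbance; a sated one backs away.
            return minutesSinceEating > 30 ? Behaviour.ENGAGE : Behaviour.RETREAT;
        }
        if (minutesSinceEating > 60) {
            return Behaviour.SEEK_FOOD;   // scavenge when hungry enough
        }
        // Undisturbed: stay put in good light, otherwise look for a sunnier spot.
        return lightLevel > 50 ? Behaviour.SUNBATHE : Behaviour.SEEK_LIGHT;
    }
}
```

The point of a pure function like this is that the schedule-driven and sensor-driven behaviours share one decision point, which keeps the reactions consistent.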
As written here, we have been doing a classic AI project for the last three months and got quite a good result. I’ll just post some material from our solution: pictures, source code and some comments. There’s a report available here as well. It includes some of the source code and some explanation of the design, but does not detail everything.
Sokoban is described in the previously mentioned post, or you can just try Googling (http://www.google.dk/search?q=Sokoban) it, if you don’t know it. When dealing with robotics and AI, Sokoban is an interesting problem, because it simulates a production environment, where a robot handles crates in a warehouse, and it combines robotics with path-planning AI.
The development was done in two parts, with several iterations in each part. The first part was exclusively about evaluating the physical Sokoban field, and then designing and building a Lego Mindstorms robot that would be able to solve the puzzle. The second part was about developing a program for finding the solution, translating the solution for the robot, and finally fine-tuning the robot.
The Sokoban Robot
Our physical Sokoban “field” consisted of a whiteboard with a grid made
out of black tape on top of it. Each cross section of the tape lines
counted for a cell in the Sokoban game, while walls were represented
by a white background (cut-off tape lines). So the basic idea was of course to construct a robot that could differentiate black from white, follow black lines, and recognise the cross sections.
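A field like this maps naturally onto a grid data structure, with one cell per tape intersection and walls where the tape is cut. As a minimal sketch (assuming the common Sokoban text notation: `#` wall, `@` robot, `$` diamond, `.` goal, space for a free cell — our actual representation may have differed):

```java
// Illustrative sketch: the taped whiteboard as a Sokoban grid.
// Assumes standard Sokoban text notation; not the code from the project.
class SokobanField {
    final char[][] cells;

    SokobanField(String[] rows) {
        cells = new char[rows.length][];
        for (int r = 0; r < rows.length; r++) {
            cells[r] = rows[r].toCharArray();
        }
    }

    // '#' marks a cut-off tape line, i.e. a wall in the puzzle.
    boolean isWall(int row, int col) {
        return cells[row][col] == '#';
    }

    // A cell is passable for the robot if it is on the board and not a wall.
    boolean isFree(int row, int col) {
        return row >= 0 && row < cells.length
            && col >= 0 && col < cells[row].length
            && !isWall(row, col);
    }
}
```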
The robot was based on the standard robots from the LEGO Mindstorms set, since we were in a hurry, and then adapted for our specific task. We took an iterative approach and simply built a first version of the robot, without really working too much on specific construction considerations. The standard robot was equipped with some light sensors for following the black line, and using some straightforward code, a few test runs were performed.
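The core of that straightforward code is a threshold decision on the two light readings. The sketch below is an assumption about how such logic typically looks (the real code ran on the NXT and drove the motors directly; the names and the calibration value here are invented):

```java
// Minimal sketch of two-sensor line following. Illustrative only:
// the threshold and names are assumptions, not the project's code.
enum Steer { FORWARD, LEFT, RIGHT, AT_INTERSECTION }

class LineFollower {
    static final int BLACK_THRESHOLD = 40;  // assumed calibration value

    // The two sensors straddle the tape; a dark reading on one side
    // means the robot has drifted towards the other side.
    static Steer decide(int leftReading, int rightReading) {
        boolean leftDark  = leftReading  < BLACK_THRESHOLD;
        boolean rightDark = rightReading < BLACK_THRESHOLD;
        if (leftDark && rightDark) return Steer.AT_INTERSECTION; // crossing line
        if (leftDark)  return Steer.LEFT;   // drifted right, steer back left
        if (rightDark) return Steer.RIGHT;  // drifted left, steer back right
        return Steer.FORWARD;               // tape line between the sensors
    }
}
```

Both sensors going dark at once doubles as the intersection detector, since the crossing tape line passes under both sensors simultaneously.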
After a few test runs with the robot some obvious points to consider emerged:
- the turning radius
- the sensor sampling speed
- sensor placement
Starting from the last point: the sensor placement was crucial to the effective movement of the robot. The sensors needed to be close enough to the black line to ensure that the robot moved straight ahead without zig-zagging, but still far enough apart that the robot had time to react to the black line. The sensors also had to be ahead of the robot, to increase the distance between the robot’s turning centre and the sensors, which increased the accuracy of the robot. At the same time the sensors could not be too far in front of the robot, since this makes the sensors move so fast when the robot turns that the sensor sampling rate becomes too slow: the robot simply doesn’t catch the black lines when turning.
The placement of the sensors also influenced the robot’s turning radius. Too far ahead of the robot, and the robot would not be able to turn 90 degrees between two intersections of the lines, due to its increased length.
The turning radius was also influenced by the motors driving each wheel. At first the robot simply stopped one wheel, while continuing to drive the other. This turned out to be inadequate for making the turns sharp enough, so we had to do something more. We then experimented with letting the wheels drive in opposite directions, which of course decreased the turning radius a lot. This solution did introduce other problems, including making the robot move past intersections in a sort of stop-and-go pattern, but in the end we stuck with it.
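Counter-rotating the wheels turns the robot in place about the midpoint between them, which makes the required motor rotation easy to estimate: each wheel traces an arc of radius half the track width, so the wheel must rotate `robotDegrees * trackWidth / wheelDiameter` degrees. A back-of-the-envelope sketch (the dimensions in the test are example numbers, not measurements from our robot):

```java
// Geometry of turning in place with counter-rotating wheels.
// Illustrative helper; dimensions are examples, not the actual robot's.
class PivotTurn {
    // Degrees each wheel must rotate for the robot body to turn
    // 'robotDegrees' about the midpoint between the wheels.
    static double wheelDegrees(double robotDegrees,
                               double trackWidthMm, double wheelDiameterMm) {
        // Arc per wheel: (robotDegrees/360) * PI * trackWidth.
        // Wheel rotation: arc / (PI * wheelDiameter) * 360.
        // The PI and 360 factors cancel:
        return robotDegrees * trackWidthMm / wheelDiameterMm;
    }
}
```

For example, with a 112 mm track and 56 mm wheels, a 90-degree turn needs 180 degrees of rotation on each wheel, in opposite directions.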
This video shows the robot being tested on a sample field. All the lines are unbroken in this example, meaning that there are no walls in this particular puzzle. You can see that the two sensors closest to the robot are used for following the lines, and thus controlling the robot. The front-most sensor, which isn’t exercised in this test, is only used when moving a diamond in the game, to ensure that it is placed right on an intersection.
In the end it turned out that a few modifications were needed before the robot was ready for the real Sokoban puzzle. More about this in my next post.
There is a report available on the complete development of the robot, and the path-finding code used when solving the puzzle: AI00 Sokoban Robot Report. The report has a detailed chapter on the construction of the robot, including tests of different sensor positions and different speed settings for the motors.
My next post will be an introduction to the Java program used for solving the Sokoban puzzle. It’s also included in the report above.