Evolutionary Multi-Agent Robotic Systems (E-MARS): From a Robot Soccer Perspective

Prahlad Vadakkepat

Secretary, Federation of International Robot-soccer Association (FIRA), Dept. of Electrical Engineering, KAIST, Taejon-shi, 305-701, Republic of Korea. [email protected], [email protected], fira@fira.net. www.fira.net, www.mirosot.org

Abstract

This paper briefly deals with Evolutionary Multi-Agent Robotic Systems and discusses the robot soccer platform as their test bed. Though simulator-based robot soccer systems and maze contests have been in existence for a long time, the Micro-Robot World Cup Soccer Tournament (MiroSot) initiated in Korea is one of the first of its kind. The first and second MiroSot competitions were held at the Korea Advanced Institute of Science and Technology (KAIST) in 1996 and 1997, respectively. The robots used in MiroSot are small in size (7.5cm x 7.5cm x 7.5cm), fully or semi-autonomous, and operate without any human operators. MiroSot involves multiple robots that need to collaborate in an adversarial environment to achieve specific objectives. In multi-robot systems the environment's dynamics can be determined by other robots in addition to the uncertainty that may be inherent in the domain; the environment is dynamic because other robots intentionally affect it in unpredictable ways. The key aspect is the need for robots not only to control themselves, but also to track and control the ball, which is a passive part of the environment. The interesting theoretical issue behind the MiroSot experiments is the use of soccer as a prototype example of a complex, adaptive system. The author participated in a joint Indo-Korean team in MiroSot'97. The Federation of International Robot-soccer Association (FIRA) was formed on June 5th, 1997 to establish a science and technology world cup, and FIRA will host the Robot World Cup'98 in Paris alongside the FIFA World Cup France.

1 Introduction

With the ever-increasing number of robots in industrial environments, scientists and technologists are often faced with issues of cooperation and coordination among different robots and their self-governance in a workspace. This has led to developments in multi-robot cooperative autonomous systems. The proponents of multi-robot autonomous systems needed a model on which to test the efficacy and efficiency of the theories being proposed. It is no surprise that they focused on robot soccer. Robot soccer makes heavy demands in all the key areas of robot technology: mechanics, sensors and intelligence. And it does so in a competitive setting that people around the world can understand and enjoy [1]. The Micro-Robot World Cup Soccer Tournament (MiroSot) was thus born, and a new interdisciplinary research area emerged, where scientists and technologists from diverse fields like robotics, intelligent control, communication, computer technology, sensor technology, image processing, mechatronics, artificial life, etc., work together to make multi-robot systems a reality [2].

Multi-agent systems (control architecture, communication scheme, vision tracking algorithm, sensing, fault tolerance) have been studied by several research groups in recent years. An architecture called ALLIANCE is presented for fault tolerance in [3]. Cooperation among heterogeneous mobile robots and a collection of particle-like, small-cell-concept micro-robots, called cellular robots, was developed by Kawauchi et al. [4]. Mitsmoto et al. [5] developed micro robots and proposed a self-organizing algorithm using immune networks. Despite the superb idea of introducing the new concept of multi-agent systems in robotics, their experiments [8][9][10] can handle only a limited number of agents, resulting in narrow application domains. Computational intelligence techniques such as fuzzy systems, neural networks and evolutionary computation, and machine learning algorithms like Q-learning and the profit sharing plan, could be used for robot intelligence.

The simulator contests on robot soccer initiated in Japan (RoboCup, www.robocup.org) help researchers to develop new strategies and to test them in a competitive setting. Java Soccer is available at www.cc.gatech.edu/grads/b/Tucker.Balch/JavaBots/. The associated simulator software can be downloaded from these websites. Researchers can design their own teams and compete on these platforms. Web-based remote-controlled robots are available at http://muster.cs.ucla.edu:80/w3r3/. The robots can be taken to the charging dock and controlled to perform various tasks through the internet.

In MiroSot, micro robots play on a playground of 130 cm x 90 cm. Details on MiroSot are available at the FIRA and MiroSot homepages. In addition to MiroSot, FIRA has other categories of games as well: the Humanoid Robot World Cup Soccer Tournament (HuroSot), the Robot World Cup Soccer Tournament (RoboSot) and the Nano Robot World Cup Soccer Tournament (NaroSot), with different categories of robots (www.fira.net).

It is a basic problem to determine what action an agent should take in a given situation. Several studies on such action selection problems have been done [3, 6, 7]. In particular, a dynamic environment with competition between agents makes the problem more difficult and complex. The robot soccer game involves cooperation, decision making, planning, modeling, learning, robot architecture, vision tracking algorithms, sensing, communication, and so forth. It also has an explicit performance measure: the score. A reactive robot architecture called reactive deliberation is proposed in [15] for a one-on-one game played with simple robots. The ball-pass behavior, which is important for cooperation between soccer players, can be learned using a neural network through computer simulation [16]. Distributed control architectures for cooperative robot soccer systems can be seen in [17], [18], [19]. Related research results on robot soccer are available in the workshop proceedings of MiroSot'96 and MiroSot'97 (Micro-Robot World Cup Soccer Tournament), held at KAIST, Taejon, Korea during November 9-12, 1996 and June 1-5, 1997. The real-time decision making problem and the action selection mechanism (ASM) for each agent, given its role such as striker or goal-keeper, can be addressed using a multilayer perceptron (MLP) to learn human judgment for the action selection. The effectiveness and applicability of such an ASM were demonstrated through a real robot soccer game of S-MiroSot (single MiroSot).

Different types of schemes for soccer robot control are available. In a remote-brainless vision-based soccer robot scheme, a host computer controls the robots by commanding the robot velocities, like a radio-controlled car. In a brain on-board vision-based soccer robot system, the robots make their moves according to the vision data, keeping away from obstacles autonomously. In a robot-based scheme, each autonomous robot makes decisions based on information it collects with its sensors and, if needed, can communicate with other robots. Robots which can operate autonomously are needed for the robot-based system. The control structure, behaviors and actions of the robots influence their performance in a big way in robot soccer games.

2 Soccer Robot System

In addition to varieties in hardware like CPU, actuators, sensors and so forth, soccer robot systems also differ in control algorithms, strategies and the total integration system. The selection of hardware and software among the various methods is a challenging problem for robot designers. Two operating methods are available for soccer games: the vision-based soccer robot system and the robot-based soccer robot system. Basically, robots, a vision system, a host computer and a communication system are needed for a robot soccer game, as in Figure 1.

Figure 1: Overview of the soccer robot system with the vision system, host computer and RF communication system.

2.1 Vision-based soccer robot system

The vision-based system has two types of implementations. The first is the remote-brainless soccer robot system, and the other is the brain on-board soccer robot system.

2.1.1 Remote-brainless vision-based soccer robot system

In this system, each robot has its own driving mechanism, communication part and CPU board. The computational part controls the robot's velocity according to command data received from a host computer. All calculations on vision data processing, strategies and position control of the robots are done on the host computer, which controls the robots like radio-controlled cars. The robot can be implemented easily as its structure is simpler than in other systems. Since it has no sensors (encoders), the computing power of the host computer has to be higher than in other systems to control the positions of the robots accurately. Generally, the position control of mobile robots is done by an embedded processor using its encoders or sensors, but here it has to be carried out by the host computer. To control the positions of the robots accurately, the sampling time of the host computer must be very small. Vision processing is a key technology, especially in this system. As all the algorithms of the remote-brainless soccer robot system are centralized in the host computer, the communication protocol for the multi-agent cooperative system is quite simple. But the burden on the host computer multiplies with an increase in the number of agents.
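In such a centralized scheme, the host computer's per-robot work reduces to turning vision-derived poses into wheel-velocity commands. The following is a minimal sketch of that step for a differential-drive robot; the function name, gains and velocity limits are illustrative assumptions, not values from the paper.

```python
import math

def wheel_velocities(robot_x, robot_y, robot_theta, target_x, target_y,
                     k_lin=1.0, k_ang=2.0, max_v=50.0):
    """Compute left/right wheel velocity commands (cm/s) steering a
    differential-drive robot toward a target point.  Gains and limits
    are illustrative, not MiroSot specifications."""
    dx, dy = target_x - robot_x, target_y - robot_y
    distance = math.hypot(dx, dy)
    # Heading error, wrapped to [-pi, pi]
    heading_error = math.atan2(dy, dx) - robot_theta
    heading_error = math.atan2(math.sin(heading_error), math.cos(heading_error))
    v = min(k_lin * distance, max_v)        # forward speed
    w = k_ang * heading_error               # turn rate
    left = max(-max_v, min(max_v, v - w * 3.75))   # assumed half wheel base
    right = max(-max_v, min(max_v, v + w * 3.75))
    return left, right
```

The host would run this for every robot on each vision frame and transmit the resulting pairs over the RF link, which is why its sampling time dominates control accuracy.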

2.1.2 Brain on-board vision-based soccer robot system

In this system, the robots have functions such as velocity control, position control, obstacle avoidance and so on. The host computer processes vision data, calculates future behaviors of the robots according to the strategies, and sends commands to the robots using an RF modem. The robots make their moves according to these commands, keeping away from obstacles autonomously. The robots have to have sensors for position control (encoders) and obstacle avoidance (IR sensors). The calculation load on the host computer is much less compared with the former system.

2.2 Robot-based soccer robot system

In the robot-based soccer robot system, each robot has many functions for autonomous behaviors. All calculations are done locally by each of the robots. The host computer processes vision data on the positions of the ball and robots and forwards them to the robots. Each robot decides its own behavior autonomously using the received vision data, its own sensor data and its strategies. This can be considered a distributed control system, where each robot has its own intelligence. It may be very hard to implement robots with established velocity control, position control, obstacle avoidance, communication, decision making, etc. The host computer processes only vision data and can be regarded as a kind of sensor.

Figure 2: A soccer robot.

3 Architecture of Soccer Robots

The design philosophy is to realize autonomous micro-robots with various functions so that they can be used not only in soccer games, but in a variety of other applications as well. A soccer robot (Figure 2) may not have a body frame, but can have six PCBs with surface-mounted devices as its sides. CMOS-type devices can be used considering power consumption. The complete robot's size, as per the MiroSot specifications [12], has to be within (7.5 cm)^3. The CPU, eight side IR sensors, eight bottom IR sensors and the caterpillar moving mechanism make it possible to control robot motion with obstacle avoidance and other intelligent behaviors.

3.1 CPU board

To realize high intelligence, a high-performance CPU is needed. The more the computation capability and the number of sensors, the more goals can be achieved autonomously. The Intel 16-bit processor 80C196KC (20 MHz) is a good candidate, which has useful functions such as PWM generators and communication modes for multi-CPU expansion.

3.2 Sensors

The following is the minimum information each robot should gather by itself: (1) its own location in the field, (2) the location of the ball, (3) whether the robot has the ball or not, and (4) whether there is a blocking robot when kicking the ball or not. The grid lines on the field and the wall help robots to correct accumulated errors. An indirect but useful method is to use a vision camera. Each robot can have two on-board encoders, making it possible to calculate its current position. The errors from these dead-reckoning sensors can be corrected by detecting the grid lines on the field; the eight IR sensors attached to the bottom board can be used for this purpose. The eight IR sensors on the sides are helpful for obstacle detection (Figure 3) [18]. Unlike the conventional method of measuring the amount of light [13], a modulated carrier signal can be employed to be insensitive to ambient light conditions. Obstacles at discrete distances such as 20 cm (very far), 10 cm (far) and 5 cm (close) can be detected by just one pair of sensors.

Figure 3: Sensors for obstacle detection with associated ranges.

3.3 Actuators and power unit

Torque, speed and power consumption characteristics are important in choosing the motors. It is popular to fix the two motors with their axes aligned, with two casters at the front and rear sides. If this is difficult due to lack of space, two motors with right-angled wheels [13] or with parallel but non-aligned wheels [14] might be used. For smooth dribbling, aligning the wheels or adding a third motor for steering is recommended. The robots carry re-chargeable batteries as the power source. The power specifications, the length of competing time, and the size and weight are the factors in choosing the batteries. 9V and 4.8V Ni-Cd re-chargeable batteries are suitable for the motors and logic power, respectively.

3.4 Communication

According to the MiroSot rules, robots must not be wired, so an IR remote control or RF digital communication system is necessary. The robots can use full-duplex RF digital communication. The communication between robots helps in fulfilling the team's objectives. Communication deadlock may happen when the amount of information being communicated increases, so the communication strategy and protocol should be carefully designed.
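To make the protocol concern concrete, a compact command frame for such an RF link can be sketched as follows. The frame layout (start byte, robot id, two signed wheel velocities, modular checksum) is purely an assumed example for illustration, not the protocol used in MiroSot.

```python
import struct

def encode_command(robot_id, left_vel, right_vel):
    """Pack a velocity command into a small fixed-size frame.
    Layout (assumed): 0xAA start byte, unsigned id, two signed
    velocities in [-128, 127], one-byte modular checksum."""
    body = struct.pack(">Bbb", robot_id, left_vel, right_vel)
    checksum = sum(body) % 256
    return b"\xaa" + body + bytes([checksum])

def decode_command(frame):
    """Validate and unpack a frame produced by encode_command."""
    if len(frame) != 5 or frame[0] != 0xAA:
        raise ValueError("bad frame")
    body, checksum = frame[1:4], frame[4]
    if sum(body) % 256 != checksum:
        raise ValueError("checksum mismatch")
    return struct.unpack(">Bbb", body)
```

Keeping frames short and checksummed is one simple way to reduce the load on a shared RF channel as the number of agents grows.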

3.4.1 Position calculation of ball and robots using a color vision system

For object detection using a vision system, an edge detection or a gray-level detection method can be adopted. But if there are objects with similar shapes or brightness, detection becomes difficult. In that case, the color information of the object can make detection easier. Specifically, if the target object has a certain uniform color, its position and direction under the given environment can be calculated from the color information through the vision system, for example using the RGB values. Based on this method, the directions and positions of the ball and of robots with unique color patterns can be calculated. A pixel search method helps in reducing the processing time and increasing the sampling rate under a given color vision system.

Figure 5: Control structure of the soccer robot.

Figure 4: Robot uniform according to the MiroSot rules.

3.4.2 The algorithm for object detection

A robot uniform is shown in Figure 4. Its pattern conforms to the MiroSot rule on robot uniforms. The right bottom square carries the team color, which must be blue or yellow. The left top one carries the robot color, which is freely assigned so as not to be confused with the team color. Six colored objects need to be classified: 2 team colors, 3 robot colors and 1 ball color (orange). The detection method is quite simple. Initially, the RGB values of each pixel are read into the host computer memory from the frame buffer of the vision board. Then every pixel is normalized by the sum of its three components and examined against the RGB boundary conditions of the 6 colors. If the pixel satisfies a color, the X and Y coordinate values of the pixel are stored for that color and its pixel count is increased by 1. In this way, all the pixels on the soccer field are searched and classified into one of the 6 colors whenever a condition is satisfied. Finally, the averages of the X and Y summations for each of the colors are calculated. To gain robustness against pixel noise, appropriate grouping methods can be incorporated. For instance, if one pixel is far away from other pixels satisfying the same conditions, it can be regarded as a pixel of another object. At present, the processing time is about 33 ms because of the supporting software libraries; if the memory were accessible directly, this could be brought down. Using normalized RGB values is very simple, but this simplicity makes the detection algorithm sensitive to environmental conditions such as the illumination level. To solve this problem, detection methods with more complex schemes such as the HSI color space, pattern recognition and fuzzy classification must be studied.
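The normalized-RGB classification with centroid averaging described above can be sketched as follows. The representation of the boundary conditions (intervals on the normalized r and g components) and all names are illustrative assumptions.

```python
def detect_objects(pixels, color_bounds):
    """Classify pixels by normalized-RGB bounds and return the centroid
    of each color.  `pixels` is a list of (x, y, R, G, B) tuples;
    `color_bounds` maps a color name to (r_min, r_max, g_min, g_max)
    on the normalized r and g components (b = 1 - r - g)."""
    sums = {c: [0.0, 0.0, 0] for c in color_bounds}   # sum_x, sum_y, count
    for (x, y, R, G, B) in pixels:
        total = R + G + B
        if total == 0:
            continue
        r, g = R / total, G / total
        for color, (r0, r1, g0, g1) in color_bounds.items():
            if r0 <= r <= r1 and g0 <= g <= g1:
                s = sums[color]
                s[0] += x; s[1] += y; s[2] += 1
                break                      # each pixel counts for one color
    return {c: (sx / n, sy / n) for c, (sx, sy, n) in sums.items() if n}
```

A grouping step (rejecting pixels far from the cluster of their color) could be added on top of this to gain the noise robustness mentioned in the text.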

4 Control structure

Figure 5 shows a basic control structure for soccer robots [2]. The robot hardware architecture is drawn in the dotted-line box, and the robot software in the solid-line box. Each robot has the same control structure except for the specialized intelligence part, which consists of the robot's own special behaviors or strategy. The basic behaviors are "move" and "obstacle avoidance", and the basic actions are "shoot", "position to shoot", "intercept ball", "sweep ball" and "block". These are important behaviors and actions in a robot soccer game. Even if the adopted strategy is good, the system may perform poorly if the moving ability is poor. A central controller can carry out practical strategies, including the above basic behaviors and actions, by selecting an action mode based on the present states and the locations of the ball and the robots. It can also determine proper tactics according to the specialized intelligence. The goal-keeper robot normally stays around the goal-post and can receive a MOVE command when the ball is in the defense zone or near the goal-post. The lowest-level controller in the control structure is a velocity controller with the shortest sampling time. As one goes to a higher level, the sampling time becomes longer. In particular, the sampling time of the central controller is determined by the data update time from the vision system.

4.1 Basic behaviors

4.1.1 Move behavior

When the operator or host computer informs the robot of the coordinates of its destination, the robot rotates and runs toward that point. Since a small orientation error eventually leads to a larger location error, a 16-bit float type variable can be declared for the orientation. A method to align the robot orientation with the destination point is to calculate the inner product between the robot orientation vector and the destination vector (from the robot position to the destination) and to determine whether this value is within the desired boundary or not. With a table in one-degree steps, the maximum error can be one degree. If the condition on the error bound is too strict, the robot may oscillate around the destination. In experiments, when the range of the aligning error is set at one degree, the real error is found to be smaller than one degree. If the destination is directly behind, the robot may optionally make a left turn. After completing the rotation mode, the robot runs towards the destination with both wheels at the same speed; no error correction is applied during the run. In this behavior the initial misalignment may lead to severe deviation at the final point, but from experimental results the final deviation is found to be less than 10 cm in the MiroSot field of 130 cm x 90 cm. Keeping both wheels at the same speed is preferable, as different speeds may cause unsmooth motion in the caterpillar mechanism. As the robot gets close to its destination, it is stopped when either of the following conditions is met: i) the robot passes over the destination, or ii) the difference between the robot position and the destination is smaller than a given threshold.
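The inner-product alignment test can be sketched as below: the dot product of the unit orientation vector and the unit destination vector is compared against the cosine of the tolerance angle. The function name and the handling of the degenerate case are assumptions.

```python
import math

def aligned(robot_x, robot_y, robot_theta, dest_x, dest_y, tol_deg=1.0):
    """Return True when the robot heading points at the destination to
    within tol_deg degrees, using the inner-product criterion."""
    dx, dy = dest_x - robot_x, dest_y - robot_y
    norm = math.hypot(dx, dy)
    if norm == 0:
        return True                       # already at the destination
    # Dot product of unit heading vector and unit destination vector
    dot = math.cos(robot_theta) * dx / norm + math.sin(robot_theta) * dy / norm
    return dot >= math.cos(math.radians(tol_deg))
```

With tol_deg set to one degree, this matches the one-degree error bound discussed in the text.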

4.1.2 Obstacle avoidance behavior

The eight sensors can detect obstacles on the front, back, right-hand and left-hand sides (Figure 3). When the robot finds an obstacle in front, to avoid a collision it can turn left and move 10 cm forward, then turn right and move 10 cm forward. It can then turn and move towards the original destination D if no more obstacles are detected. However, if the robot detects obstacles in front and on the left-hand side, it can turn right first and then move towards the original destination in a similar way. When an obstacle is in the robot's dead angle, the robot may turn towards the destination once and then find the obstacle again; in this case, it repeats the above procedures.
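The detour procedure above can be sketched as a step generator. The 90-degree turn angle, the step encoding and the function name are assumptions added for illustration; the text only specifies the turn directions and the 10 cm legs.

```python
def avoidance_steps(front, left, right):
    """Return the detour step list for the obstacle cases described in
    the text; each step is an (action, argument) pair."""
    if front and left:
        # Blocked in front and on the left: detour to the right first.
        return [("turn_right", 90), ("forward", 10),
                ("turn_left", 90), ("forward", 10), ("goto", "D")]
    if front:
        # Blocked in front only: detour to the left first.
        return [("turn_left", 90), ("forward", 10),
                ("turn_right", 90), ("forward", 10), ("goto", "D")]
    return [("goto", "D")]                # path clear
```

Re-running this after every leg reproduces the repeat-on-rediscovery behavior described for obstacles in the dead angle.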

4.2 Basic actions

The basic actions are at a higher level than the basic behaviors. Only one action is selected by a high-level controller (the central controller) at a time.

4.2.1 Shoot action and position to shoot action

Shoot and position-to-shoot actions are used for shooting the ball at the goal or for passing it to other robots. All robots have these actions, and mostly the attacking robots use them. Given the ball and goal positions, two relative positions are calculated: the first for the position-to-shoot action and the other for the shoot action.

The shoot action is performed if the following two conditions are met: 1) the ball is located between the robot and the goal, and 2) the straight line from the robot through the ball is covered by the goal area. In particular, to prevent robots from kicking the ball towards their own side, the position-to-shoot action can be designed as in Case 2. After predicting the trajectory of a fast-rolling ball, the robot can move to a point to intercept it. The trajectory of the ball is obtained from the current and previous ball positions, as its velocity can be assumed constant over such a short span. Since the ratio of the distance moved by the ball to that moved by the robot is the same as the ratio of the predicted ball velocity to the maximum possible robot velocity, the intercept position can be calculated easily. Due to the time taken for robots to turn, the vision processing time and the delay caused by communication and so forth, intercept position errors may exist in practical systems. The positions can be calculated from a second-order equation, keeping the calculation time low.
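The second-order equation mentioned above can be sketched as follows: find the earliest time t at which a robot moving at its maximum speed can reach a ball travelling at constant velocity, i.e. solve |b + vt - r| = v_max * t for t. The function name and the degenerate-case handling are assumptions.

```python
import math

def intercept_point(bx, by, bvx, bvy, rx, ry, robot_speed):
    """Earliest interception point of a constant-velocity ball at (bx,by)
    with velocity (bvx,bvy) by a robot at (rx,ry) moving at robot_speed.
    Returns None when the ball cannot be caught."""
    px, py = bx - rx, by - ry
    a = bvx**2 + bvy**2 - robot_speed**2
    b = 2 * (px * bvx + py * bvy)
    c = px**2 + py**2
    if abs(a) < 1e-9:                      # speeds equal: linear case
        if abs(b) < 1e-9:
            return None
        t = -c / b
    else:
        disc = b * b - 4 * a * c
        if disc < 0:
            return None                    # no real solution
        roots = [(-b - math.sqrt(disc)) / (2 * a),
                 (-b + math.sqrt(disc)) / (2 * a)]
        ts = [t for t in roots if t > 0]
        if not ts:
            return None
        t = min(ts)                        # earliest positive time
    if t <= 0:
        return None
    return bx + bvx * t, by + bvy * t
```

In practice the turning delay and vision latency discussed in the text would be added to t before evaluating the ball position.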

4.2.2 Sweep ball action

When the ball is located on one's own side, the robot can kick it towards the opponent's area. The sweep ball action is the same as the shoot action in that the robot kicks the ball.

4.2.3 Block action

In contrast to the shoot action, the block action intercepts the ball or keeps it away from opponent robots. This action is mainly used by defense robots. The robot accomplishes this behavior by moving to the expected attack point, considering the ball and home-goal positions. The attack route may depend on the point aimed at by the opponent robot, so the defense robots' positions depend on the positions of the opponent robots and of the ball. Unlike the intercept-ball action, no estimation of the ball trajectory is used here.

5 Action Selection Mechanism for Robot Soccer

An Action Selection Mechanism (ASM) is a computational mechanism for each agent to take an appropriate action according to its role, such as striker, sweeper or goal-keeper [2]. The following assumptions are employed:

- A specific role is assigned to each agent.
- The number of available actions for each agent is finite.
- There is no explicit model of the environment in soccer games.
- Agents have a local point of view of the near future.

The approach to designing an ASM for robot soccer is as follows. To start with, a relatively simple ASM is designed for a situation without any opponents. After that, the mechanism incorporates additional action selection schemes for when opponents are present. The opponents are considered a kind of disturbance to each agent. The ASM of each agent considers the opponents only when the disturbance level due to an opponent robot crosses a pre-specified threshold.

Figure 6: The structure of the action selection mechanism (ASM) as a high-level controller.

The structure of an ASM is shown in Figure 6 [11]. It consists of action set, supervisor, internal motive, intervention, and final selection modules. The Action Set module consists of several actions for each agent to satisfy its specified role, and provides the information on actions that the other modules inquire from it, such as run-time parameters and the feasibility of each action. The action set is based on the basic actions described in the previous section. The Supervisor acts as a reinforcement for an agent to perform certain actions and modifies the attributes of actions. The Internal Motive module is the action selection module for the situation in which opponents are not considered. The Intervention module calculates the level of disturbance from opponents; if the disturbance level due to opponent agents is above some threshold, it suppresses the internal motive module and selects a new action. Human judgment can be used to model opponents' behaviors and strategies, and a simple multilayer perceptron (MLP) can be adopted as a tool to learn it. Human beings are capable of selecting the situations wherein the disturbance level due to opponent agents is very high; the situation variables calculated for the selected situations can be used as training data for the MLP. The Final Selection module takes the outputs of the other modules into account and selects a final proper action for each agent.
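The interplay between the internal motive and intervention modules reduces to a simple override rule, sketched below. The threshold value and names are assumptions for illustration.

```python
def select_action(internal_action, intervention_action, disturbance_level,
                  threshold=0.5):
    """Final-selection rule sketched from the ASM description: the
    intervention module overrides the internal motive only when the
    disturbance level from opponents crosses the threshold."""
    if disturbance_level > threshold:
        return intervention_action
    return internal_action
```

In the full ASM the supervisor and action-set modules would additionally filter which actions are feasible before this choice is made.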

5.1 MLP for intervention module

5.1.1 The structure of MLP

The MLP uses 10 inputs, 2 outputs and 2 hidden layers. The number of nodes in the first hidden layer is 12, and that in the second is 6. The activation function used is the sigmoid function phi(v) = 1 / (1 + e^(-av)). While the numbers of input and output nodes depend on the problem, 10 situation variables for the input nodes and 2 action variables for the output nodes are considered here.
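A forward pass through the 10-12-6-2 sigmoid network described above can be sketched as follows; the random weights are only placeholders standing in for trained values.

```python
import math
import random

def sigmoid(v, a=1.0):
    """phi(v) = 1 / (1 + exp(-a*v)), the activation used in the text."""
    return 1.0 / (1.0 + math.exp(-a * v))

def mlp_forward(x, weights):
    """Forward pass through fully connected sigmoid layers.
    `weights` is a list of (W, b) pairs, one per layer."""
    out = x
    for W, b in weights:
        out = [sigmoid(sum(w * o for w, o in zip(row, out)) + bi)
               for row, bi in zip(W, b)]
    return out

# Layer sizes 10-12-6-2 as in the text; weights here are random
# placeholders, not trained values.
random.seed(0)
sizes = [10, 12, 6, 2]
weights = [([[random.uniform(-1, 1) for _ in range(m)] for _ in range(n)],
            [0.0] * n)
           for m, n in zip(sizes, sizes[1:])]
outputs = mlp_forward([0.5] * 10, weights)
```

The two outputs correspond to the sweep ball and block actions of the intervention module, each in (0, 1).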

Figure 7: Situation variables representing possession of the ball.

5.1.2 Outputs from MLP

The actions given to the intervention module are sweep ball and block. Sweep ball represents kicking the ball away when the disturbance level from opponent agents is above some threshold. Blocking is done when the risk level of the opponent scoring a goal is high. The output values of the MLP lie between 0 and 1.

5.1.3 MLP Inputs

The situation variables for the MLP are as follows:

- Situation variables representing possession of the ball (Figure 7):
  - theta_BR: angle between the home robot's velocity vector and the directed vector from the home robot to the ball.
  - theta_BO: angle between the opponent robot's velocity vector and the directed vector from the opponent robot to the ball.
  - D_BR: distance between the ball and the home robot.
  - D_BO: distance between the ball and the opponent robot.
- Situation variables representing the risk level of the opponent scoring a goal (Figure 8):
  - D_BRG: distance between the ball and the home goal.
  - D_IRG: distance between the goal center and the point of intersection of the home goal line with the directed vector from the opponent robot through the ball position.
- Situation variables representing the winning score measure against the opponent's goal (Figure 8):
  - D_BOG: distance between the ball and the opponent goal.
  - D_IOG: distance between the center of the opponent goal and the point of intersection of the opponent goal line with the directed vector from the home robot through the ball position.
- The velocities of the ball and the opponent robot.

Figure 8: Situation variables representing the winning score measure against the opponent goal and the risk level of the opponent scoring a goal.
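The angle-and-distance pairs among these variables (theta_BR and D_BR, and their opponent counterparts) can be computed from raw positions as sketched below; the function name and argument convention are assumptions.

```python
import math

def situation_variables(robot_pos, robot_vel, ball_pos):
    """Angle between the robot's velocity vector and the directed vector
    from the robot to the ball (wrapped to [0, pi]), and their distance."""
    dx, dy = ball_pos[0] - robot_pos[0], ball_pos[1] - robot_pos[1]
    d = math.hypot(dx, dy)
    angle_to_ball = math.atan2(dy, dx)
    heading = math.atan2(robot_vel[1], robot_vel[0])
    theta = angle_to_ball - heading
    theta = math.atan2(math.sin(theta), math.cos(theta))   # wrap to [-pi, pi]
    return abs(theta), d
```

Applying this once for the home robot and once for the opponent robot yields four of the ten MLP inputs.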

5.2 Learning of MLP

The training data for the MLP can be collected from a real robot soccer game. The game can be played between an agent with an ASM without the intervention module and an opponent agent which may have some kind of ASM or other control algorithm. The game data, such as the X, Y positions and heading of each robot and the X, Y positions of the ball, are stored. A human observer can then watch the replayed game on a 2D computer graphic display and judge the situations where the disturbance level due to the opponent agents is very high. The best among the actions given to the intervention module can be identified and the corresponding data stored. A simple two-dimensional animation can be used as the graphic display; if the game could be replayed in detailed three-dimensional graphics, the realism and the human judgments would be better. The situation variables, the inputs to the MLP, are thus calculated from a selected game. The desired output of the selected action is set to 1 and the outputs of the other actions are set to 0. The error back-propagation algorithm, one of the supervised learning methods, can be used to train the MLP with the collected training data. Applying the learned MLP to the stored game helps to check whether it works well. Once its performance is within the desired levels, the MLP intervention module can be applied to a one-on-one soccer game.

6 Basic Strategies

Each robot will have its own role as striker, defender and goal-keeper. It may also has its own area of action according to its role. The game strategies can be very complicated in order to meet di erent situations in a game. The concept of zone defense can be used to select an appropriate action among basic actions. If the ball is within

a robot's area of action it can select an appropriate action and other robots will not select any actions as the ball is not in their areas of actions. This concept however, has two drawbacks: i) If a robot gets blocked by an opponent robot, the home robot should stop and the game comes to a standstill. If the ball is within an opponent robot's area and the home striker robot is blocked, the opponent team has an advantage, as the goal-keeper and defender robots will get stopped. ii) If the ball is located in one of the boundary areas, two robots can move toward it and may get collided with each other. So the concept of zone defense has to be modi ed to give a priority of action selection to a robot within its own area, while the other robots can move to any place in the playground. A robot in its own area as per its role of action, will be given higher priority for action selection. For instance, the goal-keeper robot has higher priority in goal-keeper area and other robots should select other actions which do not con ict with the goal-keeper's action. When the ball is located in the goal-keeper area, the ball must be kicked out towards opponent's area to reduce the risk of the opponent scoring a goal. In this case, the goal-keeper can select sweep ball action according to its priority, and the other robots may select block action or move toward other places to avoid any con icts with the goal-keeper's action. Besides the modi ed zone defense strategy, several strategies can be used in real soccer games. If the opponent team performance is not so good, a `3-0-0' strategy (three strikers), wherein all robots assuming striker's role can be adopted. Otherwise, a `1-1-1' strategy (a striker, a defender and a goal-keeper) or `0-2-1' strategy (two defenders and a goal-keeper) can be followed. According to the opponent team performance, game importance ( nal or preliminary game) and as per the present game score, di erent strategies can be arrived at. 
These strategies may include changing a robot's area of action. A `strategy data-base' can be implemented to select a proper strategy according to the game situation. This can be done by a human operator, as it may be difficult for the robots to decide by themselves.
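A toy version of such a strategy data-base is sketched below: the human operator feeds in the game situation and reads off a formation. The decision rules and inputs are illustrative assumptions:

```python
def select_strategy(opponent_strong, goal_difference, final_game=False):
    """Return a formation as `strikers-defenders-goalkeepers'."""
    if not opponent_strong:
        return "3-0-0"      # weak opponent: all three robots as strikers
    if goal_difference > 0:
        return "0-2-1"      # protect the lead: two defenders + goal-keeper
    if final_game and goal_difference < 0:
        return "3-0-0"      # trailing in a final: attack regardless
    return "1-1-1"          # balanced: striker, defender, goal-keeper
```

A real data-base would also carry the per-strategy areas of action, so that switching strategies re-partitions the playground among the robots.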

6.1 Game rule violations

Fouls may get called in robot soccer games as in real soccer. When a penalty-kick, free-kick, free-ball or goal-kick is called according to the rules, the game continues with the robots placed at predefined positions as per the rules [12]. In case of fouls, special strategies can be used, such as `set-position play' as in a real soccer game.
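Such restart handling amounts to a lookup from the called violation to a predefined placement for the home robots. The coordinates below are illustrative assumptions, not the official positions from the rules in [12]:

```python
# Predefined placements (x, y) per violation; values are assumed.
RESTART_POSITIONS = {
    "penalty_kick": {"striker": (110, 65), "defender": (40, 65), "goal_keeper": (5, 65)},
    "free_kick":    {"striker": (75, 65),  "defender": (35, 65), "goal_keeper": (5, 65)},
    "free_ball":    {"striker": (75, 40),  "defender": (35, 65), "goal_keeper": (5, 65)},
    "goal_kick":    {"striker": (90, 65),  "defender": (30, 65), "goal_keeper": (8, 65)},
}

def place_robots(violation):
    """Return the predefined set-position placement for a called violation."""
    if violation not in RESTART_POSITIONS:
        raise ValueError(f"unknown violation: {violation}")
    return RESTART_POSITIONS[violation]
```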

7 Conclusion

The control structure, behaviors and basic actions largely decide the outcome of games. An Action Selection Mechanism (ASM), acting as a high-level controller, can be used to arrive at the appropriate action at any instant of time. Modified zone defense as a basic strategy helps to overcome the drawbacks of conventional zone defense. Much research remains to be done on the strategies, behaviors and actions of robots. Autonomous robot systems deserve much more attention, as such systems will have a multitude of applications in the years to come. A note of concern among researchers is the compromises many make in order to win robot soccer games. We appeal to researchers not to be carried away or disheartened by their team's performance, but to contribute to developments in multi-agent systems using robot soccer as a model.

Acknowledgement

The author would like to acknowledge Jong-Hwan Kim and the lab members of the Intelligent Control Laboratory, Department of Electrical Engineering, KAIST, Korea, for providing the material needed for preparing this paper.

References

[1] John Casti, "A game of three robots," New Scientist, April 26, pp. 28-31, 1997.
[2] J.-H. Kim, H.-S. Shim, M.-J. Jung, H.-S. Kim and Prahlad V., "Cooperative Multi-Agent Robotic Systems: From the Robot-soccer Perspective," Proceedings of MiroSot'97, pp. 3-14, 1997.
[3] L. E. Parker, "Heterogeneous Multi-robot Cooperation," Ph.D. thesis, MIT AI Lab., Cambridge, MA, Feb. 1994.
[4] Y. Kawauchi, M. Inaba and T. Fukuda, "A Principle of Distributed Decision Making of Cellular Robotic System (CEBOT)," IEEE Proc. Int. Conf. Robotics and Automation, Vol. 3, pp. 833-838, U.S.A., 1993.
[5] N. Mitsmoto, T. Hattori, T. Idogaki, T. Fukuda and F. Arai, "Self-organizing Micro Robotic System (Biologically Inspired Immune Network Architecture and Micro Autonomous Robotic System)," IEEE Proc. Int. Sym. on Micro Machine and Human Science - Toward Micro-Mechatronics, pp. 261-270, Japan, 1995.
[6] T. Tyrrell, "Computational Mechanisms for Action Selection," Ph.D. thesis, The University of Edinburgh, 1993.
[7] T. Tyrrell, "The use of hierarchies for action selection," in From Animals to Animats 2, pp. 138-147, The MIT Press, 1993.
[8] M. K. Sahota, A. K. Mackworth, S. J. Kingdon and R. A. Barman, "Real-time Control of Soccer-playing Robots Using Off-board Vision: The Dynamic Test Bed," IEEE Proc. Int. Conf. Robotics and Automation, pp. 3690-3693, Japan, 1995.
[9] C. R. Kuba and H. Zhang, "The Use of Perceptual Cues in Multi-Robot Box-Pushing," IEEE Proc. Int. Conf. Robotics and Automation, pp. 2085-2090, 1996.
[10] B. L. Brumitt and A. Stentz, "Dynamic Mission Planning for Multiple Mobile Robots," IEEE Proc. Int. Conf. Robotics and Automation, pp. 2396-2401, 1996.
[11] H.-S. Shim, H.-S. Kim, M.-J. Jung, I.-H. Choi, J.-H. Kim and J.-O. Kim, "Designing distributed control architecture for cooperative multiagent system and its real time application to soccer robot," Int. J. of Robotics and Autonomous Systems, 631, (1997), 1-17.
[12] J.-H. Kim, MiroSot'96 Booklet (Micro-Robot World Cup Soccer Tournament), MiroSot Organizing Committee, April 1996. Accessible from http://www.mirosot.org/.
[13] F. Mondada, E. Franzi and P. Ienne, "Mobile Robot Miniaturization: A Tool for Investigation in Control Algorithms," Proc. of the 3rd Int. Symp. on Experimental Robotics, Japan, Oct. 1993.
[14] J.-H. Kim, M.-J. Jung, H.-S. Shim and S.-W. Lee, "Autonomous Micro-Robot `Kity' for Maze Contest," Proc. of Int. Sym. on Artificial Life and Robotics (AROB), Oita, Japan, Feb. 1996.
[15] M. K. Sahota, "Real-time intelligent behaviour in dynamic environments: Soccer-playing robots," Master's thesis, The University of British Columbia, August 1994.
[16] P. Stone and M. Veloso, "Broad learning from narrow training: A case study in robotic soccer," CMU Technical Report, November 1995.
[17] H.-S. Shim, M.-J. Jung, H.-S. Kim, I.-H. Choi, W.-S. Han and J.-H. Kim, "Designing distributed control architecture for cooperative multiagent systems," in Proc. of MiroSot'96 Workshop, pp. 19-25, Taejon, Korea, November 1996.
[18] J.-H. Kim, H.-S. Shim, H.-S. Kim, M.-J. Jung, I.-H. Choi and J.-O. Kim, "A cooperative multi-agent system and its real time application to robot soccer," in Proc. IEEE Int. Conf. on Robotics and Automation, Albuquerque, New Mexico, Vol. 1, pp. 638-643, April 1997.
[19] J.-H. Kim, H.-S. Shim, M.-J. Jung, H.-S. Kim, I.-H. Choi and W.-S. Han, "Building a soccer robot system for MiroSot'96," in Video Proc. IEEE Int. Conf. on Robotics and Automation, Albuquerque, New Mexico, April 1997.
[20] MiroSot'96 and '97 Proceedings, KAIST, Korea, edited by J.-H. Kim.
[21] S. Subramanyam, Prahlad V., K.-C. Kim and J.-H. Kim, "Multi-Agent Centralized Control in Soccer Robots," MiroSot'97 Proceedings, pp. 49-52, 1997.