Visual Navigation of Mobile Robot Using Optical Flow and Visual Potential Field

Naoya Ohnishi¹ and Atsushi Imiya²

¹ Graduate School of Science and Technology, Chiba University, Japan
Yayoicho 1-33, Inage-ku, Chiba, 263-8522, Japan
[email protected]
² Institute of Media and Information Technology, Chiba University, Japan
Yayoicho 1-33, Inage-ku, Chiba, 263-8522, Japan
[email protected]

Abstract. In this paper, we develop a novel algorithm for navigating a mobile robot using the visual potential. The visual potential is computed from an image sequence and the optical flow computed from successive images captured by the camera mounted on the robot. We assume that the direction to the destination is provided at the initial position of the robot. Using the direction to the destination, the robot dynamically selects a local pathway to the destination without collision with obstacles. The proposed algorithm does not require any knowledge or environmental maps of the robot workspace. Furthermore, the algorithm uses only a monocular uncalibrated camera for detecting the feasible region for navigation, since we apply dominant plane detection to find this region. We present experimental results of navigation in synthetic and real environments. Additionally, we present a robustness evaluation of the optical flow computation against lighting effects and various kinds of textures.

Keywords: Robot navigation, Visual navigation, Potential field, Optical flow, Homography, Multiresolution.

1 Introduction

Visual navigation of mobile robots has recently become one of the most challenging problems in the fields of robot vision and computer vision [5,8,12,13,24]. The main purpose of the problem is to determine a control force for the mobile robot using images captured by the camera mounted on the robot in a workspace. In this paper, we develop a navigation algorithm for an autonomous mobile robot using a visual potential field on images observed by a monocular camera mounted on the robot. The visual potential field on an image is an approximation of the projection of the potential field in the workspace onto the image plane.

The path-planning problem of the mobile robot in the configuration space is to determine the trajectory of the mobile robot. The trajectory is determined as the path from the start point to the destination point without collision with obstacles in the configuration space.


Fig. 1. Visual potential in a robot workspace. By our method, a navigation path without collision with obstacles is derived without the need for an environmental map. (a) Configuration of robot workspace. (b) Image captured by the camera mounted on the mobile robot. (c) Repulsive force from the obstacle.

The potential field method [11] yields a path from the start point to the destination point using a gradient field. The gradient field is computed from the map of the configuration of the robot workspace [1,23,26]. For path planning by the potential method, a robot is therefore required to store a map of its workspace. On the other hand, the navigation problem of a mobile robot is to determine the robot motion at an arbitrary time [19]. This means that the robot computes its velocity using real-time data obtained in the present situation, such as range data of obstacles, its position data in the workspace, and images captured by a camera mounted on the robot.

Since in a real environment the payload of a mobile robot is restricted, for example, in power supply, capacity of input devices, and computing power, mobile robots are required to have simple mechanisms and devices [8]. We use an uncalibrated monocular camera as the sensor for obtaining information on the environment. This vision sensor is a low-cost device that is easily mounted on mobile robots. Using the visual potential field, the robot navigates by referring to a sequence of images without using the spatial configuration of obstacles.

We introduce an algorithm for generating the visual potential field from an image sequence observed by a camera mounted on the robot. The visual potential field is computed on the image plane. Using the visual potential field and optical flow, we define a control force for avoiding obstacles and guiding the mobile robot to its destination, as shown in Fig. 1. The optical flow field [2,10,14,18] is the apparent motion in successive images captured by a moving camera. It is a fundamental feature for recognizing environmental information around the mobile robot. Additionally, the optical flow field is considered to be fundamental information for obstacle detection in the context of biological data processing [16]. Therefore, the use of optical flow is an appropriate method for the visual navigation of the mobile robot from the viewpoint of the affinity between robots and human beings.

In a previous paper [19], we developed a featureless robot navigation method based on a planar area and an optical flow field computed from a pair of successive images. A planar area in the world is called a dominant plane, and it corresponds to the largest part of an image. This method also yields local obstacle maps by detecting the dominant plane in images.


Fig. 2. Perception and cognition of motion and obstacles in workspace by an autonomous mobile robot. The mobile robot has a camera, which corresponds to eyes. The robot perceives an optical flow field from ego-motion.

We accepted the following four assumptions.

1. The ground plane is the planar area.
2. The camera mounted on the mobile robot is downward-looking.
3. The robot observes the world using the camera mounted on itself for navigation.
4. The camera on the robot captures a sequence of images while the robot is moving.

These assumptions are illustrated in Fig. 2. Therefore, if there are no obstacles around the robot, and since the robot does not touch the obstacles, the ground plane corresponds to the dominant plane in the image observed through the camera mounted on the mobile robot. From the dominant plane, we define the potential field on the image. Using this potential field, we proposed algorithms for obstacle avoidance [21] and corridor navigation [22] of the mobile robot. In this paper, we present an algorithm for navigating the mobile robot from the start point to the destination point using the optical flow field as the guide force field to the destination of the robot.

In mobile robot navigation by the potential method, an artificial potential field defines the navigation force that guides the robot to the destination. Usually, an attractive force to the destination and a repulsive force from obstacles are generated to guide the robot to the destination without collision with the obstacles in the robot workspace [6,25]. Basically, these two guiding forces are generated from the terrain and the obstacle configuration in the workspace, which are usually pre-input to the robot. In contrast, we use the optical flow field as the guide field to the destination of the robot, since the flow vectors indicate the direction of robot motion to a local destination. Furthermore, using images captured by the camera mounted on the robot, the potential field for the repulsive force from obstacles is generated to avoid collision. The mobile robot dynamically computes the local path while the camera captures the images.

2 Dominant Plane Detection from Optical Flow

In this section, we briefly describe the algorithm for dominant plane detection using the optical flow observed through the camera mounted on the mobile robot. The details of our algorithm are described in [19].

Optical Flow and Homography. Setting $I(x, y, t)$ and $(\dot{x}, \dot{y})^\top$ to be the time-varying gray-scale-valued image at time $t$ and the optical flow, the optical flow $(\dot{x}, \dot{y})^\top$ at each point $(x, y)$ satisfies

$$I_x \dot{x} + I_y \dot{y} + I_t = 0. \qquad (1)$$

The computation of $(\dot{x}, \dot{y})^\top$ from $I(x, y, t)$ is an ill-posed problem; therefore, additional constraints are required to compute $(\dot{x}, \dot{y})^\top$. We use the Lucas-Kanade method with pyramids [4] for the computation of the optical flow, that is, Eq. (1) is solved by assuming that the optical flow vector is constant in the neighborhood of each pixel. We call $u(x, y, t)$, the set of optical flow vectors $(\dot{x}, \dot{y})^\top$ computed for all pixels in an image, the optical flow field at time $t$.

Optical Flow on the Dominant Plane. We define the dominant plane as a planar area in the world corresponding to the largest part of an image. Assuming that the dominant plane in an image corresponds to the ground plane on which the robot moves, the detection of the dominant plane enables the robot to detect the feasible region for navigation in its workspace.

Setting $H$ to be a $3 \times 3$ matrix [9], the homography between two images of a planar surface can be expressed as

$$(x', y', 1)^\top = H (x, y, 1)^\top, \qquad (2)$$

where $(x, y, 1)^\top$ and $(x', y', 1)^\top$ are the homogeneous coordinates of corresponding points in two successive images. Assuming that the camera displacement is small, the matrix $H$ is approximated by an affine transformation. These geometrical and mathematical assumptions are valid when the camera is mounted on a mobile robot moving on the dominant plane. Therefore, the corresponding points $p = (x, y)^\top$ and $p' = (x', y')^\top$ on the dominant plane are expressed as

$$p' = A p + b, \qquad (3)$$

where $A$ and $b$ are a $2 \times 2$ affine-coefficient matrix and a 2-dimensional vector, which are approximations of $H$. We can estimate the affine coefficients using a RANSAC-based algorithm [7,9,20]. Using the estimated affine coefficients, we can estimate the optical flow on the dominant plane $(\hat{x}, \hat{y})^\top$ as

$$(\hat{x}, \hat{y})^\top = A (x, y)^\top + b - (x, y)^\top \qquad (4)$$

for all points $(x, y)$ in the image. We call $(\hat{x}, \hat{y})^\top$ the planar flow, and $\hat{u}(x, y, t)$, the set of planar flow vectors $(\hat{x}, \hat{y})^\top$ computed for all pixels in an image, the planar flow field at time $t$.
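For illustration, a dense optical flow field $u(x, y, t)$ can be computed from two successive gray-scale frames. The paper uses the pyramidal Lucas-Kanade method [4]; the sketch below substitutes OpenCV's dense Farneback estimator for convenience, and its parameter values are assumptions rather than the authors' settings.

```python
import cv2

def optical_flow_field(prev_gray, curr_gray):
    """Dense optical flow u(x, y, t) between two successive gray-scale frames.

    Sketch only: the paper computes flow with the pyramidal Lucas-Kanade
    method [4]; here OpenCV's Farneback estimator is used as a stand-in.
    Returns an array of shape (H, W, 2) holding (x_dot, y_dot) per pixel.
    """
    return cv2.calcOpticalFlowFarneback(
        prev_gray, curr_gray, None,
        pyr_scale=0.5, levels=3, winsize=15,
        iterations=3, poly_n=5, poly_sigma=1.2, flags=0)
```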


Fig. 3. The difference in the optical flow between the dominant plane and obstacles. If the camera moves a distance T approximately parallel to the dominant plane, the points on the obstacle and on the dominant plane undergo the same camera displacement T, but their optical flow vectors on the image plane differ. Therefore, the camera can observe the difference in the optical flow vectors between the dominant plane and obstacles.

Dominant Plane Detection. If an obstacle exists in front of the robot, the planar flow on the image plane differs from the optical flow on the image plane, as shown in Fig. 3. Since the planar flow $(\hat{x}, \hat{y})^\top$ is equal to the optical flow $(\dot{x}, \dot{y})^\top$ on the dominant plane, we use the difference between these two flows to detect the dominant plane. We set $\varepsilon$ to be the tolerance of the difference between the optical flow vector and the planar flow vector. Therefore, if

$$\left| (\dot{x}, \dot{y})^\top - (\hat{x}, \hat{y})^\top \right| < \varepsilon \qquad (5)$$

is satisfied, we accept the point $(x, y)$ as a point on the dominant plane. Then, the image is represented as a binary image of the dominant plane region and the obstacle region. Therefore, we set $d(x, y, t)$ to be the dominant plane, as

$$d(x, y, t) = \begin{cases} 255, & \text{if } (x, y) \text{ is on the dominant plane}, \\ 0, & \text{if } (x, y) \text{ is on the obstacle area}. \end{cases}$$

We call $d(x, y, t)$ the dominant plane map. Our algorithm is summarized as follows; a code sketch of this detection loop is given after the list.

1. Compute the optical flow field $u(x, y, t)$ from two successive images.
2. Compute the affine coefficients in Eq. (3) by random selection of three points.
3. Estimate the planar flow field $\hat{u}(x, y, t)$ from the affine coefficients.
4. Match the computed optical flow field $u(x, y, t)$ and the estimated planar flow field $\hat{u}(x, y, t)$ using Eq. (5).
5. Assign the points satisfying $\left| (\dot{x}, \dot{y})^\top - (\hat{x}, \hat{y})^\top \right| < \varepsilon$ to the dominant plane. If the dominant plane occupies less than half the image, return to step 2.
6. Output the dominant plane $d(x, y, t)$ as a binary image.
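The following is a minimal sketch of steps 2-6, not the authors' implementation: the affine coefficients $A$ and $b$ of Eq. (3) are fitted to three randomly selected flow vectors, the planar flow of Eq. (4) is predicted for every pixel, and the dominant plane map is obtained with the threshold of Eq. (5). The tolerance, iteration limit, and sampling scheme are assumptions.

```python
import numpy as np

def detect_dominant_plane(flow, eps=1.0, max_iter=100, seed=0):
    """Estimate the dominant plane map d(x, y, t) from a dense flow field.

    flow: (H, W, 2) array of optical flow vectors (x_dot, y_dot).
    Returns a uint8 map with 255 on the dominant plane and 0 on obstacles.
    Sketch under assumptions: affine fit to three random points (Eq. 3),
    planar flow by Eq. (4), inliers by Eq. (5).
    """
    rng = np.random.default_rng(seed)
    h, w = flow.shape[:2]
    ys, xs = np.mgrid[0:h, 0:w]
    pts = np.stack([xs.ravel(), ys.ravel()], axis=1).astype(float)  # points p
    dst = pts + flow.reshape(-1, 2)                                  # corresponding points p'
    best = None
    for _ in range(max_iter):
        idx = rng.choice(len(pts), 3, replace=False)                 # three random points
        # Solve p' = A p + b in least squares: [x y 1] M = [x' y'].
        src1 = np.hstack([pts[idx], np.ones((3, 1))])
        M, *_ = np.linalg.lstsq(src1, dst[idx], rcond=None)          # (3, 2) -> [A^T; b^T]
        planar = np.hstack([pts, np.ones((len(pts), 1))]) @ M - pts  # planar flow, Eq. (4)
        ok = np.linalg.norm(flow.reshape(-1, 2) - planar, axis=1) < eps  # Eq. (5)
        if best is None or ok.sum() > best.sum():
            best = ok
        if ok.sum() > 0.5 * len(pts):    # dominant plane covers at least half the image
            best = ok
            break
    return (best.reshape(h, w).astype(np.uint8)) * 255
```

Combined with the flow sketch above, `detect_dominant_plane(optical_flow_field(prev_gray, curr_gray))` would produce a dominant plane map for one frame pair.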

Figure 4 shows examples of the captured image $I(x, y, t)$, the detected dominant plane $d(x, y, t)$, the optical flow field $u(x, y, t)$, and the planar flow field $\hat{u}(x, y, t)$. In the image of the dominant plane, the white and black areas show the dominant plane and the obstacle area, respectively.


Fig. 4. Examples of dominant plane and planar flow field. Starting from the left, the captured image $I(x, y, t)$, detected dominant plane $d(x, y, t)$, optical flow field $u(x, y, t)$, and planar flow field $\hat{u}(x, y, t)$.

3 Determination of Robot Motion Using Visual Potential

In this section, we describe the algorithm used for the determination of robot motion from the dominant plane image $d(x, y, t)$ and the planar flow field $\hat{u}(x, y, t)$.

Gradient Vector of Image Sequence. The robot moves on the dominant plane without collision with obstacles. Therefore, we generate an artificial repulsive force from the obstacle area in the image $d(x, y, t)$ using the gradient vector field. The potential field on the image is an approximation of the projection of the potential field in the workspace onto the image plane. Therefore, we use the gradient vector of the dominant plane as the repulsive force from obstacles. Since the dominant plane map $d(x, y, t)$ is a binary image sequence, the computation of the gradient is numerically unstable and not robust. Therefore, as a preprocess to the computation of the gradient, we smooth the dominant plane map $d(x, y, t)$ by convolution with a Gaussian $G$, that is, we adopt

$$g(x, y, t) = \nabla (G \ast d(x, y, t)) = \begin{pmatrix} \frac{\partial}{\partial x}(G \ast d(x, y, t)) \\ \frac{\partial}{\partial y}(G \ast d(x, y, t)) \end{pmatrix}$$

as the potential generated by obstacles. Here, for the 2D Gaussian

$$G(x, y) = \frac{1}{2\pi\sigma^{2}}\, e^{-\frac{x^{2}+y^{2}}{2\sigma^{2}}}, \qquad (6)$$

$G \ast d(x, y, t)$ is given as

$$G \ast d(x, y, t) = \int_{-\infty}^{\infty}\!\int_{-\infty}^{\infty} G(u - x, v - y)\, d(u, v, t)\, \mathrm{d}u\, \mathrm{d}v. \qquad (7)$$

In this paper, we select the parameter $\sigma$ to be half the image size. An example of the gradient vector field is shown in Fig. 5 (middle).

Optical Flow as Attractive Force. From the geometric properties of the flow field and the potential, we define the potential field $p(x, y, t)$ as

$$p(x, y, t) = \begin{cases} g(x, y, t) - \hat{u}(x, y, t), & \text{if } d(x, y, t) = 255, \\ g(x, y, t), & \text{otherwise}, \end{cases} \qquad (8)$$

where $\hat{u}(x, y, t)$ is the attractive force field to the destination.
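A brief sketch of Eqs. (6)-(8), assuming the dominant plane map and planar flow are stored as NumPy arrays; the Gaussian smoothing and gradient are delegated to SciPy and NumPy rather than implemented as the explicit convolution of Eq. (7), and the default scale follows the paper's choice of half the image size.

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def visual_potential(dominant_map, planar_flow, sigma=None):
    """Visual potential field p(x, y, t) of Eq. (8).

    dominant_map: (H, W) uint8 map, 255 on the dominant plane, 0 on obstacles.
    planar_flow:  (H, W, 2) planar flow field u_hat(x, y, t).
    sigma: Gaussian scale; half the image size by default, as in the paper.
    """
    h, w = dominant_map.shape
    if sigma is None:
        sigma = 0.5 * max(h, w)
    smooth = gaussian_filter(dominant_map.astype(float), sigma)  # G * d, Eq. (7)
    gy, gx = np.gradient(smooth)                                 # gradient of the smoothed map
    g = np.stack([gx, gy], axis=-1)                              # repulsive field g(x, y, t)
    on_plane = (dominant_map == 255)[..., None]
    # Eq. (8): subtract the attractive planar flow only on the dominant plane.
    return np.where(on_plane, g - planar_flow, g)
```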


Fig. 5. Examples of potential field p(x, y, t) computed from the examples in Fig. 4. Starting from the left, the dominant plane after Gaussian operation G ∗ d(x, y, t), gradient vector field g(x, y, t) as repulsive force from obstacles, and visual potential field p(x, y, t)

In Eq. (8), the gradient vector field $g(x, y, t)$ is a repulsive force from obstacles, and the planar flow field $\hat{u}(x, y, t)$ is an artificial attractive force to the destination. Since the planar flow $\hat{u}(x, y, t)$ represents the camera motion, the sum of the sign-inversed planar flow field $-\hat{u}(x, y, t)$ and the gradient vector field $g(x, y, t)$ is the potential field $p(x, y, t)$. However, in the obstacle area of an image, the planar flow field $\hat{u}(x, y, t)$ is set to zero, since the planar flow field represents the dominant plane motion. If no obstacles exist in an image, the above equation becomes

$$p(x, y, t) = -\hat{u}(x, y, t). \qquad (9)$$

Then, the robot moves according to the planar flow field. An example of the potential field $p(x, y, t)$ computed from the examples in Fig. 4 is shown in Fig. 5 (right).

Using the visual potential, we introduce a mechanism for guiding the robot to the destination. Setting $\theta_d(t)$ to be the direction angle between the destination and the robot direction, we define the guiding force to the destination as the mixture

$$\hat{u}_d(x, y, t) = \hat{u}_t(x, y) \cos\theta_d(t) + \hat{u}_r(x, y) \sin\theta_d(t), \qquad (10)$$

where $\hat{u}_t(x, y)$ and $\hat{u}_r(x, y)$ are the pure translational and rotational planar flow fields, respectively, as shown in Fig. 6.

Fig. 6. Optical flow field as attractive force field. (left) Translational planar flow field $\hat{u}_t(x, y)$. (right) Rotational planar flow field $\hat{u}_r(x, y)$.


Fig. 7. Navigation from the potential field. Starting from the left, the navigation force $\bar{p}(t)$, the angle $\theta(t)$ between the navigation force $\bar{p}(t)$ and the $y$ axis, and the robot displacement $T(t)$ and rotation angle $R(t)$ at time $t$ determined using $\theta(t)$.

Fig. 8 (flowchart): capture image $I(x, y, t)$; detect the dominant plane $d(x, y, t)$; compute the gradient vector field $g(x, y, t)$ from $d(x, y, t)$; compute the attractive force field $\hat{u}(x, y, t)$ from the direction $\theta_d(t)$; compute the visual potential field $p(x, y, t)$ from $g(x, y, t)$ and $\hat{u}(x, y, t)$; compute the navigation force $\bar{p}(t)$ from $p(x, y, t)$; the robot moves by $T(t)$ and $R(t)$; $t := t + 1$.

Fig. 8. Closed loop for autonomous robot motion. Computations of g(x, y, t), p(x, y, t), and T (t) and R(t) are described in sections 3.1, 3.2, and 3.3, respectively. The algorithm for the detection of dominant plane d is described in section 2.

The optical flow $\hat{u}_d(x, y, t)$ is the desired flow generated from images captured by the camera mounted on the robot if the robot is moving in the direction of the destination in a workspace without obstacles. The pure optical flow fields $\hat{u}_t$ and $\hat{u}_r$ are computed assuming that the velocity of the robot is constant. Therefore, we use the control force

$$p_d(x, y, t) = \begin{cases} g(x, y, t) - \hat{u}_d(x, y, t), & \text{if } d(x, y, t) = 255, \\ g(x, y, t), & \text{otherwise}, \end{cases} \qquad (11)$$

for guiding the robot to the destination.

Navigation by Potential Field. We define the control force $\bar{p}(t)$ as the average of the potential field $p_d(x, y, t)$,

$$\bar{p}(t) = \frac{1}{|A|} \iint_{(x, y)^\top \in A} p_d(x, y, t)\, \mathrm{d}x\, \mathrm{d}y, \qquad (12)$$

where $|A|$ is the size of the region of interest in an image captured by the camera mounted on the mobile robot.

Since we apply the control force $\bar{p}(t)$ to a nonholonomic mobile robot, we require that the control force is transformed into the translational and rotational velocities.
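A sketch of Eqs. (10)-(12), assuming the pure translational and rotational flow fields $\hat{u}_t$ and $\hat{u}_r$ are precomputed arrays and taking the region of interest $A$ to be the whole image:

```python
import numpy as np

def navigation_force(g, u_t, u_r, dominant_map, theta_d):
    """Average navigation force p_bar(t) from Eqs. (10)-(12).

    g:            (H, W, 2) repulsive gradient field g(x, y, t).
    u_t, u_r:     (H, W, 2) pure translational / rotational planar flow fields.
    dominant_map: (H, W) uint8 dominant plane map d(x, y, t).
    theta_d:      direction angle to the destination in radians.
    Sketch only; the region of interest A is assumed to be the whole image.
    """
    u_d = u_t * np.cos(theta_d) + u_r * np.sin(theta_d)   # guiding flow, Eq. (10)
    on_plane = (dominant_map == 255)[..., None]
    p_d = np.where(on_plane, g - u_d, g)                  # control force field, Eq. (11)
    return p_d.reshape(-1, 2).mean(axis=0)                # averaged 2-vector, Eq. (12)
```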


Fig. 9. Experimental results of dominant plane detection in environments with various illuminations and textures. The top row shows the captured images; the bottom row shows the dominant plane detected from the corresponding top image. (a)-(c) Illumination-changed images: (a) dark, (b) medium, and (c) bright illumination. (d)-(f) Texture-changed images: (d) granite texture, (e) leopard texture, (f) agate-textured floor and wave-textured block.

We determine the ratio between the translational and rotational velocities using the angle of the control force $\bar{p}(t)$. We assume that the camera is attached to the front of the mobile robot. Therefore, we set the parameter $\theta(t)$ to be the angle between the navigation force $\bar{p}(t)$ and the $y$ axis $y = (0, 1)^\top$ of the image, which is the forward direction of the mobile robot, as shown in Fig. 7. That is,

$$\theta(t) = \arccos \frac{\langle \bar{p}(t), y \rangle}{|\bar{p}(t)|\,|y|}. \qquad (13)$$

We define the robot translational velocity $T(t)$ and rotational velocity $R(t)$ at time $t$ as

$$T(t) = T_m \cos\theta(t), \qquad R(t) = R_m \sin\theta(t), \qquad (14)$$

where $T_m$ and $R_m$ are the maximum translational and rotational velocities, respectively, of the mobile robot between time $t$ and $t + 1$. Setting $X(t) = (X(t), Y(t))^\top$ to be the position of the robot at time $t$, from Eq. (14) we have the relations

$$\sqrt{\dot{X}(t)^2 + \dot{Y}(t)^2} = T(t), \qquad \tan^{-1}\frac{\dot{Y}(t)}{\dot{X}(t)} = R(t). \qquad (15)$$

Therefore, we have the control law

$$\dot{X}(t) = T(t) \cos R(t), \qquad (16)$$
$$\dot{Y}(t) = T(t) \sin R(t). \qquad (17)$$

The algorithm for computing the robot motion $T(t)$ and $R(t)$ and the closed loop for autonomous robot motion are shown in Fig. 8.
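A compact sketch of Eqs. (13), (14), (16), and (17). Since Eq. (13) yields an unsigned angle, the turning sign below is taken from the $x$ component of the navigation force; this convention and the velocity limits are our assumptions, not details stated in the paper.

```python
import numpy as np

def robot_motion(p_bar, T_m=0.1, R_m=0.5):
    """Translational velocity T(t) and rotational velocity R(t), Eqs. (13)-(14).

    p_bar: 2-vector navigation force (x, y) in image coordinates.
    T_m, R_m: maximum translational/rotational velocities (assumed values).
    """
    y_axis = np.array([0.0, 1.0])                  # forward direction of the robot
    cos_t = np.dot(p_bar, y_axis) / (np.linalg.norm(p_bar) * np.linalg.norm(y_axis))
    theta = np.arccos(np.clip(cos_t, -1.0, 1.0))   # Eq. (13)
    sign = -1.0 if p_bar[0] < 0 else 1.0           # assumed turning-sign convention
    return T_m * np.cos(theta), sign * R_m * np.sin(theta)   # Eq. (14)

def update_pose(X, Y, T, R, dt=1.0):
    """One Euler step of the control law, Eqs. (16)-(17)."""
    return X + T * np.cos(R) * dt, Y + T * np.sin(R) * dt
```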


Fig. 10. (a) Detection rate against the illumination change. (b) Detection rate against the texture change.

4 Experimental Results

Robustness Evaluation of Dominant Plane Detection against Lighting and Texture. We evaluated the performance of the algorithm against differences in texture and lighting conditions using synthetic data. Figure 9 shows the same geometric configuration with different textures and different illuminations. The bottom row shows the results of the dominant plane detection for these conditions. Figure 10 shows the detection rate of the dominant plane for each condition. The detection rate is calculated by comparing the detected dominant plane with the ground truth, which is segmented manually. These results show that the dominant plane is a robust and stable feature against changes in the statistical properties of the images, described by the textures of the objects in the workspace, and in the physical properties of the images, described by the illumination in the workspace.
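For reference, a pixel-wise detection rate against the manually segmented ground truth could be computed as follows; this particular definition of the rate is our assumption, since the paper does not state it explicitly.

```python
import numpy as np

def detection_rate(detected, ground_truth):
    """Percentage of pixels whose dominant-plane label matches the ground truth."""
    return 100.0 * np.mean((detected == 255) == (ground_truth == 255))
```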


Fig. 11. Experimental results in the synthetic environment with three different destinations. The sequence of triangles in each map is the trajectory of the mobile robot, the black regions are obstacles, and the gray circle is the destination. (a) The destination is set at the bottom-right corner. (b) The destination is set at the bottom-left corner. (c) The destination is set at the top-right corner. o and d denote the start and destination points, respectively.


Fig. 12. Starting from the left, the robot position in the environment, the captured image $I(x, y, t)$, the attractive force field $\hat{u}(x, y, t)$, the visual potential field $p(x, y, t)$, and the control force $\bar{p}(t)$ at the 10th, 30th, and 60th frames of the example shown in Fig. 11(a).

Experiments in Synthetic Environment. Figure 11 shows experimental results in the synthetic environment with three different destinations. In this figure, the sequence of triangles in each map is the trajectory of the mobile robot, the black regions are obstacles, and the gray circle is the destination.


Fig. 13. Starting from the left, the robot position in the environment, the captured image $I(x, y, t)$, the attractive force field $\hat{u}(x, y, t)$, the visual potential field $p(x, y, t)$, and the control force $\bar{p}(t)$ at the 10th, 30th, and 60th frames of the example shown in Fig. 11(b).


Fig. 14. Starting from the left, the robot position in the environment, the captured image $I(x, y, t)$, the attractive force field $\hat{u}(x, y, t)$, the visual potential field $p(x, y, t)$, and the control force $\bar{p}(t)$ at the 10th, 30th, and 60th frames of the example shown in Fig. 11(c).

Figure 12 shows the robot position in the environment, the captured image $I(x, y, t)$, the attractive force field $\hat{u}(x, y, t)$, the visual potential field $p(x, y, t)$, and the control force $\bar{p}(t)$ at the 10th, 30th, and 60th frames of the example in Fig. 11(a). These experimental results show that our method is effective for robot navigation using visual information captured by the mobile robot itself.

Experiments in Real Environment. We show an experimental result in a real environment with one obstacle. The destination is set beyond an obstacle, that is, an obstacle separates the geodesic path from the origin to the destination. Therefore, in the image, the destination appears on the obstacle. The direction angle $\theta_d(t)$ between the destination and the robot direction is computed as

$$\theta_d(t) = \angle[X_D(t), X_D(0)] \qquad (18)$$

for $X_D(t) = X(t) - D$, where $D$ is the coordinate of the destination point at $t = 0$, which is given to the robot at the initial position.

Figure 15 shows the snapshot of the robot in the environment, the captured image $I(x, y, t)$, the detected dominant plane $d(x, y, t)$, the visual potential field $p(x, y, t)$, and the control force $\bar{p}(t)$ at the 10th, 50th, 100th, 150th, and 200th frames. To avoid collision with the obstacle on the geodesic path from the origin to the destination, the robot first turned to the left and then turned to the right after passing beside the obstacle. The mobile robot avoided the obstacle and reached the destination.
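A small sketch of Eq. (18), where the angle operator $\angle[\cdot,\cdot]$ is taken, as an assumption, to be the signed planar angle between the two vectors:

```python
import numpy as np

def direction_angle(X_t, X_0, D):
    """Direction angle theta_d(t) of Eq. (18), assumed to be the signed angle
    between X_D(t) = X(t) - D and X_D(0) = X(0) - D."""
    a = np.asarray(X_t, float) - np.asarray(D, float)   # X_D(t)
    b = np.asarray(X_0, float) - np.asarray(D, float)   # X_D(0)
    return np.arctan2(b[0] * a[1] - b[1] * a[0], float(np.dot(a, b)))  # radians
```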


Fig. 15. Starting from the left, the snapshot of the robot in the environment, the captured image $I(x, y, t)$, the detected dominant plane $d(x, y, t)$, the visual potential field $p(x, y, t)$, and the control force $\bar{p}(t)$ at the 10th, 50th, 100th, 150th, and 200th frames. To avoid collision with the obstacle on the geodesic path from the origin to the destination, the robot first turned to the left and then turned to the right after passing beside the obstacle.

5 Conclusions

We developed an algorithm for navigating a mobile robot to its destination using the optical flow and the visual potential field captured by a camera mounted on the robot. We generated a potential field of repulsive force from obstacles to avoid collision, using images captured with the camera mounted on the robot. Our method enables a mobile robot to avoid obstacles without an environmental map. Experimental results show that our algorithm is robust against fluctuations in the displacement of the mobile robot.

Images observed in the RGB color space can be transformed to images in the HSV color space. Images in the H channel are invariant to illumination changes. Therefore, parameters computed from the H-channel image are invariant to illumination changes. This property suggests that the computation of the optical flow and the dominant plane from the H channel is stable and robust against illumination changes [3,15,17].
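As a one-line illustration of this remark (assuming OpenCV's BGR image layout as read by cv2.imread), the H channel can be extracted as follows:

```python
import cv2

def hue_channel(bgr_image):
    """Hue channel of an image, assumed to be in BGR layout."""
    return cv2.cvtColor(bgr_image, cv2.COLOR_BGR2HSV)[:, :, 0]
```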


The visual potential field used for guiding the robot is generated from visual information captured with a camera mounted on the robot, without the use of any maps that describe the workspace. We showed that optical flow vectors indicate the direction of robot motion to the local destination. Therefore, the optical flow field is suitable as the guide field to the destination of the robot.

References

1. Aarno, D., Kragic, D., Christensen, H.I.: Artificial potential biased probabilistic roadmap method. IEEE International Conference on Robotics and Automation 1, 461-466 (2004)
2. Barron, J.L., Fleet, D.J., Beauchemin, S.S.: Performance of optical flow techniques. International Journal of Computer Vision 12, 43-77 (1994)
3. Barron, J., Klette, R.: Quantitative colour optical flow. International Conference on Pattern Recognition 4, 251-254 (2002)
4. Bouguet, J.-Y.: Pyramidal implementation of the Lucas Kanade feature tracker: Description of the algorithm. Intel Corporation, Microprocessor Research Labs, OpenCV Documents (1999)
5. Bur, A., et al.: Robot navigation by panoramic vision and attention guided features. In: International Conference on Pattern Recognition, pp. 695-698 (2006)
6. Conner, D.C., Rizzi, A.A., Choset, H.: Composition of local potential functions for global robot control and navigation. International Conference on Intelligent Robots and Systems 4, 3546-3551 (2003)
7. Fischler, M.A., Bolles, R.C.: Random sample consensus: A paradigm for model fitting with applications to image analysis and automated cartography. Comm. of the ACM 24, 381-395 (1981)
8. Guilherme, N.D., Avinash, C.K.: Vision for mobile robot navigation: A survey. IEEE Trans. on PAMI 24, 237-267 (2002)
9. Hartley, R., Zisserman, A.: Multiple View Geometry in Computer Vision. Cambridge University Press, Cambridge (2000)
10. Horn, B.K.P., Schunck, B.G.: Determining optical flow. Artificial Intelligence 17, 185-203 (1981)
11. Khatib, O.: Real-time obstacle avoidance for manipulators and mobile robots. International Journal of Robotics Research 5, 90-98 (1986)
12. Lopez-Franco, C., Bayro-Corrochano, E.: Omnidirectional vision and invariant theory for robot navigation using conformal geometric algebra. In: International Conference on Pattern Recognition, pp. 570-573 (2006)
13. Lopez-de-Teruel, P.E., Ruiz, A., Fernandez, L.: Efficient monocular 3D reconstruction from segments for visual navigation in structured environments. In: International Conference on Pattern Recognition, pp. 143-146 (2006)
14. Lucas, B., Kanade, T.: An iterative image registration technique with an application to stereo vision. In: International Joint Conference on Artificial Intelligence, pp. 674-679 (1981)
15. Maddalena, L., Petrosino, A.: A self-organizing approach to detection of moving patterns for real-time applications. In: Second International Symposium on Brain, Vision, and Artificial Intelligence, pp. 181-190 (2007)
16. Mallot, H.A., et al.: Inverse perspective mapping simplifies optical flow computation and obstacle detection. Biological Cybernetics 64, 177-185 (1991)
17. Mileva, Y., Bruhn, A., Weickert, J.: Illumination-robust variational optical flow with photometric invariants. In: DAGM-Symposium, pp. 152-162 (2007)
18. Nagel, H.-H., Enkelmann, W.: An investigation of smoothness constraints for the estimation of displacement vector fields from image sequences. IEEE Trans. on PAMI 8, 565-593 (1986)
19. Ohnishi, N., Imiya, A.: Featureless robot navigation using optical flow. Connection Science 17, 23-46 (2005)
20. Ohnishi, N., Imiya, A.: Dominant plane detection from optical flow for robot navigation. Pattern Recognition Letters 27, 1009-1021 (2006)
21. Ohnishi, N., Imiya, A.: Navigation of nonholonomic mobile robot using visual potential field. In: International Conference on Computer Vision Systems (2007)
22. Ohnishi, N., Imiya, A.: Corridor navigation and obstacle avoidance using visual potential for mobile robot. In: Canadian Conference on Computer and Robot Vision, pp. 131-138 (2007)
23. Shimoda, S., Kuroda, Y., Iagnemma, K.: Potential field navigation of high speed unmanned ground vehicles on uneven terrain. In: IEEE International Conference on Robotics and Automation, pp. 2839-2844 (2005)
24. Stemmer, A., Albu-Schaffer, A., Hirzinger, G.: An analytical method for the planning of robust assembly tasks of complex shaped planar parts. In: IEEE International Conference on Robotics and Automation, pp. 317-323 (2007)
25. Tews, A.D., Sukhatme, G.S., Matarić, M.J.: A multi-robot approach to stealthy navigation in the presence of an observer. In: IEEE International Conference on Robotics and Automation, pp. 2379-2385 (2004)
26. Valavanis, K., et al.: Mobile robot navigation in 2D dynamic environments using an electrostatic potential field. IEEE Trans. on Systems, Man, and Cybernetics 30, 187-196 (2000)