OBSTACLE AVOIDANCE IN LOCAL NAVIGATION Daniel Castro*, Urbano Nunes†, António Ruano*
*Institute of Systems and Robotics – ISR
Faculty of Science and Technology
University of Algarve
Campus de Gambelas
8000-117 Faro, Portugal
fax: +351 289 819 403
e-mail: dcastro@ualg.pt, aruano@ualg.pt
†Institute of Systems and Robotics – ISR
Electrical and Computer Eng. Department
University of Coimbra, Polo II
3030-290 Coimbra, Portugal
email: urbano@isr.uc.pt
Keywords: Obstacle detection, object tracking, local navigation.
Abstract
A reactive navigation system for an autonomous non-holonomic mobile robot in dynamic environments is presented, comprising a new object detection algorithm and a new reactive collision avoidance method. The sensory perception is based on a laser range finder (LRF) system. Simulation results verify the effectiveness of the proposed local navigation system in unknown environments with multiple moving objects.
1 Introduction
Indoor mobile robots may need to carry out missions in hazardous and/or populated environments. A typical application is to assist human beings in indoor environments, such as offices. These robots should be able to react appropriately to unforeseen changes in the environment or to unpredictable objects blocking their trajectories. Local navigation techniques are responsible for this reactive behaviour. They are sensory-based approaches, using only local sensory information or, additionally, a small fraction of the world model.
Some of the most popular reactive collision avoidance methods are based on artificial potential fields [10], where the robot's steering direction is determined assuming that obstacles exert repulsive forces on the robot while the goal exerts an attractive force. These methods are extremely fast and they typically consider only a small subset of obstacles near the robot. An extended version of this approach was introduced in [9]. In the vector field histogram approach [1] an occupancy grid representation is used to model the robot's environment. The motion direction and velocity of the robot are computed from the transformation of the occupancy information into a histogram description. All these methods calculate the desired motion direction and the steering commands in two separate steps, which is undesirable from a dynamics point of view.
The curvature-velocity method [13] and the dynamic window (DW) approach [6] are based on the steer angle field approach [4]. It is assumed that the robot moves in circular paths, and the search for motion commands is performed directly in the space of translational and rotational velocities. In both approaches the robot's kinematic and dynamic constraints are considered by constraining the search space to a set of admissible velocities. They show good results for obstacle avoidance at high velocities (60 cm/s to 95 cm/s), although they suffer from the local-minima problem.
The DW approach was extended to use a map in conjunction with sensory information [7] to generate collision-free motion. A Bayesian approach to obstacle avoidance was linked with global path planning [8]. These approaches require prior knowledge of the environment for the execution of the motion command.
A global dynamic window approach was proposed in [2] for non-holonomic and holonomic robots. This approach allows robust execution of high-velocity, go-to-goal, local-minima-free reactive motion for mobile robots in unknown and dynamic environments, combining a wave-propagation technique [11] starting at the goal position with the DW approach.
With obstacle motion prediction, a different framework, the velocity obstacle (VO) approach, was proposed in [5] to determine potential collisions and compute the collision-free path of a robot moving in a time-varying environment.
This paper is organised as follows. In Section 2, the system architecture is presented. In Section 3, the dynamic window approach is briefly reviewed. Section 4 presents the proposed LRF based obstacle detection and object tracking (perception of object motion) algorithms. Section 5 introduces the new reactive local navigation and obstacle avoidance methods. In Section 6, simulation results are shown and discussed. Finally, concluding remarks are given in Section 7.

2 System architecture
Our system is based on a hierarchical architecture with three layers. At the bottom level is the Steering Control system, responsible for velocity control and motor driving signals. Next is the Reactive Local Navigation system, the one addressed in this work, responsible for all local needs such as obstacle detection, obstacle classification, reactive local navigation and obstacle avoidance. On top is the Strategic Navigation system, which coordinates and schedules all robot processes in order to achieve the robot mission. This high-level layer is composed of the following subsystems: Global Path Planning, Cartographer (building and reading maps), Interface and Communications, and the Mission Planner. The Mission Planner is where all high-level decisions are taken.
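To make the layering concrete, the sketch below shows one possible way to wire these layers together; the class and method names are illustrative assumptions of ours, not part of the system described in this paper.

```python
from dataclasses import dataclass

@dataclass
class VelocityCommand:
    v: float      # translational velocity [m/s]
    omega: float  # rotational velocity [rad/s]

class SteeringControl:
    """Bottom layer: velocity control and motor driving signals."""
    def execute(self, cmd: VelocityCommand) -> None:
        pass  # closed-loop motor control would go here

class ReactiveLocalNavigation:
    """Middle layer: obstacle detection/classification and local avoidance."""
    def step(self, scan, goal) -> VelocityCommand:
        # detect objects in the scan, then pick an admissible velocity
        return VelocityCommand(v=0.0, omega=0.0)

class StrategicNavigation:
    """Top layer: global path planning, cartography, mission planning."""
    def next_goal(self):
        return (0.0, 0.0)  # next waypoint issued by the global planner
```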
3 Dynamic window approach
The DW approach [6] is a sensor-based collision avoidance technique that takes into account the kinematic and dynamic constraints of differential and synchro-drive robots. Kinematic constraints are taken into account by directly searching the velocity space Vs of the robot. This space is defined by the set of tuples (v, ω) of longitudinal velocities v and rotational velocities ω. The set Va of admissible velocities is defined as

Va = {(v, ω) : v ≤ √(2·dist(v, ω)·v̇_b) ∧ ω ≤ √(2·dist(v, ω)·ω̇_b)}    (1)

The function dist(v, ω) represents the distance to the closest obstacle on the curvature defined by the velocity (v, ω), given by r·γ, where r is the radius of the circular trajectory and γ the angle defined in Figure 2. The braking accelerations are v̇_b and ω̇_b. Introducing a rectangular DW, we can reduce the search space to the velocities that can be reached within the next time interval, according to the dynamic limitations of the robot, given its current velocity and its acceleration capabilities. The dynamic window Vd is defined as

Vd = {(v, ω) : v ∈ [v_c − v̇·h, v_c + v̇·h] ∧ ω ∈ [ω_c − ω̇·h, ω_c + ω̇·h]}    (2)
where h is the time interval during which the accelerations v̇ and ω̇ will be applied and (v_c, ω_c) is the current velocity. Among the admissible velocities inside the dynamic window, the motion command is chosen by maximising an objective function combining target heading, obstacle distance and velocity,

G(v, ω) = Max(α·head(v, ω) + β·dist(v, ω) + δ·vel(v, ω))    (3)

The target heading head(v, ω), the angle between the direction of motion and the goal heading, is maximal for trajectories that are directed towards the goal position. For a realistic measure of the target heading we have to consider the dynamics of the rotation; the heading is therefore evaluated at the position the robot will reach after the next time interval.
Using this approach, robust obstacle avoidance behaviours have been demonstrated at high velocities [6]. However, since the dynamic window approach considers only the goal heading and no connectivity information about free space, it is still susceptible to local minima. Figures 3 and 4 show, for the same time instant, a robot scan with a moving object and the respective velocity space, with a DW for v̇ = 50 cm/s² and ω̇ = 60 °/s².
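For illustration, the following is a minimal sketch of the DW search over equations (1)-(3); the grid resolution, the weight values and the dist/goal_heading callbacks are our own assumptions, not the implementation used in [6].

```python
import math

def dynamic_window_search(v_c, w_c, dv_max, dw_max, h, dist, goal_heading,
                          v_limits=(0.0, 0.95), w_limits=(-1.5, 1.5), steps=20):
    """Search the rectangular dynamic window for the best (v, w) command.

    dist(v, w) must return the distance to the closest obstacle along the
    circular path defined by (v, w); goal_heading(v, w) the heading error
    at the predicted position. G = k_head*head + k_dist*dist + k_vel*vel.
    """
    k_head, k_dist, k_vel = 2.0, 0.2, 0.2          # assumed weights
    best, best_score = (0.0, 0.0), -math.inf
    for i in range(steps + 1):
        v = v_c - dv_max * h + (2 * dv_max * h) * i / steps   # window in v, eq. (2)
        if not (v_limits[0] <= v <= v_limits[1]):
            continue
        for j in range(steps + 1):
            w = w_c - dw_max * h + (2 * dw_max * h) * j / steps  # window in w
            if not (w_limits[0] <= w <= w_limits[1]):
                continue
            d_obst = dist(v, w)
            # admissibility, eq. (1): the robot must be able to brake in time
            if v > math.sqrt(2 * d_obst * dv_max) or abs(w) > math.sqrt(2 * d_obst * dw_max):
                continue
            head = 1.0 - abs(goal_heading(v, w)) / math.pi      # 1 when aimed at goal
            score = k_head * head + k_dist * d_obst + k_vel * v  # eq. (3)
            if score > best_score:
                best, best_score = (v, w), score
    return best
```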
4 LRF based obstacle detection
Our object detection system is laser range finder based and detects dynamic activity from two consecutive laser scans. Object detection is achieved by the segmentation of one or two consecutive scan sets and the vectorization of the segments found. Object tracking is also possible using the object classification procedure presented in this paper, and it can be combined with the DW reactive navigation approach (Section 5), as shown in Figure 5.
A typical laser range finder sensor has a view angle of about 180°, scanned with an angular resolution of 0.25°-1°, a distance range from 10 m (indoor devices) to more than 100 m (outdoor devices), and a distance measurement accuracy from ±1 mm to ±5 cm, all these specifications depending on the sensor operational mode and model. As an example, the SICK LMS 200-30106 indoor model [12] has a scanning frequency of 75 Hz, ±15 mm / ±4 cm accuracy for distance ranges of [1 m, 8 m] / [8 m, 20 m], 10 mm resolution, and an angular resolution of 0.5° / 1° for a 180° scan.

Figure 5: LRF based obstacle detection system.
Our object detection method is quite similar to the method of Dietmayer et al. [3]. Some modifications were introduced in the data preparation, before the segmentation, and in the segment filtration process.
In this work the LRF has a scan angle of 180°, with an angular resolution ∆α. The laser range data measured at instant k is represented by the set LRF_k of N data points P_i, each with angle α_i and distance value r_i:
LRF_k = {P_i = (r_i, α_i), i ∈ [0, N]}    (4)
4.1 Data analysis
The set of scan points that show some dynamic activity between two consecutive measurements is represented by SEGM_k. Point selection is done by computing an error value for the measurements of the same scan point in two different scans; a constant ∆P is used as the error threshold. Figure 6 shows two consecutive laser scans, for two different robot positions (LRF_{k−1} and LRF_k). The set of scan points is determined by
SEGM_k = {P_i ∈ LRF_k ∧ P_j ∈ LRF_{k−1} : ‖P_i − P_j‖ > ∆P}    (5)
with

α_i = arctan( (r_j·sin α_j − (y_k − y_{k−1})) / (r_j·cos α_j − (x_k − x_{k−1})) ) − ∆P_θ    (6)
and

α_j ∈ [0 + ∆Pv_ini + ∆P_θ, 180],      ω > 0
α_j ∈ [0 + ∆Pv_ini, 180 − ∆Pv_end],   ω = 0
α_j ∈ [0, 180 − ∆Pv_end + ∆P_θ],      ω < 0    (7)
The angular displacements

∆Pv_ini = arctan(v·h / r_i), i = 0    (8)

∆Pv_end = arctan(v·h / r_i), i = 180    (9)

and

∆P_θ = θ_2 − θ_1    (10)
are introduced to compensate for the robot's motion along its trajectory. The sampling period is given by h. The reference for the angular velocity ω is counter-clockwise. Figure 7 shows the three possible robot motions at each instant. The gray areas are those where scan points cannot be compared between successive scans. Parameter ∆P_θ is associated with circular motion (v = 0, ω ≠ 0), see Figure 7a). In Figure 7b), related to forward motion (v ≠ 0, ω = 0), ∆Pv_ini and ∆Pv_end are usually different due to the robot's position in space; ∆Pv_ini = ∆Pv_end only occurs for a robot moving midway between two walls, parallel to both. The last situation (see Figure 7c)) is the most common one for a non-holonomic robot; in this case, both motion parameters are taken into account.
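The following sketch illustrates this point-selection step under our reading of equations (5)-(10); the pose-compensation details and the threshold value are assumptions of ours, not the authors' code.

```python
import math

def dynamic_points(scan_k, scan_k1, pose_k, pose_k1, dP=0.15):
    """Select scan points with dynamic activity between two scans.

    scan_k, scan_k1: lists of (r, alpha) points, alpha in radians (180 deg scan).
    pose_k, pose_k1: robot poses (x, y, theta) at instants k and k-1.
    dP: error threshold [m] for the point-to-point comparison, eq. (5).
    """
    xk, yk, _ = pose_k
    xk1, yk1, thk1 = pose_k1
    selected = []
    for i, (ri, ai) in enumerate(scan_k):
        # previous measurement of the same scan point (same index)
        rj, aj = scan_k1[min(i, len(scan_k1) - 1)]
        # old point in the global frame (motion compensation, in the
        # spirit of eqs. (6)-(10))
        gx = xk1 + rj * math.cos(aj + thk1)
        gy = yk1 + rj * math.sin(aj + thk1)
        # range under which that same physical point is seen from the new pose
        r_exp = math.hypot(gx - xk, gy - yk)
        if abs(ri - r_exp) > dP:               # eq. (5): dynamic activity
            selected.append((i, ri, ai))
    return selected
```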
4.2 Segmentation

The segmentation process is based on the computation of the distance between two consecutive scan points, calculated by
d(P_i, P_{i+1}) = ‖P_{i+1} − P_i‖ = √(r_i² + r_{i+1}² − 2·r_i·r_{i+1}·cos ∆α)    (11)
Figure 6: Two consecutive robot positions for a robot motion with linear and rotational components (v ≠ 0, ω ≠ 0).
If the distance is less than the threshold

d(P_i, P_{i+1}) ≤ C_0 + C_1·min{r_i, r_{i+1}}    (12)
with

C_1 = √(2·(1 − cos ∆α))    (13)
the point P_{i+1} belongs to the same segment as point P_i. The constant C_0 allows an adjustment of the algorithm to noise and to the strong overlapping of pulses at close range [3]. The constant C_1 accounts for the distance between consecutive points of the same scan.
This distance is also used to search the scan data, to the left and to the right, for points not selected in the data analysis. This operation allows the detection of points whose dynamic activity differs from that of the points just selected, such as edge segment corners. Special tests could be done to combine segments that probably belong to the same object.

4.3 Filtration
Pairs and isolated scan points are rejected: segments have a minimum size, established at the outset as 3 points. This is also a simple way of filtering noise.
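A compact sketch of the segmentation criterion of equations (11)-(13) combined with the minimum-size filtration above; the threshold constant C0 and the 3-point minimum follow the text, everything else is an illustrative assumption.

```python
import math

def segment_scan(points, d_alpha, C0=0.10, min_size=3):
    """Group consecutive scan points (r, alpha) into segments.

    Two neighbours stay in the same segment when their Euclidean distance,
    eq. (11), is below C0 + C1*min(r_i, r_i1), eqs. (12)-(13). Segments
    shorter than min_size points are rejected (Section 4.3).
    """
    if not points:
        return []
    C1 = math.sqrt(2.0 * (1.0 - math.cos(d_alpha)))   # eq. (13)
    segments, current = [], [points[0]]
    for (r0, a0), (r1, a1) in zip(points, points[1:]):
        d = math.sqrt(r0 * r0 + r1 * r1 - 2.0 * r0 * r1 * math.cos(d_alpha))  # eq. (11)
        if d <= C0 + C1 * min(r0, r1):                # eq. (12): same segment
            current.append((r1, a1))
        else:
            segments.append(current)
            current = [(r1, a1)]
    segments.append(current)
    return [s for s in segments if len(s) >= min_size]  # filtration
```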
4.4 Object classification
After the basic segmentation process, the segments found need to be described by a set of points {A, B, C}, where {A} is the closest point to the sensor and the other two, {B, C}, are the two segment extremes. Object classification can be done, with a certain probability, by matching the segment representation data, points {A, B, C}, against models of possible objects that may exist in the robot scene. Generally, most objects of interest in the robot's or vehicle's environment are either almost convex or can be decomposed into almost convex objects.

Figure 7: Typical robot motion movements: a) circular motion (v = 0, ω ≠ 0); b) forward motion (v ≠ 0, ω = 0); c) curve motion (v ≠ 0, ω ≠ 0).
A database of such possible objects is necessary,
and a priori knowledge of the working environment is crucial to create it.
4.5 Object tracking
In order to perceive the object motion it is necessary to select a reference point on the object. Usually, the segment centre point A is the one chosen [3,14]. In Figure 8, the object's velocity is calculated from the displacement of point A between two successive time instants. Point A at instant k is defined in polar coordinates by the distance r(A_{s,k}) and angle α_{s,k}, referred to the LRF space. The Cartesian position of the segment S at instant k is calculated by the following expressions:
x_{s,k} = x_robot + r(A_{s,k})·cos(θ + δ_{s,k})
y_{s,k} = y_robot + r(A_{s,k})·sin(θ + δ_{s,k})    (14)

with (x_{s,k}, y_{s,k}) ∈ ℝ², and the velocity as follows:

v_{x,s,k} = (x_{s,k} − x_{s,k−1}) / h
v_{y,s,k} = (y_{s,k} − y_{s,k−1}) / h    (15)

Figure 8: Object position and velocity estimation.
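A short sketch of this estimation step, implementing equations (14) and (15); all identifiers are our own.

```python
import math

def track_reference_point(r, delta, robot_pose, prev_xy, h):
    """Estimate an object's Cartesian position and velocity from point A.

    r, delta: range and bearing of the reference point A in the LRF frame.
    robot_pose: (x, y, theta) of the robot; prev_xy: A's position at k-1.
    h: sampling period [s]. Implements eqs. (14)-(15).
    """
    x_r, y_r, theta = robot_pose
    x = x_r + r * math.cos(theta + delta)   # eq. (14)
    y = y_r + r * math.sin(theta + delta)
    vx = (x - prev_xy[0]) / h               # eq. (15)
    vy = (y - prev_xy[1]) / h
    return (x, y), (vx, vy)
```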
5 Reactive local navigation method
Our method combines an LRF based object detection methodology (allowing velocity and heading estimation) with a velocity based approach such as the DW approach. Similar to Fiorini's VO approach, it is a collision avoidance technique that takes into account the dynamic constraints of the robot.
This method aims to calculate avoidance trajectories simply by selecting velocities outside a predicted collision cone, penalizing the cone and nearby velocity areas, and adding a bonus to escape velocities. The new objective function to be maximized is:

G(v, ω) = Max(α·head(v, ω) + β·dist(v, ω) + δ·vel(v, ω) + ε·bonus(v, ω))    (16)

where the alignment of the robot with the goal position, the distance to the closest obstacles and the robot velocity are combined with a bonus/penalty function. The function bonus(v, ω) is defined as follows:
bonus(v, ω) = −1,  ω ∈ [v/r̂ − ∆ω, v/r̂ + ∆ω]
bonus(v, ω) = +1,  ω ∈ [v/(r̂ + R) − ∆ω, v/(r̂ + R) + ∆ω]
bonus(v, ω) = 0,   otherwise    (17)
where r̂ is the estimated collision radius and ∆ω an angular increment necessary to penalize the collision area and nearby areas. This prevents the robot from undesirable collisions caused by unforeseen changes in the object path.
The collision radius is estimated by object collision prediction. Knowing the robot's and the object's current positions, P_R and P_O, and their current velocities, it is easy to compute their estimated positions, P̂_R and P̂_O. From the estimated robot position we estimate the collision radius by simple trigonometric equalities. Considering the inner triangles of the collision arc (see Figure 9) between the robot and object estimated positions, we have ∆ABC ~ ∆DBC, and the following equalities result:

b/a = BD/AB = BC/BD = c/b    (18)
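The sketch below illustrates how the bonus/penalty term of equation (17) could enter the extended objective (16); the weight values, the escape band around v/(r̂ + R) and every identifier are our own assumptions.

```python
def bonus(v, w, r_hat, R, dw):
    """Bonus/penalty term of eq. (17).

    Penalize angular velocities steering onto the predicted collision
    circle (radius r_hat, i.e. w near v/r_hat) and reward the assumed
    escape band around v/(r_hat + R); 0 elsewhere.
    """
    w_col = v / r_hat          # angular velocity leading to collision
    w_esc = v / (r_hat + R)    # assumed escape angular velocity
    if w_col - dw <= w <= w_col + dw:
        return -1.0            # collision cone and nearby area: penalty
    if w_esc - dw <= w <= w_esc + dw:
        return +1.0            # escape velocities: bonus
    return 0.0

def objective(v, w, head, dist, vel, a=2.0, b=0.2, d=0.2, e=1.0,
              r_hat=2.0, R=0.5, dw=0.1):
    """Extended objective of eq. (16): G = a*head + b*dist + d*vel + e*bonus."""
    return (a * head(v, w) + b * dist(v, w) + d * vel(v, w)
            + e * bonus(v, w, r_hat, R, dw))
```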