Position-based visual servo control of autonomous
robotic manipulators
Gangqi Dong, Z.H. Zhu
Department of Earth and Space Science and Engineering, York University, 4700 Keele Street, Toronto, Ontario, Canada M3J 1P3
Article info
Article history:
Received 22 December 2014
Received in revised form 27 April 2015
Accepted 22 May 2015
Available online 7 June 2015
Keywords:
Autonomous capture
Non-cooperative target
Extended Kalman filter
Robotic manipulator
Position-based visual servo
On-orbit servicing
Abstract

This paper concerns the position-based visual servo control of autonomous robotic manipulators in space. It focuses on the development of a real-time vision-based pose and motion estimation algorithm for a non-cooperative target by photogrammetry and extended Kalman filter, enabling robotic manipulators to perform autonomous capture. An optical flow algorithm is adopted to track the target features in order to improve the image processing efficiency. Then, a closed-loop position-based visual servo control strategy is devised to determine the desired pose of the end-effector at the rendezvous point based on the estimated pose and motion of the target. The corresponding desired joint angles of the robotic manipulator in the joint space are derived by the inverse kinematics of the robotic manipulator. The developed algorithm and position-based visual servo control strategy are validated experimentally by a custom-built robotic manipulator with an eye-in-hand configuration. The experimental results demonstrate that the proposed estimation algorithm and control scheme are feasible and effective.

© 2015 IAA. Published by Elsevier Ltd. All rights reserved.
1. Introduction
Robotic manipulators have been widely used in space for docking, assembling, repairing and other on-orbit servicing operations [1–5]. For instance, the Mobile Servicing System (MSS) or Canadarm2 [6], the Japanese Experiment Module Remote Manipulator System (JEMRMS) [7] and the European Robotic Arm (ERA) [8] are typical examples of robotic manipulators performing assembly, maintenance, and payload exchanging tasks on the International Space Station. These operations were conducted either autonomously or by human astronauts. Robotic manipulators mounted on Mars exploration rovers, such as Viking 1 and 2 [9], Spirit and Opportunity [10], Phoenix [11] and Curiosity [12], were designed to collect soil samples and/or place instruments on a target. These tasks were performed by preprogrammed commands and controlled from the Earth directly or relayed by the Mars Orbiter. Cameras were used in these missions to monitor the movements of the manipulators and take photographs of the surroundings. Robotic manipulators of orbital docking systems, such as the Shuttle Remote Manipulator System [13] and Orbital Express [14], performed tasks of grappling, docking, refueling, repairing and/or servicing another spacecraft. Purely experimental systems, such as ROTEX (Robot Technology Experiment) and ETS-VII (Engineering Test Satellite) [15], demonstrated the operations of assembling, grasping, docking and exchanging orbit-replaceable units by robotic manipulators. Most of these missions employed human-in-the-loop control. Manual control from the Earth may result in long time delays, while sending astronauts into space to perform the tasks incurs higher cost and the risk of loss of life. To address these challenges, autonomous control is required and has become a research highlight in the field of robotic technology [16,17].
⁎ Corresponding author. Tel.: +1 416 736 2100 x77729.
E-mail address: gzhu@yorku.ca (Z.H. Zhu).
Autonomous control of a robotic manipulator to track and grasp a moving target requires precise knowledge of the target's pose and motion. Because of its non-intrusive, non-damaging and non-contact nature, computer vision is favored as a sensing system to obtain the required information [2,16,18–22]. Accordingly, visual servo control systems have been developed to control the pose of the manipulator's end-effector with respect to the target based on the feedback of the vision system. For instance, the position of a known moving object in the image plane can be tracked with a single mobile camera based on past images and past control inputs to the mobile platform [23]. The autonomous capture of a non-cooperative target by a robotic manipulator requires not only tracking the motion of the target [24,25] but also predicting the rendezvous point and following a specific approaching trajectory by the end-effector based on the estimated pose and motion of the target [16,19].
The camera configuration in a visual servo robotic system can be either eye-in-hand or eye-to-hand [26]. The eye-in-hand camera is mounted on the end-effector to provide a close and precise view of the target, while the eye-to-hand camera is installed beside the robot to monitor the whole workspace with a broad but relatively less accurate scene of the target [27]. Based on the errors employed in control, robotic visual servo may be categorized as image-based, position-based, and hybrid visual servo [28,29]. The image-based visual servo (IBVS) controls robots by the error between the projected desired and actual positions in the 2D (two dimensional) image plane via an image Jacobian without reconstruction of the target. Thus, it is free from target model errors and less sensitive to camera calibration errors and measurement noise in images. Considerable efforts [30,31] have been devoted to tracking moving targets in 3D (three dimensional) space with eye-in-hand cameras using IBVS. The extended Kalman filter was introduced into the IBVS algorithm to address navigation errors and actuation delays [32]. The perturbation to eye-in-hand cameras by the flexibility of the robotic manipulator [33] was investigated to enhance the robustness of the IBVS algorithm. However, the IBVS lacks 3D depth information of a target, and an additional measure is required to estimate the depth. The position-based visual servo (PBVS) controls the error between the desired and actual poses and motion of the end-effector directly in the 3D workspace. The advantage of the PBVS is that the pose of the end-effector can be controlled relative to the target directly and naturally, while the drawbacks are that the pose and motion estimation is prone to camera calibration errors, target model accuracy, and image measurement noise. These challenges have been successfully addressed by many researchers to eliminate image errors caused by an uncalibrated camera [34,35] and suppress the image noise due to the vibration of the camera resulting from flexible manipulators [36]. Finally, the hybrid visual servo, referred to as 2½D visual servo in the literature, evaluates the control errors partially in the 3D workspace and partially on the 2D image plane. Although effective, the hybrid system is generally more complex than either IBVS or PBVS for implementation. In the current work, we adopted a single, calibrated, eye-in-hand camera with PBVS to simplify the system configuration and implementation in autonomous capture of non-cooperative targets.
The key issue in the autonomous capture of non-cooperative targets by a PBVS robotic manipulator is the estimation of the target's pose and motion with visual feedback to reconstruct the target in 3D space.
The pose and motion estimated by the eye-in-hand camera in PBVS are prone to image jittering, residual vibration and unexpected disturbances of the camera or the end-effector. Considerable efforts have been devoted to extracting information from visual images in the past decades [37–40]. Different methodologies have been developed, such as analytic or geometric methods, learning-based methods, offline methods and filtering methods. The geometric method, such as photogrammetry, is widely used when the camera is properly calibrated and the target is known. It extracts the six degrees of freedom information (pose) of the target from 2D images. However, the geometric method is memoryless and its result is noisy if the image data is not smooth, due to either the jittering of image processing or the mechanical disturbance of the end-effector in the case of an eye-in-hand camera. The learning-based method requires a sufficiently large set of images of the target in different poses, which is usually not available when the target is non-cooperative. The offline methods are not suitable for the robotic manipulator to track, approach and capture the target in real-time. Another potential solution for the estimation of pose and motion is filtering. The Kalman filter (KF) is a widely used filtering algorithm to estimate unknown variables based on a set of noisy measurements observed over time with initial conditions. Since being proposed in 1960 [41], the KF has been applied widely with many variations and extensions beyond the originally proposed linear system [42]. Although the initial conditions do not change the convergence property of the KF, they do affect the performance of the filter, especially when dealing with a non-cooperative target in real-time where the initial conditions are unknown.
The focus of this study is to improve the estimation of pose and motion in real-time PBVS control to address the challenges of image jittering and disturbances of the camera caused by the flexibility in joints, actuation delays and rough motion of stepper motors. In our previous PBVS work [16], the challenges were studied by a dual KF approach, where the first KF was introduced in the image space and the second KF was used in the 3D space. The first KF not only reduces the image noise due to jittering to prevent the errors from propagating into the photogrammetry algorithm but also provides substitute image data momentarily to enhance the tracking robustness in case an outage of the vision system occurs. The second KF was designed to suppress the impact of the residual vibration and unexpected disturbance of the camera and the sudden motion of a target. Although effective, the approach is computationally cumbersome and increases the actuation delay. To address the issue, a new methodology was developed in the current work to integrate the photogrammetry and extended Kalman filter (EKF) to estimate the target's pose and motion in real-time for PBVS. Combined with the inner closed-loop control of the robotic manipulator, the visual estimation of the pose and motion of a target as well as the control errors in 3D space forms an outer closed-loop control. It is worth pointing out that the effectiveness of the EKF in improving the robustness of visual servo [32,33] has also been demonstrated in IBVS. For instance, the EKF introduced in the image space not only improves the estimation accuracy of the target's kinematic state but also substitutes state estimates when the target lock is lost [33]. These works showed that the introduction of the EKF into IBVS is effective to prevent tracking failure in a dynamic situation due to image noises and actuation delay. The current approach is validated experimentally by a custom-built robotic manipulator [16] with an eye-in-hand camera mounted close to the end-effector. The experimental results demonstrate the effectiveness and robustness of the proposed approach by successfully tracking, approaching and capturing a non-cooperative target autonomously.
2. Position-based visual servo of robotic manipulator

2.1. Camera model and photogrammetry
The pose of a target can be described by the Cartesian coordinates $\{x_{To}, y_{To}, z_{To}\}^T$ of a target-fixed frame origin and the Euler angles $\{\theta_x, \theta_y, \theta_z\}^T$ of that frame with respect to the camera frame. The rotation matrix from the camera frame to the target frame, $R_{CT}$, can be developed by rotating about the x-axis of the camera frame by $\theta_x$ first, followed by rotating about the y-axis of the camera frame by $\theta_y$ and about the z-axis of the camera frame by $\theta_z$. Accordingly, the rotation matrix from the target frame to the camera frame can be expressed as $R_{TC} = R_{CT}^T$, such that

$$R_{TC} = \begin{bmatrix} C_y C_z & -C_y S_z & S_y \\ C_x S_z + S_x S_y C_z & C_x C_z - S_x S_y S_z & -S_x C_y \\ S_x S_z - C_x S_y C_z & S_x C_z + C_x S_y S_z & C_x C_y \end{bmatrix} \quad (1)$$

Here, $S_x = \sin\theta_x$, $S_y = \sin\theta_y$, $S_z = \sin\theta_z$, $C_x = \cos\theta_x$, $C_y = \cos\theta_y$, $C_z = \cos\theta_z$.
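As a quick numerical illustration of Eq. (1), the following minimal sketch builds $R_{TC}$ from the three Euler angles and checks that the result is a proper rotation matrix. The paper itself provides no code, so the choice of Python with NumPy is an assumption made purely for illustration.

```python
import numpy as np

def rotation_target_to_camera(theta_x, theta_y, theta_z):
    """R_TC of Eq. (1), mapping target-frame coordinates into the camera frame."""
    sx, sy, sz = np.sin([theta_x, theta_y, theta_z])
    cx, cy, cz = np.cos([theta_x, theta_y, theta_z])
    return np.array([
        [cy * cz,                -cy * sz,                 sy],
        [cx * sz + sx * sy * cz,  cx * cz - sx * sy * sz, -sx * cy],
        [sx * sz - cx * sy * cz,  sx * cz + cx * sy * sz,  cx * cy],
    ])

# Sanity check: any valid R_TC is orthonormal with unit determinant.
R = rotation_target_to_camera(0.1, -0.2, 0.3)
assert np.allclose(R @ R.T, np.eye(3)) and np.isclose(np.linalg.det(R), 1.0)
```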
Assume the coordinates $\{x_T, y_T, z_T\}^T$ of a feature point on the target are known in the target frame, which implies the vision system is calibrated in advance. Then, the homogeneous relationship between the target and camera frames can be described by

$$\begin{Bmatrix} x_C \\ y_C \\ z_C \\ 1 \end{Bmatrix} = \begin{bmatrix} & & & x_{To} \\ & R_{TC} & & y_{To} \\ & & & z_{To} \\ 0 & 0 & 0 & 1 \end{bmatrix} \begin{Bmatrix} x_T \\ y_T \\ z_T \\ 1 \end{Bmatrix} \quad (2)$$

where $\{x_C, y_C, z_C\}^T$ are the coordinates of the same point in the camera frame.
Consider a pinhole camera model as shown in Fig. 1, and denote by $r_{ij}$ the elements of $R_{TC}$. The feature point on the target is projected onto the image plane by Eq. (3), such that

$$\begin{cases} x_m = -f\dfrac{x_C}{y_C - f} = -f\dfrac{r_{11}x_T + r_{12}y_T + r_{13}z_T + x_{To}}{r_{21}x_T + r_{22}y_T + r_{23}z_T + y_{To} - f} \\[2ex] z_m = -f\dfrac{z_C}{y_C - f} = -f\dfrac{r_{31}x_T + r_{32}y_T + r_{33}z_T + z_{To}}{r_{21}x_T + r_{22}y_T + r_{23}z_T + y_{To} - f} \end{cases} \quad (3)$$

where $f$ stands for the focal length of the camera and $\{x_m, z_m\}^T$ denotes the projected image coordinates of the feature point. For a calibrated camera, the focal length is known in advance.

Defining $\eta = y_C - f$ in Eq. (3) leads to

$$\begin{cases} \eta x_m + x_C f = 0 \\ \eta z_m + z_C f = 0 \end{cases} \quad (4)$$
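For illustration, the sketch below chains Eqs. (2) and (3): it maps a known feature point from the target frame into the camera frame and projects it onto the image plane. It reuses the `rotation_target_to_camera` helper sketched after Eq. (1); Python/NumPy remains an assumption.

```python
import numpy as np

def project_feature(p_target, pose, f):
    """Map a target-frame feature point into the camera frame (Eq. (2)) and
    project it onto the image plane (Eq. (3)).

    pose = (x_To, y_To, z_To, theta_x, theta_y, theta_z); f is the focal length.
    """
    x_to, y_to, z_to, tx, ty, tz = pose
    p_cam = (rotation_target_to_camera(tx, ty, tz) @ np.asarray(p_target, dtype=float)
             + np.array([x_to, y_to, z_to]))
    x_c, y_c, z_c = p_cam
    eta = y_c - f                       # the eta of Eq. (4)
    return np.array([-f * x_c / eta,    # x_m
                     -f * z_c / eta])   # z_m

# Example: a feature point 5 cm off the target origin, target about 1.5 m away.
x_m, z_m = project_feature([0.05, 0.0, 0.02], (0.1, 1.5, -0.05, 0.1, -0.2, 0.3), f=0.008)
```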
Further define the left side of Eq. (4) by $F = \eta x_m + x_C f$ and $G = \eta z_m + z_C f$, where $F$ and $G$ are functions of the target pose in the camera frame and the projected image coordinates of the feature point on the target. Linearizing $F$ and $G$ by Taylor expansion in the vicinity of $\{x_m, z_m, x_{To}, y_{To}, z_{To}, \theta_x, \theta_y, \theta_z\}^T$ leads to

$$E_{2\times1} + \delta_{2\times1} = J_{c\,2\times6}\,dX_{p\,6\times1} \quad (5)$$

where

$$E_{2\times1} = \begin{Bmatrix} dx_m \\ dz_m \end{Bmatrix}, \quad \delta_{2\times1} = \begin{Bmatrix} F_0 \\ G_0 \end{Bmatrix}, \quad dX_{p\,6\times1} = \{dx_{To}, dy_{To}, dz_{To}, d\theta_x, d\theta_y, d\theta_z\}^T,$$

$$J_{c\,2\times6} = -\frac{1}{\eta}\begin{bmatrix} \left(\frac{\partial F}{\partial x_{To}}\right)_0 & \left(\frac{\partial F}{\partial y_{To}}\right)_0 & \left(\frac{\partial F}{\partial z_{To}}\right)_0 & \left(\frac{\partial F}{\partial \theta_x}\right)_0 & \left(\frac{\partial F}{\partial \theta_y}\right)_0 & \left(\frac{\partial F}{\partial \theta_z}\right)_0 \\ \left(\frac{\partial G}{\partial x_{To}}\right)_0 & \left(\frac{\partial G}{\partial y_{To}}\right)_0 & \left(\frac{\partial G}{\partial z_{To}}\right)_0 & \left(\frac{\partial G}{\partial \theta_x}\right)_0 & \left(\frac{\partial G}{\partial \theta_y}\right)_0 & \left(\frac{\partial G}{\partial \theta_z}\right)_0 \end{bmatrix}.$$

Eq. (5) contains two independent equations for six unknowns (the pose of the target). Theoretically, one needs only three feature points to solve for the six unknowns. However, this approach may result in four ambiguous poses [28,29]. To eliminate the ambiguity and increase the robustness, a minimum of four feature points is widely adopted in the literature, which leads to eight equations with six unknowns, such that

$$E_{8\times1} + \delta_{8\times1} = J_{c\,8\times6}\,dX_{p\,6\times1} \quad (6)$$

The unknowns are solved by an iterative least-squares approach assuming zero residual error in the measurement, such that $E_{8\times1} = 0$ and

$$dX_{p\,6\times1} = \left(J_{c\,8\times6}^T J_{c\,8\times6}\right)^{-1} J_{c\,8\times6}^T\,\delta_{8\times1} \quad (7)$$
By inputting the known image coordinates of the feature points and an initial guess of the target pose, the algorithm iterates to correct the previous guess until the correction is less than a pre-set tolerance, i.e., $\|dX_p\| \le \varepsilon$. In practice, it is common to use the previous target pose as the initial guess to reduce the iterations. As aforementioned, the photogrammetry is memoryless and prone to image noise, which may result in large fluctuations of the estimated target pose. Thus, the computational time of photogrammetry may increase if the initial guess, which is the previous pose as mentioned, is far away from the current pose. As a result, the sampling time-step of the vision system may have to be adjusted, which is not desirable when dealing with real-time pose estimation. Another shortcoming of the photogrammetry is that it does not estimate the motion of the target directly, which is an important parameter for trajectory planning of the robotic manipulator to perform autonomous capture in a dynamic environment. To address these challenges, an extended Kalman filter (EKF) combined with photogrammetry is presented in the following.
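A minimal sketch of the iterative least-squares loop of Eqs. (5)–(7) follows. To keep the example short it approximates the Jacobian numerically by central finite differences rather than with the analytic partials of $F$ and $G$ used in the paper; `project_feature` is the helper sketched above, and the tolerance and step sizes are illustrative assumptions.

```python
import numpy as np

def photogrammetry_pose(points_target, z_measured, pose0, f, eps=1e-8, max_iter=50):
    """Iterative least-squares pose estimation per Eqs. (5)-(7): refine the
    pose guess until the correction satisfies ||dX_p|| <= eps."""
    pose = np.asarray(pose0, dtype=float)

    def predict(p):  # stacked (x_m, z_m) predictions for all feature points
        return np.concatenate([project_feature(pt, p, f) for pt in points_target])

    for _ in range(max_iter):
        delta = np.asarray(z_measured) - predict(pose)  # image error vector
        J = np.empty((delta.size, 6))                   # 8 x 6 for four points
        for j in range(6):                              # finite-difference Jacobian
            dp = np.zeros(6)
            dp[j] = 1e-6
            J[:, j] = (predict(pose + dp) - predict(pose - dp)) / 2e-6
        d_pose = np.linalg.lstsq(J, delta, rcond=None)[0]  # dX_p of Eq. (7)
        pose = pose + d_pose
        if np.linalg.norm(d_pose) <= eps:
            break
    return pose
```

With four feature points this solves the eight-equation, six-unknown system of Eq. (6) in the least-squares sense at each iteration, exactly as the text describes.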
2.2. Extended Kalman filter
The Kalman filter is an optimal estimation algorithm for a linear system with independent white noise of normal distribution [41,42]. The camera model in Eq. (3) is highly nonlinear, so the extended Kalman filter has been adopted to estimate the pose and motion of a dynamic target. Assume the motion of the target is approximately constant within the sampling interval if the sampling time-step is sufficiently small. Then, the motion of the target can be approximated by the first-order equation of motion. Define the system state vector as

$$X = \{x_{To}, \dot{x}_{To}, y_{To}, \dot{y}_{To}, z_{To}, \dot{z}_{To}, \theta_x, \dot{\theta}_x, \theta_y, \dot{\theta}_y, \theta_z, \dot{\theta}_z\}^T$$

By assuming the system's acceleration vector $\omega = \{\ddot{x}_{To}, \ddot{y}_{To}, \ddot{z}_{To}, \ddot{\theta}_x, \ddot{\theta}_y, \ddot{\theta}_z\}^T$ to be the process noise with a normal distribution with zero mean and covariance matrix $Q$, the system model is expressed as

$$X_k = AX_{k-1} + B\omega_{k-1} \quad (8)$$

where

$$A = \mathrm{diag}\left[A_1\ A_1\ A_1\ A_1\ A_1\ A_1\right], \quad A_1 = \begin{bmatrix} 1 & t \\ 0 & 1 \end{bmatrix},$$

$$B = \mathrm{diag}\left[B_1\ B_1\ B_1\ B_1\ B_1\ B_1\right], \quad B_1 = \begin{Bmatrix} t^2/2 \\ t \end{Bmatrix}.$$

Here, $k$ is the sample time-step and $t$ is the sample time.

Fig. 1. Pin-hole camera model.
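The block-diagonal structure of $A$ and $B$ in Eq. (8) can be assembled compactly with a Kronecker product, as in the short sketch below (Python/NumPy assumed, as before).

```python
import numpy as np

def process_model(t):
    """Constant-velocity transition A (12x12) and noise input B (12x6) of
    Eq. (8) for sample time t. State ordering: (x_To, x_To', y_To, y_To',
    ..., theta_z, theta_z')."""
    A1 = np.array([[1.0, t],
                   [0.0, 1.0]])   # position <- position + velocity * t
    B1 = np.array([[t**2 / 2.0],
                   [t]])          # acceleration enters as process noise
    return np.kron(np.eye(6), A1), np.kron(np.eye(6), B1)
```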
The measurement model is developed from the pin-hole camera model in Eq. (3) for $n$ feature points, such that

$$Z_k = h(X_k) + \mu_k, \quad Z = \{x_{m1}, z_{m1}, \cdots, x_{mn}, z_{mn}\}^T, \quad h(\cdot) = \{h_{x1}, h_{z1}, \cdots, h_{xn}, h_{zn}\}^T \quad (9)$$

and

$$h_{xi}(X) = -f\frac{x_{Ci}}{\eta_i}, \quad h_{zi}(X) = -f\frac{z_{Ci}}{\eta_i} \quad (10)$$

where $\mu$ is the measurement noise that obeys a normal distribution with zero mean and covariance matrix $R$.
The EKF requires initial conditions and measurements observed over time. Since the initial conditions of a non-cooperative target are unknown, an inappropriate initial guess may lead to poor performance of the EKF. To improve its performance and accelerate the convergence rate, we initialize the state vector by the photogrammetry in the algorithm. Based on the above definitions, a recursive pose and motion estimation algorithm is derived as shown in Table 1, where $H$ is the Jacobian matrix formed by the first-order partial derivatives of the measurement model with respect to the system state, $K_g$ is the Kalman gain at time step $k$, $P$ is the covariance matrix of the system state, and $Q$ and $R$ are the process and measurement noise covariance matrices.
2.3. Kinematics of robotic manipulator
The autonomous capture will be conducted by a custom-built six degrees of freedom (6DOF) robotic manipulator with an eye-in-hand configuration as shown in Fig. 2. The robotic manipulator consists of three links and one end-effector with five revolute joints and one prismatic joint. The eye-in-hand camera is mounted close to the end-effector. The first three revolute joints, namely torso ($\theta_1$), shoulder ($\theta_2$) and elbow ($\theta_3$), control the position of the end-effector, while the last two revolute joints and one prismatic joint, namely wrist roll ($\theta_4$), wrist yaw ($\theta_5$) and gripper ($d_6$), provide dexterous orientation and gripping functions for the capture operation. Thus, the translation and rotation of the end-effector can be considered separately from the wrist to simplify the controller design.
The kinematics of the robotic manipulator provides the forward relationship from the joint angles to the position of the end-effector (the wrist center), such that

$$X_g = K_f(\Theta) \quad (11)$$

where $\Theta = \{\theta_1, \theta_2, \theta_3\}^T$.
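The concrete form of $K_f$ depends on the manipulator geometry, which the paper does not tabulate. As an illustration only, the sketch below gives the wrist-center position for a generic torso-shoulder-elbow arm; the link lengths are hypothetical placeholders, not the dimensions of the custom-built hardware.

```python
import numpy as np

def wrist_center(theta1, theta2, theta3, l1=0.30, l2=0.25, l3=0.20):
    """Forward kinematics X_g = K_f(Theta) for a generic torso-shoulder-elbow
    arm with a vertical torso axis; link lengths l1-l3 are hypothetical."""
    r = l2 * np.cos(theta2) + l3 * np.cos(theta2 + theta3)        # horizontal reach
    z = l1 + l2 * np.sin(theta2) + l3 * np.sin(theta2 + theta3)   # wrist height
    return np.array([r * np.cos(theta1), r * np.sin(theta1), z])  # torso rotation
```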
Table 1
Pose and motion estimation algorithm.

1. Given an initial guess of the target pose: $X_p = X_{p0}$;
2. Input measurement;
3. Start photogrammetry loop
4. {
5.   evaluate pin-hole camera model Jacobian $J_c$ and image error vector $\delta$;
6.   calculate pseudo-inverse of Jacobian: $J_c^+ = (J_c^T J_c)^{-1} J_c^T$;
7.   calculate previous-guess correction: $dX_p = J_c^+\,\delta$;
8.   if ($\|dX_p\| <$ tolerance)
9.   { break; }
10.  else
11.  { $X_p = X_p + dX_p$; }
12. }
13. Return $X_p$;
14. Augment to state vector: $X_0 = \{X_p, \dot{X}_p\}$;
15. Initialize EKF: $X_0$ (initial state vector), $P_0$ (state covariance matrix), $Q$ (process noise covariance matrix), $R$ (measurement noise covariance matrix);
16. Start EKF loop
17. {
18.   estimate next state vector and covariance matrix: $X_{k|k-1} = AX_{k-1|k-1}$, $P_{k|k-1} = AP_{k-1|k-1}A^T + BQB^T$;
19.   evaluate Jacobian of measurement model: $H_k = \partial h(X)/\partial X\,|_{X = X_{k|k-1}}$;
20.   calculate Kalman gain: $K_g = P_{k|k-1}H_k^T\left(H_k P_{k|k-1} H_k^T + R\right)^{-1}$;
21.   update state vector and covariance matrix: $X_{k|k} = X_{k|k-1} + K_g\left[Z_k - h(X_{k|k-1})\right]$, $P_{k|k} = P_{k|k-1} - K_g H_k P_{k|k-1}$;
}
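The EKF loop of Table 1 (steps 16–21) translates almost line-for-line into code. The sketch below is one predict/update cycle under the same assumptions as the earlier snippets; it reuses the `process_model` and `project_feature` helpers, and evaluates $H_k$ by central differences instead of the analytic partials, purely for brevity.

```python
import numpy as np

def ekf_step(x, P, z, points_target, f, t, Q, R):
    """One predict/update cycle of the Table 1 EKF loop (steps 18-21).
    x is the 12-state vector, z the stacked (x_m, z_m) measurements."""
    A, B = process_model(t)

    def h(state):  # measurement model of Eqs. (9)-(10)
        pose = state[[0, 2, 4, 6, 8, 10]]  # positions/angles, velocities dropped
        return np.concatenate([project_feature(p, pose, f) for p in points_target])

    # Step 18: predict state and covariance.
    x_pred = A @ x
    P_pred = A @ P @ A.T + B @ Q @ B.T

    # Step 19: measurement Jacobian H_k, here by central differences.
    H = np.empty((z.size, x.size))
    for j in range(x.size):
        dx = np.zeros(x.size)
        dx[j] = 1e-6
        H[:, j] = (h(x_pred + dx) - h(x_pred - dx)) / 2e-6

    # Step 20: Kalman gain.
    K = P_pred @ H.T @ np.linalg.inv(H @ P_pred @ H.T + R)

    # Step 21: update state and covariance with the innovation.
    x_new = x_pred + K @ (z - h(x_pred))
    P_new = P_pred - K @ H @ P_pred
    return x_new, P_new
```

Here $Q$ is the 6×6 covariance of the acceleration noise $\omega$ and $R$ the 2n×2n image-noise covariance, matching the definitions in Section 2.2.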
