Behavior-Based Robotics
Introduction
As a design strategy, the behavior-based approach has produced intelligent systems for use in a wide variety of areas, including military applications, mining, space exploration, agriculture, factory automation, service industries, waste management, health care, disaster intervention and the home. To understand what behavior-based robotics is, it may be helpful to explain what it is not. The behavior-based approach does not necessarily seek to produce cognition or a human-like thinking process. While these aims are admirable, they can be misleading. Blaise Pascal once pointed out the dangers inherent when any system tries to model itself. It is natural for humans to model their own intelligence. The problem is that we are not aware of the myriad internal processes that actually produce our intelligence, but rather experience the emergent phenomenon of "thought." In the mid-eighties, Rodney Brooks (1986) recognized this fundamental problem and responded with one of the first well-formulated methodologies of the behavior-based approach. His underlying assertion was that cognition is a chimera contrived by an observer who is necessarily biased by his/her own perspective on the environment. (Brooks 1991) As an entirely subjective fabrication of the observer, cognition cannot be measured or modeled scientifically. Even researchers who did not believe the phenomenon of cognition to be entirely illusory admitted that AI had failed to produce it. Although many hope for a future when intelligent systems will be able to model human-like behavior accurately, they insist that this high-level behavior must be allowed to emerge from layers of control built from the bottom up. While some skeptics argued that a strict behavioral approach could never scale up to human modes of intelligence, others argued that the bottom-up behavioral approach is the very principle underlying all biological intelligence. (Brooks 1990) To many, this theoretical question simply was not the issue. Instead of focusing on designing systems that could think intelligently, the emphasis had changed to creating agents that could act intelligently. From an engineering point of view, this change rejuvenated robotic design, producing physical robots that could accomplish real-world tasks without being told exactly how to do them. From a scientific point of view, researchers could now avoid high-level, armchair discussions about intelligence. Instead, intelligence could be assessed more objectively as a measurement of rational behavior on some task. Since successful completion of a task was now the goal, researchers no longer focused on designing elaborate processing systems and instead tried to make the coupling between perception and action as direct as possible. This aim remains the distinguishing characteristic of behavior-based robotics.

A Nomad robot used by many researchers to study behavior within a laboratory setting.
The sub-sections which follow explain the roots of behavior-based robotics, how it rose as a counter to the symbolic, deliberative approach of classical AI, and how it has come to be a standard approach for developing autonomous robots.
A special thanks to Ronald Arkin, whose book, Behavior-Based Robotics, has greatly influenced this report.
Understanding the Context of Classical AI
Classical AI spent decades trying to model human-like intelligence, using knowledge-based systems that processed representations at a high, symbolic level. Symbolic representation was considered of paramount importance because it allowed agents to operate on sophisticated human concepts and report on their actions at a linguistic level. As Donald Michie stated, "In AI-type learning, explainability is all." (Michie 1988) Since the goal of early AI was to produce human-like intelligence, researchers used human-like approaches. Marvin Minsky, in many ways a father of the field of AI, believed an intelligent machine should, like a human, first build a model of its environment and then explore solutions abstractly before enacting strategies in the real world. (McCarthy et al. 1955) This emphasis on symbolic representation and planning had a great effect on robotics and spurred control strategies where functionality was coded using languages and programming architectures that made conceptual sense to a human designer. Although many of the strategies developed were both elaborate and elegant, the problem was that the intelligence in these systems belonged to the designer. The robot itself had little or no autonomy and often failed to perform if the environment changed. While classical AI viewed intelligence as the ability of a program to process internal encodings, a behavior-based approach considers intelligence to be demonstrated through "meaningful and purposeful" action in an environment. (Arkin 1999)
While many perceived the behavior-based movement to have forsaken the goal of human-like intelligence, others maintained that high-level intelligence would indeed arise once a strong, low-level foundation had been laid. Agre and Chapman argued that, in fact, human beings are actually much more reactive than we imagine ourselves to be. (Agre and Chapman 1987) The planning and cognition that we are consciously aware of represent only the tip of a cerebral iceberg comprised mostly of unconscious, reactive motor skills and implicit behavior encodings. In a sense, the behavioral approach did not abandon modeling human intelligence as much as human consciousness. One of the side-effects has been that many behavior-based approaches produce systems that are anything but 'explainable.' High scientific aims aside, a main reason the behavior-based community is so intent on developing automated learning techniques is that a human designer often finds it excruciatingly tedious or impossibly difficult to orchestrate many behaviors operating in parallel. It is worse than frustrating to debug behavior that emerges from the interplay of many layers of asynchronous control. At times, a truly well-implemented, behavior-based approach will result in successful strategies the researchers themselves cannot explain or understand.

Junior: An all-terrain robot recently used to deploy a gamma-locating device within a radioactive environment.
Reactive vs. Deliberative
There is still considerable debate over the optimal role of internal representation. (Clark & Grush 1999) Many researchers believe that a robot cannot assign meaning to its actions or environment without representing them, even if indirectly. (Pylyshyn 1987) (Fodor 1987) Others believe that reliance on internal representation thwarts a robot's ability to act quickly across domains. An important figure for the field of behavior-based robotics, Rodney Brooks, declared planning to be "just a way of avoiding figuring out what to do next." (Brooks 1987) Strategies which require that action be mediated by some symbolic representation of the environment are often called deliberative. In contrast, reactive strategies do not exhibit a steadfast reliance on internal models, but displace some of the role of representation onto the environment itself. Instead of responding to entities within a model, the robot can respond directly to perception of the real world. Thus, reactive systems are best characterized by a direct connection between sensors and effectors. Control is not mediated by a model but rather occurs as a low-level pairing between stimulus and response.
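The direct sensor-to-effector coupling described above can be sketched in a few lines of Python. The sensor fields, rule names and thresholds below are illustrative assumptions of my own, not drawn from any particular system; the point is only that each rule pairs a stimulus directly with a response, with no world model in between.

```python
# Reactive control as direct stimulus-response pairing (illustrative sketch).
# Each rule inspects raw sensor readings and either fires a motor response
# or defers; no internal model of the environment is built or consulted.

def avoid_left(sonar):
    """Turn right when the left sonar reports a close obstacle (< 0.5 m)."""
    return ("turn_right",) if sonar["left"] < 0.5 else None

def avoid_right(sonar):
    """Turn left when the right sonar reports a close obstacle."""
    return ("turn_left",) if sonar["right"] < 0.5 else None

def cruise(sonar):
    """Default response when no stimulus is present: keep moving."""
    return ("forward",)

# Rules are scanned in order; the first rule whose stimulus is present fires.
RULES = [avoid_left, avoid_right, cruise]

def react(sonar):
    for rule in RULES:
        response = rule(sonar)
        if response is not None:
            return response

print(react({"left": 0.3, "right": 2.0}))  # ('turn_right',)
print(react({"left": 2.0, "right": 2.0}))  # ('forward',)
```

Because perception feeds action directly, the controller needs no memory and reacts in constant time, which is exactly the property the reactive camp values.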
If a task is highly structured and predictable it may make sense to use a deliberative approach. For example, if an intelligent agent is embedded in an entirely virtual environment, then it is often possible to encode every aspect of the environment with some semantic representation. In complex, real-world domains where uncertainty cannot be effectively modeled, however, robots must have a means of reacting to an infinite number of possibilities.
Some behavior-based strategies use no explicit model of the environment. In the late 1980s, Schoppers believed that if a programmer knew enough about an environment, s/he could make a set of stimulus-response pairs sufficient to cover every possibility. (Schoppers 1989) Clearly, such an approach is only possible in restricted domains such as a chess game or micro-world where there are a limited number of possible states. For more complicated domains it is necessary to find an appropriate balance between reactive and deliberative control.
Systems that seek to completely avoid internal representation are ill-equipped for the many tasks that require memory or communication. On the other hand, systems that must transmute all perception and action through an internal model will be necessarily confounded in some new environment. The key is that the model should not drive development. Rather, control should be built from the bottom up and distributed across the system. For a reactive design methodology to work, it is necessary that behavior be decomposed into atomistic components. Often, design will include a developmental phase during which these components can be honed and joined together. First, the designer builds a minimal system and then exercises it, using an ongoing loop to evaluate performance and add new competence.
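That develop-evaluate-extend loop can be made concrete with a small sketch. The behavior names and the toy evaluation metric below are my own stand-ins (a real system would score the robot on its actual task); the structure, however, mirrors the loop described above: start minimal, exercise, and add competence only while evaluation shows a shortfall.

```python
# Developmental loop sketch: build a minimal system, evaluate it,
# and append new atomistic behaviors until performance suffices.
# Behavior names and the coverage metric are illustrative only.

def evaluate(behaviors):
    """Stand-in performance metric: fraction of needed competences covered.
    A real evaluation would exercise the robot on its task."""
    needed = {"obstacle", "wander", "goal"}
    return len(needed & set(behaviors)) / len(needed)

def develop(candidate_behaviors, threshold=0.99):
    behaviors = ["obstacle"]            # minimal system first
    for new in candidate_behaviors:     # ongoing loop: exercise, evaluate, extend
        if evaluate(behaviors) >= threshold:
            break                       # current behavior set is good enough
        behaviors.append(new)           # hone and join a new component
    return behaviors

print(develop(["wander", "goal"]))  # ['obstacle', 'wander', 'goal']
```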
A Basis in Biology
Behavior-based robotics uses biology as the best model for understanding intelligence. Most roboticists do not model biological organisms directly, but rather look to nature for insight and direction. Increasingly, researchers have adopted the notion that high-level cognition is an impractical, debilitating goal, and have begun to model the lower animal world. While there is a definite danger of trying to stretch metaphors too thin, the fact is that biological models offer our best hope for creating adaptive behavior.

Biology serves not only as inspiration for underlying methodologies, but also for actual robot hardware and sensors. At the Centre National de la Recherche Scientifique in France, researchers discovered that a simple housefly navigates using a compound eye comprised of 3,000 facets which operate in parallel to monitor visual motion. In response, roboticists built an artificial robot eye with 100 facets that can provide a 360-degree panoramic view. (Franceschini, Pichon and Blanes 1992) Artificial bees can simulate the dance patterns and sounds of real bees sufficiently well to actually communicate with other bees. (Kirchner & Towne 1994) Others have managed to build robot cockroaches (Quinn and Espenschied 1993) and even ants capable of leaving and detecting pheromone trails. (Russell, Thiel, & Mackay-Sim 1994)
It is possible to view these successes as evidence supporting the behavior-based approach. In other words, if most animals do not rely on cognition to act, why should robots? Roboticists' preoccupation with high-level semantic thought merely reflects the anthropomorphic bias of human designers. To better understand the behavioral architecture of a low-level animal, scientists severed the connection between a frog's spine and brain. The goal was to remove all centralized control so that all action was produced reactively and without "thought." Scientists stimulated particular points along the spinal cord and found that much of the behavior of a frog is encoded directly into the spine. There are twenty locations along the spine, each of which can react with a different, essential motion. Stimulating one location will prompt the frog to wipe its head, whereas another will cause it to jump. If the spine is stimulated at two points simultaneously, it is possible to combine behaviors and produce a more complex form of behavior. (Bizzi, Mussa-Ivaldi, & Giszter 1991)
ARIEL: a behavior-based robot developed by iRobot to locate and detonate mines within the surf-zone.
This finding bears out a fundamental premise of the behavior-based approach: that sophisticated, high-level behavior can emerge from layered combinations of simple stimulus-response mappings. Instead of careful planning based on modeling, high-level behavior such as flocking or foraging can be built by blending low-level behaviors such as dispersion, aggregation, homing and wandering. Strategies can be built directly from behaviors, whereas plans must be based on an accurate model.
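One common way to blend low-level behaviors, in the spirit of superposing two stimulation points on the frog's spine, is a weighted vector sum of each behavior's output (the motor-schema style associated with Arkin). The behavior names and weights below are illustrative assumptions, not a published controller.

```python
# Behavior blending sketch: each low-level behavior emits a motion vector,
# and the commanded motion is their weighted superposition. Names and
# weights are illustrative only.

def homing(pos, goal):
    """Vector pointing from the robot toward the goal."""
    return (goal[0] - pos[0], goal[1] - pos[1])

def dispersion(pos, neighbor):
    """Vector pointing away from a nearby robot (spreads the group out)."""
    return (pos[0] - neighbor[0], pos[1] - neighbor[1])

def blend(vectors, weights):
    """Weighted superposition of behavior outputs into one motion command."""
    x = sum(w * v[0] for v, w in zip(vectors, weights))
    y = sum(w * v[1] for v, w in zip(vectors, weights))
    return (x, y)

pos, goal, neighbor = (0.0, 0.0), (4.0, 0.0), (1.0, 1.0)
move = blend([homing(pos, goal), dispersion(pos, neighbor)],
             weights=[1.0, 0.5])
print(move)  # (3.5, -0.5): mostly goal-seeking, nudged away from the neighbor
```

No behavior needs to know the others exist; group-level effects such as flocking emerge from the blend rather than from a plan.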
Of course, it is not only biology which supplies insight to the field of robotics. As multi-disciplinary approaches become more prevalent, inspiration should flow freely between robotics, neuroscience, psychology, cognitive science, and a host of other fields.

Control Architecture

Now that we have explained why reactive control is useful, it remains to be shown how this reactive control is actually accomplished. There must be some control architecture that puts these conceptual ideas to work. Maja Mataric defines the purpose of an architecture as "a principled way of organizing a control system," and further explains that, "in addition to providing structure, it imposes constraints on the way the control problem can be solved." (Mataric 1992)

Early researchers focused on planning modules and, because many of the agents operated in a virtual world, de-emphasized the part of the architecture that controlled the motors and sensors. This section explains the architectures that were adopted in opposition to this mindset. The new architectures constrained development by forcing a distributed approach where behaviors function in parallel rather than in a step-wise, linear fashion. Later, it explains how hybrid approaches attempt to reintegrate some of the old aims of deliberative, cognitive techniques back into the behavior-based approach.
Subsumption Architectures
The subsumption architecture originally developed by Brooks in 1986 provided a method for structuring reactive systems from the bottom up using layered sets of rules. (Brooks 1986) Bottom-layer behaviors such as "avoid collision" should be the most basic and should have the highest priority. Top-layer behaviors such as "go to goal" encapsulate high-level intention and may be built from lower behaviors or may function only when lower behaviors such as "avoid collision" are satisfied. To reduce complexity, there should be minimal interaction between behaviors. The idea is that each should function simultaneously but asynchronously with no centralized control.

INAT: A robot developed at the Idaho National Engineering and Environmental Laboratory which uses learned responses to light and sound fluctuations to modulate swarming behavior.
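The layered priority scheme can be sketched as a fixed-priority arbiter. This is a simplified, single-threaded illustration, not Brooks's original implementation (which wired layers as asynchronous finite state machines); the layer names and sensor fields are assumptions of my own. It does show the key property from the text: a basic bottom layer such as "avoid collision" always takes precedence, and higher layers act only when the layers below them are satisfied.

```python
# Subsumption-style arbitration sketch: layers are listed in priority
# order; the first layer that produces a command suppresses all layers
# below it in the list. Layer names and thresholds are illustrative.

def avoid_collision(state):
    """Bottom layer: most basic, highest priority; fires near obstacles."""
    if state["obstacle_distance"] < 1.0:
        return "turn_away"
    return None  # satisfied; defer to the next layer

def wander(state):
    """Middle layer: explore when the robot has no goal."""
    if state["goal"] is None:
        return "wander"
    return None

def go_to_goal(state):
    """Top layer: encapsulates high-level intention; runs when unsuppressed."""
    return "head_toward_goal"

LAYERS = [avoid_collision, wander, go_to_goal]  # priority order, bottom first

def arbitrate(state):
    for layer in LAYERS:
        command = layer(state)
        if command is not None:   # this layer subsumes everything after it
            return command

print(arbitrate({"obstacle_distance": 0.4, "goal": (3, 3)}))  # turn_away
print(arbitrate({"obstacle_distance": 5.0, "goal": (3, 3)}))  # head_toward_goal
```

Note the minimal interaction between behaviors: each layer reads the sensors independently and never calls another layer, which is what lets layers be added or removed without rewiring the rest of the system.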
DART: An aquatic robot developed by iRobot.