Autonomous Agents



Wooldridge and Jennings [@] provide a useful starting point by defining autonomy, social ability, reactivity and proactiveness as essential properties of an agent. Agent research is a wide area covering a variety of topics. These include:

Distributed Problem Solving (DPS)

The agent concept can be used to simplify the solution of large problems by distributing them to a number of collaborating problem-solving units. DPS is not considered here, EXCALIBUR's agents being fully autonomous: each agent has individual goals, and there is no superior common goal.

Multi-Agent Systems (MAS)

MAS research deals with appropriate ways of organizing agents. These include general organizational concepts, the distribution of management tasks, dynamic organizational changes like team formation, and underlying communication mechanisms.

Autonomous Agents

Research on autonomous agents is primarily concerned with the realization of a single agent. This includes topics like sensing, models of emotion, motivation, personality, and action selection and planning. This field is our main focus within the EXCALIBUR project.

An agent has goals (stay alive, catch the player's avatar, ...), can sense certain properties of its environment (see objects, hear noises, ...), and can execute specific actions (walk northward, eat an apple, ...).
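This goals/sensors/actions interface can be sketched in Python. All class and attribute names below are illustrative assumptions, not part of EXCALIBUR:

```python
class Agent:
    """Minimal agent interface: goals, sensing, action selection.

    Names and structure are hypothetical; concrete agent types
    (reactive, triggering, deliberative, ...) specialize this.
    """

    def __init__(self, goals):
        self.goals = goals  # e.g. ["stay_alive", "catch_avatar"]

    def sense(self, environment):
        """Read only the observable part of the environment."""
        return {k: environment[k] for k in ("objects", "noises") if k in environment}

    def select_action(self, percept):
        """Return the name of the next action; specialized by subclasses."""
        raise NotImplementedError
```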


Subsections:

Reactive Agents

Triggering Agents

Deliberative Agents

Hybrid Agents

Anytime Agents

Reactive Agents

Reactive agents work in a hard-wired stimulus-response manner. Systems like Joseph Weizenbaum's Eliza [@] and Agre and Chapman's Pengi [@] are examples of this kind of approach. For certain sensor information, a specific action is executed. This can be implemented by simple if-then rules. The agent's goals are only implicitly represented by the rules, and it is hard to ensure the desired behavior. Each and every situation must be considered in advance. For example, a situation in which a helicopter is to follow another helicopter can be realized by corresponding rules. One of the rules might look like this:

IF (leading_helicopter == left) THEN
    Turn_left
ENDIF

But if the programmer fails to foresee all possible events, he may forget an additional rule designed to stop the pursuit if the leading helicopter crashes. Reactive systems in more complex environments often contain hundreds of rules, which makes it very costly to encode these systems and keep track of their behavior.

The nice thing about reactive agents is their ability to react very fast. But their reactive nature deprives them of the possibility of longer-term reasoning. The agent is doomed if only a sequence of actions can cause a desired effect and one of these actions is different from what would normally be executed in the corresponding situation.
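The helicopter rule above amounts to a plain stimulus-response function. A minimal Python sketch, with hypothetical percept keys and action names:

```python
def reactive_action(percept):
    """Hard-wired stimulus-response rules (names are illustrative).

    Each rule maps a sensed condition directly to an action;
    there is no internal state and no look-ahead.
    """
    if percept.get("leading_helicopter") == "left":
        return "turn_left"
    if percept.get("leading_helicopter") == "right":
        return "turn_right"
    # Any situation not covered by a rule (e.g. the leader crashed)
    # falls through to a default -- the weakness discussed above.
    return "fly_straight"
```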

Triggering Agents

Triggering agents extend reactive agents by an internal state, which the rules can both test and change. This makes it possible to trigger different behaviors depending on what has happened before, for example:

IF (discussion_mode) AND (leading_helicopter == left) THEN
    Turn_right
    Trigger_acceleration_mode
ENDIF

Popular Alife agent systems like CyberLife's Creatures [@], PF Magic's Virtual Petz [@] and Brooks' subsumption architecture [@] are examples of this category. Indeed, nearly all of today's computer games apply this approach, using finite state machines to implement it. These agents can react as fast as reactive agents and also have the ability to attain longer-term goals. But they are still based on hard-wired rules and cannot react appropriately to situations that were not foreseen by the programmers or have not been previously learned by the agents (e.g., by neural networks).
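A triggering agent can be sketched as a tiny finite state machine in Python. The mode and action names below are hypothetical, chosen to mirror the helicopter example:

```python
class TriggeringAgent:
    """Reactive rules extended with an internal mode (a small finite state machine).

    The rules both test the mode and trigger new modes, so consecutive
    triggers can chain behaviors toward a longer-term goal.
    """

    def __init__(self):
        self.mode = "follow"  # internal state, changed by the rules themselves

    def next_action(self, percept):
        if self.mode == "follow" and percept.get("leading_helicopter") == "left":
            self.mode = "accelerate"  # trigger a new mode for later decisions
            return "turn_right"
        if self.mode == "accelerate":
            return "speed_up"
        return "fly_straight"
```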

Deliberative Agents

Deliberative agents constitute a fundamentally different approach. The goals and a world model containing information about the application requirements and consequences of actions are represented explicitly. An internal refinement-based planning system (see the section on [Planning]) uses the world model's information to build a plan that achieves the agent's goals. Planning systems are often identified with the agent itself.

Deliberative agents have no problem attaining longer-term goals. Also, the encoding of all the special rules can be dispensed with because the planning system can establish goal-directed action plans on its own. When an agent is called to execute its next action, it applies an internal planning system:

IF (current_plan_is_not_applicable_anymore) THEN
    Recompute_plan
ENDIF
Execute_plan's_next_action
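The replan-on-demand loop above might look as follows in Python. The planner callable and the applicability check are placeholder assumptions, not EXCALIBUR's actual planner:

```python
class DeliberativeAgent:
    """Keeps an explicit plan (a list of action names) and replans when needed."""

    def __init__(self, planner):
        self.planner = planner  # callable: (world_state, goals) -> list of actions
        self.plan = []

    def plan_is_applicable(self, world_state):
        # Placeholder check: a non-empty plan counts as applicable here.
        # A real agent would test action preconditions against its world model.
        return bool(self.plan)

    def next_action(self, world_state, goals):
        if not self.plan_is_applicable(world_state):
            self.plan = self.planner(world_state, goals)  # recompute the whole plan
        return self.plan.pop(0) if self.plan else "idle"
```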

Even unforeseen situations can be handled in an appropriate manner, general reasoning methods being applied. The problem with deliberative agents is their lack of speed. Every time the situation is different from that anticipated by the agent's planning process, the plan must be recomputed. Computing plans can be very time-consuming, and considering real-time requirements in a complex environment is mostly out of the question.

Hybrid Agents

Hybrid agents such as the 3T robot architecture [@], the New Millennium Remote Agent [@] or the characters by Funge et al. [@] apply a traditional off-line deliberative planner for higher-level planning and leave decisions about minor refinement alternatives of single plan steps to a reactive component.

IF (current_plan-step_refinement_is_not_applicable_anymore) THEN
    WHILE (no_plan-step_refinement_is_possible) DO
        Recompute_high-level_plan
    ENDWHILE
    Use_hard-wired_rules_for_plan-step_refinement
ENDIF
Execute_plan-step_refinement's_next_action
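A minimal Python sketch of the two layers, assuming a slow high-level planner and a table of hard-wired refinement rules (all names hypothetical):

```python
class HybridAgent:
    """Off-line high-level planner plus a reactive layer refining each plan step."""

    def __init__(self, high_level_planner, refinement_rules):
        self.planner = high_level_planner  # (state, goals) -> list of abstract steps
        self.rules = refinement_rules      # abstract step -> concrete action
        self.plan = []

    def next_action(self, state, goals):
        if not self.plan:
            self.plan = self.planner(state, goals)  # slow, off-line deliberation
        step = self.plan.pop(0)
        # Fast, hard-wired refinement of the current abstract step.
        return self.rules.get(step, "default_action")
```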

There is a clear boundary between higher-level planning and hard-wired reaction, the latter being fast while the former is still computed off-line. For complex and fast-changing environments like computer games, this approach is not appropriate because the off-line planning is still too slow and would, given enough computation time, come up with plans for situations that have already changed.

Anytime Agents

What we need is a continuous transition from reaction to planning. No matter how much the agent has already computed, there must always be a plan available. This can be achieved by improving the plan iteratively. When an agent is called to execute its next action, it improves its current plan until its computation time limit is reached and then executes the action:

WHILE (computation_time_available) DO
    Improve_current_plan
ENDWHILE
Execute_plan's_next_action

For short-term computation horizons, only very primitive plans (reactions) are available, longer computation times being used to improve and optimize the agent's plan. The more time is available for the agent's computations, the more intelligent the behavior will become. Furthermore, the iterative improvement enables the planning process to easily adapt the plan to changed or unexpected situations. This class of agents is very important for computer-games applications and will constitute the basic technology for EXCALIBUR's agents.
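The anytime loop can be sketched in Python with a wall-clock budget. Here `improve` stands in for any iterative plan-improvement step; it is an assumption for illustration, not EXCALIBUR's actual algorithm:

```python
import time

class AnytimeAgent:
    """Always has some plan; spends whatever time is available improving it."""

    def __init__(self, initial_plan, improve):
        self.plan = initial_plan  # even a primitive plan (a pure reaction) will do
        self.improve = improve    # callable: plan -> (slightly) better plan

    def next_action(self, time_budget_s):
        deadline = time.monotonic() + time_budget_s
        while time.monotonic() < deadline:
            self.plan = self.improve(self.plan)  # iterative plan improvement
        return self.plan[0] if self.plan else "idle"
```

With a zero budget the agent simply reacts with its current (possibly primitive) plan; with more time, the plan gets progressively better, giving the continuous reaction-to-planning transition described above.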

