36 Cards in this Set

  • Front
  • Back

the ability to apply knowledge and reasoning in order to perform well in an environment

Intelligence

the study and construction of agent programs that perform well in a given environment for a given agent architecture

AI

an entity that computes and takes an action in response to percepts from an environment

agent

the property of a system that does the right thing given what it knows and perceives

rationality

An agent that senses only partial information about the state cannot be perfectly rational

False. Perfect rationality refers to the ability of an agent to make good decisions given the sensor information it receives

There exist task environments in which no pure reflex agent can behave rationally


True. A pure reflex agent ignores previous percepts, so it may fail to obtain an optimal state estimate in a partially observable environment


There exists a task environment in which every agent is rational


True. For example, an environment that always gives the same state no matter which action the agent chooses; in it, every agent is rational


The input to an agent program is the same as the input to the agent function


False. The agent function takes the entire percept sequence up to that point as input, whereas the agent program takes the current percept only.
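
A rough sketch of the distinction (illustrative only; choose_action is a hypothetical helper):

    # The agent FUNCTION is an abstract mapping from the entire percept
    # sequence to an action; the agent PROGRAM runs on the architecture
    # and receives only the current percept, keeping any history itself.
    percept_history = []

    def agent_program(percept):
        percept_history.append(percept)        # internal state stands in for the full sequence
        return choose_action(percept_history)  # hypothetical helper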


Every agent function is implementable by some program/machine combination


False. For example, if the agent’s job is to decide the halting problem, then the agent function is not implementable by any program/machine combination.


Using only four colors, you have to color a planar map in such a way that no two adjacent regions have the same color


- Initial state: No regions colored.


- Agent actions: Assign a color to an uncolored region.


- Transition model: the resulting colored map after a color is assigned to a region.


- Goal test: All regions colored, no two adjacent regions have the same color, and the total number of colors is at most 4.
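
The formulation above can be sketched in Python (a minimal sketch; the region map and names are hypothetical):

    COLORS = ["red", "green", "blue", "yellow"]

    # Hypothetical planar map: region -> set of adjacent regions
    ADJACENT = {
        "A": {"B", "C"},
        "B": {"A", "C", "D"},
        "C": {"A", "B", "D"},
        "D": {"B", "C"},
    }

    def actions(state):
        # Assign a color to one uncolored region (state: region -> color)
        uncolored = [r for r in ADJACENT if r not in state]
        return [(uncolored[0], c) for c in COLORS] if uncolored else []

    def result(state, action):
        # Transition model: the colored map after one assignment
        region, color = action
        return {**state, region: color}

    def goal_test(state):
        # All regions colored and no two adjacent regions share a color
        return len(state) == len(ADJACENT) and all(
            state[r] != state[n] for r in state for n in ADJACENT[r]
        )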


directed graph whose nodes are the set of all states and whose arcs are actions that transform one state into another

State space

a tree in which the root node is the initial state and the set of children for each node consists of those states reachable from the current state by taking any action

search tree

specifies how the environment transitions from one state to the next through agent actions

transition model

branching factor

number of actions available to the agent

depth first search always expands at least as many nodes as A* search with an admissible heuristic

False. Depth-first search can stumble on a goal early and expand fewer nodes than A*

A* is of no use in robotics because percepts, states, and actions are continuous

False: the continuous spaces can be discretized



breadth first search is complete even if zero step costs are allowed

True. Depth of solution matters for breadth-first search, not cost

is h = |u - x| + |v - y| an admissible heuristic for a state at (u, v) when the goal is at (x, y) on a grid?

Yes; this is the Manhattan distance, which never overestimates the number of grid moves to the goal.

Does Manhattan distance remain admissible if some links are removed?

Yes; removing links can only lengthen the true shortest paths, so h is still a lower bound
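
As a quick illustration (not part of the original cards), the heuristic in Python, assuming the goal sits at (x, y) on a 4-connected grid:

    def manhattan(u, v, x, y):
        # Each grid move changes one coordinate by 1, so the true
        # cost to the goal is at least this estimate (admissible).
        return abs(u - x) + abs(v - y)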

PEAS stands for

Performance measure, Environment, Actuators, Sensors
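
For instance, a PEAS description for the classic automated-taxi example might look like this (illustrative values):

    taxi_peas = {
        "performance": ["safe", "fast", "legal", "comfortable trip"],
        "environment": ["roads", "other traffic", "pedestrians", "customers"],
        "actuators": ["steering", "accelerator", "brake", "horn", "display"],
        "sensors": ["cameras", "sonar", "speedometer", "GPS", "odometer"],
    }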

each state of the world is indivisible and has no internal structure

Atomic representation

each state is split into a fixed set of variables or attributes, each of which has a value. Constraint satisfaction algorithms are based on factored representations

factored representation

objects and their relationships can be described explicitly. It underlies first-order logic, natural language understanding, etc.

structured representation
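
One way to contrast the three representations (an illustrative sketch; the state contents are made up):

    # Atomic: the state is an indivisible label
    atomic = "Bucharest"

    # Factored: a fixed set of variables, each with a value
    factored = {"city": "Bucharest", "fuel": 0.4, "raining": False}

    # Structured: objects and their relations described explicitly
    structured = {
        "objects": ["truck1", "Bucharest", "Pitesti"],
        "relations": [("at", "truck1", "Bucharest"),
                      ("road", "Bucharest", "Pitesti")],
    }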

Select actions on the basis of the current percept, ignoring percept history

Reflex agent

Agent needs goal information that describes situations that are desirable

Goal based agent

Agent should keep track of the part of the world it can’t see now. Maintain internal state that depends on percept history. Requires two kinds of knowledge: info about how the world evolves independently of the agent and info about how the agent’s own actions affect the world

Model based agent

A utility function maps a state onto a real number which describes the associated degree of happiness

Utility based agent
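
A minimal sketch contrasting two of these designs (the rule tables and update function are hypothetical):

    def simple_reflex_agent(percept, rules):
        # Acts on the current percept alone, ignoring history
        return rules.get(percept, "noop")

    class ModelBasedAgent:
        # Keeps internal state that summarizes the percept history
        def __init__(self, update_state, rules):
            self.state = None
            self.update_state = update_state  # how the world evolves and how actions affect it
            self.rules = rules

        def act(self, percept):
            self.state = self.update_state(self.state, percept)
            return self.rules.get(self.state, "noop")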

If a reachable goal state exists, the algorithm will always return a solution

Complete

Returns the goal state with the lowest path cost when there are multiple goal states

Optimal

Number of nodes generated

time complexity

max number of nodes in memory

Space complexity

the number of children at each node, the outdegree

branching factor

BFS time complexity

O(b^d), where b is the branching factor and d is the depth of the shallowest goal

BFS Space complexity

O(b^d)

DFS Time complexity

O(b^m), where m is the maximum depth of the search tree

DFS Space complexity

O(bm)
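
A minimal breadth-first search sketch (assuming a successors(state) function) showing where the O(b^d) space goes: the FIFO frontier holds an entire level of the tree at once.

    from collections import deque

    def bfs(start, goal_test, successors):
        frontier = deque([start])  # FIFO queue: shallowest nodes first
        explored = {start}
        while frontier:
            state = frontier.popleft()
            if goal_test(state):
                return state
            for child in successors(state):
                if child not in explored:
                    explored.add(child)
                    frontier.append(child)
        return None                # no goal reachable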