70 Cards in this Set

  • Front
  • Back
Definitions of AI
The synthesis and analysis of computational agents that act intelligently
Common sense reasoning
The computer needs to make common-sense conclusions about the unstated assumptions
Optimal Solution
The best possible solution according to some measure of solution quality
Satisfying Solution
A solution that is good enough according to some measure of an adequate solution
Approximately Optimal Solution
A solution whose measure of quality is close to that of the best possible solution
Probable Solution
A solution that is likely, though not guaranteed, to be an actual solution
Anytime Algorithm
Can provide a solution at any time; given more time, it can produce better solutions
Symbol
a meaningful physical pattern that can be manipulated
Symbol System
creates, copies, modifies and destroys symbols
Knowledge Level
What the agent knows and what its goals are
Symbol Level
a level of description of an agent in terms of what reasoning it is doing
Offline Computation
Computation done by the agent before it has to act
Online Computation
Computation done by an agent in between receiving information and acting
Agent
Something that acts in its environment
Purposive Agent
An agent that prefers some states of the world to others
Embodied Agent
an agent with a body
Robot
An artificial purposive embodied agent
Agent System
An agent and its environment
Controller
the brains of the agent
Percept trace
all past, present, and future percepts received by the controller
Command trace
all past, present, and future commands output by the controller
Transduction
Function from percept traces into command traces
AI is Intelligent if:
The actions are appropriate for the goal and circumstances

Can adapt to changing environments and goals

Learns from past experiences

Can make reasonable choices even when limited by its computations and perceptions.
weak AI
The weak AI hypothesis states that a machine running a program is at most capable of simulating real human behavior and consciousness
Strong AI
Strong AI claims that the correctly written program running on a machine actually is a mind. That is, there is no essential difference between a (yet to be written) piece of software exactly emulating the actions of the brain, and the actions of a human being, including their understanding and consciousness.
Turing test/Chinese Room
If a machine can do something that seems intelligent but doesn't know what it is doing, it is considered weak AI. The classical example is the Chinese room.
The Chinese room is a thought experiment: a person is put in a room with a book; they receive notes, look up the symbols, and copy the response from the book. Because the book gives the right answers, the person on the outside thinks the person on the inside is very knowledgeable about the topic and can't tell that the person has no knowledge of the language or the material.
Dimensions of complexity
Dimensions of complexity define the design space for creating intelligent agents
there are nine dimensions commonly used:
modularity
representation scheme
planning horizon
uncertainty (sensing uncertainty and effect uncertainty)
preference
number of agents
learning
computational limits
the dimensions also interact with one another
Representation
the representation scheme dimension concerns how the world is described
Representation
the different arrangements of the world that affect how the agents in it act are called states

a state can be described in terms of features, where a feature has a value in each state.
primary functions built into lisp
cdr
append
cons
car
list-length (length)
defun
write a simple lisp function like goal-state
(defun goal-state (state)
  (equal state '(1 2 3 8 e 4 7 6 5)))
heuristic
this is a technique designed for solving a problem more quickly when classic methods like exhaustive search are too slow or impractical
strategy used to find the goal
this is what distinguishes the different search methods
Blind versus informed/heuristic
in uninformed or blind search we have no idea what number of steps it will take to get to the goal state
breadth-first
Using a list as a queue, it takes the start node and expands its children, then checks whether one of the children is a goal state. If not, it adds them to the end of the list and expands the next level of child nodes, recursively searching through the entire tree level by level if necessary.
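The card above can be sketched in code. This is a minimal Python illustration (not from the course materials); the graph is assumed to be given as a `neighbors` callback, and tracking whole paths in the queue is one of several possible representations:

```python
from collections import deque

def bfs(start, goal, neighbors):
    """Breadth-first search: expand paths in FIFO order, level by level."""
    frontier = deque([[start]])          # queue of paths; oldest first
    visited = {start}
    while frontier:
        path = frontier.popleft()        # take the oldest path
        node = path[-1]
        if node == goal:
            return path                  # first goal found is a shallowest one
        for nxt in neighbors(node):
            if nxt not in visited:
                visited.add(nxt)
                frontier.append(path + [nxt])
    return None                          # frontier exhausted, no goal
```

Because new paths go on the end of the queue, all paths of length n are checked before any path of length n+1.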
depth first
Takes the start node and expands its children, tacking them onto the front of the list; then it takes the first child and checks whether it is the solution. If it isn't, it expands that node and tacks the results onto the front of the list. It does this recursively until the goal is found or a branch is expanded to a dead end. If it finds a dead end with no solution, it just recurses back out and continues to process the remainder of the list.

In the case of the 8-puzzle search, this search never stops expanding the first node of each level unless the depth bound is used (see e & could be applied to m)
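A minimal Python sketch of the above, treating the list as a stack; the `neighbors` callback and the optional `depth_bound` parameter (the bound mentioned for the 8-puzzle case) are illustrative assumptions:

```python
def dfs(start, goal, neighbors, depth_bound=None):
    """Depth-first search with an optional depth bound for infinite trees."""
    frontier = [[start]]                 # stack of paths; newest on top
    while frontier:
        path = frontier.pop()            # take the newest path
        node = path[-1]
        if node == goal:
            return path
        if depth_bound is not None and len(path) > depth_bound:
            continue                     # prune: back out of this branch
        for nxt in reversed(neighbors(node)):
            if nxt not in path:          # avoid cycling within one path
                frontier.append(path + [nxt])
    return None
```

Without the bound the search can descend one branch forever; with it, a branch is abandoned once it exceeds the given depth.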
iterative deepening
this is a graph search algorithm that is best described as a depth-first search with an added depth limit: starting at n = 0, if it doesn't find the answer at that depth it increases n until a solution is found.
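The n = 0, 1, 2, ... loop described above can be sketched as follows (a Python illustration; the recursive helper and `max_depth` cap are assumptions for the sketch):

```python
def depth_limited(path, goal, neighbors, limit):
    """Depth-first search cut off after `limit` more arcs."""
    node = path[-1]
    if node == goal:
        return path
    if limit == 0:
        return None                      # hit the bound: back up
    for nxt in neighbors(node):
        if nxt not in path:              # avoid cycles within one path
            found = depth_limited(path + [nxt], goal, neighbors, limit - 1)
            if found:
                return found
    return None

def iterative_deepening(start, goal, neighbors, max_depth=50):
    """Run depth-bounded DFS with n = 0, 1, 2, ... until a goal is found."""
    for n in range(max_depth + 1):
        found = depth_limited([start], goal, neighbors, n)
        if found:
            return found
    return None
```

It re-expands shallow nodes on every iteration, but finds a shallowest goal like breadth-first search while using only depth-first (linear) space.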
informed or heuristic search
uses problem-specific heuristics to improve efficiency
A*
Uses path cost plus a heuristic to estimate the total cost to go from start to goal through each node

It is a mix of lowest-cost-first and best-first search algorithms

Always selects the node on the frontier with the lowest estimated total cost from start to goal (cost so far plus heuristic)
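A minimal Python sketch of A* as described above, ordering the frontier by f(n) = g(n) + h(n); the `neighbors` callback returning (node, arc cost) pairs is an assumed representation:

```python
import heapq

def a_star(start, goal, neighbors, h):
    """A*: pop the frontier node with the lowest g(n) + h(n)."""
    frontier = [(h(start), 0, start, [start])]   # (f, g, node, path)
    best_g = {start: 0}                          # cheapest known cost to each node
    while frontier:
        f, g, node, path = heapq.heappop(frontier)
        if node == goal:
            return path, g
        for nxt, cost in neighbors(node):
            g2 = g + cost
            if g2 < best_g.get(nxt, float('inf')):
                best_g[nxt] = g2
                heapq.heappush(frontier, (g2 + h(nxt), g2, nxt, path + [nxt]))
    return None, float('inf')
```

With an admissible (optimistic) heuristic, the first goal popped from the frontier is on an optimal path.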
greedy search
search using an algorithm that follows the problem-solving heuristic of making the locally optimal choice at each stage, in the hope of finding a global optimum. This usually does not find the global optimum, but it does give an approximation of the globally optimal solution within a reasonable time limit
Lowest cost
Finds the lowest-cost path from the start node to the goal state. The frontier is a priority queue ordered by the path cost from the start
Depth-first fixed depth - it gives the max depth
this is a combination of the blind depth-first search algorithm and a bound that stops the search from descending forever or finding an unoptimized solution; the depth bound only lets the search go down any one branch for a fixed number of levels
Best-first
Selects the path whose end appears closest to a goal state with regard to the given heuristic

Treats the frontier as a priority queue ordered by h.
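The greedy and best-first cards above can be sketched together: order the frontier only by h, ignoring path cost. A minimal Python illustration, with `neighbors` and `h` as assumed callbacks:

```python
import heapq

def greedy_best_first(start, goal, neighbors, h):
    """Best-first search: frontier is a priority queue ordered by h alone."""
    frontier = [(h(start), start, [start])]      # (h, node, path)
    visited = {start}
    while frontier:
        _, node, path = heapq.heappop(frontier)
        if node == goal:
            return path                          # may be suboptimal
        for nxt in neighbors(node):
            if nxt not in visited:
                visited.add(nxt)
                heapq.heappush(frontier, (h(nxt), nxt, path + [nxt]))
    return None
```

Because accumulated cost is ignored, the result can be worse than the optimum; the only change relative to A* is dropping the g term from the priority.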
RBFS - Recursive Best First Search
Recursively expands what it thinks to be the best child node of current node.

Unwinds if a child node turns out to be worse than its parent.

Achieves some of the advantages of best-first search with linear space instead of exponential. Similar to the difference between BFS and ID.
Minimax
Full information (both players know each other's possible moves)

Analyzing a tree, max will take the max value of its children nodes while min will take the minimum value of its children nodes.

Both min and max try to select the best values for themselves, so max's outcome is the best value it can guarantee against an optimally playing min.
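The max-of-mins recursion above fits in a few lines. A Python sketch, with the game tree given via illustrative `children` and `value` callbacks:

```python
def minimax(node, maximizing, children, value):
    """Minimax: MAX takes the max of its children, MIN takes the min."""
    kids = children(node)
    if not kids:                         # leaf: return its static value
        return value(node)
    scores = [minimax(k, not maximizing, children, value) for k in kids]
    return max(scores) if maximizing else min(scores)
```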
Alpha-beta pruning
Alpha–beta pruning is a search algorithm that seeks to decrease the number of nodes evaluated by the minimax algorithm in its search tree. Commonly used in two-player games, the algorithm stops evaluating a move completely as soon as at least one possibility has been found that proves the move to be worse than a previously examined move.
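The cutoff described above can be added to the minimax recursion by threading the bounds alpha (best value MAX can guarantee so far) and beta (best for MIN). A Python sketch with the same assumed `children`/`value` callbacks:

```python
def alphabeta(node, maximizing, children, value,
              alpha=float('-inf'), beta=float('inf')):
    """Minimax with alpha-beta pruning: skip children that cannot matter."""
    kids = children(node)
    if not kids:
        return value(node)
    if maximizing:
        best = float('-inf')
        for k in kids:
            best = max(best, alphabeta(k, False, children, value, alpha, beta))
            alpha = max(alpha, best)
            if alpha >= beta:
                break                    # beta cutoff: MIN would never allow this
        return best
    best = float('inf')
    for k in kids:
        best = min(best, alphabeta(k, True, children, value, alpha, beta))
        beta = min(beta, best)
        if alpha >= beta:
            break                        # alpha cutoff: MAX already has better
    return best
```

It returns the same value as plain minimax while evaluating fewer nodes.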
Metrics/evaluation
Breadth-first - n^depth

Depth-first - (n + m)

Iterative deepening - (branching factor * depth)

A* - n

Greedy -
pros and cons of Breadth first
If the goal state is near the root it will be found relatively quickly; however, if it is at the bottom of the tree it will take an extremely long time, because most or all of the nodes will have to be evaluated. Also, if it never finds a goal it could keep searching forever.
pros and cons of Depth first
If the goal state is on the first side of the tree explored, depth-first can be fairly efficient. However, if it is on the opposite side of the tree, it will take an exorbitant amount of time. Also, if there is no end to the tree it could go on forever, continuously deeper.
pros and cons of Iterative deepening
Since it searches depth by depth, it will always find a shallowest path to a goal state. However, if there is no goal state it will keep searching deeper forever.
pros and cons of a*
As long as the branching factor is finite and the heuristic is admissible, A* will always find an optimal path to the goal state. It halts because the costs of the paths on the frontier continually increase.
pros and cons of lowest cost first
It will find the lowest cost solution from start to goal, however the cost of the algorithm is exponential since it expands all paths that have a lower cost than the cost of the solution.
Admissible heuristic
an optimistic heuristic: one that never overestimates the actual cost of reaching the goal
heuristically adequate
finds the closest value to the actual value of the heuristic
reasoning processes actually gone through in solving a problem
these are expressible in the language
Graph
a set of nodes together with a set of arcs connecting them
Path
a sequence of nodes connected by arcs
Frontier
the nodes that have been generated ('opened') but not yet expanded
worlds
a possible world is a possible way the states could be arranged

The possible ways the problem could be constructed

a possible world corresponds to a total assignment of values to each variable
models
feature models were first introduced in 1990; this kind of model defines features and their dependencies

a world formed by the constraints

possible world that satisfies all of the constraints
hard vs soft constraints
hard constraints are constraints that must be satisfied to solve the problem

non-mandatory constraints would be called soft constraints.

Preferences over soft constraints can be used to optimize the solution
domain consistent
this is a constraining method that propagates the constraints of previously visited variables, this can strengthen the constraints in the current model or create new ones.

a variable is domain consistent if no value of the domain of the node is ruled impossible by any of the constraints
arc consistent
an arc is arc consistent if for each value of a variable in its domain there is some value in the co-domain such that they abide by the rules.

if an arc is not consistent, all values of the variable in the domain for which there is no supporting value in the co-domain can be deleted from the domain to make the arc consistent.

a network is arc consistent if all its arcs are consistent.
constraint networks
a constraint network is defined by a graph, with one node for every variable, one node for every constraint, and undirected edges running between variable nodes and constraint nodes whenever a given variable is involved in a given constraint.
Constraint Satisfaction as search (including local search)
iteratively improving an assignment of the variables until all constraints are satisfied. there are two prevalent algorithms for using constraint satisfaction as a search method: greedy algorithms and random walk.
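The greedy/random-walk idea above is commonly sketched as a min-conflicts loop. A minimal Python illustration (the `conflicts` callback, which counts violated constraints for a candidate value, is an assumption of this sketch):

```python
import random

def min_conflicts(variables, domains, conflicts, max_steps=1000, seed=0):
    """Local search for a CSP: start from a random total assignment, then
    repeatedly reassign some conflicted variable to a least-conflicting value."""
    rng = random.Random(seed)
    assign = {v: rng.choice(domains[v]) for v in variables}
    for _ in range(max_steps):
        bad = [v for v in variables if conflicts(v, assign[v], assign) > 0]
        if not bad:
            return assign                # all constraints satisfied
        var = rng.choice(bad)            # random walk over conflicted variables
        assign[var] = min(domains[var],  # greedy choice of value
                          key=lambda val: conflicts(var, val, assign))
    return None                          # gave up within the step budget
```

Unlike the systematic searches above, this can fail to terminate on unsatisfiable problems, so a step budget is used.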
Generalized arc consistency algorithm
generalized arc consistency makes the entire network arc consistent by considering a set of potentially inconsistent arcs and making each one consistent by pruning the domain of its variable x.
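For the binary-constraint case, the algorithm above can be sketched in Python; representing each arc (X, Y) as a key mapping to a predicate on (x, y) is an assumed encoding for this illustration:

```python
from collections import deque

def gac(domains, constraints):
    """Arc consistency for binary constraints: prune values of X with no
    supporting value of Y, re-queueing arcs that point at a pruned variable."""
    queue = deque(constraints)                   # all potentially inconsistent arcs
    while queue:
        x, y = queue.popleft()
        pred = constraints[(x, y)]
        pruned = [v for v in domains[x]
                  if not any(pred(v, w) for w in domains[y])]
        if pruned:
            domains[x] = [v for v in domains[x] if v not in pruned]
            # x's domain changed: recheck every arc whose co-domain is x
            queue.extend(arc for arc in constraints if arc[1] == x)
    return domains
```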
Domain splitting
the idea is to break down the problem into a number of disjoint cases and solve each case separately. There are two approaches to this algorithm: one is to make more progress with one of the splits; the other is to allow for more pruning with fewer steps.

the example that comes to mind is our 8-puzzle: check whether, when the empty spot is in the middle, each of the quadrants has the correct number of tiles in it.

recursively solving cases using domain splitting, recognizing when there is no solution based on the assignments.
Variable elimination
eliminate the variables one-by-one passing their constraints to their neighbors.

select a variable X and return its constraints forming a list of constraints.

Project the list of constraints onto the other variables forming another list of constraints.

Replace all constraints in which X appears with the second list.

solve the simplified problem recursively and store the constraints in a third list

join the first list of constraints with the third list.
Simpler answer
simplifies the network by removing variables

remove variables one by one

Creates new constraints that capture the effect of some of the variable's constraints, while removing all the constraints related to that variable

Creates a simpler sub-network that can be used for finding a solution to the original network