81 Cards in this Set

  • Front
  • Back

A single state problem is... the agent... the solution is...

Deterministic and fully observable. The agent knows exactly what state it will be in, and the solution is a sequence of actions.

A sensorless problem is... the agent... the solution...

Non-observable. The agent may have no idea where it is; the solution is a sequence of actions.

Contingency problem is...

Non-deterministic and/or partially observable. Percepts provide NEW information about the current state. Often interleaves search and execution.

An exploration problem is...

An unknown state space

What 4 things are needed to define a problem?

Initial state


Actions


Goal state


Path cost

Give an example of an implicit goal state and an explicit goal state.

Implicit = checkmate


Explicit = a particular node on a graph

What information do basic tree nodes contain?

State


Parent node


Action


Path cost g(x)


Depth
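
As a concrete illustration (my own sketch, not from the cards), a search-tree node carrying exactly these fields could be written in Python as follows; the class and helper names are hypothetical.

    from dataclasses import dataclass
    from typing import Any, Optional

    @dataclass
    class Node:
        state: Any                       # the problem state this node represents
        parent: Optional["Node"] = None  # node that generated this one
        action: Any = None               # action applied to the parent to reach this state
        path_cost: float = 0.0           # g(x): cost of the path from the root to this node
        depth: int = 0                   # number of steps from the root

        def child(self, state, action, step_cost):
            # Hypothetical helper: build a successor node one level deeper.
            return Node(state, self, action, self.path_cost + step_cost, self.depth + 1)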

What 4 things are strategies evaluated on?

Completeness


Time complexity


Space complexity


Optimality

What is completeness

Does it find A solution, if one exists

What is time complexity

Number of nodes GENERATED (not expanded)

What is space complexity

Maximum number of nodes in memory

What is optimality

Does it always find the least-cost/best solution

What is the variable for max branching factor

b

What is the variable for depth of the least cost solution

d

What is the variable for maximum depth of the state space

m

Define an uninformed search strategy and give examples

Only uses the information available in the problem definition. E.g. breadth-first, depth-first, depth-limited, iterative deepening.

Is breadth first complete

Yes, if b is finite

What is the time complexity of breadth first

O(b^(d+1))



Think: the maximum number of nodes generated is the branching factor raised to the deepest level generated, which is one level below the solution.
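
A rough worked example (my own numbers, assuming b = 10 and d = 2) of where the O(b^(d+1)) bound comes from:

    b, d = 10, 2
    # Breadth-first generates every node down to one level below the solution depth.
    generated = sum(b**k for k in range(1, d + 2))  # b + b^2 + ... + b^(d+1)
    print(generated)  # 1110, dominated by the last term b^(d+1) = 1000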

Is breadth first optimal

Yes (if cost is the same for all steps)

What is the space complexity of breadth first

O(b^(d+1))



Think: it keeps every node in memory, so space is the same as time complexity.

Is depth first complete

Yes in finite spaces. No in infinite spaces (which are quite common) or spaces with loops.

What is the time complexity of depth first

O(b^(m))



This is terrible if m is much larger than d, but can be faster than breadth-first if solutions are dense.



Think: imagine it's the last path checked.

What is depth first space complexity

O(bm) linear! Good!



Think: only one branch is stored in memory, since once the bottom of a branch is reached it can be discarded.
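
A minimal recursive sketch (my own code; successors() and is_goal() are hypothetical functions supplied by the caller) that makes the linear space visible: only the current path, at most m states long, is held at any moment.

    def depth_first(state, successors, is_goal, path=None):
        # Only the current path is kept; abandoned branches are garbage-collected.
        path = (path or []) + [state]
        if is_goal(state):
            return path
        for next_state in successors(state):
            if next_state not in path:          # avoid looping back along the current path
                result = depth_first(next_state, successors, is_goal, path)
                if result is not None:
                    return result
        return None                             # dead end: discard this branch and back up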

Is depth first optimal

No (the optimal solution may cost less than the first one found)

What is the main advantage of depth limited

Solves the issues around infinite m

What's a disadvantage of depth limited

May miss the solution

What's the main advantages of iterative deepening

Fixes issues around infinite m AND is complete

What's an issue with iterative deepening

Nodes have to be regenerated on each restart. However, this is actually not that big an issue (see the worked example below).
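
A rough calculation (my own numbers, assuming b = 10 and d = 5) of why the restarts cost so little: the work is dominated by the final iteration.

    b, d = 10, 5
    # Iterative deepening regenerates shallow levels: the root d+1 times, level 1 d times, ...
    ids_nodes = sum((d + 1 - k) * b**k for k in range(d + 1))
    bfs_nodes = sum(b**k for k in range(1, d + 1))
    print(ids_nodes, bfs_nodes)  # 123456 vs 111110 -- only about 11% more generations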

Is iterative deepening complete

Yes

Is iterative deepening optimal

Yes (if step cost is constant)

What is the time complexity of iterative deepening

O(b^d)



Think: same as depth first but with d instead of m therefore better

What is the space complexity of iterative deepening

O(bd) same as depth first

What issue does graph search avoid/fix

Repeated states

What is the space complexity of graph search

Linear

Why do bi directional search?

The time complexity is lower:


b^(d/2) + b^(d/2) is less than b^d
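
For scale (my own numbers, assuming b = 10 and d = 6):

    b, d = 10, 6
    print(2 * b**(d // 2), b**d)  # 2000 vs 1000000: two half-depth searches are far cheaper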

How do you know a solution has been found using bi directional search?

Before expanding a node, check if it is in the fringe/open list of the other search. If yes, a solution has been found.

How do you do bi directional search if there are many goal states to work back from?

Add a dummy node that all goal states lead to and start from there

Describe best first search

Use an evaluation function f(n) for each node and order the fringe based on that.

Describe greedy best first search

Let h(n) be the estimated cost from the node to the goal, e.g. based on straight-line distance. GBFS therefore expands the node that seems closest to the goal.

Is gbfs complete

No. (Can get stuck in loops like depth first if the heuristic is bad)

What is gbfs time complexity

O(b^m), but a good heuristic can improve it a lot.

What is gbfs space complexity

O(b^m) keeps all nodes therefore same as time complexity

Is gbfs optimal

No

Describe A* search

Avoids paths that are already expensive. Uses the evaluation function f(n) = g(n) + h(n)


Where g(n) is the cost so far to reach n


And h(n) is the estimated cost from n to the goal


So f(n) is the estimated total cost of the cheapest path through n
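
A minimal sketch of the idea (my own code; successors(state) yielding (action, next_state, step_cost) tuples, h(state), and is_goal(state) are hypothetical functions supplied by the caller, and states are assumed hashable). The fringe is a priority queue ordered by f(n) = g(n) + h(n).

    import heapq
    from itertools import count

    def a_star(start, successors, h, is_goal):
        tie = count()                                         # tie-breaker so the heap never compares states
        fringe = [(h(start), next(tie), 0, start, [start])]   # entries are (f, tie, g, state, path)
        best_g = {start: 0}
        while fringe:
            f, _, g, state, path = heapq.heappop(fringe)
            if is_goal(state):
                return path, g
            for action, nxt, step_cost in successors(state):
                g2 = g + step_cost
                if g2 < best_g.get(nxt, float("inf")):        # graph-search style duplicate handling
                    best_g[nxt] = g2
                    heapq.heappush(fringe, (g2 + h(nxt), next(tie), g2, nxt, path + [nxt]))
        return None, float("inf")                             # fringe exhausted: no solution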

What makes a heuristic admissible

For every node n, h(n) <= h*(n), where h*(n) is the TRUE cost of reaching the goal from n.



Aka an admissible heuristic NEVER overestimates; it is optimistic.

Go look at the proof that A* is optimal when its heuristic is admissible

Do you get it?!??

What makes a heuristic consistent/ monotonic

If for every node n and every successor n' of n generated by any action a:



h(n) <= c(n,a,n') + h(n')



I.e. f(n) is non-decreasing along any path.
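
One line of algebra (using the definitions above) shows why consistency makes f non-decreasing along a path:

f(n') = g(n') + h(n') = g(n) + c(n,a,n') + h(n') >= g(n) + h(n) = f(n)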

Is A* complete

Yes, unless there are infinitely many nodes with cost lower than that of the solution.

Is A* optimal

Yes (if heuristic is admissible in tree search or consistent for graph search)

What is the time complexity of A*

Exponential in length of optimal solution

What is the space complexity of A*

Keeps all generated nodes in memory, so space is also exponential in the worst case.

Describe how one heuristic dominates another

If, for every node, one heuristic's estimate is at least as large as another's while still being admissible, it dominates the other.

Describe what a local search algorithm is

Keeps a single current state and tries to improve it. The state space is a set of complete configurations and the goal is to satisfy some constraint, e.g. n queens.



Only the goal matters, not the steps to get there.

Describe hill climbing search

Keeps moving to a better neighbouring state while one exists. When no neighbour is better, it stops and returns the current state.

What is the issue with hill climbing

Gets stuck in local maxima

Describe simulated annealing search

Hill climbing but allows some bad moves. Gradually decreases the likelihood of making bad moves.
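
A minimal sketch (my own code; neighbours(state), returning a non-empty list, and value(state) are hypothetical caller-supplied functions, higher value = better) showing the two ingredients: bad moves accepted with probability exp(delta/T), and a temperature T that decays towards 0. With T held near 0 this reduces to plain hill climbing.

    import math, random

    def simulated_annealing(start, neighbours, value, t0=1.0, decay=0.995, steps=10_000):
        current, t = start, t0
        for _ in range(steps):
            t *= decay                                    # assumed geometric cooling schedule
            candidate = random.choice(neighbours(current))
            delta = value(candidate) - value(current)
            # Always accept improvements; accept bad moves with probability exp(delta / T).
            if delta > 0 or random.random() < math.exp(delta / t):
                current = candidate
        return current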

Is simulated annealing optimal

Almost. If T decreases slowly enough, the probability of finding the optimal solution approaches 1.

Describe local beam search

Keeps track of k states


Start with k randomly generated states


At each iteration, all successors of all k states are generated


If any one is a goal state, stop. Else select the k best successors and repeat.

Disadvantage of local beam search

Can become focused in a small region of the state space

Describe genetic algorithms

A successor is generated by combining two parent states


The population is k randomly generated states


Produce a new generation by selection, crossover, and mutation.

What is a constraint satisfaction problem

A problem where the state is defined by variables with values from a domain.



The goal is a set of constraints specifying allowed combinations of values for subsets of variables
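
As a concrete illustration (my own encoding of the Australia map-colouring example that the later cards refer to with SA, WA, etc.), a CSP can be written down as plain Python data plus a consistency check:

    # Variables, domains, and binary constraints for map colouring.
    variables = ["WA", "NT", "SA", "Q", "NSW", "V", "T"]
    domains = {v: {"red", "green", "blue"} for v in variables}
    # Each constraint is a pair of adjacent regions that must receive different colours.
    constraints = [("WA", "NT"), ("WA", "SA"), ("NT", "SA"), ("NT", "Q"),
                   ("SA", "Q"), ("SA", "NSW"), ("SA", "V"), ("Q", "NSW"), ("NSW", "V")]

    def consistent(assignment):
        # A (possibly partial) assignment is consistent if no constraint is violated.
        return all(assignment[a] != assignment[b]
                   for a, b in constraints if a in assignment and b in assignment)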

What type of problem is map colouring

CSP constraint satisfaction

Varieties of CSP include:

Discrete variables:


Finite domains e.g. SAT


Infinite domains e.g. job scheduling


Continuous variables:


E.g. start/end times for telescope observations

What are unary constraints

Unary constraints involve a single variable, e.g. SA = green

What are binary constraints

Binary constraints involve pairs of variables, e.g. SA ≠ WA

What are higher order constraints

Higher order constraints involve 3 or more variables e.g. cryptarithmetic column constraints

4 examples of real world CSP

Assignment problems (who teaches what class)


Timetabling problems (which class is offered where and when)


Transportation scheduling


Factory scheduling

Describe backtracking search

Depth first search for CSP with single variable assignments

How does backtracking search work

Works down the tree, assigning a value to one variable at a time, until it reaches a dead end (a variable with no legal value). It then backs up one step and tries a different value, continuing until a solution is found (see the sketch below).
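
A minimal sketch of that behaviour (my own code, reusing the variables, domains, and consistent() from the earlier map-colouring sketch):

    def backtracking(assignment, variables, domains):
        if len(assignment) == len(variables):
            return assignment                         # every variable assigned: solution found
        var = next(v for v in variables if v not in assignment)   # pick an unassigned variable
        for value in domains[var]:
            assignment[var] = value
            if consistent(assignment):                # only continue down consistent branches
                result = backtracking(assignment, variables, domains)
                if result is not None:
                    return result
            del assignment[var]                       # dead end: undo and try the next value
        return None                                   # no value worked: back up one level

    # e.g. backtracking({}, variables, domains)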

Explain the difference between backtracking and depth first search

Backtracking applies one action to produce one successor node and continues down; only later, if it has to revisit that node, does it check whether there were other successors.



DFS, in contrast, expands all successors of a node at once and keeps them all in the fringe.

Name the general-purpose methods that help choose which variable to assign next and which value to try

Most constrained variable


Most constraining variable


Least constraining value

Describe most constrained variable

The variable with the fewest legal values. This is the minimum remaining values (MRV) heuristic.

Describe the most constraining variable

The variable that constrains the most other variables.

Describe the least constraining value

The value that constrains other variables the least

Describe the concept of forward checking

Forward checking keeps track of the remaining legal values for unassigned variables and terminates search down that branch when any variable has no legal values left.
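
A sketch of the bookkeeping (my own code, again assuming the binary constraints list from the map-colouring sketch; in practice you would prune a copy of the domains so values can be restored on backtracking):

    def forward_check(var, value, domains, constraints):
        # After assigning var = value, remove that value from the domains of connected variables.
        for a, b in constraints:
            if var in (a, b):
                other = b if var == a else a
                domains[other].discard(value)
                if not domains[other]:
                    return False      # some variable has no legal values left: prune this branch
        return True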

Describe how you would apply local searches e.g. Hill climbing or simulated annealing to a CSP

You would need to allow states to violate constraints


And have operators reassign variable values



Therefore randomly select any conflicted variable, then choose its value using the min-conflicts heuristic (the value that violates the fewest constraints).

Describe iterative min conflicts

While not at a solution:


Pick a conflicted variable and change its value to the one that causes the fewest conflicts



Works well for n queens!
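
A minimal sketch (my own code; assumes the variables/domains/constraints shape from the map-colouring sketch and starts from a complete random assignment):

    import random

    def n_conflicts(var, value, assignment, constraints):
        # How many constraints var = value would violate against the current assignment.
        return sum(1 for a, b in constraints
                   if var in (a, b) and assignment.get(b if var == a else a) == value)

    def min_conflicts(variables, domains, constraints, max_steps=10_000):
        assignment = {v: random.choice(sorted(domains[v])) for v in variables}
        for _ in range(max_steps):
            conflicted = [v for v in variables
                          if n_conflicts(v, assignment[v], assignment, constraints) > 0]
            if not conflicted:
                return assignment                     # no violated constraints: solution found
            var = random.choice(conflicted)           # pick any conflicted variable...
            assignment[var] = min(domains[var],       # ...and give it the least-conflicting value
                                  key=lambda val: n_conflicts(var, val, assignment, constraints))
        return None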

What problems use local search

Problems where only the solution matters.



The state space is a set of complete configurations.

Alpha beta best possible complexity

O(b^(d/2)) if moves are ordered from best to worst

Minimax complexity

O(b^d)
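
A minimal depth-limited minimax sketch with alpha-beta pruning (my own code; moves(state), result(state, move), and evaluate(state) are hypothetical caller-supplied functions). Without the cutoffs it examines O(b^d) positions; with moves ordered best-first the cutoffs push this towards O(b^(d/2)).

    def alphabeta(state, depth, moves, result, evaluate,
                  alpha=float("-inf"), beta=float("inf"), maximizing=True):
        legal = moves(state)
        if depth == 0 or not legal:
            return evaluate(state)                 # horizon or terminal position
        if maximizing:
            best = float("-inf")
            for m in legal:                        # examining strong moves first maximises cutoffs
                best = max(best, alphabeta(result(state, m), depth - 1, moves, result, evaluate,
                                           alpha, beta, False))
                alpha = max(alpha, best)
                if alpha >= beta:
                    break                          # beta cutoff: MIN will never allow this line
            return best
        best = float("inf")
        for m in legal:
            best = min(best, alphabeta(result(state, m), depth - 1, moves, result, evaluate,
                                       alpha, beta, True))
            beta = min(beta, best)
            if alpha >= beta:
                break                              # alpha cutoff: MAX will never allow this line
        return best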

Suggest a heuristic for planning problems

Subgoal independence assumption: assume the cost of solving a set of subgoals equals the sum of the costs of solving each subgoal independently.