48 Cards in this Set

  • Front
  • Back

What is a greedy search algorithm?

Picks the node adjacent to the current state that appears closest to the goal state, and continues this process until the goal state is reached.
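A minimal Python sketch of this greedy strategy (greedy best-first search), assuming the state space is given as an adjacency dict and the heuristic h as a callable; all names here are illustrative, not from the card. In the toy graph, C looks closest (h = 1) but is a dead end: a pure greedy walk would get stuck there, which previews the failure modes on the next two cards.

import heapq

def greedy_best_first(graph, h, start, goal):
    # Always expand the frontier node whose heuristic value h(node) is
    # smallest, ignoring the cost of the path travelled so far.
    frontier = [(h(start), start)]
    came_from = {start: None}
    while frontier:
        _, node = heapq.heappop(frontier)
        if node == goal:
            path = []                         # rebuild the path via parent links
            while node is not None:
                path.append(node)
                node = came_from[node]
            return path[::-1]
        for child in graph.get(node, []):
            if child not in came_from:
                came_from[child] = node
                heapq.heappush(frontier, (h(child), child))
    return None                               # no route to the goal exists

graph = {'A': ['B', 'C'], 'B': ['D'], 'C': [], 'D': []}   # toy example
h = {'A': 3, 'B': 2, 'C': 1, 'D': 0}.get
print(greedy_best_first(graph, h, 'A', 'D'))              # ['A', 'B', 'D']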

How can greedy algorithms fail to find any route to goal state?

The search commits to the node that appears closest to the goal state, but that node may have no path onward to the goal state, leaving the search stuck at a dead end.
Why do greedy algorithms not always find the optimal route to a goal state?
One path may begin with the node that appears closest to the goal but require many additional nodes before reaching it, while a second path begins with a node that appears further away yet goes straight to the goal; greedy search picks the first.
What is an optimal search?
A search that uses some heuristic approach, i.e. a metric by which the optimal path to a solution is approximated before traversing, such as the cost of a ticket, the length of the path, etc.
What is a breadth-first search?
The root node is expanded first, then all the successors of the root node are expanded, then their successors, and so on. All the nodes at a given depth/level in the search tree are expanded before any nodes at the next level are expanded.
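A minimal Python sketch of this level-by-level expansion, assuming the search space is given as an adjacency dict; the FIFO queue is what forces every node at one depth to be expanded before any node at the next.

from collections import deque

def breadth_first_search(graph, start, goal):
    frontier = deque([start])                 # FIFO queue of nodes to expand
    came_from = {start: None}
    while frontier:
        node = frontier.popleft()             # shallowest unexpanded node
        if node == goal:
            path = []
            while node is not None:
                path.append(node)
                node = came_from[node]
            return path[::-1]
        for child in graph.get(node, []):
            if child not in came_from:        # skip already-discovered nodes
                came_from[child] = node
                frontier.append(child)
    return None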
What is the state space?
The set of all possible states of the agent and its environment.
What is the minimax algorithm?
Made up of three parts: the minimax value, a depth-first search of the game tree, and alpha-beta pruning.
What is the minimax value?
The utility of a state from MAX's point of view, assuming both players play optimally from that state to the end of the game; terminal states take their utility directly and the values are backed up the tree.
What is the minimax depth-first search?
The minimax values are computed with a recursive depth-first traversal of the game tree: the search descends to terminal (or depth-limited) states, evaluates them, and backs the values up, with MAX nodes taking the maximum and MIN nodes the minimum of their children's values.
What is alpha-beta pruning? (minimax)
An optimisation of minimax search for game trees that are too big to search exhaustively: branches that cannot affect the final decision are pruned, so the same minimax value is returned while far fewer nodes are examined.
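A sketch of minimax with alpha-beta pruning, assuming the game supplies a children(state) function and an evaluate(state) function scoring leaves from MAX's point of view; the depth limit reflects the card's point that the full space is too big to search exhaustively.

import math

def alphabeta(state, depth, alpha, beta, maximizing, children, evaluate):
    kids = list(children(state))
    if depth == 0 or not kids:                # leaf or depth limit: evaluate
        return evaluate(state)
    if maximizing:
        value = -math.inf
        for child in kids:
            value = max(value, alphabeta(child, depth - 1, alpha, beta,
                                         False, children, evaluate))
            alpha = max(alpha, value)
            if alpha >= beta:                 # MIN would never allow this branch
                break                         # prune the remaining children
        return value
    else:
        value = math.inf
        for child in kids:
            value = min(value, alphabeta(child, depth - 1, alpha, beta,
                                         True, children, evaluate))
            beta = min(beta, value)
            if alpha >= beta:                 # MAX already has a better option
                break                         # prune the remaining children
        return value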
What is iterative deepening?
Combination of DFS and BFS that finds the best depth limit iteratively: depth-limited DFS is run repeatedly, increasing the limit until the goal is found.
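A sketch of this in Python, assuming a graph given as an adjacency dict; the max_depth cap is an added safeguard rather than part of the card.

def depth_limited_dfs(graph, node, goal, limit):
    # Depth-first search that refuses to descend below the given limit.
    if node == goal:
        return [node]
    if limit == 0:
        return None
    for child in graph.get(node, []):
        path = depth_limited_dfs(graph, child, goal, limit - 1)
        if path is not None:
            return [node] + path
    return None

def iterative_deepening(graph, start, goal, max_depth=50):
    # Run depth-limited DFS with limit 0, 1, 2, ... until the goal is found.
    for limit in range(max_depth + 1):
        path = depth_limited_dfs(graph, start, goal, limit)
        if path is not None:
            return path
    return None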
What are heuristics?
An estimated characteristic of a state, such as its estimated distance or cost to the goal, used to guide the search towards promising nodes without guaranteeing optimality.
What is A* search?
A* search uses a heuristic function h to estimate a characteristic (e.g. remaining cost) between a state and the goal, and the known cost g between the start state and that state; nodes are expanded in order of f = g + h.
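A sketch of A* under f = g + h, assuming the graph maps each node to (child, step cost) pairs and h is a callable heuristic; the names are illustrative.

import heapq

def a_star(graph, h, start, goal):
    frontier = [(h(start), 0, start)]         # entries are (f, g, node)
    came_from = {start: None}
    best_g = {start: 0}                       # cheapest known cost to each node
    while frontier:
        f, g, node = heapq.heappop(frontier)  # lowest f = g + h first
        if node == goal:
            path = []
            while node is not None:
                path.append(node)
                node = came_from[node]
            return path[::-1], g
        for child, step_cost in graph.get(node, []):
            new_g = g + step_cost
            if new_g < best_g.get(child, float('inf')):
                best_g[child] = new_g
                came_from[child] = node
                heapq.heappush(frontier, (new_g + h(child), new_g, child))
    return None, float('inf')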
What is depth-first search?
Expands the deepest unexpanded node first, favouring the left-most child, and only works back up a level when a branch is exhausted.
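A recursive sketch of this, assuming children are listed left to right in an adjacency dict so the left-most branch is followed first.

def depth_first_search(graph, node, goal, visited=None):
    if visited is None:
        visited = set()
    visited.add(node)
    if node == goal:
        return [node]
    for child in graph.get(node, []):         # children in left-to-right order
        if child not in visited:
            path = depth_first_search(graph, child, goal, visited)
            if path is not None:              # this branch reached the goal
                return [node] + path
    return None                               # branch exhausted: backtrack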
What is the Manhattan distance?
The distance between two points or vectors is the sum of the absolute differences of their coordinates: d_M(a, b) = |a_1 - b_1| + ... + |a_n - b_n|
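In Python the same formula is a one-liner (vectors assumed to be equal-length sequences):

def manhattan(a, b):
    return sum(abs(ai - bi) for ai, bi in zip(a, b))

print(manhattan((1, 2), (4, 6)))              # |1-4| + |2-6| = 7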
What is a linearly separable problem?
Problem where the set of solutions in input space can be separated from all the non-solutions by some linear function.
What is a heuristic function? What is its purpose?
A function h(n) that estimates the cost of the cheapest path from a node n to the goal. Its purpose is to guide informed searches (such as greedy or A*) towards the most promising nodes so that less of the state space has to be explored.

What is uniform BFS pseudocode?

Uniform BFS:
Current Node = Start
Add current node to visited
For every child, in order of lowest cost to highest cost,
    if child not in visited, add to queue
Loop while queue has contents {
    Current Node = top of queue
    Add current node to visited
    For every child, in order of lowest cost to highest cost,
        if child not in visited, add to queue
    Remove top of queue
}
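One common reading of this card is uniform-cost search, where the "queue" is a priority queue ordered by cumulative path cost; a minimal Python sketch under that assumption, with the graph mapping each node to (child, cost) pairs:

import heapq

def uniform_cost_search(graph, start, goal):
    frontier = [(0, start, [start])]          # (cost so far, node, path)
    visited = set()
    while frontier:
        cost, node, path = heapq.heappop(frontier)   # cheapest node first
        if node in visited:
            continue
        visited.add(node)                     # "add current node to visited"
        if node == goal:
            return path, cost
        for child, step_cost in graph.get(node, []):
            if child not in visited:          # "if child not in visited, add to queue"
                heapq.heappush(frontier, (cost + step_cost, child, path + [child]))
    return None, float('inf')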
a
a

a

a
Why is depth-first search not optimal?
It is an uninformed search that blindly follows whichever branch comes first, so the first solution it returns need not be the shallowest or cheapest one; on infinite-depth trees it is also incomplete.
a
a
What is pseudocode for perceptron?
Start with random initial weights (e.g., in [-0.5, 0.5])
{
    For all patterns p from the training set {
        Calculate Activation
        Error = TargetValue_for_Pattern_p - Activation
        For all input weights i {
            DeltaWeight_i = alpha * Error * Input_i
            Weight_i = Weight_i + DeltaWeight_i
        }
    }
} Until "Total error for all patterns = 0" or "Time-out"
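A runnable Python version of the same loop, with the bias handled as an extra weight (the card's pseudocode leaves the threshold implicit) and max_epochs standing in for the "time-out":

import random

def train_perceptron(patterns, alpha=0.1, max_epochs=1000):
    # patterns is a list of (inputs, target) pairs with targets 0 or 1
    n = len(patterns[0][0])
    weights = [random.uniform(-0.5, 0.5) for _ in range(n)]
    bias = random.uniform(-0.5, 0.5)
    for _ in range(max_epochs):                       # "time-out" safeguard
        total_error = 0
        for inputs, target in patterns:
            net = bias + sum(w * x for w, x in zip(weights, inputs))
            activation = 1 if net > 0 else 0          # step activation
            error = target - activation
            total_error += abs(error)
            for i, x in enumerate(inputs):            # DeltaWeight_i = alpha * Error * Input_i
                weights[i] += alpha * error * x
            bias += alpha * error
        if total_error == 0:                          # every pattern classified correctly
            break
    return weights, bias

and_patterns = [((0, 0), 0), ((0, 1), 0), ((1, 0), 0), ((1, 1), 1)]
print(train_perceptron(and_patterns))                 # learns AND (linearly separable)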
Define Linearly Separable?
When a plot of all the possible outcomes can be separated into positive and negative outcomes by a single straight line. With 3 inputs this could be a cube, with all outcomes separated by a plane.
a
a
a
a
a
a
a
a
a
a
What is the difference between minimax search and alpha-beta pruning?
Alpha-beta pruning returns the same move and value as plain minimax search, but it prunes branches that cannot influence the final decision, so it examines far fewer nodes and can search deeper in the same time.
a
a
ANYTHING NOT SEARCH GOES BELOW HERE
a
What is difference between deterministic and stochastic environment?
Deterministic environments are completely predictable; stochastic environments involve chance and randomness.
What is a stationary process?
A process whose probability distributions do not change over time, i.e. the same transition and sensor models apply at every time step.
What is simulated annealing?
Allowing 'bad' moves in order to escape local extrema: solutions that increase 'profit' are always accepted, while 'bad' moves are accepted with a probability that decreases over time.
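A minimal sketch of that acceptance rule, assuming a profit function to maximise and a neighbour function proposing a random nearby solution; the temperature schedule is illustrative only.

import math
import random

def simulated_annealing(initial, profit, neighbour,
                        t_start=10.0, cooling=0.995, steps=10000):
    current, t = initial, t_start
    for _ in range(steps):
        candidate = neighbour(current)
        delta = profit(candidate) - profit(current)
        # better moves are always accepted; worse moves are accepted with
        # probability exp(delta / t), which shrinks as the temperature falls
        if delta > 0 or random.random() < math.exp(delta / t):
            current = candidate
        t *= cooling
    return current

best = simulated_annealing(0.0,
                           profit=lambda x: -(x - 3) ** 2,
                           neighbour=lambda x: x + random.uniform(-1, 1))
print(round(best, 2))                         # typically lands near 3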
What is difference between static and dynamic environment?
Static environments do not change while the agent is deliberating; dynamic environments have the capacity to change.
Pseudocode to train binary Perceptron
a
How to improve perceptron accuracy
a
What is an activation function?
The function that maps a unit's weighted sum of inputs to its output; in a perceptron this is a step/threshold function (output 1 if the weighted sum exceeds the threshold, 0 otherwise).
What is the difference between reflex agent with state and goal-based agent?
A goal-based agent has one or more defined goal states and the ability to reason about the consequences of its actions. Reflex agents with state cannot reason about how to achieve goals.
What is the difference between simple reflex agent and reflex agent with state?
A simple reflex agent is one whose behaviour at each moment is a function of its sensory inputs at that moment; it has no memory of previous sensory inputs and maintains no internal state. A reflex agent with state chooses its actions based on a combination of its current internal state and its current sensory inputs.
What is an atomic event?
A complete specification of the state of the world, i.e. an assignment of a value to every single variable of the world.
What is full joint distribution?
A table in which the probability of every combination of values of the variables is indicated.
What is P(a|b)?
P(a ^ b) / P(b) 'a and b together, divided by all instances of b'
What is P(a v b)?
P(a) + P(b) - P(a ^ b) 'a plus b, minus a and b together (to avoid double counting)'

What is P(a ^ b)?

P(a|b) * P(b) 'a given b times all instances of b'
What is Bayes' Rule?
P(b|a) = (P(a|b)P(b))/P(a) 'a given b times instances of b, divided by instances of a'
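A small Python check of the identities on these probability cards, using a made-up full joint distribution over two Boolean variables a and b; the numbers are illustrative only.

joint = {(True, True): 0.20, (True, False): 0.30,     # full joint distribution
         (False, True): 0.10, (False, False): 0.40}

P_a = sum(p for (a, b), p in joint.items() if a)      # 0.50
P_b = sum(p for (a, b), p in joint.items() if b)      # 0.30
P_a_and_b = joint[(True, True)]                       # 0.20
P_a_or_b = P_a + P_b - P_a_and_b                      # P(a) + P(b) - P(a ^ b) = 0.60
P_a_given_b = P_a_and_b / P_b                         # P(a ^ b) / P(b) = 0.666...
P_b_given_a = P_a_given_b * P_b / P_a                 # Bayes' rule = 0.40
print(P_a_or_b, P_a_given_b, P_b_given_a)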
What is conditional independence?
a and b are conditionally independent given c if P(a ^ b | c) = P(a | c) P(b | c), or equivalently P(a | b, c) = P(a | c): once c is known, learning b gives no further information about a.