25 Cards in this Set
Interpretations of y = Ax
• y is a measurement or observation; x is an unknown to be determined
• x is an 'input' or 'action'; y is an 'output' or 'result'
• y = Ax defines a function or transformation that maps x ∈ R^n into y ∈ R^m

Interpretation of aij
• aij is the gain factor from the jth input (xj) to the ith output (yi)
• the ith row of A concerns the ith output; the jth column of A concerns the jth input

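A small numpy sketch of both interpretations above; the particular 3×2 matrix and inputs are made-up examples:

```python
import numpy as np

# Hypothetical 3x2 example: 3 outputs, 2 inputs.
A = np.array([[1.0, 2.0],
              [0.5, 0.0],
              [3.0, 1.0]])
x = np.array([10.0, 100.0])

y = A @ x  # y_i = sum_j a_ij * x_j

# a_ij as gain factor: perturb the 1st input by delta and the change in
# each output y_i is exactly a_i1 * delta, i.e., column 1 of A.
delta = 1.0
y_perturbed = A @ (x + np.array([delta, 0.0]))
gain_to_each_output = (y_perturbed - y) / delta  # equals A[:, 0]
```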
A is lower triangular
aij = 0 for i < j, which means yi depends only on x1, . . . , xi

A is diagonal
aij = 0 for i != j, which means the ith output depends only on the ith input

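The lower-triangular case can be checked numerically: with a made-up lower-triangular A, changing only x3 leaves the first two outputs untouched.

```python
import numpy as np

# Hypothetical lower-triangular A: a_ij = 0 for i < j.
A = np.tril(np.arange(1.0, 10.0).reshape(3, 3))
x1 = np.array([1.0, 2.0, 3.0])
x2 = np.array([1.0, 2.0, -7.0])  # differs from x1 only in x_3

y1, y2 = A @ x1, A @ x2
# y_i depends only on x_1, ..., x_i, so the first two outputs agree
# while the third (which sees x_3) differs.
first_two_agree = np.allclose(y1[:2], y2[:2])
third_differs = not np.isclose(y1[2], y2[2])
```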
Sparsity pattern of A
list of zero/nonzero entries of A; shows which xj affect which yi

Jacobian of f
the derivative matrix Df(x) of a function f : R^n → R^m, with entries (Df(x))ij = ∂fi/∂xj; it gives the linearization y = f(x) ≈ f(x0) + Df(x0)(x − x0) near x0

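A finite-difference sketch of the Jacobian for a small made-up function f : R^2 → R^2, compared against the analytic derivative matrix:

```python
import numpy as np

# Hypothetical f : R^2 -> R^2; its Jacobian has entries d f_i / d x_j.
def f(x):
    return np.array([x[0] ** 2 + x[1], np.sin(x[1])])

def numerical_jacobian(f, x0, h=1e-6):
    """Central-difference approximation of the Jacobian Df(x0)."""
    n = x0.size
    cols = []
    for j in range(n):
        e = np.zeros(n)
        e[j] = h
        cols.append((f(x0 + e) - f(x0 - e)) / (2 * h))
    return np.column_stack(cols)

x0 = np.array([1.0, 0.0])
J = numerical_jacobian(f, x0)
# Analytic Jacobian at x0 = (1, 0): [[2*x_1, 1], [0, cos(x_2)]] = [[2, 1], [0, 1]]
```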
Vector space
a vector space or linear space (over the reals) consists of
• a set V
• a vector sum + : V × V → V
• a scalar multiplication : R × V → V
• a distinguished element 0 ∈ V

Subspace
• a subspace of a vector space is a subset of a vector space which is itself a vector space
• roughly speaking, a subspace is closed under vector addition and scalar multiplication

Independent set of vectors
a set of vectors {v1, v2, . . . , vk} is independent if
a1v1 + a2v2 + · · · + akvk = 0 ⇒ a1 = a2 = · · · = ak = 0
equivalently:
• the coefficients of a1v1 + · · · + akvk are uniquely determined, i.e., a1v1 + · · · + akvk = b1v1 + · · · + bkvk implies a1 = b1, a2 = b2, . . . , ak = bk
• no vector vi can be expressed as a linear combination of the other vectors v1, . . . , vi−1, vi+1, . . . , vk

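Numerically, independence of a set of vectors can be tested by stacking them as columns and checking the rank; the vectors below are made-up examples (v3 is dependent by construction):

```python
import numpy as np

v1 = np.array([1.0, 0.0, 0.0])
v2 = np.array([1.0, 1.0, 0.0])
v3 = v1 + 2 * v2  # a linear combination of v1, v2, hence dependent

# {v1, v2, ..., vk} independent  <=>  rank of [v1 v2 ... vk] equals k
indep = np.linalg.matrix_rank(np.column_stack([v1, v2])) == 2
dep_set_full_rank = np.linalg.matrix_rank(np.column_stack([v1, v2, v3])) == 3
```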
Basis
a set of vectors {v1, v2, . . . , vk} is a basis for a vector space V if
• v1, v2, . . . , vk span V, i.e., V = span(v1, v2, . . . , vk)
• {v1, v2, . . . , vk} is independent
fact: for a given vector space V, the number of vectors in any basis is the same

Dimension of V
the number of vectors in any basis of V

Nullspace of a matrix
the nullspace of A ∈ R^m×n is defined as
N(A) = { x ∈ R^n | Ax = 0 }

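A basis for N(A) can be computed from the SVD: the right singular vectors whose singular values are (numerically) zero span the nullspace. The rank-1 matrix below is a made-up example:

```python
import numpy as np

A = np.array([[1.0, 2.0, 3.0],
              [2.0, 4.0, 6.0]])  # rank 1, so dim N(A) = 3 - 1 = 2

_, s, Vt = np.linalg.svd(A)
tol = 1e-10
rank = int(np.sum(s > tol))
null_basis = Vt[rank:].T  # columns span N(A)

# every column x of null_basis satisfies Ax = 0:
residual = np.abs(A @ null_basis).max()
```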
Zero nullspace
A is called one-to-one if 0 is the only element of its nullspace: N(A) = {0}

Onto matrices
A ∈ R^m×n is called onto if R(A) = R^m; equivalently:
• Ax = y can be solved in x for any y
• the columns of A span R^m
• A has a right inverse, i.e., there is a matrix B ∈ R^n×m s.t. AB = I
• the rows of A are independent
• N(A^T) = {0}
• det(AA^T) != 0

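For a fat full-rank (hence onto) A, one explicit right inverse is B = A^T (AA^T)^{-1}; a sketch with a made-up 2×3 matrix:

```python
import numpy as np

A = np.array([[1.0, 0.0, 2.0],
              [0.0, 1.0, 1.0]])  # 2x3, rank 2, so onto

B = A.T @ np.linalg.inv(A @ A.T)  # right inverse: AB = I
AB = A @ B

# with a right inverse, Ax = y is solvable for any y: take x = B y
y = np.array([3.0, -1.0])
x = B @ y
```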
Inverse
A ∈ R^n×n is invertible or nonsingular if det A != 0; equivalent conditions:
• the columns of A are a basis for R^n
• the rows of A are a basis for R^n
• y = Ax has a unique solution x for every y ∈ R^n
• A has a (left and right) inverse denoted A^-1 ∈ R^n×n, with AA^-1 = A^-1A = I
• N(A) = {0}
• R(A) = R^n
• det(A^TA) = det(AA^T) != 0

Rank
rank(A) = dim R(A)
(nontrivial) facts:
• rank(A) = rank(A^T)
• rank(A) is the maximum number of independent columns (or rows) of A, hence rank(A) ≤ min(m, n)
• rank(A) + dim N(A) = n

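The rank facts above can be verified numerically on a made-up 3×4 matrix whose third row is the sum of the first two:

```python
import numpy as np

A = np.array([[1.0, 2.0, 0.0, 1.0],
              [0.0, 1.0, 1.0, 0.0],
              [1.0, 3.0, 1.0, 1.0]])  # row 3 = row 1 + row 2, so rank 2

r = np.linalg.matrix_rank(A)
r_transpose = np.linalg.matrix_rank(A.T)  # rank(A) = rank(A^T)

# dim N(A) = number of (near-)zero singular values among n = 4 columns
s = np.linalg.svd(A, compute_uv=False)
dim_null = A.shape[1] - int(np.sum(s > 1e-10))
# rank-nullity: r + dim_null == n
```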
Full rank
|
for A ∈ R^m×n we always have rank(A) ≤ min(m, n)
we say A is full rank if rank(A) = min(m, n) |
|
Norm
for x ∈ R^n,
||x|| = √(x^T x)

Inner product
<x, y> := x1y1 + x2y2 + · · · + xnyn = x^T y

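Both definitions in one line of numpy each, with made-up vectors:

```python
import numpy as np

x = np.array([3.0, 4.0])
y = np.array([1.0, 2.0])

inner = x @ y            # <x, y> = x^T y = 3*1 + 4*2 = 11
norm_x = np.sqrt(x @ x)  # ||x|| = sqrt(x^T x) = sqrt(25) = 5
```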
Orthonormal set of vectors
a set of vectors u1, . . . , uk ∈ R^n is
• normalized if ||ui|| = 1, i = 1, . . . , k (the ui are called unit vectors or direction vectors)
• orthogonal if ui ⊥ uj for i != j
• orthonormal if both

Orthonormal basis for R^n
• suppose u1, . . . , un is an orthonormal basis for R^n
• then U = [u1 · · · un] is called orthogonal: it is square and satisfies U^TU = I

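A sketch with a rotation matrix (a standard example of an orthonormal basis of R^2): U^T U = I, and as a consequence U preserves norms.

```python
import numpy as np

# Orthonormal basis of R^2 from a rotation by angle t (t is arbitrary here).
t = 0.3
u1 = np.array([np.cos(t), np.sin(t)])
u2 = np.array([-np.sin(t), np.cos(t)])
U = np.column_stack([u1, u2])

gram = U.T @ U  # should be the identity

# orthogonal U preserves norms: ||Ux|| = ||x||
x = np.array([1.0, -2.0])
norm_preserved = np.isclose(np.linalg.norm(U @ x), np.linalg.norm(x))
```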
Gram-Schmidt procedure
given independent vectors a1, . . . , ak ∈ R^n, the G-S procedure finds orthonormal vectors q1, . . . , qk s.t.
span(a1, . . . , ar) = span(q1, . . . , qr) for r ≤ k

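A minimal classical Gram-Schmidt sketch (assumes the input vectors are independent, so no norm is zero); the two input vectors are made-up:

```python
import numpy as np

def gram_schmidt(vectors):
    """Classical Gram-Schmidt: orthonormalize a list of independent vectors."""
    qs = []
    for a in vectors:
        q = a.astype(float)
        for prev in qs:            # subtract components along earlier q's
            q = q - (prev @ a) * prev
        q = q / np.linalg.norm(q)  # normalize (independence => norm > 0)
        qs.append(q)
    return qs

a1 = np.array([1.0, 1.0, 0.0])
a2 = np.array([1.0, 0.0, 1.0])
q1, q2 = gram_schmidt([a1, a2])
```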
QR decomposition
also called QR factorization: A = QR, where
Q^TQ = Ik and R is upper triangular & invertible

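numpy's reduced QR factorization can be used to check all three properties at once; the skinny full-rank matrix below is a made-up example:

```python
import numpy as np

A = np.array([[1.0, 2.0],
              [0.0, 1.0],
              [1.0, 0.0]])  # skinny (3x2), full rank

Q, R = np.linalg.qr(A)  # default 'reduced' mode: Q is 3x2, R is 2x2

ortho = np.allclose(Q.T @ Q, np.eye(2))      # Q^T Q = I_k
upper = np.allclose(R, np.triu(R))           # R upper triangular
invertible = abs(np.linalg.det(R)) > 1e-12   # R invertible (A full rank)
reconstructs = np.allclose(Q @ R, A)         # A = QR
```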
Overdetermined linear equations
y = Ax where A ∈ R^m×n is (strictly) skinny, i.e., m > n: more equations than unknowns

BLUE property
linear measurement with noise: y = Ax + v, with A skinny and full rank
• an estimator x̂ = By is unbiased (gives x̂ = x when v = 0) iff BA = I
• the least-squares estimator B = (A^TA)^-1 A^T is the best linear unbiased estimator (BLUE)
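A sketch of the least-squares estimator for a noisy overdetermined measurement; the matrix, true x, and noise level are all made-up test data:

```python
import numpy as np

rng = np.random.default_rng(0)
A = rng.standard_normal((20, 3))      # 20 measurements, 3 unknowns (skinny, full rank)
x_true = np.array([1.0, -2.0, 0.5])
v = 0.01 * rng.standard_normal(20)    # small measurement noise
y = A @ x_true + v

B = np.linalg.inv(A.T @ A) @ A.T      # least-squares estimator
x_hat = B @ y

unbiased = np.allclose(B @ A, np.eye(3))  # BA = I, so x_hat = x when v = 0
```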