47 Cards in this Set

  • Front
  • Back
  • 3rd side (hint)
++ MAT 452 Topics

1. [JointDistnTopics]
2. [MultivariateTopics]
3. [SamplingDistnsTopics]
4. [EstimatorTopics]
++ Joint Distn Topics

1. [Joint Density Function]
2. [Joint Distribution Function]
3. [Joint Marginal Distn]
4. [Joint Conditional Distn]
5. [Joint Independence]
6. [Joint Expected Value]
++ Joint Density Function

Properties of a Density Function:
1. f(y1,y2) >= 0
2. Intg[-∞,∞]Intg[-∞,∞]f(y1,y2)dy1dy2 = 1
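A minimal numerical check of these two properties (a Python sketch; the density f(y1,y2) = 4*y1*y2 on the unit square is a made-up example, not from the cards):

from scipy.integrate import dblquad

# Hypothetical example density: f(y1,y2) = 4*y1*y2 on [0,1]^2, zero elsewhere.
f = lambda y2, y1: 4.0 * y1 * y2   # nonnegative everywhere (property 1)
total, err = dblquad(f, 0, 1, lambda y1: 0, lambda y1: 1)
print(total)                       # ~1.0 (property 2)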
++ Joint Distribution Function Properties

Properties of a Distribution Function:
1. F(-∞,-∞) = F(-∞,y2) = F(y1,-∞) = 0
2. F(∞,∞) = 1
3. if y1' >= y1 and y2' >= y2 then
F(y1',y2') - F(y1',y2) - F(y1,y2') + F(y1,y2) >= 0
4. F(y1,y2) = Intg[-∞,y1]Intg[-∞,y2]f(t1,t2)dt2dt1
++ Joint Marginal Distn

1. f1(y1) = Intg[-∞,∞]f(y1,y2)dy2
++ Joint Conditional Distn
1. f(y1|y2) = f(y1,y2)/f2(y2)
++ Joint Independence
1. f(y1,y2) = f1(y1)f2(y2)
2. f(y1,y2) = g(y1)h(y2)
3. E(g(y1)h(y2)) = E(g(y1))E(h(y2))
++ Joint Expected Value

1. E(y1) = Intg[-∞,∞]Intg[-∞,∞]y1f(y1,y2)dy2dy1
2. E(y1) = Intg[-∞,∞]y1f1(y1)dy1
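Both forms can be checked numerically; a sketch reusing the made-up density f(y1,y2) = 4*y1*y2 on the unit square, whose marginal is f1(y1) = 2*y1 and whose E(Y1) = 2/3:

from scipy.integrate import dblquad, quad

# Form 1: double integral of y1*f(y1,y2) over the support.
e1, _ = dblquad(lambda y2, y1: y1 * 4.0 * y1 * y2, 0, 1, lambda y1: 0, lambda y1: 1)
# Form 2: single integral of y1*f1(y1) with the marginal f1(y1) = 2*y1.
e2, _ = quad(lambda y1: y1 * 2.0 * y1, 0, 1)
print(e1, e2)  # both ~0.6667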
++ Multivariate Topics

1. [Covariance]
2. [Linear Functions]
3. [Multinomial Probability Distn]
4. [Bivariate Normal Distn]
5. [Joint Conditional Expectations]
6. [MethodsOfFindingDistns]
7. [Order Statistics]
++ Covariance
1. Cov(Y1,Y2) = E((Y1-μ1)(Y2-μ2))
2. Cov(Y1,Y2) = E(Y1Y2) - μ1μ2
3. ρ = Cov(Y1,Y2)/σ1σ2
4. If Y1,Y2 independent then Cov(Y1,Y2) = 0 (the converse does not hold)
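A simulation sketch of identities 2-4 (the 0.5 correlation and unit variances are arbitrary choices):

import numpy as np

rng = np.random.default_rng(0)
y = rng.multivariate_normal([0, 0], [[1.0, 0.5], [0.5, 1.0]], size=200_000)
y1, y2 = y[:, 0], y[:, 1]
cov = np.mean(y1 * y2) - y1.mean() * y2.mean()   # identity 2
rho = cov / (y1.std() * y2.std())                # identity 3
print(cov, rho)                                  # both ~0.5 here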
++ Linear Functions

1. U = Σ(i=1,n)aiYi
2. E(U) = Σ(i=1,n)aiE(Yi)
3. V(U) = Σ(i=1,n)ai^2V(Yi) + 2ΣΣ(i<j)aiajCov(Yi,Yj)
4. Cov(U1,U2) = Σ(i=1,n)Σ(j=1,m)aibjCov(Yi,Yj), where U2 = Σ(j=1,m)bjYj
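A simulation sketch of formula 3 with arbitrary coefficients and covariance matrix:

import numpy as np

rng = np.random.default_rng(1)
a = np.array([2.0, -3.0])                # a1, a2
S = np.array([[4.0, 1.0], [1.0, 9.0]])   # V(Y1), V(Y2), Cov(Y1,Y2)
y = rng.multivariate_normal([0.0, 0.0], S, size=200_000)
u = y @ a                                # U = a1*Y1 + a2*Y2
formula = a[0]**2*S[0, 0] + a[1]**2*S[1, 1] + 2*a[0]*a[1]*S[0, 1]
print(u.var(), formula)                  # both ~85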
++ Multinomial Probability Distn

1. p(y1,...,yk) = n!(p1^y1)(p2^y2)...(pk^yk)/(y1!y2!...yk!)
OR
p(y1,...,yk) = n!Π(i=1,k)pi^yi/Π(i=1,k)yi!
2. Σ(i=1,k)yi=n
3. Requires [Multinomial Experiment] assumptions
Advanced Distributions
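A sketch evaluating the pmf both directly and through scipy (n = 10 and the class probabilities are made-up numbers):

from math import factorial
from scipy.stats import multinomial

n, p, y = 10, [0.2, 0.3, 0.5], [2, 3, 5]   # note y1+y2+y3 = n
direct = factorial(n) * 0.2**2 * 0.3**3 * 0.5**5 \
         / (factorial(2) * factorial(3) * factorial(5))
print(direct, multinomial.pmf(y, n=n, p=p))  # should agree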
++ Multinomial Experiment

1. n identical trials
2. k classes of outcomes
3. p1+p2+...+pk = 1
4. trials are independent
5. Yi = number of trials for outcome class i
++ Bivariate Normal Distn

1. f(y1,y2) = e^(-Q/2)/(2πσ1σ2√(1-ρ^2))
2. z1 = (y1-μ1)/σ1
3. Q = (z1^2-2ρz1z2+z2^2)/(1-ρ^2)
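A sketch cross-checking the z/Q form of the density against scipy's multivariate normal pdf (all parameter values here are arbitrary):

import numpy as np
from scipy.stats import multivariate_normal

m1, m2, s1, s2, rho = 1.0, -2.0, 2.0, 3.0, 0.4
y1, y2 = 0.5, -1.0
z1, z2 = (y1 - m1) / s1, (y2 - m2) / s2
Q = (z1**2 - 2*rho*z1*z2 + z2**2) / (1 - rho**2)
f = np.exp(-Q / 2) / (2 * np.pi * s1 * s2 * np.sqrt(1 - rho**2))
cov = [[s1**2, rho*s1*s2], [rho*s1*s2, s2**2]]
print(f, multivariate_normal.pdf([y1, y2], mean=[m1, m2], cov=cov))  # should agree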
++ Joint Conditional Expectations

1. E(Y1|Y2=y2) = Intg[-∞,∞]y1f(y1|y2)dy1
2. E(Y1) = E(E(Y1|Y2))
3. V(Y1) = E(V(Y1|Y2)) + V(E(Y1|Y2))
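A simulation sketch of identities 2 and 3 on a toy hierarchy, Y2 ~ N(0,1) and Y1|Y2=y2 ~ N(y2,1), so E(Y1) = 0 and V(Y1) = 1 + 1 = 2:

import numpy as np

rng = np.random.default_rng(2)
y2 = rng.normal(0.0, 1.0, 500_000)
y1 = rng.normal(y2, 1.0)            # E(Y1|Y2) = Y2, V(Y1|Y2) = 1
print(y1.mean())                    # ~0: E(Y1) = E(E(Y1|Y2))
print(y1.var(), 1 + y2.var())       # ~2: E(V(Y1|Y2)) + V(E(Y1|Y2))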
++ Methods Of Finding Distns

1. [Method of Distribution Functions]
2. [Method of Transformations]
3. [Method of MGFs]
++ Method of Distribution Functions

1. U = h(Y)
2. Fu(u) = P(U<=u) = P(h(Y)<=u) = P(Y<=h^-1(u)) = Intg[-∞,h^-1(u)]f(y)dy (for increasing h)
3. fu(u) = dFu(u)/du
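A worked sketch with U = Y^2 and Y ~ Uniform(0,1): here Fu(u) = P(Y <= √u) = √u, which the empirical CDF should match:

import numpy as np

rng = np.random.default_rng(3)
u = rng.uniform(0.0, 1.0, 500_000) ** 2    # U = h(Y) = Y^2
for q in (0.25, 0.5, 0.75):
    print(np.mean(u <= q), np.sqrt(q))     # empirical CDF vs Fu(u) = sqrt(u)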
++ Method of Transformations

1. U = h(Y), h monotone
2. dh^-1/du = d(h^-1(u))/du
3. fu(u) = fy(h^-1(u))|dh^-1/du|
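The same U = Y^2, Y ~ Uniform(0,1) example as a sketch: h^-1(u) = √u, |dh^-1/du| = 1/(2√u), and fy = 1 on (0,1), so fu(u) = 1/(2√u); a histogram bin should roughly match the formula:

import numpy as np

rng = np.random.default_rng(4)
u = rng.uniform(0.0, 1.0, 1_000_000) ** 2
lo, hi = 0.25, 0.30
print(np.mean((u > lo) & (u <= hi)) / (hi - lo))  # empirical density in the bin
print(1 / (2 * np.sqrt(0.275)))                   # fu at the bin midpoint, ~0.95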
++ Method of MGFs

1. mu(t) = E(e^(tU)) = Intg[-∞,∞]e^(tu)f(u)du

See also:

2. [Independent MGFs]
3. [Independent Normal MGFs]
++ Independent MGFs

1. if Y1,Y2,...,Yn indp
2. U = Y1 + Y2 + ... + Yn
3. mu(t) = my1(t)my2(t)...myn(t)
++ Independent Normal MGFs

1. U = Σ(i=1,n)aiYi = a1Y1 + a2Y2 + ... + anYn
2. E(U) = Σ(i=1,n)aiE(Yi)
3. V(U) = Σ(i=1,n)(ai^2)(σi^2)
4. U is normal with this mean and variance (by the MGF method)
++ Order Statistics

1. Y(1) = min(Y1,...,Yn)
2. g(1)(y) = n(1-F(y))^(n-1)f(y)
3. Y(n) = max(Y1,...,Yn)
4. g(n)(y) = n(F(y))^(n-1)f(y)
5. g(k)(yk) = ?
6. g(j)(k)(yj,yk) = ?
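A simulation sketch of the min/max results for n = 5 draws from Uniform(0,1), where F(y) = y, so P(Y(1) <= y) = 1-(1-y)^n and P(Y(n) <= y) = y^n:

import numpy as np

rng = np.random.default_rng(5)
y = rng.uniform(0.0, 1.0, size=(200_000, 5))
mn, mx = y.min(axis=1), y.max(axis=1)
t = 0.3
print(np.mean(mn <= t), 1 - (1 - t) ** 5)   # ~0.832
print(np.mean(mx <= t), t ** 5)             # ~0.0024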
++ Sampling Distns Topics

1. [YbDistn]
2. [SumOfZSquaresDistn]
3. [SampleVarianceDistn]
4. [TDistn]
5. [FDistn]
6. [CentralLimitTheorem]
++ Yb Distn

1. Yb = (1/n)Σ(i=1,n)Yi
2. μyb = μ
3. σyb^2 = σ^2/n
4. Zyb = √n((Yb-μ)/σ)
++ Sum Of Z Squares Distn

1. Zi = (Yi-μ)/σ
2. Σ(i=1,n)Zi^2
3. ~ χ2(n) distn
4. ? proof ?
++ S^2 Distn

1. S^2 = (1/(n-1))Σ(i=1,n)(Yi - Yb)^2
2. W = (n-1)(S^2)/σ^2 = (1/σ^2)Σ(i=1,n)(Yi - Yb)^2
3. ~ χ2(n-1) distn
4. ? proof (n-1) ?
5. Yb and S^2 are independent
6. used to make inference about the population variance
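A simulation sketch of items 2-3: the pivot W computed from normal samples should follow χ2(n-1), checked here through one tail probability (n = 10 and σ = 2 are arbitrary):

import numpy as np
from scipy.stats import chi2

rng = np.random.default_rng(6)
n, sigma = 10, 2.0
y = rng.normal(5.0, sigma, size=(100_000, n))
w = (n - 1) * y.var(axis=1, ddof=1) / sigma**2     # ddof=1 gives S^2
print(np.mean(w > 12.0), chi2.sf(12.0, df=n - 1))  # should agree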
++ Student's t Distn

1. Z ~ N(0,1)
2. W ~ χ2(ν df)
3. T = Z/√(W/ν)
4. ~ t(ν df)
5. show T = √n(Yb - μ)/S ~ t(ν=n-1)
6. used to make inferences about the population expected value
++ F Distn

1. W1 ~ χ2(ν1 df), W2 ~ χ2(ν2 df), independent
2. F = (W1/ν1)/(W2/ν2)
3. ~ F(ν1,ν2 df)
4. used for determining the magnitude of ratios of estimates of variance
++ Central Limit Theorem

1. E(Yi) = μ
2. V(Yi) = σ^2
3. Un = √n((Yb-μ)/σ)
4. ~ converges to N(0,1) as n->∞
5. [ProofOfCentralLimitTheorem]
6. [NormalBinomial]
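A simulation sketch using a skewed population, Exponential with μ = σ = 1, so Un = √n(Yb-μ)/σ should be close to N(0,1) for large n:

import numpy as np
from scipy.stats import norm

rng = np.random.default_rng(7)
n = 200
yb = rng.exponential(1.0, size=(100_000, n)).mean(axis=1)
un = np.sqrt(n) * (yb - 1.0) / 1.0
print(np.mean(un <= 1.0), norm.cdf(1.0))   # both ~0.8413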
++ Pivot Method CI

1. A pivot is a function of the sample and θ in which θ is the only unknown quantity, but whose probability distribution does not depend on θ.
2. P(θL^ < θ < θU^) = 1 - α
3. P(a < U < b) = 1 - α
4. U = f(θ, Y)
++ Proof Of Central Limit Theorem

1. transform Yi into Zi = (Yi-μ)/σ, so E(Zi) = 0 and E(Zi^2) = 1
2. Un = Σ(i=1,n)Zi/√n
3. mzi(t/√n) = E(e^(tZi/√n)) = 1 + 0(t/(1!√n)) + 1((t/√n)^2/2!) + E(Zi^3)((t/√n)^3/3!) + ...
4. mn(t) = Π(i=1,n)mzi(t/√n) = (1 + t^2/2n + ...)^n
5. ln(mn(t)) = n*ln(1 + t^2/2n + ...)
6. ln(mn(t)) = n((t^2/2n + ...) - (t^2/2n + ...)^2/2 + ...)
7. lim(n->∞)mn(t) = e^(t^2/2)
++ Normal Approximation to the Binomial Distribution

1. U = Y/n = (1/n)Σ(i=1,n)Xi
2. μy = np
3. σy^2 = npq
4. n > 9(max(p,q)/min(p,q))
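A sketch with p = 0.3: the rule requires n > 9(0.7/0.3) = 21, so n = 50 qualifies; the N(np, npq) approximation (with a continuity correction) should track the exact binomial CDF:

from scipy.stats import binom, norm

n, p = 50, 0.3
mu, sd = n * p, (n * p * (1 - p)) ** 0.5
print(binom.cdf(18, n, p))               # exact, ~0.859
print(norm.cdf((18 + 0.5 - mu) / sd))    # normal approximation, ~0.860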
++ Large Sample CI

1. Use the normal distribution (Z) to determine the confidence interval when n is large (n >= 30 is a common rule of thumb)
2. Z = (θ^-θ)/σθ^
3. P(-zα/2 < Z < zα/2) = 1-α
4. θ = θ^ ± (zα/2)(σθ^)
++ Bias

1. B(θ^) = E(θ^)-θ
2. MSE(θ^) = E((θ^-θ)^2) = V(θ^) + B(θ^)^2
++ Sample Size

1. ε=(zα/2)(σθ^)
2. since σθ^ is a function of n, substitute and solve for n
++ Small Sample CI

1. T = (Yb-μ)/(S/√n)
2. S^2 = (1/(n-1))Σ(Yi-Yb)^2
3. μ = Yb ± (tα/2)(S/√n)
4. ν = n-1 df
5. μ1-μ2 = (Yb1-Yb2) ± (tα/2)(Sp√(1/n1+1/n2))
6. Sp^2 = ((n1-1)S1^2+(n2-1)S2^2)/(n1+n2-2)
7. ν = n1+n2-2 df
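A sketch of the two-sample interval (items 5-7) on made-up data, with the t critical value from scipy (95% interval):

import numpy as np
from scipy.stats import t

y1 = np.array([10.2, 9.8, 11.1, 10.5, 9.9])   # made-up samples
y2 = np.array([9.1, 8.7, 9.5, 9.0])
n1, n2 = len(y1), len(y2)
sp2 = ((n1-1)*y1.var(ddof=1) + (n2-1)*y2.var(ddof=1)) / (n1 + n2 - 2)
half = t.ppf(0.975, df=n1 + n2 - 2) * np.sqrt(sp2 * (1/n1 + 1/n2))
d = y1.mean() - y2.mean()
print(d - half, d + half)                     # CI for mu1 - mu2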
++ Var CI

1. σ^2L = (n-1)(S^2)/χ2(α/2)
2. σ^2U = (n-1)(S^2)/χ2(1-α/2)
3. Pivot ~ χ2(n-1 df)
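A sketch of the interval on made-up data; chi2.ppf(0.975, ...) plays the role of χ2(α/2), the upper tail point, for a 95% interval:

import numpy as np
from scipy.stats import chi2

y = np.array([4.1, 5.3, 3.8, 4.9, 5.0, 4.4])   # made-up sample
n, s2 = len(y), y.var(ddof=1)
lo = (n - 1) * s2 / chi2.ppf(0.975, df=n - 1)  # lower limit uses upper point
hi = (n - 1) * s2 / chi2.ppf(0.025, df=n - 1)  # upper limit uses lower point
print(lo, hi)                                  # CI for sigma^2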
++ Common Estimators

1. [Estimator_μ]
2. [Estimator_p]
3. [Estimator_μ1-μ2]
4. [Estimator_p1-p2]
++ Estimator Topics

1. [CommonEstimators]
2. [PivotMethodCI]
3. [VarCI]
4. [PropertiesOfEstimatorTopics]
5. [Method of Moments]
6. [Method of Max Likelihood]
Topics
++ Estimator_μ

1. θ = μ
2. θ^ = Yb
3. σθ^ = σ/√n
++ Estimator_p

1. θ = p
2. θ^ = p^ = Y/n
3. σθ^ = √(pq/n)
++ Estimator_μ1-μ2

1. θ = μ1-μ2
2. θ^ = Yb1-Yb2
3. σθ^ = √(σ1^2/n1 + σ2^2/n2)
++ Estimator_p1-p2

1. θ = p1-p2
2. θ^ = p1^-p2^
3. σθ^ = √(p1q1/n1 + p2q2/n2)
++ Properties Of Estimators

1. [Unbiasedness]
2. [Efficiency]
3. [Consistency]
4. [Sufficiency]
++ Efficiency

1. eff(θ1^,θ2^) = V(θ2^)/V(θ1^)
2. eff > 1 means θ1^ has the smaller variance (θ1^ is more efficient)
++ Consistency

1. θn^ is consistent if lim(n->∞)V(θn^) = 0 (for unbiased θn^)
2. [ConsistencyProof]
3. If Un converges in distribution to N(0,1) and Wn converges in probability to 1, then Un/Wn converges in distribution to N(0,1)
++ Sufficiency

1. P(x1,...,xn|Y=y)=P(x1,...,xn,y)/P(y) = U
2. Y is sufficient for θ if U does not depend on θ
3. [FactorizationCriterion]
4. [MVUE]