
### 47 Cards in this Set

• Front
• Back
• 3rd side (hint) — unused in this set
**MAT 452 Topics**
1. [JointDistnTopics]
2. [MultivariateTopics]
3. [SamplingDistnsTopics]
4. [EstimatorTopics]

**Joint Distn Topics**
1. [Joint Density Function]
2. [Joint Distribution Function]
3. [Joint Marginal Distn]
4. [Joint Conditional Distn]
5. [Joint Independence]
6. [Joint Expected Value]

**Joint Density Function**
Properties of a density function:
1. f(y1, y2) ≥ 0
2. ∫_{-∞}^{∞} ∫_{-∞}^{∞} f(y1, y2) dy1 dy2 = 1

**Joint Distribution Function Properties**
Properties of a distribution function:
1. F(-∞, -∞) = F(-∞, y2) = F(y1, -∞) = 0
2. F(∞, ∞) = 1
3. If y1' ≥ y1 and y2' ≥ y2, then F(y1', y2') - F(y1', y2) - F(y1, y2') + F(y1, y2) ≥ 0
4. F(y1, y2) = ∫_{-∞}^{y1} ∫_{-∞}^{y2} f(y1, y2) dy2 dy1

**Joint Marginal Distn**
1. f1(y1) = ∫_{-∞}^{∞} f(y1, y2) dy2

**Joint Conditional Distn**
1. f(y1 | y2) = f(y1, y2) / f2(y2)

**Joint Independence**
1. f(y1, y2) = f1(y1) f2(y2)
2. f(y1, y2) = g(y1) h(y2)
3. E(g(Y1) h(Y2)) = E(g(Y1)) E(h(Y2))

**Joint Expected Value**
1. E(Y1) = ∫_{-∞}^{∞} ∫_{-∞}^{∞} y1 f(y1, y2) dy2 dy1
2. E(Y1) = ∫_{-∞}^{∞} y1 f1(y1) dy1

**Multivariate Topics**
1. [Covariance]
2. [Linear Functions]
3. [Multinomial Probability Distn]
4. [Bivariate Normal Distn]
5. [Joint Conditional Expectations]
6. [MethodsOfFindingDistns]
7. [Order Statistics]

**Covariance**
1. Cov(Y1, Y2) = E((Y1 - μ1)(Y2 - μ2))
2. Cov(Y1, Y2) = E(Y1 Y2) - μ1 μ2
3. ρ = Cov(Y1, Y2) / (σ1 σ2)
4. If Y1 and Y2 are independent, then Cov(Y1, Y2) = 0

**Linear Functions**
1. U = Σ_{i=1}^{n} ai Yi
2. E(U) = Σ_{i=1}^{n} ai E(Yi)
3. V(U) = Σ_{i=1}^{n} ai² V(Yi) + 2 ΣΣ_{i<j} ai aj Cov(Yi, Yj)

**Sampling Distns Topics**
1–4. …
5. [ProofOfCentralLimitTheorem]
6. [NormalBinomial]

**Pivot Method CI**
1. The pivot function has θ as its only unknown quantity, but its probability distribution does not depend on θ.
2. P(θ̂L < θ < θ̂U) = 1 - α
3. P(a < U < b) = 1 - α
4. U = f(θ, Y)

**Proof Of Central Limit Theorem**
1. Transform Yi into Zi = (Yi - μ)/σ, so that E(Zi) = 0 and E(Zi²) = 1
2. Un = Σ_{i=1}^{n} Zi / √n
3. m_{Zi}(t/√n) = E(e^{t Zi/√n}) = 1 + 0·(t/√n)/1! + 1·(t/√n)²/2! + E(Zi³)·(t/√n)³/3! + …
4. m_{Un}(t) = Π_{i=1}^{n} m_{Zi}(t/√n) = (1 + t²/2n + …)^n
5. ln(m_{Un}(t)) = n·ln(1 + t²/2n + …)
6. ln(m_{Un}(t)) = n·((t²/2n + …) - (t²/2n + …)²/2 + …)
7. lim_{n→∞} m_{Un}(t) = e^{t²/2}

**Normal Approximation to the Binomial Distribution**
1. U = Y/n = (1/n) Σ_{i=1}^{n} Xi
2. μ_Y = np
3. σ_Y² = npq
4. Rule of thumb: n > 9·(max(p, q)/min(p, q))

**Large Sample CI**
1. Use the normal distribution (Z) to build the confidence interval when n is large
2. Z = (θ̂ - θ) / σ_θ̂
3. P(-z_{α/2} < Z < z_{α/2}) = 1 - α
4. θ = θ̂ ± z_{α/2} σ_θ̂

**Bias**
1. B(θ̂) = E(θ̂) - θ
2. MSE(θ̂) = E((θ̂ - θ)²) = V(θ̂) + B(θ̂)²

**Sample Size**
1. ε = z_{α/2} σ_θ̂
2. Since σ_θ̂ is a function of n, substitute and solve for n

**Small Sample CI**
1. T = (Ȳ - μ) / (S/√n)
2. S² = (1/(n-1)) Σ (Yi - Ȳ)²
3. μ = Ȳ ± t_{α/2} (S/√n)
4. ν = n - 1 df
5. μ1 - μ2 = (Ȳ1 - Ȳ2) ± t_{α/2} Sp √(1/n1 + 1/n2)
6. Sp² = ((n1 - 1)S1² + (n2 - 1)S2²) / (n1 + n2 - 2)
7. ν = n1 + n2 - 2 df

**Var CI**
1. σ²_L = (n - 1)S² / χ²_{α/2}
2. σ²_U = (n - 1)S² / χ²_{1-α/2}
3. Pivot ~ χ² with n - 1 df

**Common Estimators**
1. [Estimator_μ]
2. [Estimator_p]
3. [Estimator_μ1-μ2]
4. [Estimator_p1-p2]

**Estimator Topics**
1. [CommonEstimators]
2. [PivotMethodCI]
3. [VarCI]
4. [PropertiesOfEstimatorTopics]
5. Method of Moments
6. Method of Max Likelihood Topics

**Estimator_μ**
1. θ = μ
2. θ̂ = Ȳ
3. σ_θ̂ = σ/√n

**Estimator_p**
1. θ = p
2. θ̂ = p̂ = Y/n
3. σ_θ̂ = √(pq/n)

**Estimator_μ1-μ2**
1. θ = μ1 - μ2
2. θ̂ = Ȳ1 - Ȳ2
3. σ_θ̂ = √(σ1²/n1 + σ2²/n2)

**Estimator_p1-p2**
1. θ = p1 - p2
2. θ̂ = p̂1 - p̂2
3. σ_θ̂ = √(p1q1/n1 + p2q2/n2)

**Properties Of Estimators**
1. [Unbiasedness]
2. [Efficiency]
3. [Consistency]
4. [Sufficiency]

**Efficiency**
1. eff(θ̂1, θ̂2) = V(θ̂2) / V(θ̂1)

**Consistency**
1. lim_{n→∞} V(θ̂n) = 0
2. [ConsistencyProof]
3. If Un converges to N(0, 1) and Wn converges to 1, then lim_{n→∞} Un/Wn ~ N(0, 1)

**Sufficiency**
1. U = P(x1, …, xn | Y = y) = P(x1, …, xn, y) / P(y)
2. Y is sufficient for θ if U does not depend on θ
3. [FactorizationCriterion]
4. [MVUE]