30 Cards in this Set

  • Front
  • Back
log-likelihood function
the sum of the log-likelihoods, where the log-likelihood for each observation is the log of the density of the dependent variable given the explanatory variables; the log-likelihood function is viewed as a function of the parameters to be estimated
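In symbols, a sketch with f the conditional density and theta the parameters:
    \ell(\theta) = \sum_{i=1}^{n} \log f(y_i \mid x_i; \theta)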
beta coefficient or standardized coefficient
regression coefficient that measures the standard deviation change in the dependent variable given a one standard deviation increase in an independent variable
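Equivalently, beta_j = b_j * sd(x_j)/sd(y). In STATA, standardized coefficients can be requested directly (y, x1, x2 hypothetical):
    regress y x1 x2, beta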
Goldfeld-Quandt test
a test for heteroskedasticity; it assumes you know the form of the variance well enough to partition the data accordingly, but in return it gives an exact F test even in small samples
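There is no built-in GQ command in base STATA; a hand-rolled sketch, assuming z drives the variance and the sample-split points (40/61) are hypothetical:
    sort z                                  // order by the suspected variance driver
    regress y x1 x2 if _n <= 40             // low-variance subsample
    scalar rss1 = e(rss)
    scalar df1 = e(df_r)
    regress y x1 x2 if _n >= 61             // high-variance subsample, middle omitted
    scalar rss2 = e(rss)
    scalar df2 = e(df_r)
    display "GQ F = " (rss2/df2)/(rss1/df1) // compare to F(df2, df1)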
variance of the prediction error
the variance in the error that arises when predicting a future value of the dependent variable based on an estimated multiple regression equation
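In symbols, a sketch: the estimated variance of the prediction error adds the sampling variance of the predicted value to the error variance:
    \widehat{\operatorname{Var}}(\hat{e}^0) = \widehat{\operatorname{Var}}(\hat{y}^0) + \hat{\sigma}^2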
maximum likelihood estimate (mle)
a broadly applicable estimation method where the parameter estimates are chosen to maximize the log-likelihood function
dummy variable trap
the mistake of including too many dummy variables (variables that take on the value 0 or 1) among the independent variables; it occurs when the model has an overall intercept and a dummy variable is included for every group, making the regressors perfectly collinear
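A minimal STATA demonstration of the trap, assuming hypothetical wage data with a female dummy:
    generate male = 1 - female       // male + female = 1, the same as the constant
    regress wage male female educ    // perfect collinearity; STATA drops one term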
Breusch-Pagan test
a test for heteroskedasticity where the squared OLS residuals are regressed on the explanatory variables in the model
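A hand-rolled sketch of the test (y, x1-x3 hypothetical; the LM statistic is n times the R-squared of the auxiliary regression):
    regress y x1 x2 x3
    predict uhat, residuals
    generate uhat2 = uhat^2
    regress uhat2 x1 x2 x3
    display "LM = " e(N)*e(r2)       // chi-squared with 3 df under homoskedasticity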
asymptotically efficient
for consistent estimators with asymptotically normal distributions, the estimator with the smallest asymptotic variance
Lagrange multiplier test (aka score test)
a test of whether model fit would significantly improve if additional variables were included in the model; commonly used as a test for omitted variables
central limit theorem (CLT)
a key result from probability theory that implies that the sum of independent random variables, or even weakly dependent random variables, when standardized by its standard deviation, has a distribution that tends to standard normal as the sample size grows
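In symbols, for an i.i.d. sample with mean mu and standard deviation sigma (a sketch):
    Z_n = \frac{\bar{X}_n - \mu}{\sigma / \sqrt{n}} \xrightarrow{d} N(0, 1)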
adjusted R-square
a goodness-of-fit measure in multiple regression analysis that penalizes additional explanatory variables by using a degrees of freedom adjustment in estimating the error variance
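With n observations and k slope coefficients:
    \bar{R}^2 = 1 - \frac{(1 - R^2)(n - 1)}{n - k - 1}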
oblique v. orthogonal projections
oblique: a graphical projection that depicts 3-D objects in two dimensions
orthogonal: maps the object in 3-D space (so oblique projection matrices carry only (x, y) coordinates, while orthogonal ones carry (x, y, z))
method of moments estimators
an estimator obtained by using the sample analog of population moments; OLS and two-stage least squares are both method of moments estimators
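For OLS, a sketch of the population moment condition and its sample analog:
    E[\mathbf{x}' u] = 0 \;\Longrightarrow\; \frac{1}{n} \sum_{i=1}^{n} \mathbf{x}_i' (y_i - \mathbf{x}_i \hat{\boldsymbol{\beta}}) = 0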
probit model (of being married)
a model for binary responses where the response probability is the standard normal cdf evaluated at a linear function of the explanatory variables
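In symbols, with Phi the standard normal cdf:
    P(y = 1 \mid \mathbf{x}) = \Phi(\beta_0 + \mathbf{x}\boldsymbol{\beta})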
law of large numbers
theorem: the average from a random sample converges in probability to the population average; also holds for stationary and weakly dependent time series
variance inflation factor
in multiple regression analysis under the Gauss-Markov assumptions, the term in the sampling variance affected by correlation among the explanatory variables
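A sketch, where R_j^2 comes from regressing x_j on the other explanatory variables:
    \operatorname{Var}(\hat{\beta}_j) = \frac{\sigma^2}{SST_j (1 - R_j^2)}, \qquad VIF_j = \frac{1}{1 - R_j^2}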
prediction interval for forecast
a confidence interval for an unknown outcome on a dependent variable in a multiple regression model
instrumental variables
in an equation with an endogenous explanatory variable, an IV is an omitted variable: it does not appear in the equation, is uncorrelated with the error in the equation, and is partially correlated with the endogenous explanatory variable
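A minimal STATA sketch, assuming x1 is endogenous, z1 is the instrument, and x2 is exogenous:
    ivregress 2sls y x2 (x1 = z1)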
two-sided alternative
you're saying that the independent variable WILL have an effect, but you're not saying in which direction. It could be positively or negatively correlated.
exclusion restrictions
restrictions that state that certain variables are excluded from the model (or have zero population coefficients)
Breusch-Godfrey test for AR(q)
an asymptotically justified test for AR(q) serial correlation, with AR(1) being the most popular special case; the test allows for lagged dependent variables as well as other regressors that are not strictly exogenous
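In STATA, after an OLS regression (the four lags are an arbitrary illustrative choice):
    regress y x1 x2
    estat bgodfrey, lags(4)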
pure v. impure heteroskedasticity (Butler's notes)
Pure- no correlated omitted variables cause the variance to change; the error variance itself varies with the data, e.g. σᵢ² = σ²zᵢ²
Impure- the heteroskedasticity arises because an omitted variable zᵢ is correlated with an included independent variable, so σᵢ² is some function of zᵢ
"robust" option in STATA
a STATA option that keeps the OLS coefficients but uses heteroskedasticity-robust standard errors.
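Usage sketch with hypothetical variables:
    regress y x1 x2 x3, robust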
"prais y x1 x2 x3" procedure in STATA
a STATA command that runs Prais-Winsten estimation: feasible GLS that corrects for AR(1) serial correlation by estimating rho from the residuals and quasi-differencing the data
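Usage sketch; the corc option switches to the Cochrane-Orcutt variant, which drops the first observation:
    prais y x1 x2 x3
    prais y x1 x2 x3, corc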
between-group v. within-group estimators
between-group: uses only the variation across group averages (regress the group means of y on the group means of x); within-group: uses only the variation inside each group, demeaning away anything that is constant within a group (the fixed-effects estimator). Intuition: if I ran a study replacing your blood with gasoline, the effect I care about shows up between groups (you versus the control group), so I'd want that variance to dwarf the within-group variance (you versus the other sap with gas in his veins).
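In STATA's panel commands both estimators are options of xtreg, a sketch assuming panel variable id and time variable t:
    xtset id t
    xtreg y x, be    // between-group estimator
    xtreg y x, fe    // within-group (fixed-effects) estimator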
dprobit in STATA
a STATA command that fits a probit model but reports marginal effects (the change in the predicted probability for a small change in each continuous regressor, or a 0-to-1 change for a dummy), evaluated at the means of the regressors, instead of the raw coefficients
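Usage sketch with hypothetical variables:
    dprobit married educ age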
marginal effect from a logit regression
∂P(x)/∂xⱼ = g(β₀ + xβ)βⱼ, where g(z) = dG(z)/dz and G is the logistic cdf
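In STATA, margins computes marginal effects after a logit fit (a sketch; dydx(x1) gives the average marginal effect of x1):
    logit y x1 x2
    margins, dydx(x1)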
STATA "test" statements for this model: regress y x1 x2 x3; test (x1=x2) (x3=x0)
after estimating the model with regress, test performs a Wald test of the stated linear restrictions; here it jointly tests whether the coefficients on x1 and x2 are equal and whether the coefficients on x3 and x0 are equal
weighted least squares estimator
an estimator used to adjust for a known form of heteroskedasticity, where each squared residual is weighted by the inverse of the estimated variance of the error
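A common STATA sketch when the error variance is proportional to a known variable h (analytic weights are inverse-variance weights):
    regress y x1 x2 [aweight = 1/h]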
"hettest, rhs" in STATA
after a regression, runs the Breusch-Pagan test for heteroskedasticity using the right-hand-side (explanatory) variables rather than the fitted values of y
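Usage sketch:
    regress y x1 x2 x3
    hettest, rhs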