51 Cards in this Set

  • Front
  • Back
  • 3rd side (hint)

What is Science

Body of knowledge, field, or approach to studying variables producing verifiable results

Knowledge, field, study = verifiable results

Verifiable

Good hypotheses, objective, replicable methods

Objective/Replicable

O: Not influenced by individual, not subjective



R: repeatable

Hypothesis

Tentative explanation of a phenomenon


Evidence that hypo is true = theory

Good hypotheses

Logical:


- reasonable explanation


Testable:


- explanation for relationship btw variables that can be defined or measured


Refutable:


- can be proven false


Positive:


- explanations about presence (not absence) of relationship btw variables

LTRP

Pseudoscience

Ideas based on nonscientific theory, faith, and belief

All the wrong things used

Scientific Steps

1. Observe phenomenon


2. Develop hypothesis


3. Make prediction


4. Evaluate prediction


5. Address hypothesis


ODMEA

H1

Research hypothesis or alternative hypothesis



- predicts relationships btw variables

Something there

H0

Null hypothesis:


- Predicts no relationship btw variables

Nothing there

Type 1 Error and Alpha

Rejection of Null when actually true



Alpha: probability of type 1 error


- significance level


- the cutoff the p-value is compared against

No relation = true

Type 2 Error & Beta

Acceptance of Null when actually FALSE


Beta: probability of Type 2 error

Acceptance

Power

Probability of correctly rejecting a false H0 (1 - beta)
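
A minimal Python sketch (not part of the original cards), assuming a made-up one-sample t-test scenario with alpha = .05: simulating when H0 is true estimates the Type 1 error rate (which should land near alpha), and simulating when H0 is false estimates power (1 - beta).

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
alpha, n, n_sims = 0.05, 30, 5000

# H0 true (population mean really is 100): every rejection is a Type 1 error,
# so the rejection rate should come out near alpha.
type1_rate = np.mean([
    stats.ttest_1samp(rng.normal(100, 15, n), 100).pvalue < alpha
    for _ in range(n_sims)
])

# H0 false (true mean is 108): every rejection is correct,
# so the rejection rate estimates power = 1 - beta.
power = np.mean([
    stats.ttest_1samp(rng.normal(108, 15, n), 100).pvalue < alpha
    for _ in range(n_sims)
])

print(f"Estimated Type 1 error rate: {type1_rate:.3f} (alpha = {alpha})")
print(f"Estimated power (1 - beta):  {power:.3f}")
```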

Simple random sampling

Equal chance at being selected

Systematic sampling

Pick a random starting point, then select every nth individual

A - nth sampling

Cluster sampling

Select groups and measure everyone within them, hence cluster

Starbucks

Stratified random sampling

Purposefully select particular demographic then randomly select individuals w/in each category

Selection of race or other demographics

Proportionate sampling

Each group represented proportional to population.

If 10% of pop has freckles, 10% of sample will have freckles
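
A rough Python sketch of the random sampling schemes above, using a hypothetical population of 1000 IDs and a made-up two-category demographic (neither is from the cards): simple random, systematic, and proportionate stratified selection.

```python
import numpy as np

rng = np.random.default_rng(1)
population = np.arange(1000)                          # hypothetical population of 1000 IDs
strata = rng.choice(["A", "B"], 1000, p=[0.9, 0.1])   # made-up demographic label per ID

# Simple random sampling: every ID has an equal chance of selection.
simple = rng.choice(population, size=50, replace=False)

# Systematic sampling: random starting point, then every nth ID.
step = len(population) // 50
start = rng.integers(step)
systematic = population[start::step]

# Stratified (proportionate) sampling: randomly sample within each stratum,
# in proportion to that stratum's share of the population (~90% A, ~10% B).
stratified = np.concatenate([
    rng.choice(population[strata == g],
               size=int(round(50 * np.mean(strata == g))),
               replace=False)
    for g in np.unique(strata)
])

print(len(simple), len(systematic), len(stratified))
```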

Probability sampling

1. Everyone has non-zero chance of being selected



2. The probability of being selected is known

Convenience Sampling

Easily accessible people

Quota Sampling

- Represent certain groups


- Decide amount of people from each group


- Not Random

Snowball sampling

Participants recruit others like them

Throwing a snowball

Sampling Distribution & Standard error of mean

Sampling distribution of the mean:


Mean = mu_xbar (equal to the population mean mu)



Standard error of the mean:


Sigma_xbar = sigma / sqrt(N)



Distribution of a statistic. Draw a random sample, calculate the sample mean, repeat until you have many sample means
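
A small simulation sketch of that procedure, assuming a made-up normal population with mu = 100 and sigma = 15 and samples of size N = 25: the mean of the sample means approximates mu_xbar = mu, and their SD approximates sigma_xbar = sigma / sqrt(N).

```python
import numpy as np

rng = np.random.default_rng(2)
mu, sigma, N = 100, 15, 25     # assumed population parameters and sample size

# Draw many random samples, keep each sample mean: that pile of means
# is the (empirical) sampling distribution of the mean.
sample_means = np.array([rng.normal(mu, sigma, N).mean() for _ in range(10_000)])

print("mean of sampling distribution (mu_xbar): ", sample_means.mean())  # ~ 100
print("standard error of the mean (sigma_xbar): ", sample_means.std())   # ~ 3.0
print("theoretical sigma / sqrt(N):             ", sigma / np.sqrt(N))   # 3.0
```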

Skewness

Larger positive val = more positively skewed


Lump left = positive



Larger neg val = more negatively skewed


Lump right = negative

Kurtosis

- Peakedness of the middle of the curve; how far it departs from a normal distribution


Negative = flat and wide, platykurtic


Positive = tall and narrow, leptokurtic
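
A quick check of these sign conventions with scipy.stats on made-up data (kurtosis() reports excess kurtosis, so a normal curve comes out near 0):

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(3)
right_skewed = rng.exponential(1.0, 10_000)   # lump on the left, long right tail
normal = rng.normal(0, 1, 10_000)
uniform = rng.uniform(-1, 1, 10_000)          # flat and wide

print("skewness, lump left (positive): ", stats.skew(right_skewed))
print("excess kurtosis, normal (~0):   ", stats.kurtosis(normal))
print("excess kurtosis, uniform (platykurtic, < 0):    ", stats.kurtosis(uniform))
print("excess kurtosis, exponential (leptokurtic, > 0):", stats.kurtosis(right_skewed))
```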


Central Limit Theorem

Mean of sampling dist of the mean (mu_xbar) = mu (pop mean)



Variance = sigma^2/N (N = sample size)



Standard error of the mean (sigma_xbar) < pop SD (sigma), and gets smaller as N increases



Approaches a normal dist as sample size (N) increases, becoming leptokurtic (taller and narrower)
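
A sketch of the theorem in action, drawing samples from a deliberately skewed (exponential) population with mean = SD = 10 (values assumed for illustration): as N grows, the mean of the sample means stays near mu, the standard error shrinks toward sigma / sqrt(N), and the skewness of the sampling distribution heads toward 0.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(4)
mu = sigma = 10.0    # exponential(scale=10) population: mean = SD = 10

for N in (2, 10, 50):
    # 3000 samples of size N; keep each sample mean
    means = rng.exponential(mu, size=(3000, N)).mean(axis=1)
    print(f"N={N:2d}  mean of means={means.mean():5.2f} (mu={mu})  "
          f"SE={means.std():4.2f} (sigma/sqrt(N)={sigma/np.sqrt(N):4.2f})  "
          f"skewness={stats.skew(means):4.2f}")
```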



Z-scores

2%, 14%, 34%


Way to estimate the probability of outcomes


Z = (x - mu) / sigma


-1 if z score negative
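
A short sketch assuming an IQ-style scale (mu = 100, sigma = 15) and a score of 130: compute z, then use the normal curve to estimate the probability of outcomes.

```python
from scipy import stats

mu, sigma = 100, 15      # assumed population mean and SD
x = 130

z = (x - mu) / sigma                       # Z = (x - mu) / sigma
p_below = stats.norm.cdf(z)                # proportion of scores below x
p_above = 1 - p_below                      # proportion above x (the upper tail)

print(f"z = {z:.2f}")                      # 2.00
print(f"P(score < {x}) = {p_below:.3f}")   # ~0.977
print(f"P(score > {x}) = {p_above:.3f}")   # ~0.023 (about the 2% tail)
```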

Categorical (frequency) data

Categories like male or female, or category codes such as 1, 2, 3

Face validity

Does it look valid, unscientific

Predictive validity

Accurately predict behavior according to theory

Concurrent validity

When scores obtained from a new measure correlate with scores from a more well-established measure

Construct validity

Grows over time with studies; scores obtained from the measurement behave the same as the variable itself



Ex: temperature in predicting aggression


Convergent validity

Measurements converge on same construct. Correlates with other measures of similar constructs

Divergent

Measurement doesn't correlate with measures of dissimilar constructs

Reliability

Valid:


Circle within reliability (validity is a subset of reliability)


E.g. if I measure intelligence with height, it's reliable (bc height is consistent) but not valid


Reliability:


Consistency of measurements


- if I take a survey and get one score, then take it again in 6 months and get the same score, it's reliable

Correlation

Measure of relationship btw two variables



Pearson's r (ranges from -1 to +1)



Closeness to regression line = correlation strength

Strong correlation

+- .75 to +1

Weak correlation

0 to +-.25

Medium correlation

+-.25 to +-.75

Guidelines Correlation

Can have stat sig with large enough sample size, even w/ weak correlation



Can have strong correlation, but not stat sig bc small sample size



APA format: r(N - 2) = #, p = #



N = number of pairs of scores



Spearman's r = correlation btw ranked or ordered variables
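
A sketch with made-up paired scores showing Pearson's r, Spearman's r (on the ranks), and the APA-style r(N - 2) report:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(5)
x = rng.normal(size=40)
y = 0.6 * x + rng.normal(scale=0.8, size=40)   # made-up paired scores

r, p = stats.pearsonr(x, y)
rho, p_s = stats.spearmanr(x, y)               # correlation between the ranks

N = len(x)                                     # number of pairs of scores
print(f"Pearson, APA-style: r({N - 2}) = {r:.2f}, p = {p:.3f}")
print(f"Spearman: rho = {rho:.2f}, p = {p_s:.3f}")
```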

Point Biserial Correlation

rpb instead of r


Correlation btw one continuous variable and one dichotomous variable (variable that can take only two different vals)



Phi (φ): correlation btw two dichotomous (only two vals) variables
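
A sketch of the point-biserial case with made-up data (a 0/1 group code and a continuous score), using scipy's pointbiserialr:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(6)
group = rng.integers(0, 2, size=60)                   # dichotomous variable (0/1)
score = 50 + 5 * group + rng.normal(0, 10, size=60)   # continuous variable

r_pb, p = stats.pointbiserialr(group, score)
print(f"r_pb = {r_pb:.2f}, p = {p:.3f}")
```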


Regression

Prediction of one variable from knowledge of one or more variables



Equation: y = bx + a



Error of prediction (residuals) = difference btw y and y hat



Standard error of estimate: square root of the avg squared deviation; the SD of points above and below the regression line



Strong correlation = less error


Weak = more


Perfect = 0
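
A sketch with made-up predictor/criterion scores: fit y = bx + a, compute the residuals (y - y hat), and take the standard error of estimate as the spread of the points around the regression line.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(7)
x = rng.normal(50, 10, 100)
y = 2.0 * x + 5 + rng.normal(0, 8, 100)    # made-up scores

res = stats.linregress(x, y)               # fits y = b*x + a
y_hat = res.slope * x + res.intercept      # predicted values
residuals = y - y_hat                      # errors of prediction

# Standard error of estimate: spread of points around the regression line
see = np.sqrt(np.sum(residuals**2) / (len(x) - 2))
print(f"y = {res.slope:.2f}x + {res.intercept:.2f},  r = {res.rvalue:.2f},  SEE = {see:.2f}")
```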

T test

Determines whether there is a significant difference btw two means

One sample t test

Sample vs. population mean; dif btw X bar and mu

Ex: dif btw one state's SAT scores and the national average

Dependent samples t test

Dif btw two related samples


- Everyone gives 2 scores


N * 2 = Number of scores



Df = N - 1, N = #of pairs of scores

Ex: dif in depression before therapy and after

Independent samples t test

Dif btw 2 unrelated samples




Df = N - 2, N = # of participants

Ex: dif in depression btw those who did vs did not receive therapy
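
A sketch of all three t-tests on made-up data with scipy.stats; the df conventions in the comments match the cards above.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(8)

# One-sample: one state's SAT scores vs. an assumed known population mean of 1050
state = rng.normal(1080, 100, 40)
print(stats.ttest_1samp(state, popmean=1050))   # df = N - 1

# Dependent (paired): depression before vs. after therapy, same people twice
before = rng.normal(20, 5, 25)
after = before - rng.normal(3, 2, 25)
print(stats.ttest_rel(before, after))           # df = N - 1, N = # of pairs

# Independent: therapy group vs. no-therapy group, different people
therapy = rng.normal(15, 5, 30)
control = rng.normal(19, 5, 30)
print(stats.ttest_ind(therapy, control))        # df = N - 2, N = all participants
```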

Pearson's r

Correlation btw two variables

Spearman's r

Correlation btw ordered or ranked variables

Ex: behavior rank and symptom rank

R^2

Proportion of variability in Y that is related to (not caused by) X

Chi-Square

Non-parametric test; determines if there is a diff btw groups of categorical data



Also used for proportions, with a single classification variable



Analyzes frequency/categorical/count data



Df = G - 1 (number of groups - 1)

# of something


Ex: If we want to know who has more depression btw young and old participants
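
A minimal goodness-of-fit sketch for the example above, with made-up counts of depressed young vs. old participants tested against equal expected proportions (df = G - 1 = 1):

```python
from scipy import stats

observed = [34, 21]                 # made-up counts: depressed young vs. old
result = stats.chisquare(observed)  # expected defaults to equal proportions
print(f"chi2(1) = {result.statistic:.2f}, p = {result.pvalue:.3f}")
```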

Chi-square test of Independence

Pearson's chi-square or chi-square test of association; determines if there is a relationship btw two categorical variables



Participants classified on basis of two variables simultaneously



Df = (R - 1)(C - 1), used to find the critical val



Any number of groups is possible (2x2, 2x3, etc.) for the 2 variables



If contingency table too small, chi-square not valid



For <= 9 cells, the guideline for min sample size is 5 x # of cells



Ex: 2x3 (6 cells) contingency table needs 30 participants
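
A sketch of a 2x3 case with made-up counts (rows and columns are hypothetical labels): chi2_contingency returns the statistic, p-value, and df = (R - 1)(C - 1) = 2, and the 60 participants clear the 30-participant minimum for 6 cells.

```python
import numpy as np
from scipy import stats

# Made-up 2x3 contingency table: 6 cells, 60 participants total
table = np.array([[8, 12, 10],
                  [9,  7, 14]])

chi2, p, df, expected = stats.chi2_contingency(table)
print(f"chi2({df}) = {chi2:.2f}, p = {p:.3f}")   # df = (R - 1)(C - 1) = 2
```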

Chi-square assumes

Independence of observations, i.e. one person's scores don't affect another's scores



Each participant contributes one and only one score



Larger N = more likely for stat sig