Use LEFT and RIGHT arrow keys to navigate between flashcards;
Use UP and DOWN arrow keys to flip the card;
H to show hint;
A reads the card aloud (text to speech);
70 Cards in this Set
- Front
- Back
What are 3 ways to measure behavior?
|
Observational (direct observation of behavior)
Physiological (not directly observable, ex: heart rate)
Self-Report (tells us what people think/feel/do) |
|
What does "single-item questionnaire" mean?
|
Means there is just one question,
not multiple. |
|
What's a multi-item questionnaire?
|
uses several questions instead of one
|
|
The 4 basic types of Scales
|
Nominal
Ordinal
Interval
Ratio |
|
Nominal Scale
|
Answers correspond to categories of behaviors/characteristics.
Ex: "What is your sex?" "Have you ever smoked?" |
|
How would you convert a question to numeric data?
(nominal scale) |
assign the answers a number
ex: have you ever smoked? 1=yes 2=no |
|
what can nominal data tell us?
|
Frequency of responses.
Ex: 28% of the sample reported being female.
Can also check whether people differed on another measure based on a nominal response. |
|
Ordinal Scale
|
Response tells us relative rank order.
Ex: amount of applause (as in 8 Mile) doesn't tell us how much difference there was, only which one is larger. |
|
Interval Scale
|
Tells us the rank order AND the difference between each value.
No true zero point.
Ex: temperature, sea level, preference questions. |
|
Ratio Scale
|
Includes everything:
meaningful distance between values
a true zero point
numbers correspond to amounts, NOT labels.
Ex: weight, test scores, income level |
|
Error Variance
|
Variance caused by the factors we didn't (or forgot to) measure.
|
|
4 main causes of error variance
|
1. Individual differences
2. Situational factors (room temp, mood of participant)
3. Characteristics of the measure (how easy the questions are to understand, test too long)
4. Mistakes (distractions), like miscounting eye blinks/sneezes. |
|
Reliability
|
The consistency of a measuring technique.
Reliability = systematic variance / total variance.
A measure is reliable if this ratio is .70 or greater. |
|
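As a quick illustration of the ratio above, here is a minimal Python sketch; the variance values are made up, not from the cards.

```python
# Hypothetical variance estimates, just to illustrate the reliability ratio.
systematic_variance = 8.4
error_variance = 2.1

# Total variance = systematic variance + error variance (next card).
total_variance = systematic_variance + error_variance
reliability = systematic_variance / total_variance

print(round(reliability, 2))  # 0.8 -> at or above .70, so the measure counts as reliable
```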
Total Variance
|
systematic variance + error variance
|
|
Correlation Coefficient
|
A type of effect size and a way to measure reliability.
Ranges from -1 to +1 (can be positive or negative).
The higher the absolute value, the more strongly the 2 variables are related. |
|
why square the coefficient?
|
it gives us the proportion of total variance that is systematically related to the measurement.
|
|
square coefficient example
|
Test performance example:
suppose r = .22; squaring it gives about .05, meaning 5% of the variance in test performance is related to the amount of sleep you get the night before. |
|
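The squaring step is easy to check in Python; the r value is the hypothetical one from the card.

```python
# Hypothetical correlation between hours of sleep and test performance.
r = 0.22

# Squaring r gives the proportion of total variance that is shared.
r_squared = r ** 2
print(round(r_squared, 3))  # 0.048 -> about 5% of the variance
```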
Types of Reliability
|
1. Test-retest
2. Interitem reliability
3. Interrater reliability |
|
Test retest reliability
|
On average, a person should score about the same each time they're measured.
Makes sense only if we measure the same thing/attribute each time.
A good test-retest reliability is a correlation of >.70 |
|
test-retest examples
|
Math performance
Personality measure
*Both should give you the same score |
|
Interitem Reliability
|
Used when we have multiple questions to help us measure a behavior/characteristic.
Ex: summing all question responses (6, 8, 9, 6) to obtain a single score (29). |
|
Interitem reliability contd.
|
Refers to how consistent the questions (items) are with each other.
Item correlations need to be .30 or higher. |
|
Cronbach's Alpha Coefficient
|
A measure of interitem reliability; needs to exceed .70
|
|
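For intuition, here is a from-scratch sketch of Cronbach's alpha on made-up item scores (rows are participants, columns are the scale's items); a real analysis would use a stats package.

```python
from statistics import pvariance

# Made-up responses: rows = participants, columns = the k items of the scale.
scores = [
    [4, 5, 4, 5],
    [2, 3, 2, 2],
    [5, 5, 4, 4],
    [3, 3, 3, 4],
    [1, 2, 2, 1],
]

k = len(scores[0])                                    # number of items
item_vars = [pvariance(col) for col in zip(*scores)]  # variance of each item
total_var = pvariance([sum(row) for row in scores])   # variance of summed scores

# Cronbach's alpha: k/(k-1) * (1 - sum of item variances / total variance)
alpha = (k / (k - 1)) * (1 - sum(item_vars) / total_var)
print(round(alpha, 2))  # 0.96 -> exceeds the .70 cutoff
```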
Interrater Reliability
|
Used when observers watch people and code behavior; the amount of agreement between our coders.
|
|
Ways to increase reliability
|
1. Measure participants in the same environment
2. Make questions clear
3. Train observer judges
4. Minimize coding errors. |
|
Validity
|
we are measuring what we wanted to measure.
Ex: you give a person a math test to test their math ability, not a verbal test. |
|
3 types of validity
|
1. Face validity
2. Construct validity
3. Criterion-related validity |
|
Face Validity
|
On the surface, it looks like we're measuring what we want to.
(Just because it looks like we're measuring what we want to doesn't mean we are.)
(Just because a measure doesn't appear to have face validity doesn't mean it isn't valid.) |
|
Face Validity example
|
"How much do you prefer white people over black people?" Even though it sucks as a question, it has face validity.
|
|
Construct Validity
|
Examines whether one measure relates to other measures.
Needed because sometimes we measure things that aren't directly observable (ex: attraction). |
|
Construct Validity Example
|
What makes someone a good person?
-charity?
-politeness?
-selflessness? |
|
Construct Validity Contd
|
Should have convergent validity (high correlation with other measures of similar constructs).
Should also have divergent validity (low correlation with measures of different constructs). |
|
Criterion-related Validity
|
tells us whether we can predict a behavioral outcome from a measure
EX: GRE scores predict if you will do well in Grad School. |
|
Concurrent Validity
|
Short term: the measure relates to a current outcome.
Ex: can currently run a mile faster than average
|
Predictive Validity
|
Long term: the measure predicts a future outcome.
Ex: less likely to have health problems later on
|
A measure can be reliable but not valid
|
Ex: interested in math performance, but assessing with a vocab test.
It will give you the same score each time (reliable) but not a clear picture of the person's math ability. |
|
A measure can be valid but not reliable
|
Ex: holding the door for someone as a measure of politeness.
can measure what you intend to measure, but the person may not always hold the door. |
|
Probability Sampling
|
Means you can state the likelihood (percentage) that any individual has of becoming a participant
|
|
Random Sample
|
Every individual in the population has an equal chance of being selected.
Not always possible due to being:
-costly
-time consuming
-difficult |
|
Probability Sampling
|
Instead of random samples we use representative samples:
-represent the larger population
-look like the population, only smaller
-create sampling error
-keep in mind the margin of error |
|
Margin of Error
|
Ex: a pie chart shows 38% with a 3% margin of error; that means the true value is between 35% and 41%.
|
|
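The arithmetic from the example, in a couple of lines (the numbers are the card's hypothetical poll figures):

```python
# Hypothetical poll result and margin of error from the card's example.
estimate = 38  # reported percentage
margin = 3     # margin of error, in percentage points

low, high = estimate - margin, estimate + margin
print(f"between {low}% and {high}%")  # between 35% and 41%
```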
Four types of Probability Sampling
|
1. Simple random sampling
2. Systematic sampling
3. Stratified random sampling
4. Cluster sampling |
|
simple random sampling
|
Every possible sample (of a given size) has an equal chance of being chosen from the total population.
|
|
sampling frame
|
list of all possible participants in a population
|
|
problem with simple random sampling
|
need to know exactly how many people are in a given population
|
|
systematic sampling
|
Choosing every nth person from a sample.
Problem: still random, but not every possible sample has an equal likelihood of being chosen. |
|
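A minimal sketch of systematic sampling, assuming a made-up sampling frame of 20 people and n = 5:

```python
import random

# Hypothetical sampling frame: a list of everyone in the population.
frame = [f"person_{i}" for i in range(20)]

def systematic_sample(frame, n):
    """Pick a random starting point, then take every nth person."""
    start = random.randrange(n)
    return frame[start::n]

sample = systematic_sample(frame, 5)
print(sample)  # 4 people, each 5 positions apart in the frame
```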
Stratified Random Sampling
|
Dividing the population into strata before sampling.
Ex: from the population of VCU students, to VCU female students, to 317 female students |
|
Generalizability
|
Whether results from the sample extend to the larger population. Ex: if 317 says something is awesome, can we say everyone on campus would as well?
|
|
Cluster Sampling
|
before choosing participants, you choose grouping of individuals. (geographically, institution)
|
|
Multi stage cluster sampling method
|
Breaking the population down into smaller and smaller pieces:
randomly choose a state
randomly choose a county
randomly choose a school
randomly choose participants |
|
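The staged narrowing above can be sketched like this; the nested state/county/school structure is invented for illustration:

```python
import random

# Invented nested groupings: states -> counties -> schools.
population = {
    "StateA": {"County1": ["School1", "School2"], "County2": ["School3"]},
    "StateB": {"County3": ["School4", "School5"]},
}

state = random.choice(list(population))            # randomly choose a state
county = random.choice(list(population[state]))    # then a county within it
school = random.choice(population[state][county])  # then a school within that
print(state, county, school)  # the last stage would sample participants here
```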
Nonprobability Sampling
|
Don't know the probability an individual was chosen from a particular population.
can't calculate error of estimation |
|
3 types of Non-probability sampling
|
-Convenience
-Quota
-Purposive |
|
Convenience Sampling
|
Participants are easy to obtain.
Makes generalizability more complicated.
Easy to test hypotheses about variables.
Used because we aren't trying to describe a population. |
|
Quota Sampling
|
A convenience sample that ensures certain types of people are included in the study.
Ex: if we're interested in the number of shoes people own, and we think it will differ for males versus females, then we make sure there are equal numbers of each in our sample.
|
|
Purposive Sampling
|
Using past research to tell you whom to sample.
Should not generally be used.
Ex: sampling Ohio residents to predict the presidency |
|
Power
|
The ability to detect effects.
-increase power by increasing the number of participants
-smaller effects require more participants
-for Cohen's d = .2, need about 200 participants |
|
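The card's "~200 participants for d = .2" figure can be sanity-checked with the usual normal-approximation formula for a one-sample test at alpha = .05 (two-tailed) and power = .80; the z values below are the standard ones.

```python
# Normal-approximation sample-size formula: n ~ ((z_alpha + z_beta) / d)^2
z_alpha = 1.96  # two-tailed alpha = .05
z_beta = 0.84   # power = .80
d = 0.2         # small effect size (Cohen's d)

n = ((z_alpha + z_beta) / d) ** 2
print(round(n))  # 196 -> about 200 participants
```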
Basic Ethical Guidelines: must do one of the following
|
-add to our basic knowledge
-improve a procedure
-improve a program
-improve quality of life |
|
Researcher/participant benefits
|
knowledge
career advancement
improve an aspect of their life |
|
Things researchers must do
|
-provide informed consent
-ensure confidentiality/anonymity
-ensure participants know participation is voluntary
-minimize harm
-debrief participants |
|
Do you always need consent?
|
No, may be a naturalistic study
|
|
Voluntary Participation
|
Participants have the right to:
-refuse to participate
-not be pressured
-be free to withdraw at any time |
|
Minimizing harm
|
the risk of harm would be no greater than in ordinary life
|
|
Debriefing
|
Explain what the study was about.
Provide time for participants to ask questions.
Provide additional resources if participants want further information/counseling. |
|
Deception
|
Only used when knowing ahead of time might influence participants' responses.
Cannot cause undue stress.
Participants must be fully debriefed afterwards. |
|
Objections to Deception
|
Violates individuals' right to choose to participate.
A questionable basis on which to build a discipline.
Leads to distrust of psychology in the community. |
|
Fabrication
Falsification
Plagiarism
Ethical Standards
Suppressing Data |
-making up data
-manipulating physical aspects to get the results you want
-using another person's ideas/words as your own
-disregarding the IRB
-ignoring results because they won't be popular |
|
Measurement Error
|
factors that distort the observed score so that it isn't precisely what it should be. (doesn't equal the true score)
|
|
Item total correlation
|
correlation between a particular item and the sum of all other items on the scale
|
|
Sampling Error
|
Results obtained from a sample differ from what would have been obtained had the entire population been studied.
|
|
Belmont Principles
|
Arose from the Tuskegee Syphilis Study.
|