62 Cards in this Set

What are Peirce's 4 ways of knowing?
Peirce's 4 ways:
1. Authority/Expert
2. A priori/Logic & reasoning
3. Tenacity/SOS
4. Scientific method
What are three things that behavioral science sets out to do?
3 goals of behavioral science:
1. Generate knowledge
2. Organize knowledge (theory)
3. Apply knowledge
What are John Stuart Mill's 3 Rules of Causality?
Mill's 3 rules of causality:
1. Covariation/Cause consistently related to effect
2. Time precedence of cause/Cause before effect
3. No plausible alternative explanations
What are four components of all theories?
PHAD- 4 components of all theories:
1. Assumptions- taken as given because they cannot be tested
2. Hypothetical constructs- cannot be directly observed
3. Definitions (narrative/classification/operational)
4. Propositions- relationships among hypothetical constructs
What are the 8 Criteria for a good theory (lump into essential & desirable groups)?
8 criteria for a good theory:
Essential:
1. Logical consistency/Does not contradict itself
2. Falsifiability/theories are tested by trying to show they're wrong, not by confirming them
3. Agreement w/known data
Desirable:
4. Clarity
5. Parsimony
6. Consistency w/related theories
7. Applicability to the real world
8. Stimulates future research
What did Cohen's Power Primer do for research design?
Cohen showed how statistical power analysis exploits the relationships among four parameters (population effect size, sample size, significance criterion, and statistical power) and argued that we need to look at more than just p values when examining data.
What did Prentice & Miller's "When small effects are impressive" add to research design? What are the two strategies they talked about?
Talks about how magnitude is only one part of examining effects; we must also look at methodology. Statistical effect size depends on the operationalization of the IVs.
Two strategies:
1. Showing that even the most minimal manipulation of the IV still accounts for variance in the DV
2. Choosing a DV that seems unlikely to yield to the influence of the IV.
What does Cohen's "Things I have learned (so far)" add to research design?
He talked about 5 things he has learned:
1. Parsimony
2. Use graphics to display complex info
3. Falsifiability (inductive inference through null hypothesis rejection)
4. Null hypothesis testing- we do not know the truth of the null hypothesis, only the probability of the data given the truth of the null.
5. Confidence intervals give the range of plausible values of the effect size index.
What are the 5 steps of the research process?
1. Develop idea/lit review
2. Choose research strategy/proposal
3. Collect data
4. Analyze and interpret
5. Communicate results/thesis defense
What are four criteria for evaluating research?
1. Construct validity
2. Internal validity
3. Statistical conclusion validity- statistics properly used/interpreted
4. External validity
What are the 9 traits of a good researcher?
Enthusiasm
Open-mindedness
Common sense
Inventiveness
Role-taking capability
Confidence in judgment
Communication
Consistency and care with details
Honesty
What are the goals of Basic and Applied research?
Basic- generate knowledge for the sake of knowledge
Applied- find a solution to a problem
What are 3 research strategies?
1. Experimental
2. Correlational
3. Case Study
What are the three ways developmental research is conducted?
1. Cross-sectional- different ages at a single point in time
2. Longitudinal- same group over time
3. Cohort-sequential- combination of the previous two
What are the 8 steps you should use to formulate a research question?
1. Establish background- formal & informal associations
2. Choose topic- interest/feasibility
3. Formulate question- well grounded, operationalized, relevant, etc.
4. Review the literature- context, potential problems, avoid duplication
5. Formulate hypotheses- research & statistical
6. Design study- how, what, where, when, whom?
7. Write the proposal
8. Collect the data
What is Classical Measurement Theory and what are its assumptions?
CMT is quantified as X = t + e, where X is the observed score, t is the true score, and e is error.
Assumptions:
Error and true scores are unrelated
The mean of all errors is equal to zero
Over many trials the average observed score equals the true score (SEM)
(See the simulation sketch below.)
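A minimal numpy sketch of these assumptions; the true score, error SD, and number of trials are arbitrary illustration values, not from the source:

```python
import numpy as np

# Classical Measurement Theory: X = t + e
rng = np.random.default_rng(0)
t = 50.0                            # a fixed (hypothetical) true score
e = rng.normal(0, 5, size=10_000)   # random errors with mean zero
X = t + e                           # observed scores over many trials

print(round(e.mean(), 2))  # ~0: mean error is approximately zero
print(round(X.mean(), 2))  # ~50: average observed score approaches t
```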
What are 4 sources of error and their corresponding estimates of reliability?
1. Time- test-retest
2. Content- parallel forms, split-half, internal consistency
3. Raters/observers- inter-rater reliability
4. Time x Content- parallel forms w/time between administrations
How is reliability related to correlation?
Estimates of reliability generally come from the correlation between two sets of scores.
Examples:
Test-retest- reliability is the correlation b/w the scores of people who have taken the test twice
Parallel forms- reliability equals the correlation b/w scores on two forms of a test measuring the same construct
Split-half- reliability comes from the correlation b/w separately scored halves of a single test
Coefficient alpha is conceptually the average of all split-half reliabilities
Inter-rater reliability equals the correlation b/w scores given to the same person by two different raters
(See the sketch below.)
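A rough numpy sketch of two of these estimates using simulated item scores (all data invented; the split-half value here is a raw correlation, with no Spearman-Brown correction):

```python
import numpy as np

# Simulated item data: rows = people, columns = 6 test items
rng = np.random.default_rng(1)
true = rng.normal(0, 1, size=(200, 1))
items = true + rng.normal(0, 1, size=(200, 6))  # true score + noise per item

# Split-half: correlate separately scored halves of one test
odd = items[:, ::2].sum(axis=1)
even = items[:, 1::2].sum(axis=1)
print(np.corrcoef(odd, even)[0, 1])

# Coefficient alpha: k/(k-1) * (1 - sum of item variances / variance of total)
k = items.shape[1]
alpha = k / (k - 1) * (1 - items.var(axis=0, ddof=1).sum()
                       / items.sum(axis=1).var(ddof=1))
print(alpha)
```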
How can you gather content validity evidence?
Do a job/content analysis, determine appropriate weighting, write items to reflect the test specs, pilot test and evaluate
How can you gather criterion-related validity evidence?
Look at concurrent and predictive validity- do the test scores correlate with the behavior you are trying to make inferences about?
How do you gather Construct validity evidence?
Examine scores of known groups, correlate with existing measures, Multitrait-Multimethod Matrix (MTMM) (convergent & discriminant validation)
What is the Multitrait-Multimethod Matrix (MTMM)? How does it relate to convergent & discriminant validity?
Convergent validity is the degree to which concepts that should be related theoretically are interrelated in reality. Discriminant validity is the degree to which concepts that should not be related theoretically are, in fact, not interrelated in reality. You can assess both convergent and discriminant validity using the MTMM. In order to claim that your measures have construct validity, you have to demonstrate both convergence and discrimination.
The MTMM is simply a matrix or table of correlations arranged to facilitate the interpretation of the assessment of construct validity. It assumes that you measure each of several concepts (called traits by Campbell and Fiske) by each of several methods (e.g., a paper-and-pencil test, a direct observation, a performance measure). The MTMM is a very restrictive methodology- ideally you should measure each concept by each method.
People don't use it much b/c you have to look at each trait with each method, but by looking at convergent and discriminant validity we can move towards a more usable framework (construct validity as the degree of each of these validities). (See the toy matrix below.)
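A toy sketch of what an MTMM looks like with two traits and two methods; every correlation here is invented purely for illustration:

```python
import numpy as np

# Hypothetical MTMM for 2 traits (T1, T2) x 2 methods (M1, M2).
# Row/column order: T1M1, T2M1, T1M2, T2M2.
labels = ["T1M1", "T2M1", "T1M2", "T2M2"]
mtmm = np.array([[1.00, 0.20, 0.70, 0.15],
                 [0.20, 1.00, 0.10, 0.65],
                 [0.70, 0.10, 1.00, 0.25],
                 [0.15, 0.65, 0.25, 1.00]])

# Convergent validity: same trait measured by different methods (should be high)
print(mtmm[0, 2], mtmm[1, 3])   # 0.70, 0.65

# Discriminant validity: different traits (should be low)
print(mtmm[0, 1], mtmm[0, 3])   # 0.20, 0.15
```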
What are some different modalities of measurement?
Self-report, behavioral measures, physiological measures
What are the 8 elements of informed consent?
1. Explanation of purpose
2. Description of risks
3. Description of benefits
4. Disclosure of appropriate alternative procedures
5. Confidentiality
6. Compensation
7. Who to contact with questions
8. Participation is voluntary, no penalties
What are three ways that researchers may wrongly coerce participants?
1. Overt- force against the participant's will
2. Subtle- e.g., pressure to "stay a while longer"
3. Excessive inducements- rewards not appropriate to the level of participation
What are the 5 principles of ethical research as defined by the APA ethics code?
1. Respect for persons' autonomy
2. Beneficence & nonmaleficence
3. Justice (distribution of burdens & benefits)
4. Trust (confidentiality, etc.)
5. Fidelity & scientific integrity- balancing scientific & ethical tension
What are 5 factors that determine degree of risk?
1. Likelihood
2. Severity
3. Duration after research
4. Reversibility
5. Measures for early detection
What are some different types of and reasons for deception? What is the responsible thing to do when you must use deception?
Providing false info about the purpose or nature of the task- to avoid artificial responses
Using confederates
Providing participants with false info- to manipulate the IV
Leading participants to think they're interacting w/someone when no one is there- to study events that occur rarely
You must debrief participants on the purpose of the research, allow them to ask questions, explain the nature of & reason for the deception, and desensitize if necessary.
What are 4 forms of scientific malpractice? Why do researchers do this? What are some problems with detection & enforcement? Who suffers?
1. Data forging- inventing data
2. Data cooking- discarding data
3. Data trimming- changing data values
4. Data torturing- improper exploitation of statistical tests
Why?
Personal factors (individual pathology) or institutional factors (competition)
Problems:
Guilt- relies on research assistants coming clean
Administrators- more interested in protecting the institution than the truth
People who report fraud are often punished
Who suffers? Participants, science, the public
What are some sources of unintentional error? What are some ways that researchers can do harm through research (i.e., why should we be vigilant against error)?
Incompetence & negligence
Doing harm:
Exploitation- manipulating participants for harm
Wasting resources- wasted time/$
Overgeneralization- making generalizable claims when not appropriate
Failure to apply research- when application is warranted, failure to apply findings is unethical
What are some characteristics of a good manipulation? How do you know when you have a bad manipulation?
Construct validity, reliability, strength (conditions different enough to differentially affect behavior), salience (participants notice the manipulation).
Bad = the opposite of all of the above, plus a manipulation that is not sensitive
What are 3 defining characteristics of the Experimental method?
1. Manipulation of IV
2. Holding other variables constant
3. Participants in each condition are equivalent
What are some order effects that may be evident in a within subject design?
1. Practice effects- getting better
2. Fatigue effects- getting worse
3. Carryover effects- beer in one condition makes you dizzy in the next
4. Sensitization effects- Smelling coffee in one condition makes them less likely to smell vanilla in the next
What are some characteristics of an ANOVA/Factorial design?
A 2x2 design has two independent variables with two levels each; a 2x3 design has one IV with two levels and one with three, etc.
Main effect- what you would find if you ignored the other IV
Interaction effect- two or more IVs combine to produce an effect over and above the main effects (you can have an interaction but no main effects; see the sketch below).
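A tiny numpy sketch of the "interaction with no main effects" case, using invented cell means for a 2x2 design:

```python
import numpy as np

# Hypothetical 2x2 cell means: rows = levels of IV A, cols = levels of IV B
means = np.array([[10.0, 20.0],
                  [20.0, 10.0]])

main_A = means.mean(axis=1)   # marginal means ignoring B
main_B = means.mean(axis=0)   # marginal means ignoring A
interaction = (means[0, 0] - means[0, 1]) - (means[1, 0] - means[1, 1])

print(main_A, main_B)  # [15. 15.] [15. 15.] -> no main effects
print(interaction)     # -20.0 -> crossover interaction with no main effects
```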
What are some threats to internal validity of naturalistic studies?
Regression towards the mean- extreme cases, over time, become less extreme
History- non-randomized events that take place between pre & post tests (e.g., 9/11)
Maturation- participants getting older, wiser, stronger, or more experienced b/w pre & post test (non-randomized studies)
Instrumentation- changes in the measurement instrument threaten internal validity
Selection- when groups are dissimilar
What are some experimenter expectancy effects (Rosenthal & Fode)? What are some ways to reduce these?
Your expectations will bias what you document (e.g., Rosenthal & Fode's "maze-dull" vs "maze-bright" rats)
To reduce:
1. Increase the number of raters
2. Monitor the behavior of researchers
3. Analyze experiments for order effects
4. Double-blind study
5. Minimize contact
6. Employ expectancy control groups
What are some assumptions of Correlational research? What hurts correlational research? Guidelines for correlational research?
Linearity of IV and DV
Additivity of IV and DV
Negative impact:
Attenuation/shrinkage, restriction of range, outliers, subgroup differences.
Guidelines:
use most reliable measures, compute subgroup statistics, check range of scores against published norms, avoid combining facets, plot subgroups and overall group to look for outliers and deviations from linearity
What does r tell you? What does change in R² tell you?
r is the correlation coefficient; it tells you the strength and the direction of the relationship b/w two variables.
Change in R² tells you how much additional variance is accounted for by the inclusion of new predictors (sensitive to order of entry). (See the sketch below.)
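A numpy-only sketch of change in R² via hierarchical entry: fit with one predictor, add a second, and compare. Variable names and coefficients are invented for illustration:

```python
import numpy as np

def r_squared(X, y):
    # R^2 from an OLS fit, with an intercept column added
    X1 = np.column_stack([np.ones(len(y)), X])
    beta, *_ = np.linalg.lstsq(X1, y, rcond=None)
    resid = y - X1 @ beta
    return 1 - resid.var() / y.var()

rng = np.random.default_rng(2)
x1 = rng.normal(size=300)
x2 = rng.normal(size=300)
y = 0.5 * x1 + 0.3 * x2 + rng.normal(size=300)

r2_step1 = r_squared(x1[:, None], y)                  # x1 only
r2_step2 = r_squared(np.column_stack([x1, x2]), y)    # x1 + x2
print(r2_step2 - r2_step1)  # change in R^2 from adding x2
```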
What are some correlational statistical techniques?
Logistic regression- continuous IV, categorical DV
Binomial test- degree of split b/w two outcomes
Chi-square- similar to binomial but with 2+ outcomes (see the sketch below)
Logit- similar to chi-square but one of the variables is considered to be the DV
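A short chi-square sketch, assuming scipy is available; the contingency table counts are invented for illustration:

```python
from scipy import stats

# Hypothetical 2x3 contingency table:
# rows = two groups, cols = three response categories
table = [[20, 30, 50],
         [40, 35, 25]]

chi2, p, dof, expected = stats.chi2_contingency(table)
print(chi2, p, dof)  # test of association between row and column variables
```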
What are some different response formats we can use in survey research?
Comparative rating scale- respondents make comparisons (they must be familiar with the options, the scale must be unidimensional, and they must understand the meaning of the response options); results in ordinal-level data
Itemized rating scale- multiple choice; must be developed carefully to be inclusive
Graphic rating scale- indicate response pictorially
Numerical rating scale- assign numerical values to responses (anchors)
When looking at research in natural settings we often look at archival data. What is it, and what are advantages and disadvantages?
Statistical archives, archival records, etc. describing groups, individuals, or organizations
Advantages- nonreactive, expands the research population, less expensive, few ethical issues
Disadvantages- access, validity (operational definitions), alternative explanations, ecological fallacy (drawing individual-level conclusions from aggregate data)
What are some different person-level response biases?
Social desirability, acquiescence, extremity, halo, leniency
What are some methods of survey administration?
Group, mail, personal interviews, telephone, focus groups, computer administration.
What are some characteristics of an effective research setting?
Coherence- events related, no ambiguity
Simplicity- reduces anxiety
Psychological involvement- otherwise bored, distracted
Consistency- provide same psychological state for all participants up until administration of IV
What are some things to think about when doing online data collection? What are some advantages and disadvantages?
Basically everything that applies to typical research applies to online research (methods, ethics, random assignment, etc.).
Advantages: feasibility, cost-effectiveness, avoiding experimenter bias (potentially increased internal validity), access to research participants.
Disadvantages: participant sampling (likely convenience samples), lack of control over collection, participant attrition, sabotage
What are some things to consider when thinking about doing case study research? How can you increase validity? Control criteria? Types?
In single-case research you will face a perceived lack of generalizability, a perceived lack of rigor, and the idea that it takes too long to conduct. It can be useful, though, for: rare phenomena, providing depth, showing limitations of theories, and providing hypotheses for more controlled research.
You can increase validity by:
using objectivity in measurement, using multiple sources of observations, doing frequent assessment & follow-up.
Control criteria by:
using a test case & a control case, standardizing treatment, implementing treatments as intended.
Types:
A-B-A (assessment, IV, remove IV)
A-B (IV expected to have a lasting change, not practical to take away)
A-B-C-B (introduce an additional control condition, C, to rule out alternative explanations for the effect of the IV)
What are two ways to analyze qualitative data?
1. Pattern matching- list hypotheses, list confirming/disconfirming info for each, and determine how the data fit the hypotheses.
2. Explanation building- search for patterns in the data using four steps: 1. organize 2. generate categories 3. test emerging hypotheses 4. search for alternative explanations.
What are some problems that you run into in field experiments? What are some different types of field experiments?
Problems:
Construct validity, control of extraneous variables, vulnerability to outside interference.
Types of field experiments:
Natural- events outside the experimenter's control manipulate the IV
Quasi- naturally occurring groups serve as experimental and control groups
Nonequivalent control group design- members of one group experience the IV and the others are controls (problems with pre-existing differences & biased selection)
Time-series- make observations of the DV, manipulate the IV, then make further observations of the DV
What is the researcher's role in the four types of naturalistic observation?
1. Researcher as complete participant
2. Researcher as participant observer (informs others of role as researcher/observer)
3. Researcher as observer participant (interacts no more than necessary)
4. Researcher as nonparticipant- no interaction, may or may not deceive
4 problems with naturalistic observation?
1. Cognitive biases (selective attention, biased interpretation of observations, reconstructive memory)
2. Record keeping- cognitive and normal biases
3. Reactivity- the observed may change their behavior to be seen in a better light
4. Influencing events- covert participation can contaminate observation (influencing people)
What are 5 elements of an interview that a researcher should implement?
1. Establish rapport
2. Listen analytically
3. Tactfully probe
4. Motivate
5. Maintain control
What are some different dimensions of coding systems and what are some best practices for developing a coding system?
Coding systems: theory-based vs ad hoc, broad vs narrow, number of coding categories, degree of inference required, unit of behavior, concurrent vs after-the-fact coding.
Best practices/rules:
1. Terms clearly defined
2. A category for every behavior
3. Each behavior must fit into only one category
4. Improve reliability by using a small number of broad, objectively defined coding items requiring little/no inference (objective), and code after the fact.
What are the three levels of participants?
1. Target population- who we want results to generalize to
2. Study population- members of target population who fit a particular operational definition of the target population
3. Research sample- members of study population who participate in the research
In regard to data collection there are two kinds of sampling: probability and non-probability. Describe the different methods for each and any other types.
PROBABILITY SAMPLING:
Simple random
Stratified random (sampling frame arranged in terms of variables to structure the sample)
Quota matrix- quotas based on the population you want to generalize to... people in each cell
Systematic sampling- start with a sampling frame (the whole list) and select every nth name, where n equals the frame size divided by the desired sample size (see the sketch below)
Cluster sampling- identify groups/clusters of people who meet the definition of the study population, then take a random sample from the clusters
NONPROBABILITY SAMPLING:
(Convenience sampling)
Haphazard/quota samples
OTHER TYPES:
Purposive- select members based on judgment (case studies)
Snowball- initial sample nominates acquaintances.
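A small sketch contrasting simple random and systematic sampling; the frame, names, and sizes are all invented for illustration:

```python
import numpy as np

rng = np.random.default_rng(3)
frame = [f"person_{i}" for i in range(1000)]  # hypothetical sampling frame

# Simple random sample of 50, without replacement
simple = rng.choice(frame, size=50, replace=False)

# Systematic sample: every nth name, n = frame size / desired sample size
n = len(frame) // 50          # here n = 20
start = rng.integers(n)       # random starting point within the first interval
systematic = frame[start::n]

print(len(simple), len(systematic))  # 50 50
```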
What are some strategies for missing data (nonimputational vs imputational)?
Nonimputational/does not substitute values:
Listwise deletion- drop the whole case if any value is missing
Pairwise deletion- compute each parameter using only the cases complete on the relevant variables
Imputational/substitutes values:
Mean substitution- replace missing values w/the mean from the data
Regression substitution- replace missing values with the predicted value from a regression using only the cases w/no missing data
(See the sketch below.)
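A pandas sketch of three of these strategies on a toy data frame (values invented; regression substitution is omitted for brevity):

```python
import numpy as np
import pandas as pd

# Toy data with missing values
df = pd.DataFrame({"x": [1.0, 2.0, np.nan, 4.0],
                   "y": [2.0, np.nan, 6.0, 8.0]})

listwise = df.dropna()           # listwise: drop any row with a missing value
pairwise = df.corr()             # pandas correlations use pairwise-complete data
mean_sub = df.fillna(df.mean())  # mean substitution, column by column

print(listwise, pairwise, mean_sub, sep="\n\n")
```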
When looking at studies that have volunteers vs non-volunteers, what are some things that are usually true about volunteer populations?
Better educated, higher social-class status, more intelligent, higher need for social approval, more sociable.
What are the 4 functions of pilot studies?
1. Determine if the experiment is needed
2. Test the validity of experimental manipulations
3. Final test of research procedures
4. Dress rehearsal for data collection sessions
What are two ways that research results are biased?
1. Theoretical bias- reflects the theoretical framework of the researcher
2. Personal bias- reflects the researcher's more general attitudes and values
What are 4 ways that we can ensure we are making valid inferences from research results?
1. Read statistics appropriately- consistent with the level of measurement, not exaggerating or ignoring differences between or within groups, using appropriate follow-ups to omnibus F tests.
2. Stay with empirically supported evidence, remembering that every operationalization is imperfect; talk about results descriptively- don't evaluate (not better/worse)
3. Causality can only be inferred from experimental research, but still be wary of alternative explanations
4. Do not generalize in the absence of supportive evidence
What are some ways the null hypothesis is used? What are sources of Type II errors (in the IV, DV, research design)? When we accept the null what are we implicitly saying?
Uses: testing, research validity, testing generalizability
Sources of Type II errors (beta; not finding significance when an effect exists)-
In the IV: construct validity, implementation, methodology (strength and salience of the manipulation).
In the DV: construct validity, sensitivity, restriction of range.
In the research design: detecting curvilinear relationships, extraneous variables, moderator variables, mediating variables, sample size.
When we accept the null we are implicitly saying that there is no effect, that no design flaws prevented us from finding one, & that we had enough power to find one if it did exist.
What should we consider when trying to define an optimal sample N?
Effect size (smaller effect needs larger N)
Alpha level (stricter, i.e. smaller, alpha needs larger N)
One or two tailed? Two-tailed requires larger N
Optimal level of power (always want over .50, .80 is better; higher levels = larger N)
(See the sketch below.)
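A sketch of the effect-size/N trade-off, assuming statsmodels is available; the effect sizes, alpha, and power values are illustrative choices, not from the source:

```python
from statsmodels.stats.power import TTestIndPower

power = TTestIndPower()

# Solve for N per group at alpha = .05, power = .80, two-tailed,
# for a small (d = .2) vs medium (d = .5) effect (Cohen's d)
for d in (0.2, 0.5):
    n = power.solve_power(effect_size=d, alpha=0.05, power=0.80,
                          alternative="two-sided")
    print(d, round(n))  # smaller effect -> larger N (d=.2 needs ~394/group)
```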
What is the difference b/w a one- and two-tailed test?
One vs two tails refers to where the critical area falls in the distribution. A one-tailed test puts the whole critical region in one tail; use it only when you hypothesize a specific direction (e.g., only an increase in the DV).
Two-tailed- looking for either an increase or a decrease falling within the critical range, which is defined by the alpha level. A two-tailed test splits the critical area between the two tails, which is why you need a larger N for the same power. (See the sketch below.)
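A scipy sketch of the one- vs two-tailed difference on invented data; since the t distribution is symmetric, the one-tailed p is half the two-tailed p when the effect lands in the predicted direction:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(4)
sample = rng.normal(0.3, 1.0, size=40)  # invented data with true mean 0.3

# Two-tailed: is the mean different from 0 in either direction?
t_two, p_two = stats.ttest_1samp(sample, 0.0)

# One-tailed (direction predicted in advance): halve the two-tailed p
# when the observed effect is in the predicted (positive) direction
p_one = p_two / 2 if t_two > 0 else 1 - p_two / 2

print(p_two, p_one)  # one-tailed p is smaller, so the same N buys more power
```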