106 Cards in this Set

  • Front
  • Back
empiricism
the process of learning things through direct observation or experience, and reflection on those experiences
a priori method
beliefs are deduced from statements about what is thought to be true according to the rules of logic
belief perseverance
unwillingness to consider any evidence that contradicts a strongly held view; similar to Peirce's principle of tenacity
confirmation bias
a tendency to search out and pay special attention to information that supports one's beliefs, while ignoring information that contradicts those beliefs
availability heuristic
social cognition bias that occurs when we experience unusual or very memorable events and then overestimate how often such events typically occur
determinism and discoverability
determinism means that events, including psychological ones, have causes; discoverability means that, by using agreed-upon scientific methods, these causes can be discovered with some degree of confidence
statistical determinism
events can be predicted, but only with a probability greater than chance
objectivity
said to exist when observations can be verified by more than one observer
empirical questions
those that can be answered through the systematic observations and techniques that characterize scientific methodology
theory
set of statements that summarize what is known about some phenomena and propose working explanations for those phenomena
falsification
research strategy advocated by Popper that emphasizes putting theories to the test by trying to disprove or falsify them
pseudoscience
applied to any field of inquiry that appears to use scientific methods and tries hard to give that impression, but is actually based on inadequate, unscientific methods and makes claims that are generally false
4 goals of scientific research in psychology
-description
-prediction
-explanation
-application
5 general principles reflecting ethical code as a whole
-beneficence and nonmaleficence
-fidelity and responsibility
-integrity
-justice
-respect for people's rights and dignity
mundane realism
refers to how closely a study mirrors real-life experiences
experimental realism
the extent to which a research study has an impact on the subjects, forces them to take the matter seriously, and involves them in the procedures
quantitative research
data are collected and presented in the form of numbers
qualitative research
results are presented not as statistical summaries, but as analytical narratives that summarize the project's main outcomes
2 important features of empirical questions
-must be answerable with data, qualitative/quantitative
-terms must be precisely defined
construct
a hypothetical factor that is not observed directly; its existence is inferred from certain behaviours and assumed to follow from certain circumstances
deduction
reasoning from a set of general statements toward the prediction of some specific event; if the theory is true, then X should occur with a probability greater than chance
induction
logical process of reasoning from specific events (results of many experiments) to the general (the theory)
modus tollens
the form of reasoning underlying falsification: if theory X is true, then result Y is expected to occur; if Y does not occur, then theory X must be false
3 attributes of good theories
-productivity - advance knowledge by generating a great deal of research
-falsification - they are stated precisely enough to generate predictions that can be tested and potentially disproven
-parsimony - they include the minimum number of constructs and assumptions that are necessary to explain the phenomenon adequately and predict future outcomes
replication
refers to a study that duplicates some or all of the procedures of some prior study
extension
resembles a prior study and usually replicates part of it, but goes further and adds at least one new feature
reliability
results are repeatable when the behaviours are remeasured
validity
it measures what it has been designed to measure
content validity
concerns whether or not the actual content of the items on a test "makes sense" in terms of the construct being measured
face validity
whether the measure seems to be valid to those who are taking it, and it is important only in the sense that we want those taking our tests and filling out our surveys to take the task seriously
criterion validity
concerns whether the measure (a) can accurately forecast some future behaviour or (b) is meaningfully related to some other measure of behaviour
construct validity
concerns whether a test adequately measures some construct, and it connects directly with the operational definition
convergent validity
scores on a test measuring some construct should be related to scores on other tests that are theoretically related to the construct
discriminant validity
scores on a test should not be related to scores on other tests that are theoretically unrelated to the construct
experiment
systematic research study in which the investigator directly varies some factor(s), holds all other factors constant, and observes the results of the variation
field research
any empirical research outside of the laboratory, including both experimental studies and studies using nonexperimental methods
situational variables
refer to different features in the environment that participants might encounter
task variables
type of task performed by the participants
instructional variables
manipulated by asking different groups to perform a particular task in different ways
extraneous variables
any variables that are not of interest to the researcher but which might influence the behaviour being studied if they are not controlled properly
confound
any uncontrolled extraneous variable that covaries with the independent variable and could provide an alternative explanation of the results
ceiling effect
occurs when the average scores for the different groups in the study are so high that no difference between them can be detected (the task is too easy)
floor effect
happens when all the scores are extremely low because the task is too difficult for everyone
subject variables
refer to the existing characteristics of the individuals participating in the study
statistical conclusion validity
concerns the extent to which the researcher uses statistics properly and draws the appropriate conclusions from the statistical analysis
construct validity (in expt'l research)
refers to the adequacy of the operational definitions for both the independent and the dependent variables used in the study
external validity
the degree to which research findings generalize beyond the specific context of the experiment being conducted
ecological validity
said to exist when research studies psychological phenomena in everyday situations
internal validity
the degree to which an experiment is methodologically sound and confound-free
history
when an event occurs between pre and posttesting that produces large changes unrelated to the treatment program itself
maturation
developmental changes that occur with passage of time
regression to the mean
if a score on a test is extremely high or low, a second score is likely to be closer to the mean score; can be a threat to the internal validity of a study if a pretest score is extreme and the posttest score changes in the direction of the mean
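A minimal simulation sketch of this idea (all numbers hypothetical): participants selected for extreme pretest scores drift back toward the mean on retest even with no treatment at all.

```python
# Observed scores = stable true score + random error, measured twice.
import numpy as np

rng = np.random.default_rng(0)
true_scores = rng.normal(100, 10, size=10_000)          # stable ability
test1 = true_scores + rng.normal(0, 10, size=10_000)    # pretest with error
test2 = true_scores + rng.normal(0, 10, size=10_000)    # posttest with error

extreme = test1 > 125                  # select only extreme pretest scorers
print(test1[extreme].mean())           # well above the mean of 100
print(test2[extreme].mean())           # noticeably closer to 100, with no treatment
```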
threats to internal validity (7)
history
maturation
regression to the mean
testing
instrumentation
attrition
subject selection effects
two ways of creating equivalent groups
random assignment
matching
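A minimal sketch of the first approach, simple random assignment, using hypothetical participant IDs:

```python
import random

participants = list(range(1, 21))      # 20 hypothetical participant IDs
random.shuffle(participants)
experimental = participants[:10]       # first half -> experimental group
control = participants[10:]            # second half -> control group
```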
between-subjects design
any experimental design in which different groups of participants serve in the different conditions of the study
within-subjects design
any experimental design in which the same participants serve in each of the different conditions of the study - aka repeated measures
block randomization
a procedure ensuring that each of the conditions of the study has a participant randomly assigned to it before any condition is repeated
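A minimal sketch of block randomization (condition labels are hypothetical): every condition appears once, in a random order, before any condition repeats.

```python
import random

conditions = ["A", "B", "C", "D"]
schedule = []
for _ in range(5):                 # 5 blocks -> 20 participants, 5 per condition
    block = conditions[:]
    random.shuffle(block)          # each block is a random ordering of all conditions
    schedule.extend(block)
print(schedule)
```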
2 conditions for matching
-you must have good reason to believe that the matching variable will have a predictable effect on the outcome of the study
-there must be some reasonable way of measuring or identifying participants on the matching variable
progressive effects
in a within-subjects design, any sequence effect in which the effects are assumed to accumulate steadily from trial to trial (e.g., practice or fatigue)
carryover effect
form of sequence effect in which systematic changes in performance occur as a result of completing one sequence of conditions rather than a different sequence
counterbalancing
the typical way to control sequence effect in a within-subjects design, using more than one sequence
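A minimal sketch of complete counterbalancing, in which every possible order of the (hypothetical) conditions is used equally often:

```python
from itertools import permutations

conditions = ["A", "B", "C"]
orders = list(permutations(conditions))        # 3! = 6 possible sequences
for participant, order in enumerate(orders, start=1):
    print(participant, order)                  # one sequence per participant (or subgroup)
```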
cohort sequential design
a group of subjects will be selected and retested every few years, and then additional cohorts will be selected every few years and also retested over time
experimenter bias
experimenters testing hypotheses sometimes may inadvertently do something that leads participants to behave in ways that confirm the hypothesis
hawthorne effect (participant bias)
when behaviour is affected by the knowledge that one is in an experiment and is therefore important to the study's success
demand characteristics
refer to those aspects of the study that reveal the hypothesis being tested
evaluation apprehension
participants may behave as they think the ideal person should behave
manipulation check
in debriefing, a procedure to determine if subjects were aware of a deception experiment's true purpose; also refers to any procedure that determines if systematic manipulations have the intended effect on participants
independent groups design
a between-subjects design that uses a manipulated independent variable and has at least two groups of participants; subjects are randomly assigned to the groups
matched groups design
a between-subjects design that uses a manipulated independent variable and has at least two groups of participants; subjects are matched on some variable assumed to affect the outcome before being randomly assigned to the groups
nonequivalent groups design
groups are made up of different kinds of individuals
2 things a t-test assumes
-that the data from the two conditions at least approximate a normal distribution
-homogeneity of variance - the variability of each of the sets of scores being compared ought to be similar
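A hedged sketch using SciPy (scores are hypothetical): check approximate normality and homogeneity of variance, then run the independent-samples t test.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
group1 = rng.normal(50, 10, 30)            # hypothetical scores, condition 1
group2 = rng.normal(55, 10, 30)            # hypothetical scores, condition 2

print(stats.shapiro(group1))               # rough check of approximate normality
print(stats.shapiro(group2))
print(stats.levene(group1, group2))        # homogeneity of variance
print(stats.ttest_ind(group1, group2, equal_var=True))
```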
single-factor multilevel designs
more than two levels
advantage of multilevel designs
enable researcher to discover nonlinear effects
placebo control group
led to believe they are receiving some treatment when in fact they aren't
waiting list control group
not yet receiving treatment but will, eventually; used to ensure that those in the experimental and control groups are similar
yoked control group
the treatment given a member of the control group is matched exactly with the treatment given a member of the experimental group
factorial design
any study with more than one independent variable
main effect
used to describe the overall effect of a single independent variable
interaction
said to occur when the effect of one independent variable depends on the level of another independent variable
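A minimal numeric illustration with hypothetical 2 x 2 cell means: the marginal means show the main effects, and unequal simple effects signal an interaction.

```python
import numpy as np

# rows = levels of IV A, columns = levels of IV B (hypothetical cell means)
cell_means = np.array([[70.0, 80.0],
                       [70.0, 60.0]])

print(cell_means.mean(axis=1))   # main effect of A: row marginal means [75, 65]
print(cell_means.mean(axis=0))   # main effect of B: column marginal means [70, 70]
print(cell_means[0, 1] - cell_means[0, 0],   # effect of B at A1: +10
      cell_means[1, 1] - cell_means[1, 0])   # effect of B at A2: -10 -> interaction
```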
mixed factorial design
a factorial design with at least one between-subjects factor and one within-subjects factor
P x E factorial designs
a factorial design with at least one subject factor and one manipulated factor
mixed P x E factorial
a mixed design with at least one subject factor and one manipulated factor
simple effect analysis
following an ANOVA, a follow-up test to a significant interaction, comparing individual cells
coefficient of determination (r^2)
the portion of variability in one of the variables in the correlation that can be accounted for by the variability in the second variable
criterion variable
the variable being predicted; the Y variable in a regression analysis
predictor variable
the variable used to make the prediction; the X variable in a regression analysis
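A minimal sketch (hypothetical data) tying these terms together: X is the predictor, Y is the criterion, and r squared gives the coefficient of determination.

```python
import numpy as np

rng = np.random.default_rng(2)
x = rng.normal(0, 1, 100)                  # predictor variable (X)
y = 0.6 * x + rng.normal(0, 1, 100)        # criterion variable (Y)

r = np.corrcoef(x, y)[0, 1]                # Pearson correlation
print(r, r ** 2)                           # r^2 = coefficient of determination
slope, intercept = np.polyfit(x, y, 1)     # regression line for predicting Y from X
print(slope, intercept)
```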
directionality problem
in correlational research, this refers to the fact that for a correlation between vars X and Y, it is possible that X is causing Y, but it is also possible that Y is causing X; the correlation alone provides no basis for deciding between the two alternatives
cross-lagged panel correlation
refers to a type of correlational research designed to deal with the directionality problem; if vars X and Y are measured at 2 different times and if X precedes Y, then X might cause Y but Y can't cause X
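A minimal sketch with simulated data in which X drives later Y but not the reverse, so the two cross-lagged correlations differ (all variables hypothetical):

```python
import numpy as np

rng = np.random.default_rng(3)
x1 = rng.normal(0, 1, 200)                 # X measured at time 1
y1 = rng.normal(0, 1, 200)                 # Y measured at time 1
x2 = 0.8 * x1 + rng.normal(0, 0.6, 200)    # X at time 2 (stable over time)
y2 = 0.5 * x1 + rng.normal(0, 0.8, 200)    # Y at time 2, influenced by earlier X

print(np.corrcoef(x1, y2)[0, 1])           # X1 -> Y2: sizeable if X drives Y
print(np.corrcoef(y1, x2)[0, 1])           # Y1 -> X2: near zero here
```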
partial correlation
a multivariate statistical procedure for evaluating the effects of 3rd variables; if the correlation between X and Y remains high, even after some 3rd factor Z has been "partialed out", then Z can be eliminated as a third variable
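A minimal sketch of a first-order partial correlation computed from the three pairwise Pearson correlations (variable names are hypothetical):

```python
import numpy as np

def partial_corr(x, y, z):
    """Correlation of x and y with the third variable z 'partialed out'."""
    rxy = np.corrcoef(x, y)[0, 1]
    rxz = np.corrcoef(x, z)[0, 1]
    ryz = np.corrcoef(y, z)[0, 1]
    return (rxy - rxz * ryz) / np.sqrt((1 - rxz ** 2) * (1 - ryz ** 2))
```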
split-half reliability
a form of reliability in which one-half of the items on a test are correlated with the remaining items
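A minimal sketch with hypothetical item data: correlate odd-item and even-item totals, then apply the Spearman-Brown correction for full test length.

```python
import numpy as np

rng = np.random.default_rng(4)
ability = rng.normal(0, 1, 100)
items = ability[:, None] + rng.normal(0, 1, (100, 20))   # 20 hypothetical test items

odd_total = items[:, 0::2].sum(axis=1)
even_total = items[:, 1::2].sum(axis=1)
r_half = np.corrcoef(odd_total, even_total)[0, 1]
r_full = 2 * r_half / (1 + r_half)         # Spearman-Brown corrected reliability
print(r_half, r_full)
```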
test-retest reliability
a form of reliability in which a test is administered on two separate occasions and the correlation between them is calculated
criterion reliability
the ability of the test to predict some future event
intraclass correlation
a form of correlation used when pairs of scores don't come from the same individual, as in studies of twins
factor analysis
a multivariate analysis in which a large number of variables are intercorrelated; variables that correlate highly with each other form "factors"
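A hedged sketch using scikit-learn's FactorAnalysis (one common tool among several), fit to hypothetical test scores built to load on two factors:

```python
import numpy as np
from sklearn.decomposition import FactorAnalysis

rng = np.random.default_rng(5)
verbal = rng.normal(0, 1, 200)
spatial = rng.normal(0, 1, 200)
# six hypothetical tests: three load on "verbal", three on "spatial"
scores = np.column_stack([verbal + rng.normal(0, 0.5, 200) for _ in range(3)] +
                         [spatial + rng.normal(0, 0.5, 200) for _ in range(3)])

fa = FactorAnalysis(n_components=2).fit(scores)
print(fa.components_)                      # loadings: which tests cluster together
```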
4 problems encountered in applied research
ethical dilemmas
a trade-off between internal and external validity
problems unique to between-subjects designs (reducing internal validity)
problems unique to within-subjects designs (counterbalancing and attrition)
quasi-experiment
exists whenever causal conclusions can't be drawn because there is less than complete control over the variables in the study, usually because random assignment is not feasible
interrupted time series design
O O O O T O O O O - a series of pretest observations (O), an interruption by the treatment (T), then a series of posttest observations
trends
predictable patterns of events that occur with the passing of time
interrupted time series with switching replications
Group 1: O O O T O O O O O O O
Group 2: O O O O O O O T O O O
(the treatment is introduced to a second group at a later point in the series, replicating the effect)
content analysis
any systematic examination of qualitative information in terms of predefined categories
strength of archival research
amount of information available virtually unlimited
4 components of program evaluation
-procedures for determining if a true need exists for a particular program and who would benefit if it was implemented
-assessments of whether a program is being run according to plan and, if not, what changes can be made to facilitate its operation
- methods for evaluating program outcomes
-cost analysis to determine if program benefits justify the funds expended
4 ways of identifying the potential need for a program
-census data
-surveys of available resources
-surveys of potential users
-key informants, focus groups, and community forums
formative evaluation
form of program evaluation that monitors the functioning of a program while it is operating to determine if it is functioning as planned
program audit
an examination of whether a program is being implemented as planned; a type of formative evaluation
summative evaluations
form of evaluation completed at the close of a program that attempts to determine its effectiveness in solving the problem for which it was planned