
113 Cards in this Set

  • Front
  • Back

4 Ways of Knowing Something to be True

1. Authority

2. Reasoning

3. Experience

4. Scientific Method

Authority

Accept validity of information from a source we judge to be an expert

Reasoning

Arrive at conclusions using rules of logic and reasoning (a priori method) without relying on experience

Experience

Learning through direct observation or experience

Scientific Method

Attempt to apply systematic, objective and empirical methods to search for the cause of natural events

Empiricism

Process of learning things through direct observation or experience

Belief Perseverance

Once a belief is established, tendency to hold on to belief even in face of contradictory evidence

Confirmation Bias

Tendency to search out and pay attention only to information supporting your beliefs and ignoring contradictory information

Availability Heuristic

Overestimating how often events occur because unusual or memorable instances come easily to mind

Illusory Correlation

Belief that one has observed an association between events when no such correlation actually exists

Probabilistic Statistical Determinism

Determining if the probability that two events occur together is greater than chance

Data-Driven

Conclusions are based on empirically gathered evidence

Goals of Psychology Research

1. Description

2. Prediction

3. Explanation

4. Application

Description

Identify regularly occurring sequences of events, including both stimuli or environmental events and responses or behavioral events

Prediction

Regular and predictable relationships exist for psychological phenomena

Explanation

Explain a behavior and know what caused it

Application

Ways of applying principles of behavior learned through research

5 Criteria for Scientific Method

1. Empirical

2. Objective

3. Systematic

4. Controlled

5. Public

Empirical

Based on observation

Objective

Verified by others

Systematic

Made in a step-by-step fashion

Controlled

Confusing factors eliminated

Public

Built on previous research and open to critique and replication

Assumptions about Behaviors or Observations

1. Lawful

2. Determinism

3. Discoverability

Lawful

Every event is understood as a sequence of natural causes and effects

Determinism

Events and behaviors have causes

Discoverability

Through systematic observation, can determine causes of events and come up with explanations via repeated discoveries

Basic Research
Describing, predicting and explaining fundamental principles of behavior and mental processes
Applied Research
Direct and immediate relevance to real-world problems
Laboratory Research
Performed in a controlled environment
Field Research
Environment more closely matches the situations we encounter in daily living
Mundane realism
How closely a study mirrors real-life experiences (field research)
Experimental realism
Concerns the extent to which a study forces participants to take the matter seriously
Experimental confederate
A person who appears to be part of the normal environment (e.g., a fellow participant or bystander) but is actually working for the experimenter
Quantitative Research
Data are collected and presented in the form of numbers
Qualitative Research
Data not presented as numbers
Operational Definition
Experiment measures defined in terms of a set of “operations” or procedures to be performed
Serendipity
Discovering something while looking for something else entirely
Teleology
Phenomena are best comprehended and depicted with regard to their reasons and functions instead of their causes
Definition of a Theory
A set of logically consistent statements about a psychological phenomenon that:


  1. best summarizes existing empirical knowledge of phenomenon
  2. organizes knowledge in the form of precise statements of the relationship among variables
  3. provides a tentative explanation for the phenomenon
  4. serves as a basis for making predictions about behavior
Relationship between Theory and Data
  1. Deduction – reasoning starts with a general theory and applying to a specific scenario
  2. Hypothesis – prediction about events derived from a theory
  3. Induction – reasoning going from specific information and coming up with a general theory
Attributes of Good Theories
  1. Productive – generates numerous research studies
  2. Falsifiable – possible to be proven wrong
  3. Parsimonious – includes the minimum number of assumptions necessary (as simple as possible)
Program of research
Building research upon past research
Replication
Study that duplicates some or all of the procedures of a prior study
Extension
Resembles a prior study and usually replicates part of it, but it goes further and adds at least one new feature
Partial replication
Part of the study that replicates a portion of the earlier work
Pseudoscience
Appears to use scientific methods and tries hard to give that impression but is actually based on inadequate, unscientific methods and makes claims that are generally false or overly simplistic


  1. Associates itself with published research
  2. Accepts anecdotal evidence without question
  3. Avoids disproof
  4. Abridges (or reduces/oversimplifies) the complexities of life
Effort Justification
Idea is that after people expend significant effort, they feel compelled to convince themselves the effort was worthwhile
Anecdotal Evidence
Specific instances that seem to provide evidence for some phenomenon.
General Principles (A through E)
  1. Principle A - Beneficence and Nonmaleficence
  2. Principle B - Fidelity and Responsibility
  3. Principle C - Integrity
  4. Principle D - Justice
  5. Principle E - Respect for People's Rights and Dignity
8.01 Institutional Approval
Provide accurate information about research proposals, obtain approval prior to conducting the research, and conduct the research in accordance with the approved research protocol
8.02 Informed Consent (IC) to Research
Participants must be informed about:


  • research purpose, expected duration, and procedures
  • right to decline/withdraw (+ foreseeable consequences)
  • potential risks, discomfort, or adverse effects
  • prospective research benefits
  • limits of confidentiality
  • incentives for participation
  • whom to contact for questions about the research and participant rights.


Research involving use of experimental treatments, must clarify to participants at start of research:


  • experimental nature of treatment;
  • services that will/will not be available to the control group if appropriate;
  • how assignment to treatment/control groups is made;
  • available treatment alternatives if a person does not want to participate in the research or withdraws; and
  • compensation for or monetary costs of participating (including whether reimbursement from the participant or a third-party payor will be sought)
8.03 IC for Recording Voices & Images
Obtain informed consent from participants before recording/image collection unless:


  1. the research consists only of naturalistic observations in public places and the recording will not cause identification or harm, or
  2. the research design includes deception, and consent for use of the recording is obtained during debriefing
8.04 Client, Student, & Subordinate Res Subj
  1. When conducting research with clients/patients, students, or subordinates as participants, take steps to protect the participants from adverse consequences of declining or withdrawing from participation
  2. If participation is a course requirement or an opportunity for extra credit, the participant is given a choice of equitable alternative activities
8.05 Dispensing With Informed Consent
Dispense with informed consent only when:




  1. Research would not reasonably be assumed to create distress or harm and involves:

  • Educational practices, curricula, or classroom management methods in educational settings;
  • Only anonymous questionnaires, naturalistic observations, or archival research where responses would not place participants at risk or damage their standing, and confidentiality is protected; or
  • Study of factors related to job or organization effectiveness conducted in organizational settings where there is no risk to participants' employability, and confidentiality is protected; or
  2. Where otherwise permitted by law or regulations
8.06 Offering Inducements
  1. Make reasonable efforts to avoid offering excessive/inappropriate gifts for research participation when such inducements are likely to coerce participation
  2. When offering professional services for research participation, clarify the nature of the services, as well as the risks, obligations, and limitations.
8.07 Deception
  1. Do not conduct study involving deception unless determined use of deceptive techniques is justified by the study's significant potential value & effective nondeceptive alternative procedures are not feasible
  2. Do not deceive prospective participants about research that might cause pain or emotional distress
  3. Explain any deception that is an integral feature of the experiment to participants as early as is feasible, preferably at the end of their participation, but no later than the end of data collection, and permit participants to withdraw their data
8.08 Debriefing
  1. Provide opportunity for participants to obtain appropriate information about the nature, results & conclusions of the research & take reasonable steps to correct any misconceptions participants may have of which the researchers are aware
  2. If justified to delay/withhold information, take reasonable measures to reduce the risk of harm
  3. When aware research procedures harmed participant, take reasonable steps to minimize the harm
8.09 Humane Care and Use of Animals
  1. Acquire, care for, use, & dispose animals in compliance with laws/regulations & professional standards
  2. Be trained in research methods and experienced in care of laboratory animals/ensure appropriate consideration of comfort, health, & humane treatment
  3. Ensure others receive instruction in research methods and in care, maintenance, & handling of species being used, to extent appropriate to their role
  4. Minimize discomfort, infection, illness, and pain of animals
  5. Use procedure subjecting animals to pain, stress, or privation only when alternative procedure is unavailable and the goal is justified
  6. Must perform surgical procedures under appropriate anesthesia & avoid infection & minimize pain
  7. If animal’s life is terminated, proceed rapidly & minimize pain in accordance with accepted procedures
8.10 Reporting Research Results
  1. Psychologists do not fabricate data.
  2. If discover significant errors in published data, take reasonable steps to correct such errors in a correction, retraction, erratum, or other appropriate publication means
8.11 Plagiarism
Do not present portions of another's work or data as one's own, even if the other work or data source is cited occasionally
8.12 Publication Credit
  1. Take responsibility/credit, including authorship credit, only for work actually performed or contributed
  2. Principal authorship & publication credits accurately reflect relative scientific/professional contributions of individuals involved, regardless of status. Mere possession of an institutional position (e.g., department chair) doesn’t justify credit
  3. Except under exceptional circumstances, student is listed as principal author on multiple-authored article that is substantially based on student's doctoral dissertation. Faculty advisors discuss publication credit with students early & throughout research and publication process as appropriate
8.13 Duplicate Publication of Data
Do not publish data previously published. This does not preclude republishing data when they are accompanied by proper acknowledgment
8.14 Sharing Research Data for Verification
  1. After research results published, do not withhold data which conclusions are based from others seeking to verify substantive claims through reanalysis & who intend to use data only for that purpose, provided that confidentiality of participants can be protected & unless legal rights concerning proprietary data preclude their release. Does not preclude from requiring that such individuals/groups be responsible for costs associated with provision of such information
  2. Requested data from other psychologists to verify substantive claims through reanalysis may be used only for declared purpose. Requesting psychologists obtain prior written agreement for all other uses
8.15 Reviewers
Psychologists who review material submitted for presentation, publication, grant, or research proposal review respect confidentiality of and proprietary rights in such information of those who submitted it
Assent versus Consent in research
  • Consent may only be given by individuals who have reached the legal age of consent (in the U.S. this is typically 18 years old)
  • Assent is the agreement of someone not able to give legal consent to participate in the activity
  • Work with children or adults not capable of giving consent requires the consent of the parent or legal guardian and the assent of the subject.
NIH Revitalization Act
Women and minorities must be included in clinical research supported by NIH
Ethical Considerations with diverse populations
  • 1993 NIH Revitalization Act
  • Sufficient numbers to allow valid evaluation of ethnic differences for major ethnic groups
  • Potential for coercion with research incentives to low income populations
  • Difficulty obtaining informed consent from low literacy or non-English speaking immigrant populations
  • Dissemination – need to report findings back to communities of interest
Milgram study
  • Experiment about obeying authority figures, even when instructions are morally wrong
  • In response to notorious Nazi war criminal trials
  • People thought that they had caused suffering to another human being
  • Participants could develop severe emotional distress
Tuskegee syphilis study
Roughly 400 African American men with syphilis were left untreated between 1932 and 1972 so researchers could study the effects of the disease
What to Include in a Protocol for PAU IRB
  • In a university/college setting, this group consists of at least five people, usually faculty members from several departments and including at least one member of the outside community and a minimum of one nonscientist
  • In place for any college/university receiving federal funds for research.
  • An important component of an IRB’s decision about a proposal involves determining the degree of risk to be encountered by participants.
Authorship Credit and Authorship Order on Faculty–Student Collaborations
Depends on whose idea it is and who does the majority of the work
Different sections of a manuscript/article and what information/content they contain
  1. Title page
  2. Abstract
  3. Introduction
  4. Method
  5. Results
  6. Discussion
  7. References
Qualitative
categorizing
Quantitative
describing size
Nominal Scale
Observations labeled/categorized
Ordinal Scale
Ranking observations in terms of size or magnitude
Interval Scale
EQUAL differences (or intervals) between numbers on the scale reflecting differences in magnitude; “Ratios” of interval magnitudes = not meaningful
Ratio Scale
Numbers that reflect a ratio-relationship of magnitudes
Discrete Variables
Consists of separate, indivisible categories
Dichotomous Variables
Only two categories (yes/no)
Continuous Variables
  • infinite # of possible values fall between any two scores or ratings
  • Range is divisible into an infinite number of fractional segments/sub-measurements
  • Boundaries that separate intervals are called real limits
Descriptive Statistics
summarize the data collected from the sample of participants in your study
Inferential Statistics
draw conclusions about your data that can be applied to the wider population.
Sample
members of a defined group that are participants of the particular study
Population
population consists of all members of a defined group
Mean
average
Median
middle value of a range
Mode
most frequent value
Outliers
scores far removed from the other scores
Standard Deviation
average amount scores deviate from mean
Variance
standard deviation squared
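The descriptive statistics above can be computed directly. A minimal sketch in plain Python, using a small hypothetical set of quiz scores (the outlier shows why the median can represent the typical score better than the mean):

```python
# Hypothetical sample of 8 quiz scores; 40 is an outlier.
scores = [82, 85, 85, 88, 90, 91, 93, 40]

n = len(scores)
mean = sum(scores) / n                     # average
mode = max(set(scores), key=scores.count)  # most frequent value

ordered = sorted(scores)
mid = n // 2
# Median: middle value (average of the two middle values when n is even)
median = ordered[mid] if n % 2 else (ordered[mid - 1] + ordered[mid]) / 2

# Variance: mean squared deviation from the mean (population formula);
# the standard deviation is its square root.
variance = sum((x - mean) ** 2 for x in scores) / n
sd = variance ** 0.5

print(mean, median, mode)  # the outlier pulls the mean well below the median
```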
Construct
abstract idea; hypothetical factor; cannot be observed directly
Operational Definition
a systematic, replicable way of measuring our best estimate (i.e., PROXY) of the construct.
Reliability
reproducibility - consistent and repeatable, representing minimal measurement error
Validity
measuring the intended construct and not something else
Assumptions of Classical Test Theory
  1. True score is constant
  2. Error is random
  3. The correlation between the true scores and errors is 0
  4. The correlation between errors of different measurement occasions also equals 0 (error is assumed to be RANDOM)
Classical Test or Reliability Theory
  1. Observed score = true score + (single) error
  2. Thus, in the long run, the sum and the average of errors should equal 0
  3. Averaging multiple observations (observed scores) should give good estimate of true score
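Point 3 can be illustrated with a small simulation (the true score of 70 and the error SD of 3 are arbitrary values chosen for illustration): averaging many observed scores, each equal to the true score plus random error, converges on the true score because the random errors cancel out.

```python
import random

random.seed(42)        # reproducible illustration
true_score = 70.0

# Classical test theory: observed score = true score + random error
def observe():
    return true_score + random.gauss(0, 3.0)  # error ~ N(0, SD = 3)

observations = [observe() for _ in range(10_000)]
estimate = sum(observations) / len(observations)

# Random errors sum toward 0 in the long run, so the mean of the
# observed scores approaches the true score as observations accumulate.
print(round(estimate, 1))
```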
Generalizability Theory
  1. Error is separated into pieces, each of which can be estimated (if we collect the data correctly)
  2. Observed score = true score + multiple error terms
Test-Retest Reliability
  1. Reproducibility of values of a variable when you measure the same participants twice or more
  2. The test is administered on two separate occasions and the correlation between them is calculated
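A sketch of step 2, using hypothetical test and retest weights (kg) for six participants; the Pearson r is computed from scratch so the formula (covariance divided by the product of the standard deviations) is visible:

```python
# Hypothetical test (t1) and retest (t2) weights in kg for 6 participants
t1 = [72.2, 68.0, 75.5, 80.1, 65.3, 70.8]
t2 = [70.1, 67.5, 74.9, 79.0, 66.0, 70.2]

def pearson_r(x, y):
    """Pearson correlation: covariance / (SD_x * SD_y)."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = sum((a - mx) ** 2 for a in x) ** 0.5
    sy = sum((b - my) ** 2 for b in y) ** 0.5
    return cov / (sx * sy)

r = pearson_r(t1, t2)
print(round(r, 2))  # close to 1.00 -> high test-retest reliability
```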
3 Important Components of Retest Reliability
1. Change in the Mean

A. Random change - due to “typical” error



  • “Typical error” think about a randomly selected number added to or subtracted from the true value every time you take a measurement
  • Larger sample sizes = less random change in mean
  • Infinite sample size = zero random change in mean (random differences cancel each other out!)
B. Systematic change - non-random change in the value between two trials.


  • E.g., Better performance because exercising more.
  • E.g., Worse performance due to fatigue or lower motivation.
  • Less problematic for controlled studies. Why? - Can give tests with small learning effects, or get participants to perform practice trials to reduce learning effects.

2. Measurement Error - The participants didn't have exactly the same weight in the first and second tests.



  • If you reweighed one participant many times, with 2 weeks between each weighing, scores could be 72.2, 70.1, 68.5, 69.9, 67.9, 69.6...
  • Random contribution to this variation is due to the typical error.
  • Measurement Error - Another type of error, produced by any factor that introduces inaccuracies into the measurement of a variable.
  • Can calculate the standard error (SE) of measurement = estimates the standard deviation of errors in measurement; related to the SD of participants’ T1 vs. T2 weights

3. Retest Correlation - r of 1.00 represents perfect agreement between tests, and 0.00 represents no agreement at all.



  • In the weight example the correlation is 0.95, which is high reliability.
  • Different standards for adequate reliability (e.g., .80, .70)
  • higher for short term retests ~ 2 weeks
  • lower for long term retests >1-2 months
  • Many traits of interest change over time, independent of the stability of the measure
  • Pearson correlation is acceptable when comparing 2 tests, but overestimates true r for small sample sizes (< 15)
  • The intraclass correlation coefficient (ICC) is a better retest correlation for small samples or multiple tests
  • Pearson r and ICC are unaffected by shift in mean on retest.
  • Example - weights being lower in the second test has no effect on the correlation coefficient
Kappa Coefficient
  • Reliability of Nominal Variables - Consistency of categorical responses
  • For example, how consistent are raters in their categorizing of children’s behavior as disruptive, cooperative, or withdrawn?
  • It is analogous to a correlation coefficient and has the same range of values (-1 to +1).
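Cohen's kappa, the standard two-rater version of this coefficient, corrects raw agreement for the agreement expected by chance. A sketch with hypothetical ratings of ten children:

```python
from collections import Counter

# Two raters independently classify 10 children as Disruptive (D),
# Cooperative (C), or Withdrawn (W) -- hypothetical ratings.
rater1 = ["D", "C", "C", "W", "D", "C", "W", "C", "D", "C"]
rater2 = ["D", "C", "C", "W", "C", "C", "W", "C", "D", "D"]

n = len(rater1)
p_observed = sum(a == b for a, b in zip(rater1, rater2)) / n  # raw agreement

# Chance agreement from each rater's marginal category proportions
c1, c2 = Counter(rater1), Counter(rater2)
p_chance = sum(c1[k] * c2[k] for k in c1.keys() | c2.keys()) / n ** 2

# kappa = (observed - chance) / (1 - chance); 1 = perfect, 0 = chance-level
kappa = (p_observed - p_chance) / (1 - p_chance)
print(round(kappa, 2))
```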
Split-Half Reliability
  • A way to measure the internal consistency of a measure
  • One-half of the items on a test are correlated with the remaining items (e.g., even-numbered items correlated with the odd-numbers items).
  • Divides the items on the scale into two halves.
  • The scores on the 2 halves are then correlated.
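A minimal sketch of the even/odd split, with hypothetical responses (5 participants by 6 items): sum each half per participant, then correlate the half-scores.

```python
# Hypothetical responses: rows = 5 participants, columns = 6 test items
responses = [
    [4, 5, 3, 4, 5, 4],
    [2, 3, 2, 2, 3, 3],
    [5, 5, 4, 5, 4, 5],
    [3, 2, 3, 3, 2, 2],
    [4, 4, 5, 4, 4, 5],
]

odd_half = [sum(row[0::2]) for row in responses]   # items 1, 3, 5
even_half = [sum(row[1::2]) for row in responses]  # items 2, 4, 6

def pearson_r(x, y):
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = sum((a - mx) ** 2 for a in x) ** 0.5
    sy = sum((b - my) ** 2 for b in y) ** 0.5
    return cov / (sx * sy)

split_half_r = pearson_r(odd_half, even_half)  # high r -> internally consistent
```

In practice the split-half r is often adjusted upward with the Spearman-Brown formula, since each half is only half the length of the full test.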
Kuder-Richardson coefficient
Used for measures with dichotomous responses (yes/no)
Cronbach's alpha
  • Same as alpha reliability
  • Used in conjunction with a Likert scale type item (e.g., range in severity from 1-10).
  • Each item is compared with each other item to assure that the rating scale is consistent.
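The item-comparison idea above reduces to a simple formula: alpha = k/(k-1) * (1 - sum of item variances / variance of total scores). A sketch with hypothetical Likert responses (5 participants by 4 items):

```python
# Hypothetical Likert responses: rows = 5 participants, columns = 4 items
data = [
    [5, 4, 5, 4],
    [2, 3, 2, 3],
    [4, 4, 5, 5],
    [3, 2, 2, 3],
    [5, 5, 4, 4],
]

def sample_variance(xs):
    m = sum(xs) / len(xs)
    return sum((x - m) ** 2 for x in xs) / (len(xs) - 1)

k = len(data[0])  # number of items
item_vars = [sample_variance([row[i] for row in data]) for i in range(k)]
total_var = sample_variance([sum(row) for row in data])

# alpha = k/(k-1) * (1 - sum(item variances) / variance of total scores);
# values near 1 mean the items vary together, i.e. high internal consistency
alpha = (k / (k - 1)) * (1 - sum(item_vars) / total_var)
```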
Interobserver or Interrater Reliability
  • Two or more observers watch the same event and independently record variables according to a pre-determined coding system
  • A correlation coefficient is computed to demonstrate the strength of the relationship between one observer’s rating and the other’s
Item to Total Correlation
Measures the r of each of the items to the total scale. Items with a low r can be deleted.
Parallel Items on Alternate Forms
  • If items are truly parallel, they will have identical true scores. Responses to parallel items will differ only with respect to random fluctuations.
  • Difficult to develop comparable items.
  • E.g., standard and alternate form of a memory list of words. California Verbal Learning Test
Standards of Reliability
  • .80 reliability is a good benchmark for most psychological research measures
  • .90-.95 is better for application (e.g., diagnoses, personnel decisions).
Factors that affect reliability
  • Length of test - more items = more reliable
  • timing of test - more time spent on a test = more reliable
  • Group heterogeneity - the greater the variance of scores (more varied the sample is) the more reliable the test scores will be for that sample/group. Variance is needed to have reliability.
  • Item difficulty - The more varying of difficulty among the test items, the more reliable the test.
Limitations of classical measurement theory
  • Based on linear combinations (e.g., sum of the scores to get a total)
  • CMT is more sample based.
Item Response Theory as an alternative
  • Item based. Concerned about WHICH items are correct. Not all items may have the same value. CMT is more sample based.
  • Differences in ability to discriminate between test takers
  • Tests that use CMT are more reliable at discriminating between test takers who are in the middle of the distribution (e.g. average range) than those at the extreme ends of the distribution.
  • IRT discriminates well for all parts of distribution.