51 Cards in this Set

  • Front
  • Back
What are the steps in the Scientific Method?
1) Develop a Hypothesis: A theory-based statement about the expected outcome of the study that entails your IV and DV. Operational definition: a precise description of what you're measuring, in observable, measurable terms.
2) Design and Perform a Controlled Test: IV: that which you manipulate. DV: the outcome measure obtained from participants.
3) Gather Objective Data: Note: needs to be measurable.
Types of Measurements:
A. Self Report - Questionnaires, Surveys, Interviews
B. Behavioral Measure - Observed behavior, Unobtrusive observation, Naturalistic observation
C. Physiological Measure - Heart rate, Galvanic Skin Response
4) Analyze, Interpret, and Report Findings: statistical t-tests, correlations, analyses of variance.
5) Publish Findings: Submit for criticism and replication; submit to a peer-reviewed journal or present at a conference.
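To make step 4 concrete, here is a minimal sketch of the statistics named above (t-test, analysis of variance, correlation) run on invented scores; it assumes SciPy is installed, and every group name and number is hypothetical.

```python
# Minimal sketch of step 4 (analyze the data); all values are hypothetical.
from scipy import stats

control   = [72, 68, 75, 70, 69, 74]   # invented control-group scores (DV)
treatment = [78, 82, 75, 80, 77, 83]   # invented treatment-group scores (DV)

# Independent-samples t-test: compares the two group means.
t, p = stats.ttest_ind(treatment, control)
print(f"t = {t:.2f}, p = {p:.4f}")

# One-way analysis of variance: generalizes the comparison to 2+ groups.
f, p = stats.f_oneway(control, treatment)
print(f"F = {f:.2f}, p = {p:.4f}")

# Correlation between two measured variables (e.g., hours studied vs. score).
hours = [1, 2, 3, 4, 5, 6]
r, p = stats.pearsonr(hours, treatment)
print(f"r = {r:.2f}, p = {p:.4f}")
```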
What are the different research designs and methods?
1) Experiment: A study in which you manipulate one or more IVs. The IVs have levels. You must have a DV, which is what you observe and measure. You systematically manipulate a variable to derive a causal explanation.
2) Quasi-Experiment: You take advantage of some naturally occurring event or preexisting condition. At least one variable cannot be directly manipulated.
3) Correlational: You measure two or more variables and look for a relationship between them. Variables do not have levels; you examine the direction, magnitude, and form of the observed relationships. NON-EXPERIMENTAL research. Predictor variable: the variable used to predict. Criterion variable: the variable whose value is being predicted, the "outcome variable."
4) Simple Experiment: Looking at a single variable with two levels (e.g., treatment vs. control).
5) Complex Experiment: Looking at multiple variables and/or multiple levels.
6) Critical Experiment: When comparing 2 or more theories, you critically evaluate the theories against each other, but you have to be careful about too many variables, etc.
How to get ideas:
Define Experience.
Unsystematic Observation (Curiosity) - curiosity about the causes or determinants of commonplace, everyday behavior. Systematic Observation (Planned) - you decide what you are going to observe, how you are going to observe it, and how you will record your observations.
How to get ideas:
Define Theory.
A set of assumptions about the causes of behavior and rules that specify how those causes act. Designed to account for known relationships among given variables and behavior, theories can also be a rich source of research ideas.
Theories can lead to the development of research questions:
First, a theory allows you to predict the behavior expected under new combinations of variables.
Second, when two or more alternative theories account for the same initial observation, you can design a critical experiment to determine which theory provides the better account.
How to get ideas:
Define Applied Issues.
Arise from the need to solve practical problems. Applied research is problem oriented.
How to get ideas:
Define Introspection.
Observation or examination of one's own mental and emotional state, mental process, etc.; the act of looking within oneself.
How to get ideas:
Define Internal Validity.
The ability of your research design to adequately test your hypothesis.
How to get ideas:
Define External Validity.
Results from a study can be extended (generalized) beyond the limited research setting and sample in which they were obtained.
What are the Threats to Internal Validity?
Maturation - Changes within the participants themselves (adaptability).
Example: could be a physical condition such as weight loss. A participant could have lost weight due to a medical condition rather than because of the study itself.

Instrumentation - Changes in the measuring device.
Example: a poorly designed test, or something as simple as an inaccurate stopwatch.

Mortality - Participants drop out, die, or do not return. Differential loss of participants from the groups of a study results in non-equivalent groups.
Example: a participant drops out, dies, or moves away.

Testing - Ceiling effect: everyone does well (the test is not sensitive enough).
Example: a survey or questionnaire that unintentionally clues participants in to what is being tested.

Statistical Regression - Changes in extreme scores over time: they gravitate toward the mean.
Example: a reading-program test. You choose participants who have poor grades (extreme scores); the program may not be what makes participants do better over the course of the study, it could be statistical regression (see the simulation sketch after this list).

Selection - Sample size, randomness, appropriate population; how you choose and recruit participants.
Example: if you choose too small a sample, you cannot generalize from it. (Randomness: the Hillsborough example. You cannot take a sample from an affluent community when studying education in general; you need a more random sample, drawn from the appropriate population.)

Selection-by-maturation interaction - One group changes faster than the other along a given dimension.
Example: EPA vs. Hillsborough schools. If you are interested in math scores and pick two high schools of different SES, one group will learn math faster because of different upbringings: one had access to books, family support, etc.

History - Events outside the participants, which can affect the DV.
Example: a fire drill goes off in the middle of the survey or experiment; this would affect the DV.

Diffusion or imitation of treatments - Reduction in group differences due to repeated blocks or exposure, or via participants communicating with others.
Example: give the same test over and over and people will learn it; habituation can be a form of diffusion or imitation. People talk amongst each other before taking a survey; if someone tells you it's going to be boring, you do not put any effort into it.
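The simulation mentioned under Statistical Regression: a minimal sketch, with entirely made-up numbers, showing that participants recruited for extreme low scores on a noisy test move toward the mean on retest even when nothing about them changes.

```python
# Regression to the mean: retest participants selected for extreme scores.
# All numbers are invented for illustration; assumes NumPy is installed.
import numpy as np

rng = np.random.default_rng(0)
ability = rng.normal(100, 10, size=10_000)        # stable underlying trait
test1 = ability + rng.normal(0, 10, size=10_000)  # noisy first measurement
test2 = ability + rng.normal(0, 10, size=10_000)  # independent retest noise

# "Recruit" the 10% of participants with the poorest first-test scores.
selected = test1 < np.percentile(test1, 10)
print(f"test 1 mean (selected group): {test1[selected].mean():.1f}")
print(f"test 2 mean (same group):     {test2[selected].mean():.1f}")
# The retest mean drifts back toward 100 with no intervention at all,
# which could be mistaken for a treatment effect.
```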
Define Validity.
Validity- The validity of a measure is the extent to which it measures what you intend it to measure
What are the types of Reliability and Validity?
Reliability:
*Reliability of Physical measure
*Reliability of Population estimates
*Reliability of Judgements or Ratings by multiple observers
*Reliability of Psychological Tests or Measures

Validity
*Face Validity
*Content Validity
*Criterion-related validity
*Construct validity
Define the types of Reliability.
*Reliability of Physical measure

*Reliability of Population estimates - Margin of error: the precision of the estimates

*Reliability of Judgements or Ratings by multiple observers - Interrater reliability: tests the degree of agreement among observers

*Reliability of Psychological Tests or Measures - Test-retest involves administering the same test twice, separated by a long interval of time. Parallel forms is the same as test-retest, except the form of the test used the first time is replaced by a parallel form the second time.
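As a minimal sketch of the test-retest idea, assuming SciPy and invented scores: the reliability coefficient is simply the correlation between the two administrations.

```python
# Test-retest reliability: correlate scores from two administrations
# of the same test. Scores are invented; assumes SciPy is installed.
from scipy.stats import pearsonr

first_administration  = [12, 18, 25, 30, 22, 15, 28, 20]
second_administration = [14, 17, 27, 29, 23, 16, 26, 21]

r, _ = pearsonr(first_administration, second_administration)
print(f"test-retest reliability: r = {r:.2f}")  # closer to 1.0 = more reliable
```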
Define the types of Validity.
*Face Validity - Describes how well a measurement instrument appears to measure what it was designed to measure

*Content Validity - Has to do with how adequately the content of a test samples the knowledge, skills, or behaviors that the test is intended to measure

*Criterion-Related Validity - Reflects how adequately a test score can be used to infer an individual's value on some "criterion" measure. Concurrent: the scores on your test and the criterion are collected at about the same time. Predictive: you compare the scores on your test with the value of a criterion measure observed at a later time.

*Construct Validity - Applies when a test is designed to measure a "construct," which is a variable, not directly observable, that has been developed to explain behavior on the basis of some theory.
Define Reliability.
Reliability - The reliability of a measure concerns its ability to produce similar results when repeated measurements are made under identical conditions. The more variability you observe, the less reliable the measure.
What are the types of experiments?
Quantitative: Uses quantifiable data (basic research)
1) Experiments and Quasi-Experiments
2) Correlational studies (archival research, surveys)
3) Archival research (hospital records etc.)

Qualitative: Uses descriptive data (applied research)
1) Field Studies - naturalistic observation; obtrusive (participants are aware they are being watched) or unobtrusive (they are unaware they are being watched)
2) Case Studies - a single case (1 person) or a small group
3) Ethnography - seeking some form of understanding of a group of people (e.g., an ethnic group); it doesn't necessarily have to be cultural, it could be a subculture
Correlation vs Causation
Define the properties.
Correlation: Changes in one variable accompany changes in another

Causal Relationship: One variable directly or indirectly influences another

*Note: Correlation does not equal causation. Two variables can be related and change together without one causing the other; you cannot say the predictor causes the criterion.
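A minimal simulation of the note above, with invented variables: two measures driven by a hidden third variable correlate strongly even though neither causes the other.

```python
# Correlation without causation: a hidden common cause drives both
# variables. All variables are invented; assumes NumPy and SciPy.
import numpy as np
from scipy.stats import pearsonr

rng = np.random.default_rng(1)
heat = rng.normal(size=5_000)                      # hidden third variable
ice_cream_sales = heat + 0.5 * rng.normal(size=5_000)
drownings       = heat + 0.5 * rng.normal(size=5_000)

r, _ = pearsonr(ice_cream_sales, drownings)
print(f"r = {r:.2f}")  # strong correlation, but neither causes the other
```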
Ways to Impose Control in a Study:
Ways to help increase Internal Validity.
Name the 7 ways.
Control Group (between subjects)
Control Condition (within subjects)
Counterbalancing
Matching
Latin-Square Design
Randomization
Hold other factors constant
Ways to Impose Control in a Study:
Define Control Group.
Control group: (Between subjects) this is a comparison group with the treatment/experimental group. They get everything but the experimental treatment.
Ex. One group gets treatment, control group gets placebo.
Ways to Impose Control in a Study:
Define Control Condition
Control Condition: (Within subjects) All participants are exposed to all levels of the IV.
Ex. the Stroop test: everyone had to do all 3 tests, and the practice test was our baseline. Each participant is compared to themselves, looking at the change in performance.
Ways to Impose Control in a Study:
Define Counterbalancing
Counterbalancing: Accounts for order effects (order of treatment should not affect results).
Ex. Stroop test, some people got incongruent test first, while others got congruent test first.
Ways to Impose Control in a Study:
Define Matching
Matching: Minimize individual differences, which are likely to confound (affect internal validity). The opposite of random assignment.
Ex. You have two groups where each group is all 21-year-olds, but two 12-year-olds also signed up for the study; you put one 12-year-old in each group.
Ways to Impose Control in a Study:
Define Latin-Square Design
Latin-Square Design: Each condition, across participants, will be preceded and followed with equal frequency by every other condition.
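A minimal sketch of one standard way to build such a design (it balances precedence only for an even number of conditions); the condition labels are hypothetical.

```python
# Balanced Latin square: across rows (participants), each condition
# precedes and follows every other condition equally often when the
# number of conditions is even. Condition labels are hypothetical.
def balanced_latin_square(conditions):
    n = len(conditions)
    # Standard first-row pattern: 0, 1, n-1, 2, n-2, 3, ...
    first_row = [0 if j == 0 else (j + 1) // 2 if j % 2 else n - j // 2
                 for j in range(n)]
    # Each later row shifts every condition index up by one (mod n).
    return [[conditions[(c + i) % n] for c in first_row] for i in range(n)]

for order in balanced_latin_square(["A", "B", "C", "D"]):
    print(order)   # each row is one participant's condition order
```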
Ways to Impose Control in a Study:
Define Hold other factors constant
Hold other factors constant: Instructions are the same for everyone; use the same procedure, apparatus, and method of measurement. Doing things the same way for everyone!
What are the advantages and disadvantages of Experiment studies?
Experiment Studies
Advantages:
*Can draw a causal conclusion: you show causation because you directly manipulate the IV
*Easier to replicate because more systematic
*More control over the experiment, which increases internal validity
*Limits extraneous variables
Disadvantages:
*Decreased external validity: because the setting is more controlled, results are harder to apply and generalize to the real world
*Limitations on the types of studies, because some things are not ethical to manipulate
What are the advantages and disadvantages of Correlational studies?
Correlational Studies
Advantages:
*Allows you to study topics that cannot ethically be manipulated, because you are not manipulating anything, just comparing existing variables
*Higher external validity; more applicable to the real world
*Can provide rich correlations that suggest ideas for future experiments
Disadvantages:
*Lower internal validity because less controlled: relying on surveys, questionnaires, interviews
*Cannot show causation, only relationships between variables
*Cannot manipulate the variables or control for extraneous variables
Explain Commonsense Explanations
Commonsense Explanations: Based on our own sense of what is true about the world around us: an observed event and what our previous experience has told us is true. Likely to be incomplete, inconsistent with other evidence, lacking in generality, and probably wrong.
Explain Scientific vs Commonsense explanations
Commonsense explanations:
Based on our own sense of what is true about the world around us: an observed event and what our previous experience has told us is true. Likely to be incomplete, inconsistent with other evidence, lacking in generality, and probably wrong.

Scientific explanations:
An explanation based on the application of accepted scientific methods (also based on observation of events in the real world, but scientific explanations are subject to rigorous research scrutiny).
1) Empirical - based on the evidence of the senses: objective and systematic observation under controlled conditions.
2) Rational - follows the rules of logic and is consistent with known facts.
3) Testable - can be verified or refuted through observation and empirical test.
4) Parsimonious - explains behavior with the fewest number of assumptions.
5) General - scientists prefer explanations of broad explanatory power over those that work only within a limited set of circumstances.
6) Tentative - accepted provisionally, subject to revision in light of new evidence.
7) Rigorously Evaluated - continually evaluated against new data and competing explanations.
Explain Basic vs Applied research
Basic Research:
Is theory-driven or empirically based. The major goal is to form a general explanation of a given phenomenon or set of phenomena, with little emphasis on real-life applications. (Quantitative)

Applied research:
Emphasizes real-world applications (ex. bus route schedules and their impact on rush-hour traffic congestion). The primary goal is to generate explanations for real-world issues and applications. (Qualitative)
Define Hypothesis
Hypothesis:
A tentative statement, subject to empirical test, about the expected relationship between variables.
Define Theory
Theory:
Goes beyond the level of a simple hypothesis, deals with potentially verifiable phenomena, and is highly ordered and structured. Is a partially verified statement of a scientific relationship that cannot be directly observed.
Define Model
Model:
Specific application of a general theoretical view. Sometimes used as a synonym for theory.
Define Law
Law:
A relationship that has been substantially verified through empirical test
Define Inter-rater Reliability
Inter-rater Reliability:
The degree to which multiple observers agree in their classification or quantification of behavior
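A minimal sketch of one common inter-rater statistic, Cohen's kappa, computed by hand on invented ratings from two observers; kappa corrects raw agreement for the agreement expected by chance.

```python
# Cohen's kappa for two raters: percent agreement corrected for chance.
# The ratings and category labels are invented for illustration.
from collections import Counter

rater1 = ["aggressive", "passive", "passive", "aggressive", "passive", "passive"]
rater2 = ["aggressive", "passive", "aggressive", "aggressive", "passive", "passive"]

n = len(rater1)
observed = sum(a == b for a, b in zip(rater1, rater2)) / n
counts1, counts2 = Counter(rater1), Counter(rater2)
expected = sum(counts1[c] * counts2[c] for c in counts1) / n ** 2

kappa = (observed - expected) / (1 - expected)
print(f"observed agreement = {observed:.2f}, kappa = {kappa:.2f}")
```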
Define Primary source and Secondary source
Primary Source:
Is one containing the full research report, including all details necessary to duplicate the study. (the original study)

Secondary Source:
Is one that summarizes information from primary sources (review papers, theoretical articles that briefly describe studies and results, descriptions of research found in textbooks, popular magazines, newspaper articles, television, films, or lectures). Meta-analyses as well.
Define Refereed Journal
Refereed Journal:
When you submit your work, it is reviewed, usually by two or more reviewers. These are higher-quality journals because submissions are peer reviewed.
Define Personal Communication
Personal Communication:
Personal replies to your inquiries
Define Publication
Publication:
Practices are one source of bias in scientific findings. Published articles are only those that meet subjective, and somewhat strict, publication criteria. Criteria for publication include consistency of results with previous findings and editorial policy.
Define Predictor:
When you use correlational relationships for prediction, the variable used to predict is called the predictor. (ex. # of hours slept)
Define Criterion
The variable whose value is being predicted.
(ex. level of crankiness)
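Putting these two cards' own example into code: a minimal least-squares sketch predicting the criterion (crankiness) from the predictor (hours slept); the data points are invented.

```python
# Predicting a criterion (crankiness) from a predictor (hours slept)
# with a least-squares line. The data points are invented.
import numpy as np

hours_slept = np.array([4, 5, 6, 7, 8, 9])   # predictor variable
crankiness  = np.array([9, 8, 6, 5, 3, 2])   # criterion variable

slope, intercept = np.polyfit(hours_slept, crankiness, 1)
print(f"predicted crankiness after 6.5 hours: {slope * 6.5 + intercept:.1f}")
```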
Define Independent Variable
The variable that is manipulated in an experiment. Its value is determined by the experimenter, not by the subject.
Define Dependent Variable
The variable measured in a study. Its value is determined by the behavior of the subject & may depend on the value of the IV.
Controls for Extraneous Variables
1) Hold extraneous variables constant (ex. make sure everyone taking the test is sober). If these variables do not vary over the course of your experiment, they cannot cause uncontrolled variations in your DV.

2) Randomize their effects across treatments. The idea is to distribute the effects of these differences across treatment in such a way that they tend to even out and thus cannot be mistaken for effects of the IV.
Define Randomization
Randomization:
To ensure equivalent groups of subjects (cuts down on individual differences affecting outcome)
Define Random Sampling
Random Sampling:
Participants have an equal and unbiased chance of being in the study. A sample drawn from the population.
Define Random Assignment
Random Assignment:
Participants have an equal and unbiased chance of being in any experimental condition. (picking names out of a hat)
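A minimal sketch separating the two ideas, using a hypothetical participant pool: sampling draws participants from the population, and assignment then places the sampled participants into conditions.

```python
# Random sampling draws participants from the population; random
# assignment places sampled participants into conditions.
# The pool and group sizes are hypothetical.
import random

random.seed(42)                 # fixed seed so the sketch is repeatable
population = [f"person_{i}" for i in range(1000)]

# Random sampling: every member has an equal chance of being selected.
sample = random.sample(population, 20)

# Random assignment: every sampled participant has an equal chance of
# landing in either condition ("picking names out of a hat").
random.shuffle(sample)
treatment_group, control_group = sample[:10], sample[10:]
print(treatment_group)
print(control_group)
```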
Define Floor and Ceiling Effects
Ceiling Effects:
Everyone does well (the measure is not sensitive enough)

Floor Effects:
Everyone does poorly (the task is too hard or the scoring too strict), so you cannot form any valid conclusions about the IV's effects
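To see why a ceiling effect blocks valid conclusions, here is a minimal simulation with invented numbers: capping scores at a too-easy test's maximum makes a real group difference nearly disappear.

```python
# A ceiling effect hides a real group difference: once scores are capped
# at the test maximum, the observed means converge. Numbers are invented.
import numpy as np

rng = np.random.default_rng(3)
control   = rng.normal(90, 10, size=500)    # true mean 90
treatment = rng.normal(100, 10, size=500)   # true mean 100

max_score = 100                             # the test is too easy
observed_control   = np.minimum(control, max_score)
observed_treatment = np.minimum(treatment, max_score)

print(f"true difference:     {treatment.mean() - control.mean():.1f}")
print(f"observed difference: {observed_treatment.mean() - observed_control.mean():.1f}")
```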
Define Deception in Research
Deception in Research:
Is ethical if the researcher can demonstrate that important results cannot be obtained in any other way. Minimal deception is okay if it does not cause any short-term or permanent harm to participants. Participants must be given an explanation for the deception as soon as possible.
Define Scales of Measure
Nominal:
A measurement scale that involves categorizing cases into two or more distinct categories. This scale yields the least information. (used for variables whose values differ in quality and not quantity, ex. Male/Female, extroverted/introverted)

Ordinal:
A measurement scale in which cases are ordered along some dimension (small, medium, large). The distances between scale values are unknown. Ranked.

Interval:
A measurement scale in which the spacing between values along the scale is known. The ZERO point of an interval scale is arbitrary. You know whether one unit is larger or smaller than another, as well as by how much. (ex. the Celsius scale of temperature)

Ratio:
The highest scale of measurement. It has all of the characteristics of an interval scale plus an ABSOLUTE ZERO POINT. (ex. the Kelvin scale, where 0 means all heat is absent.)
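A minimal sketch pairing each scale with a summary statistic it can support; the example variables are invented.

```python
# Which summaries are meaningful depends on the scale of measurement.
# The example data are invented.
import statistics

# Nominal: values differ in quality only, so only the mode makes sense.
nominal = ["introvert", "extrovert", "introvert"]
print(statistics.mode(nominal))

# Ordinal: ordered ranks with unknown spacing, so the median makes sense.
ranks = {"small": 1, "medium": 2, "large": 3}
ordinal = [ranks[s] for s in ["small", "large", "medium", "medium"]]
print(statistics.median(ordinal))        # 2.0, i.e. "medium"

# Interval: differences are meaningful, but the zero point is arbitrary.
celsius = [20.0, 25.0, 30.0]
print(celsius[1] - celsius[0])           # a 5-degree difference is valid

# Ratio: absolute zero, so ratios are meaningful too.
kelvin = [200.0, 400.0]
print(kelvin[1] / kelvin[0])             # "twice as hot" is valid on Kelvin
```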
Define Experimenter vs Participant bias
Experimenter Bias:
When the behavior of the researcher influences the results of a study. Stems from two sources:
1) Expectancy effects
2) Uneven treatment of subjects across treatments

Participant Bias:
Participant bias is the tendency of the participants in any research activity or focus group discussions to act in the way they think that the evaluator wants them to act.

*Because experimenter bias can pose such a serious threat to internal validity and external validity, you must take steps to reduce the bias, for example by using single-blind or double-blind procedures (defined in the next two cards).
Define Single Blind study
The experimenter does not know which experimental condition a subject has been assigned to
Define Double Blind study
Neither the experimenter nor the participants know at the time of testing which treatments the participants are receiving