66 Cards in this Set

"Nature of Knowing"
-Sensory Experiences
-Traditions and faith
-Authority
-Science
Ontology
The nature of reality
-is reality objective or subjective?
-what factors influence reality?
Epistemology
The study of knowledge
-how is knowledge acquired?
Axiology
The role of values in research
-bias
-reflexivity
Methodology
processes and procedures of qualitative inquiry (e.g., research design)
Inductive Reasoning
The researcher begins by observing real, practical data and then develops a theory to explain it.
Deductive Reasoning
The researcher begins with an established theory and develops and tests hypotheses about how certain facts of the theory will operate or relate to observable variables.
Empiricism
A philosophical movement claiming that all knowledge comes from experiences that can be tested.
Positivism
focuses on hypothesis testing of existing theory and a search for laws through observation and experiments that seek to explain phenomena through cause-and-effect relationships. Even more objective than empiricism.
Postpositivism
Researchers can only approximate a universal truth.
Constructivism
Pure objectivity is unattainable. Multiple realities exist. We construct meaning rather than discover it.
Generalizability
How widely do research findings apply to populations of people beyond the sample studied?
Sample
A subset of a population to be studied (because realistically, every single person in a population cannot be studied)
Replication
The consistency of findings from one study to the next. If replication is not achieved, findings cannot be trusted.
Reliability
How consistently the instruments used to gather data performed or how closely they approximated the "true" score.
Validity
The degree to which a study's conclusions are consistent with the data used.
Theory
A lens through which researchers view the phenomena they want to study.
Basic Research
Seeks to build a theory
Applied Research
Takes basic research into the world of practice.
Action Research
The researcher has an immediate problem he/she would like to understand better.
Variable
Any behavior or trait that varies under different conditions.
Independent Variables
believed to affect the behavior or status of another variable.
Organismic Variables
Independent variables that cannot be directly manipulated (e.g., sex).
Dependent Variables
those that depend on the independent variable for their response.
Quantitative Methods of research
rely on mathematical calculations to characterize the data collected to address the research question.
Qualitative Methods of research
rely on words or narrative description rather than calculations to characterize the data collected to address the research question.
Hypothesis
a tentative explanation for a phenomenon that is used as the basis for further investigation.
Operational Definitions
Outline the precise steps required to measure a variable accurately.
Convenience Sampling
Gathering participants who are available and willing to participate.
Simple Random Sampling
random selection of a portion of the population under study.
Stratified Random Sampling
used when the researcher wants to ensure that certain characteristics of participants are reflected in the final sample in the same proportion that they occur in the population.
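A minimal sketch of simple random and stratified random sampling in Python (the population, strata, and sample size here are hypothetical):

    import random

    # Hypothetical population of 1,000 people, each tagged with a stratum (class year)
    population = [
        {"id": i, "year": random.choice(["freshman", "sophomore", "junior", "senior"])}
        for i in range(1000)
    ]

    # Simple random sampling: every member has an equal chance of selection
    simple_sample = random.sample(population, 100)

    # Stratified random sampling: sample each stratum in proportion to its share of the population
    strata = {}
    for person in population:
        strata.setdefault(person["year"], []).append(person)

    stratified_sample = []
    for group in strata.values():
        n = round(len(group) / len(population) * 100)  # proportional allocation
        stratified_sample.extend(random.sample(group, n))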
Descriptive Statistics
Used to describe and summarize data
Inferential Statistics
Used to predict, with some degree of confidence, the probability of occurrence of some causal event or association, and to allow this prediction to be generalized back to the population from which the sample was drawn.
Parametric Tests
used to evaluate hypotheses when the dependent variable is measured with an interval or ratio scale.
Nonparametric tests
Used to evaluate hypotheses about the shapes of distributions and are only applied to ordinal data.
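As a rough illustration (assuming SciPy is available; the scores below are made up), a parametric t-test and a nonparametric Mann-Whitney U test applied to the same two groups might look like this:

    from scipy import stats

    # Hypothetical outcome scores for a treatment group and a control group
    treatment = [23, 27, 31, 25, 29, 30, 26]
    control = [20, 22, 24, 21, 23, 25, 19]

    # Parametric: independent-samples t-test (assumes interval/ratio data)
    t_stat, t_p = stats.ttest_ind(treatment, control)

    # Nonparametric: Mann-Whitney U test (rank-based)
    u_stat, u_p = stats.mannwhitneyu(treatment, control)

    print(f"t-test: t={t_stat:.2f}, p={t_p:.3f}")
    print(f"Mann-Whitney U: U={u_stat:.1f}, p={u_p:.3f}")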
Outcomes Research
The counseling literature on the study of counseling effectiveness; what works and what doesn't.
Clinical Trials
Rely on comparison groups, standardized treatment protocol, and the use of outcomes measures.
Meta-analysis
A quantitative technique that allows empirical studies to be collapsed into a meaningful quantitative index known as the effect size.
Effect Size
Effect size = (Mean of experimental group - Mean of control group) / Standard deviation of the control group

Effect size indicates the strength of a finding.
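A small worked example of that formula (the group means and standard deviation below are invented numbers):

    # Hypothetical summary statistics from a clinical trial
    mean_experimental = 78.0   # mean outcome score, treatment group
    mean_control = 70.0        # mean outcome score, control group
    sd_control = 10.0          # standard deviation of the control group

    # Effect size = (experimental mean - control mean) / control SD
    effect_size = (mean_experimental - mean_control) / sd_control
    print(effect_size)  # 0.8 -> the treatment group scored 0.8 control SDs above the control group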
Internal Validity
describes the level of confidence that the results of a study are supported by the design or methodology.
External Validity
also known as generalizability and refers to whether the results of the sample in this particular study can be generalized or applied to a population, group, condition, setting, or other participants.
Experimental Bias
refers to the researchers' possible bias toward the expected or hypothesized results.
Attitudinal Effect
The possible perception by either group (in a study) that they may be receiving special attention, which may affect the results of the study.
Double-Blind Studies
Researchers providing the treatment are unaware whether it is the actual treatment or a placebo; participants are also unaware.
Statistical Regression
the tendency for participants with extreme scores to score more toward the mean on subsequent testing.
Reactivity
participants behaving in certain ways because they have knowledge that they are being observed or experimented on.
Placebo Effects
Participants in a study act according to expectations derived from inadvertent cues to the anticipated results of the study.
Hawthorne Effect
Refers to changes in performance by the mere presence of others.
Pygmalion Effect
the experimenter acts in ways that bias the study without necessarily affecting the participant directly, or when the experimenter unintentionally provides cues to let the participants know what is expected of them.
Treatment interaction effects
the potential for a treatment protocol to have an effect based on the characteristics of the participant rather than on the group as a whole.
True Experimental Designs
researchers introduce treatments to participants and observe if any changes in behavior occur.
Quasi-Experimental Designs
similar to true experimental designs but do not use random assignment as a means of control, which limits the ability to infer cause and effect.
Nonexperimental Design
Used to describe participant characteristics or behaviors and do not involve application of any treatment to participants (e.g., descriptive studies, survey research)
Correlational Designs
Look at the degree and direction of the relationship between variables.
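For example, a correlational analysis might compute a Pearson correlation coefficient, whose sign gives the direction and whose magnitude gives the degree of the relationship (the paired data below are hypothetical, and NumPy is assumed to be available):

    import numpy as np

    # Hypothetical paired observations: hours studied and exam score
    hours = [2, 4, 5, 7, 8, 10]
    scores = [55, 60, 66, 72, 78, 85]

    # np.corrcoef returns the correlation matrix; the [0, 1] entry is r between the two variables
    r = np.corrcoef(hours, scores)[0, 1]
    print(r)  # close to +1 -> strong positive relationship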
Causal-Comparative Studies (ex post facto)
observe and describe some current condition, but rather than introduce treatments, researchers look to the past to try to identify possible causes.
Cross-Sectional Designs
compare groups of participants of different ages at the same point in time.
Longitudinal Designs
evaluate changes in participants over an extended period of time, often years or decades.
Cross-Sequential Designs
comprise combinations of cross-sectional and longitudinal designs by assessing participants from two or more age groups at more than one point in time.
Trend Study
Samples different groups of people at different points in time from the same population.
Cohort Study
includes members of a particular population who do not change over the course of the survey.
Panel Study
Includes the same sample of individuals surveyed at different times.
Nonresponse Bias
occurs when the participants who respond to the survey differ in characteristics from those who do not respond.
Between-Group Designs
allow participants to be separated into groups according to some characteristic or intended treatment.
Within-Group Designs
expose the same group of participants to different levels of treatment.
Randomized posttest-only control group design
involves random assignment of participants into two groups: one group receives treatment and one group does not. After conclusion of the experiment, members of each group are assessed relative to their performance on the dependent variable.
Randomized pretest-posttest control group design
involves randomly assigning participants into two groups, with both receiving the pretest, then subjecting each to its level of treatment, and finally administering a posttest. The pretest checks the accuracy of random assignment.
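A schematic sketch of that design's logic in Python (the participant pool, scores, and treatment effect are all simulated for illustration):

    import random

    # Simulated participant pool with pretest scores
    participants = [{"id": i, "pretest": random.gauss(50, 10)} for i in range(40)]

    # Random assignment into treatment and control groups
    random.shuffle(participants)
    treatment, control = participants[:20], participants[20:]

    # Similar pretest means suggest random assignment produced equivalent groups
    pre_t = sum(p["pretest"] for p in treatment) / len(treatment)
    pre_c = sum(p["pretest"] for p in control) / len(control)

    # Apply a simulated treatment effect, then posttest both groups
    for p in treatment:
        p["posttest"] = p["pretest"] + random.gauss(5, 2)   # hypothetical gain from treatment
    for p in control:
        p["posttest"] = p["pretest"] + random.gauss(0, 2)   # no treatment

    post_t = sum(p["posttest"] for p in treatment) / len(treatment)
    post_c = sum(p["posttest"] for p in control) / len(control)
    print(f"Pretest means:  {pre_t:.1f} (treatment) vs {pre_c:.1f} (control)")
    print(f"Posttest means: {post_t:.1f} (treatment) vs {post_c:.1f} (control)")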