300 Cards in this Set
Research
|
Systematic inquiry using disciplined methods to solve problems or answer questions
|
|
Nursing research
|
Systematic inquiry to develop knowledge about issues of importance to the nursing profession
|
|
Research vs. Nursing Research
|
Research - Systematic inquiry using disciplined methods to solve problems or answer questions
Nursing Research - Systematic inquiry to develop knowledge about issues of IMPORTANCE TO THE NURSING PROFESSION |
|
Roles of Nurses in Research
|
Continuum of participation, from producers of research to skilled consumers of research findings who use research evidence in their practice
Evidence-based practice (EBP) – the use of the best clinical evidence in making patient care decisions
Both consumers and producers play a key role in EBP |
|
Evidence-based Practice (EBP)
|
The use of the best clinical evidence in making patient care decisions
|
|
History of Nursing Research
|
Pioneered by Florence Nightingale, 1850s
First journal on research (Nursing Research) emerged, 1950s
Clinical research increasingly important, 1980s
National Center for Nursing Research established at NIH, 1986
National Institute of Nursing Research (NINR) established, 1993
NINR budget exceeds $100 million, 2000s |
|
Types of Nursing Research
|
1. Quantitative Research
2. Qualitative Research
3. Mixed Methods |
|
Quantitative Research
|
The investigation of phenomena that lend themselves to precise measurement and quantification, often involving a rigorous and controlled design. Involves analysis of numerical data
|
|
Example of Quantitative Research
|
Identify factors related to cancer-related fatigue among patients undergoing active cancer treatment. Rate fatigue before and after cancer treatment on a scale of 0-10. Data are numeric
|
|
Types of Quantitative Research
|
1. Non-experimental – Observe effects: Descriptive, Correlational
2. Experimental – Give intervention: True experimental (Randomized clinical trial), quasi-experimental |
|
Qualitative Research
|
The investigation of phenomena, typically in an in-depth and holistic fashion, through the collection of rich narrative materials using a flexible research design. Involves analysis of data such as words
|
|
Example of Qualitative Research
|
Explore the experience of cancer-related fatigue among patients undergoing active cancer treatment. The focus is on experience; data are narrative (paragraphs) and subjective
|
|
Types of Qualitative Research
|
1. Phenomenological – Focuses on the lived experiences of humans
2. Grounded Theory – Seeks to understand key social psychological processes
3. Ethnographic – Focuses on the patterns and lifeways of a cultural group |
|
Mixed Methods
|
Combine both qualitative and quantitative data
|
|
Non-experimental vs. Experimental
Quantitative Research |
1. Non-experimental – Observe effects: Descriptive, Correlational
2. Experimental – Give intervention: True experimental (Randomized clinical trial), quasi-experimental |
|
Phenomenological vs. Grounded Theory vs. Ethnographic
Qualitative Research |
1. Phenomenological – Focuses on the lived experiences of humans
2. Grounded Theory – Seeks to understand key social psychological processes
3. Ethnographic – Focuses on the patterns and lifeways of a cultural group |
|
Concepts
|
Concepts are abstractions of particular aspects of human behavior or characteristics (e.g., pain, weight)
|
|
Constructs
|
Constructs are slightly more complex abstractions (e.g., self-care).
|
|
Concepts vs. Constructs
|
Concepts are abstractions of particular aspects of human behavior or characteristics (e.g., pain, weight).
Constructs are slightly more complex abstractions (e.g., self-care). |
|
Theories and Conceptual Models
|
Theories and conceptual models knit concepts into a coherent system that purports to explain phenomena.
|
|
Variable
|
A characteristic or quality that takes on different values, i.e., that varies from one person to the next
Examples: blood type, weight, length of stay in hospital
The term “variable” is used almost exclusively in quantitative research |
|
Types of Variables
|
Continuous (e.g., height, weight) vs. categorical (e.g., marital status, gender) – values on a spectrum vs. in discrete categories
Attribute variable vs. created variable
Independent variable – the presumed cause (of a dependent variable)
Dependent variable – the presumed effect (of an independent variable); often referred to as the outcome variable or outcome
Example: Smoking (IV) → Lung cancer (DV) |
|
Independent Variable
|
The presumed cause (of a dependent variable)
|
|
Dependent Variable
|
The presumed effect (of an independent variable)
|
|
Data
|
The pieces of information researchers collect in a study
Quantitative researchers collect numeric (quantitative) data
Qualitative researchers collect narrative (verbal) data |
|
Which of the following best describes a dependent variable?
a. Outcome being measured
b. A person’s gender
c. Presumed cause
d. Measurements performed |
A. Outcome being measured
|
|
Process Steps of Nursing Research
|
Phase 1: Conceptual Phase
Phase 2: Design and Planning Phase
Phase 3: Empirical Phase
Phase 4: Analytic Phase
Phase 5: Dissemination Phase |
|
Phase 1: Conceptual Phase
|
Formulating the research problem and research question
Reviewing related literature
Undertaking clinical fieldwork
Defining the framework and developing conceptual definitions
Formulating hypotheses |
|
Phase 2: Design and Planning Phase
|
Selecting a research design
Identifying the population
Designing the sampling plan
Specifying methods to measure variables and collect data
Developing methods to protect human/animal rights
Finalizing the research plan
Conducting a pilot study |
|
Phase 3: Empirical Phase
|
Collecting the data
Preparing data for analysis (e.g., coding the data) |
|
Phase 4: Analytic Phase
|
Analyzing the data
Interpreting results |
|
Phase 5: Dissemination Phase
|
Communicating the findings in a research report (e.g., in a journal article)
Utilizing findings in practice |
|
Research Problem
|
A perplexing or enigmatic situation that a researcher wants to address through disciplined inquiry.
|
|
Sources of Research Problems
|
Experience and clinical fieldwork
Nursing literature
Theories
Social issues
Suggestions from external sources (e.g., priority statements of national organizations or funders) |
|
Problem Statement
|
A statement articulating the research problem and making an argument to conduct a new study. Broad enough to include central concerns but narrow enough to serve as a guide to study design
|
|
Components of a Problem Statement
|
Identification of the problem (What is wrong with the current situation?)
Background (What is the nature or context of the problem?)
Scope (How big is the problem, and how many people are affected?)
Consequences (What are the consequences of not fixing the problem?)
Knowledge gaps (What information about the problem is lacking?)
Proposed solution (How will the study contribute to the problem’s solution?) |
|
Statements of Purpose
|
The researcher’s summary of the overall study goal
|
|
Statement of Purpose: Quantitative Studies
|
Identifies key study variables
Identifies possible relationships among variables
Indicates the population of interest
Suggests, through use of verbs, the nature of the inquiry (e.g., to test…, to compare…, to examine…) |
|
Example of Statement of Purpose in Quantitative Studies
|
Example: The purpose of this study was to examine the relationships among nurse staffing, RN composition, hospitals’ Magnet status, and patient falls. We studied general acute-care hospitals, hereafter referred to as “general hospitals.”
|
|
Statement of Purpose: Qualitative Studies
|
Identifies the central phenomenon
Suggests the research tradition (e.g., grounded theory, ethnography)
Indicates the group, community, or setting of interest
Suggests, through use of verbs, the nature of the inquiry (e.g., to describe…, to discover…, to explore…) |
|
Example of a Statement of Purpose: Qualitative Study
|
The purpose of this study was to describe satisfactory and unsatisfactory experiences of postpartum nursing care from the perspective of adolescent mothers
|
|
Research Questions
|
The specific queries the researcher wants to answer in addressing the research problem. Sometimes they are direct rewordings of the statement of purpose, worded as questions; sometimes they clarify or lend specificity to the purpose statement.
In quantitative studies, research questions typically pose queries about relationships among variables |
|
Example of Purpose --> Question
|
Purpose: The purpose of this study was to examine the relationships among nurse staffing, RN composition, hospitals’ Magnet status, and patient falls
Question: What are the relationships among nurse staffing, RN composition, hospitals’ Magnet status, and patient falls? |
|
Example of Purpose --> Question
|
Purpose: The purpose of the study was to examine the relationship between exercise level and cancer-related symptoms among cancer patients.
Question: Do cancer patients who experience high levels of fatigue exercise less than those with low levels of fatigue? |
|
Hypotheses
|
states an expectation: a predicted answer to the research question. Should almost always involve two or more variables and suggest the predicted relationship between the independent variable and the dependent variable
|
|
Sources of Hypotheses
|
Theory, previous studies or clinical practice
|
|
What should a hypothesis include?
|
- Terms that indicate a relationship (e.g. more than, different from, associated with)
- Is articulated almost exclusively in quantitative (not qualitative) studies
- Is tested through statistical procedures
- Always uses the present tense |
|
Simple Hypothesis
|
expresses a predicted relationship between one independent variable and one dependent variable
|
|
Complex Hypotheses
|
states a predicted relationship between two or more independent variables and/or two or more dependent variables (e.g., there is a relationship between nurse staffing, RN composition, hospitals’ Magnet status, and patient falls)
|
|
Simple vs. Complex Hypotheses
|
Simple Hypothesis – expresses a predicted relationship between one independent variable and one dependent variable
Complex Hypothesis – states a predicted relationship between two or more independent variables and/or two or more dependent variables (e.g., there is a relationship between nurse staffing, RN composition, hospitals’ Magnet status, and patient falls) |
|
Directional Hypothesis
|
predicts the direction of a relationship (e.g., hospitals with better nurse staffing have fewer patient falls than hospitals with poor nurse staffing)
|
|
Nondirectional Hypothesis
|
predicts the existence of a relationship, not its direction (e.g., there is a relationship between nurse staffing, RN composition, hospitals’ Magnet status, and patient falls)
|
|
Directional vs. Nondirectional Hypothesis
|
Directional Hypothesis – predicts the direction of a relationship (e.g., hospitals with better nurse staffing have fewer patient falls than hospitals with poor nurse staffing)
Nondirectional Hypothesis – predicts the existence of a relationship, not its direction (e.g., there is a relationship between nurse staffing, RN composition, hospitals’ Magnet status, and patient falls) |
|
Research Hypothesis
|
states the actual prediction of a relationship
|
|
Statistical or Null Hypothesis
|
expresses the absence of a relationship (used only in statistical testing) (e.g., there is no difference in patient falls between hospitals with Magnet recognition and hospitals without Magnet recognition)
|
|
Research vs. Statistical or Null Hypotheses
|
Research hypothesis – states the actual prediction of a relationship
Statistical or null hypothesis – expresses the absence of a relationship (used only in statistical testing) (e.g., there is no difference in patient falls between hospitals with Magnet recognition and hospitals without Magnet recognition) |
|
Hypotheses and Proof
|
Hypotheses are never proved or disproved
Statistical hypothesis testing cannot provide absolute proof—only probabilistic information to support an inference that a hypothesis is probably correct (or not).
Hypotheses are supported, or not, by the study data. |
|
Purpose of a Literature Review
|
Identification of a research problem
Orientation to the status of the evidence base
Determination of gaps or inconsistencies in a body of research
Guidance in designing the study
Assistance with interpretation of findings |
|
Sources of Literature
CINAHL vs. MEDLINE |
CINAHL (Cumulative Index to Nursing and Allied Health Literature)
- All English-language nursing and allied health journals
- Coverage since 1980
- About 3,000 journals
MEDLINE (Medical Literature Analysis and Retrieval System Online)
- Produced by the National Library of Medicine (NLM)
- Free access through PubMed
- Coverage since the 1960s
- 4,000+ journals, 70 countries |
|
Quantitative Research Design
|
Precise measurement
Quantification
Analysis of numerical data
Rigorous and controlled design |
|
Types of Quantitative Research Design
|
1. Experimental Design
2. Non-experimental Design
3. Data Collection Time
4. Relative Timing |
|
Data Collection Time
|
Cross-Sectional: Data are collected at a SINGLE POINT in time
Longitudinal: Data are collected TWO OR MORE TIMES over an extended period |
|
Relative Timing (Outcome & Causes)
|
Retrospective: outcomes → cause
Prospective: causes → outcomes |
|
Experimental Designs
|
- Trying to identify a CAUSAL RELATIONSHIP
- Criteria to establish causality:
1. Temporal (the cause precedes the effect)
2. Empirical relationship
3. The relationship cannot be explained by a third variable
- True experimental design and quasi-experimental design
- Remember: both true experimental & quasi-experimental designs involve manipulation of an independent variable (intervention)! |
|
True Experimental Design (Randomized Clinical Trial)
|
Control group – no intervention at all, a placebo intervention, a usual care intervention, a different intervention, or the same intervention at a different “dose”
Randomization – each participant has an equal & known chance of being assigned to either the control or experimental group
• tossing a coin
• random numbers table
• computerized random number generators
Intervention – the process of manipulating the independent variable so that its effect on the dependent variables can be observed; raises ethical considerations |
|
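The randomization idea on the card above (equal, known chance of assignment) can be sketched in Python. This is an illustrative sketch, not part of the course material; `randomize` is a hypothetical helper name.

```python
import random

def randomize(participants, seed=None):
    """Randomly assign each participant to 'control' or 'experimental'
    so every participant has an equal, known chance of either group
    (the computerized analogue of tossing a coin)."""
    rng = random.Random(seed)
    shuffled = participants[:]           # copy, leave the input untouched
    rng.shuffle(shuffled)
    half = len(shuffled) // 2
    return {"control": shuffled[:half], "experimental": shuffled[half:]}

# 20 hypothetical participant IDs split into two groups of 10.
groups = randomize(list(range(1, 21)), seed=42)
print(len(groups["control"]), len(groups["experimental"]))  # 10 10
```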
Strengths and Weakness of True Experimental Design
|
Strengths: groups are comparable; controls confounding; systematic variance (between-group variance) = effect of intervention + confounding
Most reliable type of scientific evidence in the hierarchy of evidence
Weaknesses:
• Practical – time, cost, ethical concerns
• Adherence
• Sample may not be representative – a threat to external validity |
|
Solomon Four-Group Design
|
Most rigorous design, it controls for the effect of the pretest
|
|
Factorial Designs
|
•Two or more variables are manipulated simultaneously
• Test both main effects and interaction effects
• Randomized block design |
|
Crossover Design
|
• Subjects are exposed to more than one intervention, in different orders
• Repeated measures
• Subjects serve as their own control group
• Carryover effects |
|
Quasi-experimental Design
|
Lacks at least one of the two defining properties – randomization or a control group
- Resembles experimental design
- Does not have equivalence by randomization, or does not have a control group
- Lacks rigorous control over threats to internal validity |
|
Types of Quasi-experimental Design
|
1. One-group pretest–posttest design: O1 X O2
2. Posttest-only design (with comparison group): X O / O
3. Pretest–posttest design (with comparison group): O1 X O2 / O1 O2
4. Time series design: O1 O2 O3 O4 X O5 O6 O7 O8 |
|
Strengths and Weakness of Quasi-experimental Design
|
Weakness – lack of control of confounding
Strengths – practical, feasible, relevant to “real nursing world,” enhance generalizability – external validity |
|
Issues with Experimental Design
|
Not all independent variables (“causes”) of interest to nurse researchers can be experimentally manipulated
• Smoking cannot ethically be manipulated
• Hawthorne effect |
|
Non-Experimental Design
|
Used when the independent variables (“causes”) of interest cannot be experimentally manipulated, or to explore a new research area
|
|
Types of Non-Experimental Design
|
1. Descriptive Study Design
2. Correlational Study Designs
3. Epidemiologic Designs |
|
Descriptive Study Design
|
To observe, describe or document some aspect of a naturally occurring situation
Usually describes the incidence, prevalence, or particular characteristics present in a population
Usually no theoretical framework
No hypothesis
Control over data: uncontrolled – no treatment or intervention
Sampling: total population – external validity
Statistical method: descriptive statistics
Data: cross-sectional or longitudinal |
|
Examples of Descriptive Study Design
|
Knowledge, attitudes, and practices regarding cervical cancer screening among physicians in the Western Region of Saudi Arabia.
A cross-sectional descriptive study using an interview with a structured questionnaire to obtain information regarding cervical cancer, practice in screening for cervical cancer, and attitudes of female physicians regarding the HPV vaccine in different health facilities in Saudi Arabia. |
|
Correlational Study Designs
|
To understand relationships among phenomena as they naturally occur, without any researcher intervention; involves two or more variables
A quantitative method in which you have two or more quantitative variables from the same group of subjects and are trying to determine whether there is a relationship between them (a tendency for variation in one variable to be related to variation in another)
No control or manipulation of the situation
Relationships between variables can be detected through statistical analysis
Can look at the relationship between two variables, or among more than two
Data: cross-sectional or longitudinal
Remember: correlation does not prove causation! |
|
Examples of Correlational Study Designs
|
• Factors related to health practices: cervical cancer screening among Filipino women
• This correlational study developed and tested theory to better understand health practices, including cervical cancer screening, among young Filipino women. • It tested theoretical relationships postulated among (a) positive health practices, (b) cervical cancer screening, (c) social support, (d) acculturation, and (e) optimism. |
|
Epidemiologic Designs
|
1. Case-control design
2. Cohort design |
|
Case-control design
|
Retrospective design
Typically examines multiple exposures in relation to a disease; subjects are defined as cases and controls, and exposure histories are compared. |
|
Example of Case-Control Design
|
Examine how smoking affects bone healing after orthopedic surgery
Risk/exposure = smoking
Outcome = post-surgery complications |
|
Cohort Design
|
Prospective Design
Typically examines multiple health effects of an exposure; subjects are defined according to their exposure levels and followed for disease occurrence |
|
Risk
|
Risk = # in group who had outcome / Total # in group
|
|
Relative Risk
|
Relative Risk = Risk of group A / Risk of group B
|
|
Risk & Relative Risk Interpretation
|
Interpretation
1 = same risk
<1 = Group A has less risk
>1 = Group A has more risk |
|
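The risk and relative-risk formulas from the cards above can be combined into a short worked example; the numbers are made up for illustration, and Python is used only as a calculator here.

```python
def risk(outcome_count, group_total):
    """Risk = # in group who had the outcome / total # in group."""
    return outcome_count / group_total

def relative_risk(risk_a, risk_b):
    """Relative Risk = risk of group A / risk of group B."""
    return risk_a / risk_b

# Hypothetical data: 20 of 100 exposed (group A) and 10 of 100
# unexposed (group B) participants develop the outcome.
risk_a = risk(20, 100)               # 0.20
risk_b = risk(10, 100)               # 0.10
rr = relative_risk(risk_a, risk_b)   # 2.0 -> >1, so group A has more risk
```

Following the interpretation card: RR = 2.0 means the exposed group has twice the risk of the outcome.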
Odds
|
Odds = # in group who had outcome / # in group who did not have outcome
|
|
Odds Ratio
|
Odds Ratio = Odds of group A / Odds of group B
|
|
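The odds and odds-ratio formulas can be worked through the same hypothetical numbers as the risk example (20 of 100 with the outcome in group A, 10 of 100 in group B); this is an illustrative sketch, not course material.

```python
def odds(outcome_count, no_outcome_count):
    """Odds = # in group who had the outcome / # who did not."""
    return outcome_count / no_outcome_count

def odds_ratio(odds_a, odds_b):
    """Odds Ratio = odds of group A / odds of group B."""
    return odds_a / odds_b

# Group A: 20 had the outcome, 80 did not.
# Group B: 10 had the outcome, 90 did not.
odds_a = odds(20, 80)                # 0.25
odds_b = odds(10, 90)                # ~0.111
or_ab = odds_ratio(odds_a, odds_b)   # 2.25
```

Note that the odds ratio (2.25) is close to, but not the same as, the relative risk (2.0) for the same data; the two diverge more as the outcome becomes common.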
Choosing a Quantitative Design to Fit Research Question
|
1. Is there a treatment?
- Yes – Experimental Design
- No – Non-Experimental Design |
|
Theory
|
An abstraction that purports to account for or explain phenomena
Types:
1. Grand Theories
2. Middle Range Theories |
|
Grand vs. Middle Range Theories
|
Grand theories – a theory that attempts to explain large aspects of human experiences
Middle-range theories – a theory that focuses on a specific aspect of human experience |
|
Conceptual Models
|
- Deal with abstractions, assembled in a coherent scheme
- Represent a less formal attempt to explain phenomena than theories
- Do not have formal propositions about relationships among phenomena |
|
Commonalities Between Theories & Conceptual Models
|
•Use concepts as building blocks
• Require definitions of key concepts
• Depicted in a schematic model
• Are developed inductively
• Cannot be “proven” – they are supported to greater or lesser degrees
• Can be used to generate hypotheses
• Can stimulate research |
|
Conceptual Models of Nursing
|
Formal explanations of what nursing practice is
4 concepts central to models of nursing:
- Human beings
- Environment
- Health
- Nursing |
|
Substantive Theory
|
conceptualizations of the target phenomena; the form theories take in qualitative research
|
|
Theory embedded in a research tradition
|
Grounded theory (e.g., symbolic interactionism)
Ethnography (cultural theories: ideational and materialistic)
Phenomenology (the phenomenological philosophy of human experience) |
|
The Use of Theories of Models in Quantitative Research
|
Testing a theory through deducing hypothesis to be tested
Testing a theory-based intervention
Using a theory/model as an organizing or interpretive structure
Fitting a problem into a theory after the fact (not recommended) |
|
Framework
|
The overall conceptual underpinnings of a study
Theoretical framework (based on a theory)
Conceptual framework (based on a conceptual model) |
|
Why Do Clinicians Need Theory?
|
Avoids a haphazard approach to practice based on the expediency of the moment
Provides a basis for continual description, explanation, and prediction of nursing issues
Changes the way we comprehend and process information |
|
Theory in Nursing
|
Theory → Research → Practice
Organizational Forms → Operant Mechanisms → Outcomes |
|
Population
|
the aggregate of cases in which a researcher is interested. Not restricted to human subjects. Broadly defined or narrowly specified
|
|
Target Population
|
the entire population of interest
|
|
Accessible Population
|
the portion of the target population that is accessible to the researcher, from which a sample is drawn
|
|
Target vs. Accessible Population
|
Target population – the entire population of interest
Accessible population – the portion of the target population that is accessible to the researcher, from which a sample is drawn |
|
Strata
|
subpopulations of a population (e.g., male/female)
Often used to enhance the sample’s representativeness
|
|
Sampling
|
selection of a portion of the population (a sample) to represent the entire population
Sample – a subset of the population |
|
Sampling Goal in Quantitative Research
|
Representative sample
|
|
Representative Sample
|
a sample whose key characteristics closely approximate those of the population
More easily achieved with:
- Probability Sampling
- Homogeneous Populations
- Larger Samples |
|
Sampling Bias
|
the systematic over- or under- representation of segments of the population on key variables when the sample is not representative
|
|
Sampling Error
|
differences between sample values and population values
|
|
Sampling Issues in Quantitative Studies
|
Sampling Design – how is the sample selected?
Sample Size - how many elements are included? |
|
Types of Sampling Designs
|
1. Probability Sampling - involves random selection of elements: each element has an equal, independent chance of being selected
2. Nonprobability Sampling - does not involve selection of elements at random |
|
Probability Sampling
|
Involves random selection of elements: each element has an equal, independent chance of being selected
Types:
1. Simple Random Sampling
2. Systematic Sampling
3. Stratified Random Sampling
4. Cluster (Multistage) Sampling |
|
Simple Random Sampling
|
-Uses a sampling frame – a list of all population elements
- Assigns every element a unique number from 1 to N (where N = population size)
- Then draws n random numbers (with or without replacement) between 1 and N, where n = desired sample size
- Basic building block of sampling
- Computer selection methods
- Involves random selection of elements from the sampling frame
- Not to be confused with random assignment to groups in experiments
- Cumbersome; rarely possible to get a complete listing of population elements; not used in large, national surveys |
|
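The simple-random-sampling steps above (number the frame, draw n at random) can be sketched in Python; the frame contents and function name are hypothetical, and `random.sample` draws without replacement.

```python
import random

def simple_random_sample(sampling_frame, n, seed=None):
    """Draw n elements at random, without replacement, from a sampling
    frame (a list of all population elements), so each element has an
    equal chance of selection."""
    rng = random.Random(seed)
    return rng.sample(sampling_frame, n)

frame = [f"element_{i}" for i in range(1, 501)]    # N = 500
sample = simple_random_sample(frame, n=50, seed=1)
print(len(sample))  # 50
```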
Systematic Sampling
|
Selection of every kth case from a list
Sampling interval – standard distance between the selected elements (population size/sample size)
Identical to simple random sampling, but more convenient |
|
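Selecting every kth case with a sampling interval k = population size / sample size can be sketched as follows (a hypothetical illustration; the random start within the first interval is one common convention):

```python
import random

def systematic_sample(frame, n, seed=None):
    """Select every k-th case from a list, where the sampling interval
    k = population size / sample size, starting from a random point
    within the first interval."""
    k = len(frame) // n                       # sampling interval
    start = random.Random(seed).randrange(k)  # random start in 0..k-1
    return frame[start::k][:n]

frame = list(range(1, 501))                    # N = 500, so k = 10
sample = systematic_sample(frame, n=50, seed=2)
print(len(sample))  # 50, spaced 10 apart
```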
Sample Interval
|
Sampling interval – standard distance between the selected elements (population size/sample size)
|
|
Stratified Random Sampling
|
- Population is first divided into strata, then random selection is done from the stratified sampling frames
- Enhances representativeness
- May be impossible if the strata information is not available |
|
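Dividing the population into strata and then randomly selecting from each stratified frame can be sketched like this (hypothetical strata and equal allocation per stratum, chosen only for illustration):

```python
import random

def stratified_random_sample(strata, n_per_stratum, seed=None):
    """Divide the population into strata, then randomly select from
    each stratum's sampling frame separately, so every stratum is
    represented in the sample."""
    rng = random.Random(seed)
    return {name: rng.sample(frame, n_per_stratum)
            for name, frame in strata.items()}

strata = {
    "male":   [f"m{i}" for i in range(200)],
    "female": [f"f{i}" for i in range(300)],
}
sample = stratified_random_sample(strata, n_per_stratum=25, seed=3)
print(len(sample["male"]), len(sample["female"]))  # 25 25
```

Proportional (rather than equal) allocation per stratum is another common choice.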
Cluster (Multistage) Sampling
|
- successive random sampling of units from larger to smaller units (e.g., states, then zip codes, then households)
- Widely used in national surveys
- Larger sampling error than in simple random sampling, but more efficient |
|
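Successive random sampling from larger to smaller units (the card's states → households example, skipping the zip-code stage for brevity) can be sketched as a two-stage draw; all names and counts here are hypothetical:

```python
import random

def multistage_sample(clusters, n_clusters, n_per_cluster, seed=None):
    """Successive random sampling from larger to smaller units: first
    sample clusters (e.g., states), then sample elements (e.g.,
    households) within each selected cluster."""
    rng = random.Random(seed)
    chosen = rng.sample(sorted(clusters), n_clusters)   # stage 1: clusters
    return {c: rng.sample(clusters[c], n_per_cluster)   # stage 2: elements
            for c in chosen}

# Hypothetical population: 100 households nested within each of 10 states.
states = {f"state_{i}": [f"hh_{i}_{j}" for j in range(100)] for i in range(10)}
sample = multistage_sample(states, n_clusters=3, n_per_cluster=5, seed=4)
print(len(sample))  # 3 selected states, 5 households each
```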
Nonprobability Sampling
|
Does not involve selection of elements at random
Types:
1. Convenience Sampling
2. Snowball (Network) Sampling
3. Quota Sampling
4. Purposive Sampling
5. Consecutive Sampling |
|
Convenience Sampling
|
Convenience sampling – use of the most conveniently available people
- Most widely used approach by quantitative researchers
- Most vulnerable to sampling biases |
|
Data Collection Plan Decisions
|
New data, collected specifically for research purposes
OR Existing data: records, historical data, existing data set (secondary analysis) |
|
Types of Data Collection
|
1. Structured Self-Reports - Data are collected with a formal instrument
2. Observation – systematic observing and recording of behavior, events, and settings of the variable(s) under investigation
3. Biophysiologic measures – an alternative to self-report and observation for collecting data |
|
Structured Self- Reports
|
Structured Self-Reports - Data are collected with a formal instrument
Interview schedule – questions are prespecified but asked orally, either face to face or by telephone
Advantages of interviews – higher response rates, appropriate for more diverse audiences, opportunities to clarify questions or to determine comprehension, opportunity to collect supplementary data through observation
Questionnaire – questions prespecified in written form, to be self-administered by respondents
Advantages of questionnaires – lower costs, possibility of anonymity, greater privacy, lack of interviewer bias |
|
Interview Schedule
|
Interview schedule – questions are prespecified but asked orally. Either face to face or by telephone
Advantages of Interviews – higher response rates, appropriate for more diverse audiences, opportunities to clarify questions or to determine comprehension, opportunity to collect supplementary data through observation |
|
Questionnaire
|
Questionnaire – questions pre-specified in written form, to be self-administered by respondents
Advantages of Questionnaires – lower costs, possibility of anonymity, greater privacy, lack of interviewer bias |
|
Types of Questions
|
1. Closed-ended (fixed alternative) – “within the past 6 months, were you ever a member of a fitness center or gym?”
Dichotomous questions
Multiple-choice questions
Forced-choice questions
Rating questions
2. Open-ended questions – “Why did you decide to join a fitness center or gym?” |
|
Composite Psychosocial Scales
|
Scales – used to make fine quantitative discriminations among people with different attitudes, perceptions, traits
1. Likert Scales – summated rating scales
2. Graphic rating scales |
|
Likert Scales
|
Likert scales – summated rating scales
- Consist of several declarative statements (items) expressing viewpoints
- Responses are on an agree/disagree continuum (usually 5 or 7 response options)
- Responses to items are summed to compute a total scale score |
|
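Summing item responses into a total scale score can be sketched in Python; the 4-item scale and the reverse-scoring of negatively worded items are hypothetical details added for illustration (the cards describe only the summing step).

```python
def likert_total(responses, reverse_items=(), n_options=5):
    """Sum item responses (1..n_options) into a total scale score.
    Negatively worded items are reverse-scored before summing."""
    total = 0
    for i, response in enumerate(responses):
        if i in reverse_items:
            response = n_options + 1 - response  # e.g., 5 -> 1, 1 -> 5
        total += response
    return total

# Hypothetical 4-item, 5-option scale; item index 2 is negatively worded.
score = likert_total([4, 5, 2, 3], reverse_items={2})
print(score)  # 4 + 5 + (6 - 2) + 3 = 16
```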
Response Set Biases
|
Response Set Biases – Biases reflecting the tendency of some people to respond to items in characteristic ways, independently of item content
Examples: social desirability response set bias, extreme response set, acquiescence response set (yea-sayers) |
|
Observation
|
Observation – systematic observing and recording of behavior, events, and settings of the variable(s) under investigation
Things observed in research:
- Characteristics and conditions of individuals and groups
- Verbal and non-verbal communication
- Activities
- Skill attainment
- Environmental characteristics
Types of observation:
- Hidden vs. open – hidden observation eliminates social desirability but raises problems with informed consent; typically limited to public behavior
Sources of bias with observation:
- Subject
- Observer (e.g., fatigue) |
|
Biophysiologic Measures
|
Biophysiologic measures – this is an alternative to self-report and observation to collect data
Physiologic phenomena that interest nurses are varied:
- Chemical measures such as hormone levels or blood sugar, etc.
- Respiratory rate, temperature, blood pressure
Need to decide whether a biophysical measure will provide valuable information about the study variable
- Measuring stress – administer a tool to the subjects, observe behavior, or measure HR, BP, or adrenocorticotropic hormone
Advantages – objectivity, precision
Disadvantages – still have measurement error, discomfort, risk, expense |
|
Measurement
|
Measurement – the assignment of numbers to represent the amount of an attribute present in an object or person, using specific rules
Advantages: removes guesswork, provides precise information, less vague than words |
|
Levels of Measurement
|
1. Nominal – used when data can be organized into categories of a defined property, but the categories cannot be ordered. Categories must be mutually exclusive, and all data must fit into the established categories.
2. Ordinal – used when data can be assigned to categories that can be ranked. Categories must be mutually exclusive, and all data must fit into the established categories. Intervals between ranked categories cannot be assumed to be equal. Example: Likert scale
3. Interval – distances between intervals of the scale are numerically equal. Like ordinal scales, interval scales have mutually exclusive categories, exhaustive categories, and rank ordering. Values are presumed to lie on a continuum, and change can be precisely measured. The absolute amount of an attribute cannot be measured, since there is no zero point on the scale. Example: blood pressure
4. Ratio – also has mutually exclusive categories, exhaustive categories, rank ordering, equal spacing between intervals, and a continuum of values, plus an absolute zero point. The zero point not only documents the absence of some quality, it also allows you to say that something weighs twice as much as another object. Example: number of hours worked/number of hours slept |
|
Nominal
|
Nominal – used when data can be organized into categories of a defined property, but the categories cannot be ordered. Categories must be mutually exclusive, and all data must fit into the established categories
|
|
Ordinal
|
Ordinal – used when data can be assigned to categories that can be ranked. Categories must be mutually exclusive, and all data must fit into the established categories
Intervals between ranked categories cannot be assumed to be equal. Example: Likert scale |
|
Interval
|
Interval - Distances between intervals of the scale are numerically equal. Like ordinal scales, interval scales have mutually exclusive categories, exhaustive categories, and rank ordering. Values are presumed to lie on a continuum, and change can be precisely measured. The absolute amount of an attribute cannot be measured since there is no zero point on the scale.
Example: Blood Pressure |
|
Ratio
|
Ratio - Also has mutually exclusive categories, exhaustive categories, rank ordering, equal spacing between intervals, and a continuum of values, plus an absolute zero point. The zero point not only allows you to document the absence of some quality, it also allows you to say that something weighs twice as much as another object.
Example: number of hours worked/number of hours slept |
|
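The four levels above can be illustrated with a minimal Python sketch. The variables and data below are hypothetical examples (not from the source); the pairing of level to summary index follows the standard convention (mode for nominal, median for ordinal, mean for interval/ratio).

```python
# Sketch (hypothetical data): matching descriptive summaries to levels of measurement
from statistics import mean, median, mode

# Nominal: unordered categories -> the mode is the only sensible "average"
blood_types = ["A", "O", "O", "B", "O", "AB"]
assert mode(blood_types) == "O"

# Ordinal: ranked categories (e.g., Likert 1-5) -> median is appropriate;
# equal intervals cannot be assumed, so a mean is questionable
likert = [1, 2, 2, 3, 5]
assert median(likert) == 2

# Interval/ratio: equal intervals -> the mean is meaningful
temps_f = [97.9, 98.6, 99.1]   # interval (no true zero on this scale)
hours_slept = [6, 7, 8, 7]     # ratio (true zero exists)
assert round(mean(hours_slept), 2) == 7.0
```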
A variable’s level of measurement determines what mathematic operations can be performed in a statistical analysis
|
A variable’s level of measurement determines what mathematic operations can be performed in a statistical analysis
|
|
Psychometric Assessments
|
Psychometric Assessments - an evaluation of the quality of a measuring instrument
Key criteria in a psychometric assessment: reliability and validity |
|
Reliability
|
Reliability – the consistency and accuracy with which an instrument measures the target attribute
- Reliability assessments involve computing a reliability coefficient - Reliability coefficients can range from .00 to 1.00 - Coefficients below .70 are considered unsatisfactory - Coefficients of .80 or higher are desirable |
|
Aspects of Reliability
|
Stability – the extent to which scores are similar on two separate administrations of an instrument
Internal Consistency – the extent to which all the items on an instrument are measuring the same unitary attribute Equivalence – the degree of similarity between alternative forms of an instrument or between multiple raters/observers using an instrument |
|
Stability
|
Stability – the extent to which scores are similar on two separate administrations of an instrument
- Evaluated by test-retest reliability – requires participants to complete the same instrument on two occasions - Appropriate for relatively enduring attributes (e.g., creativity) |
|
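Test-retest reliability is typically quantified as a correlation between the two administrations. A minimal Python sketch using the Pearson correlation formula; the scores are hypothetical.

```python
# Sketch: test-retest reliability as a Pearson correlation between two
# administrations of the same instrument (hypothetical scores).
def pearson_r(x, y):
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = sum((a - mx) ** 2 for a in x) ** 0.5
    sy = sum((b - my) ** 2 for b in y) ** 0.5
    return cov / (sx * sy)

time1 = [12, 15, 9, 20, 17]   # scores at the first administration
time2 = [13, 14, 10, 19, 18]  # scores at the second administration
r = pearson_r(time1, time2)
assert r > 0.9   # high stability: scores are similar on the two occasions
```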
Internal Consistency
|
Internal Consistency – the extent to which all the items on an instrument are measuring the same unitary attribute
- Evaluated by administering instrument on one occasion Appropriate for most multi-item instruments - The most widely used approach to assessing reliability - Assessed by computing coefficient alpha (Cronbach’s alpha) - Alphas > .8 are highly desirable |
|
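As a rough illustration of how coefficient alpha is computed, here is a Python sketch; the 4-item, 5-respondent data are made up, and the function implements the standard formula alpha = (k/(k-1)) * (1 - sum of item variances / variance of total scores).

```python
# Sketch of Cronbach's alpha for internal consistency (hypothetical data).
def cronbach_alpha(items):
    """items: list of item-score lists, one score per respondent."""
    k = len(items)
    n = len(items[0])

    def variance(xs):  # population variance
        m = sum(xs) / len(xs)
        return sum((x - m) ** 2 for x in xs) / len(xs)

    totals = [sum(item[i] for item in items) for i in range(n)]
    item_var_sum = sum(variance(item) for item in items)
    return (k / (k - 1)) * (1 - item_var_sum / variance(totals))

# Four Likert items answered by five respondents (hypothetical)
items = [
    [3, 4, 3, 5, 4],
    [3, 5, 3, 4, 4],
    [2, 4, 3, 5, 5],
    [3, 4, 2, 5, 4],
]
alpha = cronbach_alpha(items)
assert alpha > 0.8   # >= .80 is considered highly desirable
```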
Equivalence
|
Equivalence – the degree of similarity between alternative forms of an instrument or between multiple raters/observers using an instrument
Most relevant for structured observations Assessed by comparing agreement between observations or ratings of two or more observers (interobserver/interrater reliability) |
|
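One simple index of interrater equivalence is percent agreement, sketched below with hypothetical codes. More formal indexes such as Cohen's kappa additionally correct for chance agreement.

```python
# Sketch: interrater (equivalence) reliability as simple percent agreement
# between two observers coding the same ten events (hypothetical codes).
rater_a = ["yes", "no", "yes", "yes", "no", "yes", "no", "no", "yes", "yes"]
rater_b = ["yes", "no", "yes", "no", "no", "yes", "no", "yes", "yes", "yes"]

agreements = sum(a == b for a, b in zip(rater_a, rater_b))
percent_agreement = agreements / len(rater_a)
assert percent_agreement == 0.8   # 8 of 10 observations coded identically
```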
Reliability Principles
|
- Low reliability can undermine adequate testing of hypotheses
- Reliability estimates vary depending on procedure used to obtain them - Reliability is lower in homogeneous than heterogeneous samples - Reliability is lower in shorter than longer multi-item scales |
|
Validity
|
Validity – the degree to which an instrument measures what it is supposed to measure
1. Face validity – refers to whether the instrument looks as though it is an appropriate measure of the construct. Based on judgment; no objective criteria for assessment
2. Content validity – the degree to which an instrument has an adequate sample of items for the construct being measured. Evaluated by expert evaluation, often via a quantitative measure – the content validity index (CVI)
3. Criterion-related validity – the degree to which the instrument is related to an external criterion
4. Construct validity – concerned with these questions: What is this instrument really measuring? Does it adequately measure the construct of interest? |
|
Face Validity
|
Face validity – refers to whether the instrument looks as though it is an appropriate measure of the construct. Based on judgment; no objective criteria for assessment
|
|
Content Validity
|
Content validity – the degree to which an instrument has an adequate sample of items for the construct being measured. Evaluated by expert evaluation, often via a quantitative measure – the content validity index (CVI)
|
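An item-level CVI is commonly computed as the proportion of experts rating the item relevant. A minimal sketch with hypothetical ratings; the "3 or 4 on a 4-point relevance scale" cutoff is a common convention, not stated in the source.

```python
# Sketch: item-level content validity index (I-CVI), hypothetical ratings.
# Ratings of 3 or 4 on a 1-4 relevance scale count as "relevant".
expert_ratings = [4, 3, 4, 2, 4]   # five experts rating one item
i_cvi = sum(r >= 3 for r in expert_ratings) / len(expert_ratings)
assert i_cvi == 0.8   # 4 of 5 experts judged the item relevant
```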
|
Criterion-Related Validity
|
Criterion-related validity – the degree to which the instrument is related to an external criterion
Validity coefficient – calculated by analyzing the relationship between scores on the instrument and the criterion Predictive validity – the instrument’s ability to distinguish people whose performance differs on a future criterion Concurrent validity – the instrument’s ability to distinguish individuals who differ on a present criterion |
|
Construct Validity
|
Construct validity – concerned with these questions: What is this instrument really measuring? Does it adequately measure the construct of interest?
Some methods of assessing construct validity - Known-groups technique - Testing relationships based on theoretical predictions - Factor analysis |
|
Validity Coefficient
|
Validity coefficient – calculated by analyzing the relationship between scores on the instrument and the criterion
Predictive validity – the instrument’s ability to distinguish people whose performance differs on a future criterion Concurrent validity – the instrument’s ability to distinguish individuals who differ on a present criterion |
|
Predictive Validity
|
Predictive validity – the instrument’s ability to distinguish people whose performance differs on a future criterion
|
|
Concurrent Validity
|
Concurrent validity – the instrument’s ability to distinguish individuals who differ on a present criterion
|
|
Criteria for Assessing Screening/Diagnostic Instruments
|
Sensitivity – the instrument's ability to correctly identify a "case" – i.e., to diagnose a condition
Specificity – the instrument's ability to correctly identify noncases, that is, to screen out those without the condition
Likelihood ratio – summarizes the relationship between sensitivity and specificity in a single number |
|
Sensitivity
|
Sensitivity – the instrument's ability to correctly identify a "case" – i.e., to diagnose a condition
|
|
Specificity
|
Specificity – the instrument’s ability to correctly identify noncases, that is, to screen out those without the condition
|
|
Likelihood Ratio
|
Likelihood ratio – summarizes the relationship between sensitivity and specificity in a single number
LR+ : the ratio of the true-positive rate to the false-positive rate, i.e., sensitivity / (1 - specificity)
LR- : the ratio of the false-negative rate to the true-negative rate, i.e., (1 - sensitivity) / specificity |
|
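These screening indexes can be computed directly from a 2x2 table of test results against true condition status. A minimal Python sketch; the counts are hypothetical.

```python
# Sketch: sensitivity, specificity, and likelihood ratios from a 2x2
# screening table (hypothetical counts).
tp, fn = 90, 10    # people WITH the condition: true positives, false negatives
fp, tn = 30, 170   # people WITHOUT the condition: false positives, true negatives

sensitivity = tp / (tp + fn)              # correctly identified cases
specificity = tn / (tn + fp)              # correctly identified noncases
lr_pos = sensitivity / (1 - specificity)  # LR+
lr_neg = (1 - sensitivity) / specificity  # LR-

assert sensitivity == 0.9
assert specificity == 0.85
```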
Validity
|
Validity – soundness of the evidence and the degree of inferential support the evidence yields
|
|
Types of Validity
|
1. Statistical conclusion validity – the ability to detect true relationships statistically
2. Internal validity - the extent to which effects detected in the study are a true reflection of reality rather than due to extraneous or confounding variables
3. External validity – the generalizability of the observed relationships across samples, settings, or time
4. Construct validity – the degree to which key constructs are adequately captured in the study |
|
Construct Validity
|
Construct validity – the degree to which key constructs are adequately captured in the study
- Examines fit between conceptual and operational definitions of variables - Does the instrument actually measure the construct it was designed to measure? |
|
Threats to External Validity
|
Threats to External Validity
- Inadequate sampling of study participants - Expectancy effect (Hawthorne effect – usually in a behavioral study) makes effects observed in a study unlikely to be replicated in real life. - Unfortunately, enhancing internal validity can sometimes have adverse effects on external validity. |
|
External Validity
|
External validity – the generalizability of the observed relationships across samples, settings, or time
- Concerned with the extent to which study findings can be generalized beyond the sample used in the study (different persons, settings, times)
- The most serious external validity threat would be if findings were meaningful only for the group being studied
- The significance of a study depends on the number and types of people and situations to which the findings can be generalized |
|
Threats to Internal Validity
|
- Temporal ambiguity
- Selection threat—biases arising from pre-existing differences between groups being compared. This is the single biggest threat to studies that do not use random assignment.
- History threat—other events co-occurring with the causal factor that could also affect outcomes
- Maturation threat—processes that result simply from the passage of time
- Mortality threat/attrition—differential loss of participants from different groups |
|
Internal Validity
|
Internal validity - Extent to which effects detected in the study are a true reflection of reality rather than due to extraneous or confounding variables
Alternate explanations for research findings are referred to as "threats to internal validity"
- Was the result caused by the independent variable (treatment) or by something else?
- Could it be due to something else? Is there an alternative cause?
- Is there another valid explanation for the results? |
|
Threats to Statistical Conclusion Validity
|
1. Low statistical power – the ability to detect true relationships among variables
- Increase sample size to increase statistical power
- Maximize group differences on the independent variable to increase statistical power
- Maximize precision to increase statistical power: accurate measurement tools, control over confounding variables, powerful statistical methods
2. Range restriction – there must be sufficient variability in the dependent variable
3. Unreliability of treatment implementation
- The intervention is not as powerful in reality as it is "on paper"
- Treatment adherence |
|
Statistical Conclusion Validity
|
Statistical conclusion validity – the ability to detect true relationships statistically
- Each statistical test is associated with certain assumptions -If violate assumptions, then conclusion may be wrong |
|
Interpretation and Quantitative Results
|
The statistical results of a study, in and of themselves, do not communicate much meaning
Statistical results must be interpreted to be of use to clinicians and other researchers |
|
Interpretative Task
|
Interpretive Task – involves addressing six considerations:
-The credibility and accuracy of the results -The precision of the estimate of effects -The magnitude of effects and importance -The meaning of the results - The generalizability of the results - The implications of the results for practice, theory, further research |
|
Inference and Interpretation
|
Interpreting research results involves making a series of inferences.
We infer from study results to “truth in the real world” |
|
The Interpretative Mindset
|
- Approach the task of interpretation with a critical – and even skeptical – mindset
- Test the “null hypothesis” that the results are wrong against the “research hypothesis” that they are right. - SHOW ME!! Expect researchers to provide strong evidence that their results are credible – i.e., that the “null hypothesis” has no merit. |
|
Credibility and Interpretation
|
Inferences of the type the researcher wishes people to make are supported by:
Rigorous methodological decisions, Good proxies or stand-ins for abstract constructs and idealized methods, Minimization of threats to study validity, Elimination or reduction of bias, Efforts to find corroborating evidence - CONSORT Guidelines - Reporting guidelines have been developed so that readers can better evaluate methodological decisions and outcomes |
|
Precision and Magnitude
|
Results should be interpreted in light of the precision of the estimates (often communicated through confidence intervals) and the magnitude of effects (effect sizes)
- Considered especially important to clinical decision-making |
|
Effect Size
|
Effect Size – a measure of the strength of the relationship between two variables in a statistical population
- A p-value (compared against a significance level such as 0.05) does not convey effect size
- Unstandardized effect sizes – group difference, unstandardized regression coefficient
- Standardized effect sizes – r, Cohen's d, Cohen's f, odds ratio (OR), relative risk (RR) |
|
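Cohen's d, one of the standardized effect sizes named above, is the standardized mean difference between two groups. A minimal Python sketch using a pooled standard deviation; the data are hypothetical.

```python
# Sketch: Cohen's d for a two-group mean difference, using a pooled SD
# (hypothetical fatigue scores).
def cohens_d(group1, group2):
    n1, n2 = len(group1), len(group2)
    m1, m2 = sum(group1) / n1, sum(group2) / n2
    v1 = sum((x - m1) ** 2 for x in group1) / (n1 - 1)   # sample variances
    v2 = sum((x - m2) ** 2 for x in group2) / (n2 - 1)
    pooled_sd = (((n1 - 1) * v1 + (n2 - 1) * v2) / (n1 + n2 - 2)) ** 0.5
    return (m1 - m2) / pooled_sd

treatment = [6, 5, 7, 6, 6]   # e.g., fatigue ratings after an intervention
control = [8, 7, 9, 8, 8]
d = cohens_d(treatment, control)
assert d < 0   # negative effect: lower fatigue in the treatment group
```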
What do the results mean?
|
If the results are credible and of sufficient precision and importance, then inferences must be made about what they mean.
An interpretation of meaning requires understanding not only methodological issues but also theoretical and substantive ones |
|
Meaning and Causality
|
Great caution is needed in drawing causal inferences – especially when the study is nonexperimental (and cross-sectional).
Critical maxim: CORRELATION DOES NOT PROVE CAUSATION |
|
Meaning and Hypothesis Testing
|
Greatest challenges to interpreting the meaning of results:
- Nonsignificant results
- Serendipitous significant results
- Mixed results
Because statistical procedures are designed to provide support for research hypotheses through rejection of the null hypothesis, testing a research hypothesis that is itself a null hypothesis is very difficult |
|
Dissemination of Research Findings
|
- Select a communication outlet
- Know the audience - Develop a plan Deciding on authorship, Deciding on content, Assembling materials - Develop effective writing skills |
|
IMRAD Format
|
For Quantitative Research Reports
I = Introduction M = Method R = Results A = And D = Discussion |
|
Introduction
|
Summarize existing literature
Describe research problem Present conceptual framework State research questions or hypotheses |
|
Methods
|
Describe research design
Explain intervention (if any) Describe sample and setting Present data collection instruments Explain procedures Describe data analysis methods |
|
Results
|
Findings from the analyses are summarized
Intertwine description and interpretation |
|
Discussion
|
Interpretation of results
Findings relate to earlier research Study limitations Implications of the findings Future research How do results compare with prior knowledge on the topic? What can be concluded about use of the findings in nursing practice, nursing education, and future nursing research? |
|
Major Types of Reports
|
- Theses and dissertations
- Journal articles -Goals and audience, Prestige and acceptance rates, How often it publishes - Online publications - Presentations at professional meetings |
|
Impact Factor
|
Impact factor: the ratio of citations to a journal to the number of recent citable items it published
|
|
Manuscript
|
Manuscript – review journal’s instructions to authors
|
|
What is a p-value?
|
The p-value is the probability of obtaining a test statistic at least as extreme as the one actually observed, assuming that the null hypothesis is true
- P < level of significance (pre-determined by the researcher; commonly 0.05 or 0.01): reject the null, called "statistically significant"
- P >= level of significance: fail to reject the null, called "not statistically significant" |
|
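The decision rule on this card can be shown as a short sketch. Only the rule is illustrated, not a full statistical test; the p-value below is a made-up example.

```python
# Sketch: comparing a p-value to a pre-set significance level.
alpha = 0.05     # significance level chosen by the researcher in advance
p_value = 0.03   # hypothetical result from a statistical test

if p_value < alpha:
    decision = "reject the null (statistically significant)"
else:
    decision = "fail to reject the null (not statistically significant)"

assert decision.startswith("reject")
```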
What is a Systematic Review
|
Integrates research evidence about a specific research question
Carefully developed through sampling and data collection procedures spelled out in advance in a protocol |
|
Evidence-based Practice ...
|
Evidence-based practice relies on rigorous integration of research evidence on a topic through systematic reviews – systematic reviews are at the top of the hierarchy of evidence
|
|
Why Systematic Review ?
|
- High quality information is needed to guide practice
- 2 million articles are published each year
- Can we rely on a single research study?
- Failings of traditional (literature) reviews, such as inability to be replicated and personal bias
- Two landmark SR papers published in 1992 |
|
Use Systematic Reviews When ...
|
Needed to establish clinical and cost-effectiveness of an intervention or drug
- Practice guidelines
Needed to propose a future research agenda
Needed in grant applications
Needed in dissertations |
|
Systematic Reviews
|
Systematic Reviews
Purpose - Thorough examination of an issue
Production Process - Standards exist; process used is described in report
Search - As exhaustive as possible
Inclusion - Original study reports, previous SRs, information from large databases
Selection - Often uses a quality appraisal filter
Report - Inclusive of all qualifying studies |
|
Literature Reviews
|
Purpose - Highlights of an issue; varying degrees of thoroughness
Production Process - No standards; process not described
Search - Often limited
Inclusion - Original study reports, theoretical literature, essays, opinion articles
Selection - Quality filter not used
Report - Often selective based on purpose |
|
Systematic vs Literature Reviews
|
Systematic Reviews
Purpose - Thorough examination of an issue
Production Process - Standards exist; process used is described in report
Search - As exhaustive as possible
Inclusion - Original study reports, previous SRs, information from large databases
Selection - Often uses a quality appraisal filter
Report - Inclusive of all qualifying studies
Literature Reviews
Purpose - Highlights of an issue; varying degrees of thoroughness
Production Process - No standards; process not described
Search - Often limited
Inclusion - Original study reports, theoretical literature, essays, opinion articles
Selection - Quality filter not used
Report - Often selective based on purpose |
|
Types of Systematic Reviews
|
Integrative Research Reviews (IRR) - Also called "narrative reviews," "qualitative systematic reviews," or "state of the science summaries." Term not often seen in the literature
Meta-analysis - Also called "quantitative systematic reviews." Integrates quantitative studies
Metasynthesis - A family of methodological approaches to developing new knowledge based on rigorous analysis of existing qualitative research findings |
|
Integrative Research Reviews (IRR)
|
Integrative Research Reviews (IRR) - Also called "narrative reviews," "qualitative systematic reviews," or "state of the science summaries." Term not often seen in the literature
|
|
Meta-analysis
|
Meta-analysis - Also called "quantitative systematic reviews." Integrates quantitative studies
|
|
Metasynthesis
|
Metasynthesis - A family of methodological approaches to developing new knowledge based on rigorous analysis of existing qualitative research findings
|
|
Systematic Review Steps
|
1. Formulating the Problem
2. Searching the Literature 3. Selecting studies 4. Assessing Study Quality 5. Extracting Data 6. Reporting the findings |
|
Formulate Review Questions
|
- Clear Statement of objectives of the review of interest: populations, type of evidence, outcomes
- Not too broad Example: The purposes of the present study are to provide a narrative review of the methods used in research examining relationships between hospital staffing and patient risk for HAI and to summarize findings of the recent studies. |
|
Searching the Literature
|
- Cover all the literature - several databases, computerized database, hand searching reference lists
- potential for bias |
|
Publication bias
|
The tendency for studies with significant results to be published more often than those with nonsignificant findings – a bias against the null hypothesis
|
|
Grey Literature
|
Studies that have not been formally published
|
|
Language bias
|
Bias arising from excluding papers not published in English
|
|
Should a SR include grey literature?
|
-No consensus
- Exclusion of grey literature can lead to overestimation of effects |
|
How to locate grey literature?
|
- Hand searching journals publishing relevant content
- Contacting key researchers in the field -Contacting funders of relevant research |
|
How to select studies in a systematic review?
|
Inclusion and Exclusion Criteria
Usually provide a selection flow diagram |
|
Assessing studies for a systematic review
|
- Read the full-text paper
- Appraise methodological quality using an appraisal framework and OCEBM
- Done by 2 people |
|
What is OCEBM?
|
Oxford Centre for Evidence-Based Medicine (OCEBM)
|
|
How to extract data from systematic review?
|
•Synthesize the studies
•Identify commonalities •Identify differences - Reasons? •Identify patterns across the studies |
|
How to report findings in a systematic review
|
-Databases used
-Key terms used -Inclusion or exclusion criteria used -Tables - Abbreviated profiles of the studies and findings -Place findings in the context -Acknowledge the limitation of the body of research |
|
Potential limitations of systematic reviews
|
Keep in mind that conclusions from SRs are not free from bias
A systematic review may be done badly
Inappropriate aggregation of studies |
|
Appraising a systematic review?
|
Is the topic well defined?
Population, intervention, outcomes
Was the search for papers thorough?
- Search strategy described?
- Manual searching used?
- Any potential bias – language bias, grey literature?
Were inclusion criteria clearly described and fairly applied?
Was the appraisal of studies well conducted?
Are the recommendations based on the quality of the evidence presented? |
|
Meta-analysis
|
Called quantitative systematic reviews
Summarizes research findings by using statistical techniques to combine the results of several or many quantitative (often intervention) studies |
|
Why is a meta-analysis better?
|
Used for quantitative studies - often RCTs and observational analytic studies
The statistical part of an SR - uses statistical integration
Advantages:
- Objectivity – statistical integration eliminates bias in drawing conclusions when results in different studies are at odds
- Increased power – reduces the risk of Type II error compared to a single study; makes it possible to detect a relationship among small studies with ambiguous, nonsignificant findings
- Increased precision – more precise estimates of effect sizes (smaller confidence intervals than single studies) |
|
When should a meta-analysis be used?
|
The research question being addressed or the hypothesis being tested across studies should be very similar, if not identical
There must be a sufficient knowledge base – enough studies of acceptable quality
Consistency of evidence – results can vary but should not be totally at odds
Not appropriate for:
- Broad questions
- An insufficient knowledge base
- Substantially conflicting findings |
|
Major Steps in a Meta-Analysis
|
1. Delineate the research question or hypothesis to be tested.
2. Identify sampling criteria for the studies to be included.
3. Develop and implement a search strategy.
4. Locate and screen the sample of studies meeting the criteria.
5. Appraise the quality of the study evidence.
6. Extract and record data from reports.
7. Formulate an analytic plan (i.e., make analytic decisions).
8. Analyze data according to the plan.
9. Write the systematic review. |
|
Evaluating Study Quality
|
Evaluations of study quality can use:
- A scale approach (e.g., use a formal instrument to "score" overall quality)
- A component approach (code whether certain methodologic features were present, e.g., randomization, blinding, low attrition)
Meta-analysts must make decisions about handling study quality. Approaches:
- Omit low-quality studies (e.g., in intervention studies, non-RCTs)
- Give more weight to high-quality studies
- Analyze low- and high-quality studies separately to see if effects differ (sensitivity analyses) |
|
Analytic Decisions in Meta-Analysis
|
What effect size index will be used?
How will heterogeneity be assessed? Which analytic model will be used? Will there be subgroup (moderator) analyses? How will quality be addressed? Will publication bias be assessed? |
|
Effect Size Indexes
|
A central feature of meta-analysis is the calculation of an effect size index for each study that encapsulates the study results.
An ES index is computed for each study, and then the indexes are combined and averaged, weighting each study by sample size or inverse variance
Several different effect size (ES) indexes can be used. |
|
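The combining-and-weighting step can be sketched in a few lines of Python. This is a fixed-effect, inverse-variance pooling sketch; the effect sizes and variances are hypothetical.

```python
# Sketch: fixed-effect pooling, weighting each study's effect size by the
# inverse of its variance so larger/more precise studies count more.
studies = [
    {"es": 0.40, "var": 0.04},   # small study, less precise
    {"es": 0.25, "var": 0.01},   # larger study, more precise
    {"es": 0.30, "var": 0.02},
]

weights = [1 / s["var"] for s in studies]
pooled = sum(w * s["es"] for w, s in zip(weights, studies)) / sum(weights)
assert 0.25 < pooled < 0.40   # estimate pulled toward the most precise study
```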
Major effect size indexes
|
d: the standardized difference between 2 groups (e.g., Es vs. Cs) on an outcome for which a mean can be calculated (e.g., BMI)
Odds Ratio (OR): relative odds for two groups on a dichotomous outcome (e.g., smoke/not smoke) r: correlation between 2 continuous variables (e.g., age and depression) |
|
Heterogeneity
|
Results (effects) inevitably vary from one study to the next.
Heterogeneity can be formally tested but can also be assessed visually via a forest plot
Major question: is heterogeneity just random fluctuation?
- If "yes," then a fixed effects model of analysis can be used
- If "no," then a random effects model should be used
Factors influencing variation in effects are usually explored via subgroup (moderator) analysis |
|
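A common formal heterogeneity check uses Cochran's Q and the I-squared statistic (the proportion of variability beyond chance). A minimal sketch with hypothetical inputs.

```python
# Sketch: Cochran's Q and I^2 for heterogeneity (hypothetical inputs).
effect_sizes = [0.40, 0.25, 0.30]
variances = [0.04, 0.01, 0.02]

weights = [1 / v for v in variances]
pooled = sum(w * es for w, es in zip(weights, effect_sizes)) / sum(weights)

# Q: weighted sum of squared deviations from the pooled effect
q = sum(w * (es - pooled) ** 2 for w, es in zip(weights, effect_sizes))
df = len(effect_sizes) - 1
# I^2: share of variability beyond chance, floored at 0
i_squared = max(0.0, (q - df) / q) if q > 0 else 0.0

assert q >= 0 and 0.0 <= i_squared < 1.0
```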
Is heterogeneity just random fluctuations?
|
If “yes,” then a fixed effects model of analysis can be used.
If “no,” then a random effects model should be used. |
|
Variations related to?
|
Do variations relate to:
Participant characteristics (e.g., men vs. women)? Methods (e.g., RCTs vs. quasi-experiments)? Intervention characteristics (e.g., 3-week vs. 6-week intervention)? |
|
Forest Plot
|
Forest plot -- a graphical display of individual study results that were included in a systematic review
|
|
Metasynthesis
|
One definition: The bringing together and breaking down of findings, examining them, discovering essential features, and combining phenomena into a transformed whole
Integrations that are more than the sum of the parts— novel interpretations of integrated findings Must be done by 2 people to avoid bias |
|
Some Ongoing Debates
|
Whether to exclude low-quality studies
Whether to integrate studies based in multiple qualitative traditions Various typologies and approaches; differing terminology |
|
Steps of Metasynthesis
|
Formulate question
Decide selection criteria, search strategy – more specific
Search for and locate studies
Extract data for analysis
Formulate and implement an analysis approach
Integrate, interpret, write up results
All steps are the same as in other systematic reviews except how the studies are analyzed |
|
Approaches for Analysis
|
- Noblit and Hare – developed an approach for meta-ethnography
- Paterson and colleagues' approach involves three components: meta-data analysis, meta-method, and meta-theory
- Sandelowski and Barroso's approach distinguishes studies that are summaries (no conceptual reframing) from syntheses (studies involving interpretation and metaphorical reframing – theoretical underpinning) |
|
Noblit and Hare
|
Noblit and Hare – developed an approach for a meta-ethnography
Suggest a 7-phase approach
Involves "translating" findings from qualitative studies into one another
An "adequate translation maintains the central metaphors and/or concepts of each account"
Information should be integrated, not just summarized; translation is based on concepts and is a complex process
The final step is synthesizing the translations |
|
Paterson and Colleagues
|
Paterson and colleagues’ approach involves three components:
Meta-data analysis – analyzing and integrating the study findings Meta-method – analyzing the methods and rigor of studies in the analysis Meta-theory – analysis of the studies’ theoretical underpinnings |
|
Meta-data analysis
|
Meta-data analysis – analyzing and integrating the study findings
|
|
Meta-method
|
Meta-method – analyzing the methods and rigor of studies in the analysis
|
|
Meta-theory
|
Meta-theory – analysis of the studies’ theoretical underpinnings
|
|
Sandelowski and Barroso's
|
Sandelowski and Barroso's approach distinguishes studies that are summaries (no conceptual reframing) from syntheses (studies involving interpretation and metaphorical reframing – theoretical underpinning).
|
|
Summaries and Syntheses in a meta-summary
|
Both summaries and syntheses can be used in a meta-summary, which can lay a foundation for a metasynthesis.
|
|
Meta-Summaries
|
Involve making an inventory of findings and can be aided by computing manifest effect sizes – effect sizes calculated from the manifest content in the studies in the review.
Two types: Frequency effect size Intensity effect size |
|
Frequency Effect Size
|
Count the total number of findings across all studies in the review (specific themes or categories).
Compute prevalence of each theme across all reports (e.g., the #1 theme was present in 75% of reports). Count # of “things” across the studies analyzed |
|
Intensity effect size
|
For each report, compute how many of the total themes are included (e.g., report 1 had 60% of all themes identified).
Count # of “things” in each individual study |
|
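The two manifest effect sizes defined above can be sketched together. Which themes appear in which reports is a made-up example.

```python
# Sketch: frequency and intensity effect sizes for a meta-summary
# (hypothetical reports and themes).
reports = {
    "report1": {"stigma", "fatigue", "family support"},
    "report2": {"stigma", "fatigue"},
    "report3": {"stigma"},
    "report4": {"fatigue", "family support"},
}
all_themes = set().union(*reports.values())

# Frequency effect size: % of reports in which each theme appears
freq_es = {t: sum(t in themes for themes in reports.values()) / len(reports)
           for t in all_themes}
assert freq_es["stigma"] == 0.75       # stigma appeared in 3 of 4 reports

# Intensity effect size: % of all themes contained in each report
intensity_es = {name: len(themes) / len(all_themes)
                for name, themes in reports.items()}
assert intensity_es["report1"] == 1.0  # report1 contained all 3 themes
```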
Sandelowski & Barroso’s Metasynthesis
|
Can build on a meta-summary
But can only be done with studies that are syntheses (not summaries), because the purpose is to offer novel interpretations of interpretive findings—not just summaries of findings |
|
Descriptive Statistics
|
Used to describe and synthesize data
|
|
Inferential Statistics
|
used to make inferences about the population based on sample data
|
|
Descriptive Indexes
|
1. Parameter - a descriptor for a population (ex. the average age of nursing students in the US)
2. Statistic - a descriptor for a sample (ex. the average age of nursing students at CUSON)
Calculate statistics to estimate parameters |
|
Parameter
|
Parameter - a descriptor for a population (ex. the average age of nursing students in the US)
|
|
Statistic
|
Statistic - a descriptor for a sample (ex. the average age of nursing students at CUSON)
|
|
Univariate Descriptive Statistics
|
only looking at one variable
|
|
Frequency Distribution
|
A systematic arrangement of numeric values on a variable from lowest to highest and a count of the number of times (and/or percentage) each value was obtained
-see the lowest and highest numbers |
|
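A frequency distribution like this can be built with Python's `collections.Counter`; the fatigue ratings below are made-up illustrative data:

```python
from collections import Counter

# Hypothetical fatigue ratings (0-10) from 12 patients.
scores = [3, 5, 5, 2, 7, 5, 3, 8, 5, 2, 7, 3]

counts = Counter(scores)
n = len(scores)
# Arrange values from lowest to highest with a count and percentage for each.
for value in sorted(counts):
    print(f"{value}: n={counts[value]} ({counts[value] / n:.0%})")
```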
Describing Frequency Distributions
|
1. Shape
2. Central Tendency 3. Variability |
|
Symmetric distribution
|
"bell curve"
|
|
Skewed distribution (Asymmetric distribution)
|
Positive skew (long tail points to the RIGHT)
Negative skew (long tail points to the LEFT) |
|
Shapes of Distribution
|
1. Peakedness
2. Modality |
|
Peakedness
|
how sharp the peak is
|
|
Modality
|
# of peaks
-unimodal
-bimodal
-multimodal |
|
Central Tendency
|
Index of "Typicalness" of the set of scores that comes from the center of the distribution
-Mean - equals the sum of all scores divided by the total number of scores *most stable
-Median - the point in a distribution above which and below which 50% of the cases fall *used as descriptor of typical value
-Mode - most frequently occurring score in the distribution *used as a gross descriptor |
|
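All three indexes of central tendency are available in Python's standard-library `statistics` module; the scores below are hypothetical:

```python
import statistics

# Hypothetical set of 8 scores.
scores = [2, 3, 3, 4, 5, 5, 5, 8]

print(statistics.mean(scores))    # sum of scores (35) / number of scores (8) = 4.375
print(statistics.median(scores))  # midpoint of the 4th and 5th scores = 4.5
print(statistics.mode(scores))    # 5 occurs most often (three times)
```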
Variability
|
the degree to which scores in a distribution are spread out or dispersed
-homogeneity - little variability
-heterogeneity - great variability |
|
Indexes of Variability
|
1. Range - highest value minus lowest value
2. Standard deviation - average deviation of each individual value in the set from the mean; the most widely used index of variability |
|
variance
|
standard deviation, squared
|
|
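The range, standard deviation, and variance (and the SD-squared relationship between the last two) can be checked with the `statistics` module; the scores are hypothetical:

```python
import statistics

# Hypothetical set of 8 scores (mean = 5).
scores = [2, 4, 4, 4, 5, 5, 7, 9]

value_range = max(scores) - min(scores)   # highest minus lowest: 9 - 2 = 7
sd = statistics.pstdev(scores)            # population standard deviation
variance = statistics.pvariance(scores)   # variance = standard deviation squared

print(value_range, sd, variance)
```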
Bivariate Descriptive Statistics
|
used for describing the relationship between two variables
1. contingency tables
2. correlation coefficients |
|
contingency tables
|
a 2-D frequency distribution in which the frequencies of 2 variables are cross-tabulated. "Cells" at the intersections of rows and columns display counts and percentages
variables are usually nominal or ordinal |
|
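A simple cross-tabulation of two nominal variables can be sketched with `collections.Counter`; the gender/smoking observations below are hypothetical:

```python
from collections import Counter

# Hypothetical nominal data: each tuple is one subject (gender, smoking status).
data = [
    ("female", "smoker"), ("female", "nonsmoker"), ("female", "nonsmoker"),
    ("male", "smoker"), ("male", "smoker"), ("male", "nonsmoker"),
]

cells = Counter(data)  # each cell = count at the row/column intersection
n = len(data)
for (row, col), count in sorted(cells.items()):
    print(f"{row}/{col}: n={count} ({count / n:.0%})")
```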
correlation coefficients
|
indicate the direction and magnitude of the relationship between 2 variables.
the most widely used correlation coefficient is Pearson's r - used when both variables are interval or ratio level |
|
Correlation coefficients
|
range from -1 to +1
Negative - one variable increases as the other variable decreases
Positive - both variables increase or decrease together |
|
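Pearson's r and its -1 to +1 range can be illustrated with a short, self-contained function (no third-party libraries assumed; the paired data are hypothetical):

```python
import math

def pearson_r(x, y):
    """Pearson product-moment correlation for paired interval/ratio data."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = math.sqrt(sum((a - mx) ** 2 for a in x))
    sy = math.sqrt(sum((b - my) ** 2 for b in y))
    return cov / (sx * sy)

print(pearson_r([1, 2, 3, 4], [2, 4, 6, 8]))   # perfect positive relationship
print(pearson_r([1, 2, 3, 4], [8, 6, 4, 2]))   # perfect negative relationship
```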
Correlation
|
- both variables interval/ratio
- 2 variables
- looking for relationship |
|
Chi-square
|
- nominal or ordinal
- looking for difference |
|
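A minimal sketch of the Pearson chi-square statistic for a two-way table of counts; the observed counts are hypothetical:

```python
# Hypothetical 2x2 table of counts: rows = group, columns = outcome.
observed = [[30, 10],
            [20, 40]]

def chi_square(table):
    """Pearson chi-square statistic: sum of (observed - expected)^2 / expected."""
    row_totals = [sum(row) for row in table]
    col_totals = [sum(col) for col in zip(*table)]
    n = sum(row_totals)
    stat = 0.0
    for i, row in enumerate(table):
        for j, obs in enumerate(row):
            expected = row_totals[i] * col_totals[j] / n
            stat += (obs - expected) ** 2 / expected
    return stat

print(chi_square(observed))  # large values suggest a group difference
```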
t-test
|
- 2 groups
- interval or ratio
- looking for difference |
|
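The independent-samples t statistic (pooled-variance form) can be sketched as follows; the two groups are illustrative:

```python
import statistics

def two_sample_t(a, b):
    """Independent-samples t statistic with pooled variance."""
    na, nb = len(a), len(b)
    va, vb = statistics.variance(a), statistics.variance(b)  # sample variances
    pooled = ((na - 1) * va + (nb - 1) * vb) / (na + nb - 2)
    se = (pooled * (1 / na + 1 / nb)) ** 0.5  # standard error of the difference
    return (statistics.mean(a) - statistics.mean(b)) / se

# Hypothetical scores for two groups of 5 subjects each.
group_a = [5, 6, 7, 8, 9]
group_b = [3, 4, 5, 6, 7]
print(two_sample_t(group_a, group_b))
```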
ANOVA
|
- >2 groups
- interval or ratio
- looking for difference |
|
Logistic Regression
|
- categorical dependent variable
- > 2 variables |
|
ANCOVA
|
- categorical independent
- continuous dependent variable
- > 2 variables |
|
Multivariate Linear Regression
|
- continuous independent variable
- continuous dependent variable
- > 2 variables |
|
Type I error
|
falsely rejecting a null hypothesis that is true (false positive)
|
|
Type II error
|
failure to reject a null hypothesis that is false (false negative)
|
|
Validity
|
soundness of the evidence and the degree of inferential support the evidence yields
|
|
statistical conclusion validity
|
ability to detect true relationships statistically
|
|
threats to statistical conclusion validity
|
- low statistical power
- range restriction
- unreliability of treatment implementation |
|
Internal validity
|
extent to which effects detected in the study are a true reflection of reality rather than due to extraneous or confounding variables
|
|
Threats to Internal Validity
|
- temporal ambiguity
- selection threat
- history threat
- maturation threat
- attrition threat |
|
External Validity
|
extent to which study findings can be generalized beyond the sample used in the study
|
|
Threats to External Validity
|
- inadequate sampling
- Hawthorne effect |
|
Construct Validity
|
examines fit between conceptual and operational definitions of variables
|
|
Effect Size
|
A measure of the strength of the relationship between 2 variables in a statistical population
|
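One common effect-size index, Cohen's d (the standardized difference between two group means), can be sketched in Python with hypothetical groups:

```python
import statistics

def cohens_d(a, b):
    """Cohen's d: difference in means divided by the pooled standard deviation."""
    pooled_var = (((len(a) - 1) * statistics.variance(a)
                   + (len(b) - 1) * statistics.variance(b))
                  / (len(a) + len(b) - 2))
    return (statistics.mean(a) - statistics.mean(b)) / pooled_var ** 0.5

# Hypothetical scores for two groups; d expresses the gap in SD units.
print(cohens_d([5, 6, 7, 8, 9], [3, 4, 5, 6, 7]))
```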