234 Cards in this Set

  • Front
  • Back
The entire aggregation of cases in which a researcher is interested.
Population
Criteria that designate the specific attributes of a population by which people are selected for inclusion in a study.
Eligibility Criteria or Inclusion Criteria
Criteria specifying characteristics that a study population does NOT have
Exclusion Criteria
An individual who participates and provides information in a study
Study Participant
The entire population in which a researcher is interested
Target Population
The cases of the target population that are accessible to the researcher as research participants
Accessible Population
The process of selecting a portion of the population to represent an entire population
Sampling
A subset of the population selected to participate in a study
Sample
A part of a population whose characteristics closely approximate those of the population; can only truly be achieved with probability sampling
Representative Sample
The systematic overrepresentation or underrepresentation of some segment of the population in terms of a characteristic relevant to the research question; influenced by factors such as the population's homogeneity
Sampling Bias
Mutually exclusive segments of a research population based on a specific characteristic such as age groups
Strata
When researchers select elements by nonrandom methods; less likely to produce representative samples, but this accounts for most research samples in nursing
Nonprobability Sampling
A form of nonprobability sampling where the researcher uses the most conveniently available people as participants; snowball sampling is a variation; the weakest form of sampling
Convenience Sampling
A form of nonprobability sampling where early sampling members are asked to refer other people who meet the eligibility criteria
Snowball Sampling (a form of Convenience Sampling)
A form of sampling where researchers identify population strata and determine how many participants are needed from each stratum, often to ensure that diverse segments are adequately represented
Quota Sampling
A form of sampling that involves recruiting all of the people from an accessible population who meet the eligibility criteria over a specific time interval, or for a specified sample size
Consecutive Sampling (a form of nonprobability sampling)
Also known as judgmental sampling, this form is based on the belief that researchers' knowledge about the population can be used to hand-pick sample members; used by both quantitative and qualitative researchers (particularly the latter)
Purposive Sampling
This type of sampling involves the random selection of elements from a population; not to be confused with random assignment; the only viable method of obtaining representative samples; allows researchers to estimate the magnitude of sampling error
Probability (Random) Sampling
This sampling process is when each element in the population has an equal, independent chance of being selected
Random Selection
This sampling method involves listing all population elements in a sampling frame and selecting members from that list at random; the most basic form of probability sampling
Simple Random Sampling
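A minimal sketch (not part of the original cards) of drawing a simple random sample in Python; the sampling frame of 500 IDs and the sample size of 20 are invented for illustration:

    import random

    sampling_frame = list(range(1, 501))          # hypothetical list of all population elements
    sample = random.sample(sampling_frame, k=20)  # each element has an equal chance of selection
    print(sorted(sample))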
A list of all the elements in a population, from which a random sample is drawn
Sampling Frame
A form of random sampling where the population is first divided into two or more strata to enhance representativeness, often dividing the population into unequal subpopulations
Stratified Random Sampling
Successive random sampling of units in a population, used when it is impractical or impossible to list all the elements; sampling begins with large groupings (clusters) and proceeds through successive stages to smaller units
Cluster Sampling
This form of random sampling involves taking every __th case from a list, such as every 10th person, to achieve an essentially random sample
Systematic Sampling
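A small illustrative sketch of systematic sampling; the frame size, desired sample size, and random start are assumptions made for the example:

    import random

    frame = list(range(1, 501))   # hypothetical sampling frame of 500 elements
    n = 50                        # desired sample size (assumed)
    k = len(frame) // n           # sampling interval: every kth case (here, every 10th)
    start = random.randrange(k)   # random starting point within the first interval
    sample = frame[start::k]
    print(len(sample), sample[:5])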
This refers to the differences between the population values and the sample values
Sampling Error
The number of subjects in a sample; the larger, the more representative
Sample Size
Procedure for estimating either the needed sample size for a study or the likelihood of committing a Type 2 Error
Power Analysis
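A rough sketch of one common power-analysis calculation (sample size per group for comparing two means with a two-tailed test), using a normal approximation; the alpha, power, and effect-size values are conventional or assumed, not taken from the cards:

    from scipy.stats import norm

    alpha, power = 0.05, 0.80          # conventional values
    effect_size = 0.5                  # hypothetical standardized difference (Cohen's d)

    z_alpha = norm.ppf(1 - alpha / 2)  # two-tailed critical value
    z_beta = norm.ppf(power)
    n_per_group = 2 * ((z_alpha + z_beta) / effect_size) ** 2
    print(round(n_per_group))          # roughly 63 participants per group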
A form of snowball sampling used by qualitative researchers where early informants are asked to make referrals for other study participants
Nominated Sampling
A convenience sample used by qualitative researchers, where volunteers sometimes come forward and identify themselves as eligible participants
Volunteer Sample
Deliberately selecting cases with a wide range of variation on dimensions of interest, as part of purposeful sampling in a qualitative study
Maximum Variation Sampling
These are cases qualitative researchers often use towards the end of data collection that fit the researchers' conceptualizations and strengthen credibility
Confirming Cases
These are cases qualitative researchers often use towards the end of data collection that do not fit the researchers' conceptualizations and challenge their interpretations, offering new insights about how the original conceptualization needs to be revised or expanded
Disconfirming Cases
This is a method of sampling that is most often used in grounded theory studies, involving decisions about what data to collect next and where to find those data to develop an emerging theory in the best way; unlike purposeful sampling, the point of this type of sampling is to discover categories and their properties
Theoretical Sampling
In qualitative research, this is the point at which sampling yields no new information and redundancy is achieved
Data Saturation
These are ethnographic research subjects who are highly knowledgeable about the culture and who develop special, ongoing relationships with the researcher and are often the researcher's main link to the "inside"; often chosen purposively
Key Informants
The rate of participation in a study, calculated by dividing the number of persons participating by the number of persons sampled
Response Rate
The measurement error resulting from the tendency of some individuals to respond to items in characteristic ways (e.g., always agreeing), independent of item content
Response Set Bias
A bias that can result when a nonrandom subset of people invited to participate in a study decline to participate
Nonresponse Bias
This refers to the sufficiency and quality of the data yielded by the sample in a qualitative study
Adequacy
This is the degree of congruence between the sample of an original qualitative study and the people at another site where the original findings might be applied
Fittingness
These are participants' responses to questions posed by the researchers, as in an interview; the most common data collection approach in both qualitative and quantitative nursing studies
Self-Report
This is a type of qualitative self-report used when the researchers have no preconceived idea of the content or flow of information to be gathered; the aim is to elucidate respondents' perceptions of the world without imposing their own views
Unstructured Interview
This is a broad question asked in an unstructured interview to gain a general overview of a phenomenon, on the basis of which more focused questions are subsequently asked, like "What happened when you first learned you had AIDS?"
Grand Tour Question
These are interviews in qualitative research where the researchers have a list of topics or broad questions that must be addressed in an interview; a written topic guide is used to ensure all question areas are addressed
Semi-Structured or Focused Interviews
These are qualitative research interviews with groups of about 5 to 10 people whose opinions and experiences are solicited simultaneously, with the interviewer guiding the discussion according to a topic guide
Focus Group Interviews
These are narrative self-disclosures about individual life experiences; researchers ask respondents to describe, often in chronological order, their experiences regarding a specific theme
Life History
This is a method of obtaining data from study participants by in-depth exploration of specific factual incidents and behaviors related to the topic under study
Critical Incident Technique
This is a method of obtaining data by asking participants to use audio-recording devices to talk about decisions as they are being made or while problems are being solved, over an extended period
Think Aloud Method
__ questions are those in which the response alternatives are prespecified by the researcher.
Closed-Ended
__ questions are those in which the participants are allowed to respond in their own words.
Open-Ended
A trial run used to determine whether the instrument used for data collection is useful in generating desired information
Pretest
This is a device that assigns a numeric score to people along a continuum, like a scale for measuring weight
Scale
This is a scaling technique for data collection that consists of several declarative statements that express a viewpoint on a topic, and respondents are asked how much they agree or disagree with the statement
Likert Scale
A scale for measuring attitudes toward a concept on a series of bipolar adjectives, such as good/bad, effective/ineffective, etc.
Semantic Differential
A scaling procedure used to measure certain clinical symptoms like pain or fatigue by having people indicate on a straight line the intensity of the symptom
Visual Analog Scale (VAS)
This type of bias reflects a tendency of some respondents to want to give answers that are consistent with prevailing social views
Social Desirability Response Set Bias
This type of bias reflects a tendency of some respondents to exaggerate their feelings on scales (e.g., always choosing "strongly agree"), leading to distortions
Extreme Response Set Bias
This type of bias reflects a tendency of some respondents to agree with statements regardless of their content
Acquiescence Response Set Bias
This type of self-report involves brief descriptions of events or situations to which participants are asked to react; can be fictional or based on fact, and is structured to elicit information
Vignettes
In this type of self-report data collection, participants are provided a set of cards on which words or statements are written; they are asked to sort the cards along a bipolar dimension (for instance, agree or disagree)
Q Sort
Method of collecting data through participation in (and observation of) a group or culture in naturalistic settings; the researcher's social position determines what he or she will see; the researcher must gain entrée and then get backstage (to learn about unstaged realities)
Participant Observation
A rich and thorough description of the research context and participants in a qualitative study
Thick Description
A method of recording in a systematic fashion the behaviors and events of interest that transpire within a setting, using categories for classifying what is observed
Category System
In a category system, an instrument used to record observed behaviors/phenomena
Checklist
This method of observational sampling provides a mechanism for obtaining representative examples of behaviors by selecting a specific time period from which to observe the behaviors
Time Sampling
This method of observational sampling provides a mechanism for obtaining representative examples of behaviors by selecting integral behaviors or events to observe
Event Sampling
This type of data measure is one performed directly within or on living organisms like blood pressure, temperature, etc
In Vivo (Biophysiological)
This type of data measure is one performed by extracting biophysiological material from subjects and subjecting it to analysis in labs
In Vitro (Biophysiological)
This is a daily record of events and conversations in a participant observation study
Log
This is the lowest level of measurement. Numbers are assigned to different categories of an attribute simply as labels, with no quantitative implications (i.e., they do not mean “more than” or “less than” another). For instance, using numbers to indicate gender.
Nominal
This level of measurement ranks objects based on their standing on an attribute; for instance, ranking people from shortest to tallest or lightest to heaviest is an example. Also, ranking a person’s ability to manage ADLs (1 = completely dependent, 2 = needs some assistance from another person, 3 = needs some mechanical assistance, 4 = completely independent) is an example. Does not tell us how much greater one level is than another.
Ordinal
This level of measurement tells us about the order of data points, and the size of the intervals in between data points. There is no true zero (for instance, the Celsius scale, or IQ tests, where differences between scores like 140-120 and 120-100 are presumed to be equivalent)
Interval
This level of measurement is essentially the same as an interval scale but with a true zero. For instance, weight is an example (there can be an absence of weight, and it is correct to say that 100 lbs is twice as heavy as 50 lbs).
Ratio
This is the true score of a measurement plus or minus the error of measurement
Obtained Score
This is the consistency with which an instrument measures the attribute. For instance, consider a weight scale – if you take someone’s weight every 5 minutes and you get 120 lbs, 125 lbs, 115 lbs, etc, you would question it. If you use a different scale and you get 120, 120, 120, and 119, you would consider this scale better than the first
Reliability
The three aspects of reliability
Stability, internal consistency, equivalence
This aspect of reliability simply means that the measuring instrument produces similar results on two separate occasions (you would compare the two scores using test/retest and employing a reliability coefficient to determine how close the scores are – reliability coefficients higher than 0.70 are considered adequate for test/retest reliability)
Stability
This aspect of reliability means that the items of an instrument are all measuring the same trait; this usually is determined using coefficient alpha or Cronbach’s alpha (the higher, the more internally consistent)
Internal Consistency
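A brief sketch of computing Cronbach's alpha from item scores; the 4-item, 5-respondent data set is invented for illustration:

    import numpy as np

    # Rows are respondents, columns are items on a hypothetical 4-item scale.
    scores = np.array([
        [4, 5, 4, 4],
        [2, 3, 2, 3],
        [5, 5, 4, 5],
        [3, 3, 3, 2],
        [4, 4, 5, 4],
    ])

    k = scores.shape[1]
    item_variances = scores.var(axis=0, ddof=1)
    total_variance = scores.sum(axis=1).var(ddof=1)
    alpha = (k / (k - 1)) * (1 - item_variances.sum() / total_variance)
    print(round(alpha, 2))   # values closer to 1.0 indicate higher internal consistency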
This aspect of reliability is the degree to which two or more independent observers agree about the scoring of an instrument; a high level of agreement means that measurement errors have been minimized; two or more raters independently score the instrument and their ratings are compared
Equivalence
This is the degree to which an instrument is measuring what it is supposed to measure
Validity
True or false - an instrument can be unreliable but still be valid
False
True or false - an instrument that is not valid can still be reliable
True
The four major aspects of validity
Face Validity, Content Validity, Construct Validity, Criterion-Related Validity
Aspect of validity; whether or not the instrument appears to be measuring the correct attribute.
Face Validity
Aspect of validity; the degree to which an instrument has an appropriate sample of items for the attribute being measured and adequately covers the domain
Content Validity
Aspect of validity; the degree to which the scores of an instrument match up with some external criterion, using a validity coefficient
Criterion-Related Validity
Aspect of validity; the extent to which an instrument actually measures the construct it is intended to measure. To have this sort of validity, the test or measuring tool has to correlate with the theory underlying what is being measured. It addresses the question, "Are we measuring what we think we are measuring?"
Construct Validity
These evaluate the quality of new instruments used by researchers, providing information about the instrument’s reliability, validity, and other assessment criteria
Psychometric Assessments
This is the ability of a measure to identify a “case” correctly, that is, to screen in or diagnose a condition correctly. A measure’s sensitivity is its rate of yielding “true positives"
Sensitivity
This is the ability of a measure to identify noncases correctly, that is, to screen out those without the condition. A measure’s rate of finding “true negatives"
Specificity
This summarizes the relationship between sensitivity and specificity in a single number. It answers the question, “How much more likely are we to find a positive indicator among those with the outcome of concern than among those without it?” It is the ratio of the true-positive rate to the false-positive rate.
Likelihood Ratio
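A worked example (with invented counts) showing how sensitivity, specificity, and the positive likelihood ratio are calculated from a 2 x 2 screening table:

    # Hypothetical screening results.
    true_pos, false_neg = 90, 10     # people WITH the condition
    false_pos, true_neg = 30, 170    # people WITHOUT the condition

    sensitivity = true_pos / (true_pos + false_neg)   # 0.90: true-positive rate
    specificity = true_neg / (true_neg + false_pos)   # 0.85: true-negative rate
    lr_positive = sensitivity / (1 - specificity)     # 6.0: likelihood ratio for a positive result
    print(sensitivity, specificity, lr_positive)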
This type of statistics is used to synthesize and describe data, using averages and percentages
Descriptive
Indexes that are calculated on data from a population
Parameters
Statistics that permit inferences about whether results observed in a sample are likely to occur in the larger population
Inferential
This is a systematic arrangement of numerical values from the lowest to the highest, together with a count or percentage of the number of times each value was obtained
Frequency Distribution
A distribution of values with two halves that are mirror images of each other
Symmetrical Distribution
The asymmetrical distribution of a set of data values around a central point, with one tail longer than the other (positively skewed if the longer tail points to the right, negatively skewed if it points to the left)
Skewed Distribution
A distribution with one peak
Unimodal
A distribution with two or more peaks
Multimodal
A distribution with two peaks
Bimodal
A symmetrical, unimodal, not very peaked distribution, often called a bell curve
Normal distribution
Three indexes of central tendency (index of typicalness)
Mode, median, mean
The number that occurs most frequently in a distribution
Mode
The point in a distribution that divides scores in half - the middle value
Median
The sum of all values in a distribution divided by the number of participants (the average) - usually the statistic reported in an interval or ratio measurement
Mean
This is the degree to which values on a set of scores are dispersed
Variability
The highest score minus the lowest score in a distribution
Range
The most widely used variability index, calculated based on every value in a distribution, and summarizes the average amount of difference of values from the mean
Standard Deviation
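A short sketch computing the indexes of central tendency and variability defined above, using an invented set of ten scores:

    import statistics as st

    scores = [2, 3, 3, 4, 5, 5, 5, 6, 7, 10]   # hypothetical data set

    print(st.mode(scores))              # 5    (most frequent value)
    print(st.median(scores))            # 5.0  (middle value)
    print(st.mean(scores))              # 5.0  (arithmetic average)
    print(max(scores) - min(scores))    # 8    (range)
    print(st.stdev(scores))             # sample standard deviation, about 2.31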
__ % of the scores in a normal distribution fall within 1 standard deviation from the mean.
68
__ % of the scores in a normal distribution fall within 2 standard deviations from the mean.
95
__ % of the scores in a normal distribution fall within 3 standard deviations from the mean.
99.7
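A quick check of the 68-95-99.7 rule using the standard normal distribution (illustration only):

    from scipy.stats import norm

    for k in (1, 2, 3):
        proportion = norm.cdf(k) - norm.cdf(-k)   # area within k standard deviations of the mean
        print(k, round(proportion * 100, 1))      # about 68.3, 95.4, 99.7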
A two dimensional frequency distribution in which the frequencies of two variables are cross-tabulated
Contingency Table
Index that summarizes the degree of relationship between variables, typically ranging from +1.00 (perfect positive relationship) to -1.00 (perfect negative relationship); reflects the intensity (magnitude) and direction of the relationship
Correlation Coefficient
An index designating the magnitude and direction of a relationship between two variables measured on at least one interval scale; also called "Pearson's r"
Product Moment Correlation Coefficient (r)
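An illustrative calculation of Pearson's r for two hypothetical interval-level variables (the values are made up):

    import numpy as np

    hours_studied = [2, 4, 5, 7, 8]
    exam_score = [60, 70, 72, 85, 90]

    r = np.corrcoef(hours_studied, exam_score)[0, 1]
    print(round(r, 2))   # close to +1.00, a strong positive relationship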
A two dimensional display showing the correlation coefficients between all pairs of a set of variables
Correlation Matrix
This is the proportion of people in a group who experienced an undesirable outcome
Absolute Risk
This index represents the difference between the absolute risk in the group exposed to an intervention and the absolute risk in the unexposed (control) group
Absolute Risk Reduction (ARR) Index
A way of expressing the chance of an event - the probability of an event occurring relative to the probability that it will not occur, calculated by dividing the number of people who experienced an event by the number who did not
Odds
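A worked example (invented counts) tying together absolute risk, absolute risk reduction, and odds for a two-group trial:

    events_control, n_control = 40, 200   # 40 of 200 control patients had the outcome
    events_treated, n_treated = 20, 200   # 20 of 200 treated patients had the outcome

    ar_control = events_control / n_control       # absolute risk, control group: 0.20
    ar_treated = events_treated / n_treated       # absolute risk, treated group: 0.10
    arr = ar_control - ar_treated                 # absolute risk reduction: 0.10

    odds_control = events_control / (n_control - events_control)   # 40 / 160 = 0.25
    print(ar_control, ar_treated, arr, odds_control)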
A theoretical distribution of the means of many samples drawn from a population; it is approximately normal, and its mean equals the population mean
Sampling Distribution Of The Mean
The standard deviation of a sampling distribution, such as the sampling distribution of the mean
Standard error of the mean (SEM)
The range of values within which a population parameter is estimated to lie, at a specific probability of accuracy (e.g., 95% CI)
Confidence Interval
The upper or lower limits of a confidence interval
Confidence Limits
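A minimal sketch of a 95% confidence interval around a mean, using the SEM and the normal-distribution multiplier 1.96; the sample mean, SD, and n are assumed values:

    import math

    mean, sd, n = 72.0, 8.0, 100
    sem = sd / math.sqrt(n)          # standard error of the mean: 0.8
    lower = mean - 1.96 * sem        # lower confidence limit
    upper = mean + 1.96 * sem        # upper confidence limit
    print(sem, (round(lower, 2), round(upper, 2)))   # 0.8, (70.43, 73.57)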
An error created by rejecting the null hypothesis when it is true (researcher concludes that a relationship exists when in fact it does not - a false-positive)
Type 1 Error
An error created by accepting the null hypothesis when it is false (researcher concludes that no relationship exists when in fact it does - a false-negative)
Type 2 Error
This hypothesis states that no relationship exists between the independent and the dependent variable
Null
The risk of making a type 1 error (rejecting the null hypothesis) in a statistical analysis, established by the researcher beforehand (e.g., the 0.05 level)
Level of significance (alpha)
The probability of committing a type 2 error (accepting the null hypothesis) in a statistical analysis, estimated through power analyses
Beta
A statistic computed to assess the statistical reliability of relationships between variables (e.g., chi-squared, t)
Test statistic
Results of hypothesis tests that are deemed unlikely to have occurred by chance, at some specified level of probability
Statistically significant
Results of hypothesis tests that are deemed to be the potential result of chance fluctuation
Nonsignificant result
A descriptive index (e.g., a mean or percentage) calculated from data for an entire population (what most research questions are ultimately about)
Parameter
Statistics derived from analyzing two variables simultaneously to assess the empirical relationship between them
Bivariate Statistics
The two techniques used in statistical inferences
Estimation of parameters; hypothesis testing
A class of statistical tests that involve assumptions about the distribution of the variables and the estimation of a parameter
Parametric Tests
A parametric statistical test for analyzing the difference between two group means
T-test
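An illustrative independent-groups t test with scipy; the two sets of scores are invented:

    from scipy.stats import ttest_ind

    group_a = [5.1, 6.0, 5.8, 6.4, 5.5, 6.2]
    group_b = [4.2, 4.9, 5.0, 4.5, 4.8, 4.4]

    t_statistic, p_value = ttest_ind(group_a, group_b)
    print(round(t_statistic, 2), round(p_value, 4))   # a p-value below alpha leads to rejecting the null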
A statistical procedure used for testing mean differences among groups on a dependent variable, while controlling for one or more covariates
ANCOVA (Analysis of Covariance)
A statistical procedure used for testing mean differences among three or more groups by comparing variability between groups to variability within groups
ANOVA (Analysis of Variance)
A statistical test used to assess group differences in proportions; symbolized as χ²
Chi-squared test
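A brief sketch of a chi-square test of independence on a hypothetical cross-tabulation (counts invented):

    from scipy.stats import chi2_contingency

    # Rows: intervention vs. control; columns: improved vs. not improved.
    table = [[30, 20],
             [18, 32]]

    chi2, p, dof, expected = chi2_contingency(table)
    print(round(chi2, 2), round(p, 4), dof)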
Statistical procedures designed to analyze the relationships among three or more variables (e.g., multiple regression, ANCOVA)
Multivariate Statistics
A statistical procedure for understanding the effects of two or more independent (predictor) variables on a dependent variable
Multiple Regression Analysis
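A minimal least-squares sketch of multiple regression with two hypothetical predictors (all values invented); it only illustrates the idea of estimating an intercept and two coefficients:

    import numpy as np

    x1 = np.array([2, 4, 5, 7, 8, 10], dtype=float)
    x2 = np.array([1, 3, 2, 5, 4, 6], dtype=float)
    y = np.array([10, 18, 20, 30, 32, 41], dtype=float)

    X = np.column_stack([np.ones_like(x1), x1, x2])        # intercept column + predictors
    coefficients, *_ = np.linalg.lstsq(X, y, rcond=None)   # least-squares estimates
    print(np.round(coefficients, 2))                       # [intercept, b1, b2]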
Part of a research article where researchers present interpretations of the results
Discussion
These are widely used guidelines for reporting information on clinical trials, including a flow chart for tracking participants through a trial
CONSORT guidelines (Consolidated Standards of Reporting Trials)
The extent to which random errors have been reduced, usually expressed in terms of the width of the confidence interval around an estimate
Precision
This is the measure of how strong the evidence is that the study’s null hypothesis is false (not an estimate of any quantity that is of direct relevance to practicing nurses)
P-value
These tell more about the precision and quantitative relevance of results than p-values
Confidence Intervals
A statistical expression of the magnitude of the relationship between two variables, or the magnitude of the difference between groups on an attribute of interest; refers to the magnitude of the effect under the alternate hypothesis. The effect size should represent the smallest effect that would be of clinical or substantive significance, particularly in comparing effectiveness of interventions
Effect Size
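An illustrative calculation of one common effect-size index, Cohen's d (standardized mean difference), using the same invented groups as the t-test sketch above:

    import math
    import statistics as st

    group_a = [5.1, 6.0, 5.8, 6.4, 5.5, 6.2]
    group_b = [4.2, 4.9, 5.0, 4.5, 4.8, 4.4]

    na, nb = len(group_a), len(group_b)
    pooled_var = ((na - 1) * st.variance(group_a) + (nb - 1) * st.variance(group_b)) / (na + nb - 2)
    d = (st.mean(group_a) - st.mean(group_b)) / math.sqrt(pooled_var)
    print(round(d, 2))   # standardized difference between the group means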
In quantitative studies, results that support the researcher's hypotheses
Significant
This is central to interpretation of results
Inference
These involve a careful assessment of study rigor through an analysis of validity threats and various biases that could undermine the accuracy of the results
Credibility Assessments
This is replication of results, through either internal or external sources, in a credibility assessment to determine study rigor
Corroboration
This is the examination across cases of one variable at a time
Univariate
Two methods of estimating the dispersion of a distribution
Range and Standard Deviation
Inference and ___ are inextricably linked.
Validity
The three major analytical styles of qualitative research
Template Analysis, Editing Analysis, Immersion/Crystallization
A style of qualitative analysis where the researchers develop a template (coding guide) to which the narrative data are applied
Template Analysis Style
A style of qualitative analysis where the researchers act as interpreters who read through the data in search of meaningful segments and units, developing a category scheme and codes that can be used to organize the data
Editing Analysis Style
A style of qualitative analysis that involves the analyst's total immersion in (and reflection of) the text materials, resulting in an intuitive crystallization of the data; highly interpretive and subjective
Immersion/Crystallization Style
The most widely used procedure to organize data for a qualitative analysis
Category Scheme
Traditional, manual method for organizing and managing qualitative data before the advent of computer programs; involves creating a physical file for each category and then cutting out and inserting into the file all the materials for that category
Conceptual Files
Software used to organize and manage qualitative data, including examining the codes (but not doing the coding)
CAQDAS (Computer Assisted Qualitative Data Analysis Software)
The analysis of themes and patterns among the themes, primarily using a template or editing analysis style, in a qualitative study
Content Analysis
"An abstract entity that brings meaning and identity to a current experience and its variant manifestations. As such, it captures and unifies the nature or basis of the experience into a meaningful whole."

Often sought in qualitative research
Themes
A symbolic comparison, using figurative language to evoke a visual analogy
Metaphor
Used in qualitative studies to validate and refine themes by tabulating the frequency with which certain themes or insights are supported by the data
Quasi-Statistics
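A tiny sketch of the tallying idea behind quasi-statistics; the theme labels are invented:

    from collections import Counter

    # Hypothetical theme codes assigned to interview excerpts during analysis.
    coded_excerpts = ["uncertainty", "family support", "uncertainty", "stigma",
                      "family support", "uncertainty", "hope"]

    theme_counts = Counter(coded_excerpts)
    print(theme_counts.most_common())   # how often each theme is supported by the data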
A common method of sequencing research for data analysis in an ethnographic study, based on the premise that language is the primary means of relating meaning in a culture
Spradley's Research Sequence
Units of cultural knowledge (Spradley) that are broad categories that encompass smaller categories
Domains
A system of classifying and organizing terms
Taxonomy
Three phenomenological methods of qualitative data analysis
Colaizzi, Giorgi, Van Kaam
An approach to analyzing ethnographic data using domain analysis, taxonomic analysis, componential analysis, and theme analysis
Spradley
A phenomenological approach that involves efforts to grasp the essential meaning of the experience being studied, using either a holistic approach (viewing the text as a whole), a selective approach (pulling out key statements and phrases), or a detailed approach (analyzing each sentence)
Van Manen
A methodological process in which there is continual movement between the parts and the whole of the text under analysis, used to analyze data in a hermeneutic study
Hermeneutic Circle
In hermeneutic analysis, a pattern that expresses the relationships among relational themes and is present in all the interviews or texts
Constitutive Pattern
In a hermeneutic analysis following the precepts of Benner, a strong exemplar of the phenomenon under study, often used early in the analysis to gain understanding of the phenomenon
Paradigm Case
A procedure used in a grounded theory analysis wherein newly collected data are compared in an ongoing fashion with data obtained earlier, to refine theoretically relevant categories
Constant Comparison
This is the first stage of coding in a grounded theory study, referring to the basic descriptive coding of the content of narrative materials; three levels are possible (level 1, 2, 3)
Open Coding
Three levels of open coding in grounded theory, each varying in degree of abstraction
Level 1 (In Vivo), Level 2 (Condensed), Level 3 (Theoretical)
In a grounded theory, the central phenomenon that is used to integrate all categories of the data
Core Variable (Category)
A level of coding in a grounded theory study that involves selecting the core category, systematically integrating relationships between the core category and other categories, and validating those relationships
Selective Coding
The central social process emerging through an analysis of grounded theory data; evolves over time into two or more phases
Basic Social Process (BSP)
A concept in grounded theory that involves comparing new data and new categories with previous existing conceptualizations
Emergent Fit
An alternative grounded theory method whose outcome is a full conceptual description; involves three types of coding -- open (categories are generated), axial (categories are linked with subcategories), and selective (findings are integrated and refined)
Strauss and Corbin (Straussian) Method
This is the degree of confidence qualitative researchers have in their data, assessed using criteria of credibility, transferability, dependability, confirmability, and authenticity
Trustworthiness
Lincoln and Guba's Framework
Four criteria for developing the trustworthiness of a qualitative study: credibility, dependability, confirmability, transferability -- later, authenticity was added
Overriding goal of qualitative research, according to Lincoln and Guba; refers to confidence in the truth of the data and interpretations of them; part of Lincoln and Guba's Framework
Credibility (Lincoln and Guba)
Part of Lincoln and Guba's Framework; refers to stability of data over time and over conditions
Dependability (Lincoln and Guba)
Part of Lincoln and Guba's Framework; refers to objectivity, that is, the potential for congruence between two or more independent people about the data's accuracy, relevance, or meaning in a qualitative study
Confirmability (Lincoln and Guba)
Analogous to generalizability, refers to the extent to which qualitative findings can be transferred to (or have applicability in) other settings or groups; part of Lincoln and Guba's Framework
Transferability (Lincoln and Guba)
Overarching goal of Whittemore and colleagues' framework
Validity
The extent to which researchers fairly and faithfully show a range of different realities; part of Lincoln and Guba's Framework
Authenticity (Lincoln and Guba)
This is the investment of sufficient time collecting data to have an in-depth understanding of the culture, language, or views of the people or group under study, to test for misinformation and distortions, and to ensure saturation of important categories; also essential for developing trust with informants
Prolonged Engagement
Refers to the researchers' focus on the characteristics or aspects of a situation or conversation that are relevant to the phenomena being studied, providing depth to data collection in a qualitative, naturalistic study.
Persistent Observation
This involves using multiple data sources to validate conclusions in a qualitative study; three types are time, space, and person
Data Triangulation
A type of data triangulation involving collecting data on the same phenomenon at different points in time
Time Triangulation
A type of data triangulation involving collecting data on the same phenomenon in multiple sites, to test for cross-site consistency
Space Triangulation
A type of data triangulation involving collecting data on the same phenomenon from different types or levels of people
Person Triangulation
A type of triangulation involving the use of multiple methods of data collection about the same phenomenon
Method Triangulation
A systematic collection of materials and documentation that would allow an independent auditor to come to conclusions about the data in a qualitative study
Audit Trail
A method for establishing credibility of qualitative data by providing feedback to study participants about emerging interpretations and obtaining participants' reactions
Member Check
A type of triangulation in which two or more researchers make data collection, coding, and analytic decisions in a qualitative study, to reduce bias and idiosyncratic interpretations
Investigator Triangulation
A strategy of investigator triangulation whereby a qualitative research team can be divided into two groups that deal with data sources separately and conduct independent inquiries through which data can be compared
Stepwise Replication
A type of triangulation in qualitative research where researchers use competing theories or hypotheses in the analysis and interpretation of the data, to test validity, rule out rival hypotheses, and prevent premature conceptualizations
Theory Triangulation
A process discussed by Lincoln and Guba by which researchers revise their interpretations by including cases that appear to disconfirm earlier hypotheses to continually refine a hypothesis or theory until it accounts for all cases
Negative Case Analysis or Deviant Case Analysis
A quality-enhancement strategy involving external validation, whereby researchers hold sessions with peers to review and explore various aspects of the inquiry, exposing themselves to the searching questions of others who are experienced in the methods, the phenomenon, or both.
Peer Debriefing
This refers to the process of "living" the data in a qualitative research study, a process in which researchers try to understand the data's meanings, find their essential patterns, and draw well-grounded, insightful conclusions
Incubation
Researchers' expectations about relationships among study variables; they are predictions about expected outcomes, stating the relationship expected to be found by the research
Hypotheses
A careful and objective appraisal of a study's strengths and limitations. Usually includes a review of the study's merits, recommendations regarding the value of the evidence, and suggestions on how the study could be improved
Research Critique
Soundness of the evidence and the degree of inferential support the evidence yields
Validity
Who are considered vulnerable groups in research?
Children; people with mental or emotional disabilities; institutionalized people (in hospitals or prisons); pregnant women; and people who are severely ill, terminally ill, or physically disabled
Directional hypothesis (Definition and types)
Predicts the direction of a relationship:
1. Simple: a single independent and a single dependent variable
2. Complex: two or more independent and/or dependent variables
3. (Contrast) Non-directional hypothesis: predicts the existence of a relationship but not its direction
R O1 X O2
Pretest-Posttest Design
R X O1
Posttest Only Design
1. R O1 XA O2 XB O3
2. R O1 XB O2 XA O3
Crossover Design
A type of research theory that seeks to describe and understand key social, psychological, or structural processes that occur in social settings
Grounded Theory
A type of research that, unlike most phenomenological research, focuses on interpreting the meaning of the experiences, rather than describing them
Hermeneutics
Basic units of a population from whom data are collected; usually humans.
Elements
Organize in order of size, from largest to smallest:

Population, sample, target population, accessible population
1. Population
2. Target Population
3. Accessible Population
4. Sample
A type of sampling that involves recruiting all the people from an accessible population who meet eligibility criteria over a specific time interval or for a specified sample size. (ex: if 100 participants is the target sample size, sample the first 100 eligible participants). Reduces bias.
Consecutive Sampling
True or false - generalizability is equally important in quantitative and qualitative research
False, it is not as important in qualitative research since the purpose is to uncover meanings
A p-value is evidence against the ___
Null hypothesis
If the p-value is less than the commonly used level of 0.05, ____
The null hypothesis is rejected
Falsely rejecting a null hypothesis is called a __
Type 1 error
Falsely accepting a null hypothesis is called __
Type 2 error
Goal of Glaser and Strauss (Glaserian)
Get into patterns (Qualitative)
Goal of Strauss and Corbin (Straussian)
Problem emerges from data (Qualitative)
This is the truthfulness of an instrument; does the test measure what it is supposed to measure?
Validity
The larger the sample size, the (less/more) representative
More
This is when the researchers start to notice trends and patterns at the end of data collection in a qualitative study
Emergent issues
This type of phenomenological study involves the validation of results by having participants “COME BACK” (return)
Colaizzi
This type of phenomenological study considers it inappropriate to return to participants after the study is complete ("GO AWAY")
Giorgi
True or false - researchers attempt to prove their hypotheses.
False - they attempt to support their hypotheses with evidence (they cannot prove it)
Researchers want to see at least a __ % confidence interval
95
With a negatively skewed distribution, the mean is __ than the median
Lower
With a positively skewed distribution, the mean is __ than the median
Higher
One often "rejects the null hypothesis" when the p-value is __ than the significance level α (Greek alpha), which is often __
Less; 0.05 (conventional level)