

73 Cards in this Set

  • Front
  • Back
what is measurement?
the assignment of numbers to represent an attribute present in an object or person, using a set of rules.
are attributes constant in measurement? explain:
attributes are not constant.

They vary from day to day, from situation to situation, or from one person to another.
why do you assign numbers in measurement?
to differentiate varying degrees of an attribute.

ex: 1-5. You also have to know what they mean.
which type of research does measurement go hand in hand with, quan or qual?
quantitative.
what are some advantages of using measurement?
it removes subjectivity and guesswork.

it makes it possible to obtain reasonably precise information that can more readily differentiate degrees of an attribute.

the use of numbers is less vague than words.
with "errors of measurement", what is the "observed score?"
true score + error
an observed score is broken down into 2 parts:

1. the error component
2. the true component
is there such thing as a true score with no error?
no. the true score can never be known because measures are NOT infallible.
How do the following factors contribute to measurement errors?
Situational contaminants
Transitory personal factors
Response-set biases
Administration variations
Instrument clarity
Item sampling
Instrument format
Situational contaminants - ex: subject is aware that researcher is present and this creates bias in his/her answer

Transitory personal factors ex: fatigue, hunger, mood

Response-set biases - enduring traits in a subject's personality interfere with accurate results.

Administration variations - alterations in the method of collecting data from one person to the next

instrument clarity - if the written directions are poorly understood then scores can be affected by misunderstanding

item sampling - the item itself might not elicit the assumed response from the subject. ex: you can get a 95 on one test, and then get an 87 on the same test that had different questions

instrument format - ex: open ended questions may yield different responses than closed ended ones. It is what the instruments look like.
what is the "Reliability of Measuring Instruments?"
it is the consistency with which a quantitative instrument measures the target attribute

Measurement is reliable to the extent that it maximizes the true score, minimizes the error component.

It can be equated with stability, consistency, dependability.
in reliability of measuring instruments, what is "stability?"
the extent to which similar results are obtained on two separate occasions
in reliability of measuring instruments, what is
"Test-retest reliability?"
it is when the same measure is administered to a sample twice, scores are compared, and a reliability coefficient is obtained (an index of the magnitude of the test's reliability).
in reliability of measuring instruments, what is "Internal Consistency?"
it is the extent to which the items on an instrument all measure the same trait. It is evaluated with Cronbach's alpha.
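The standard Cronbach's alpha formula can be sketched in Python. This is a minimal illustration, not a validated implementation: the helper name `cronbach_alpha` and the Likert-type item scores are invented for the example.

```python
# Minimal sketch of Cronbach's alpha (internal consistency):
# alpha = (k / (k - 1)) * (1 - sum(item variances) / variance of totals)
from statistics import pvariance

def cronbach_alpha(items):
    """items: one inner list per scale item, each holding one score
    per respondent."""
    k = len(items)                                    # number of items
    item_vars = sum(pvariance(item) for item in items)
    totals = [sum(scores) for scores in zip(*items)]  # per-respondent totals
    return (k / (k - 1)) * (1 - item_vars / pvariance(totals))

# Three items, five respondents (hypothetical 1-5 ratings)
items = [
    [3, 4, 5, 2, 4],
    [3, 5, 5, 2, 3],
    [4, 4, 5, 1, 4],
]
print(round(cronbach_alpha(items), 2))  # 0.93 - items hang together well
```

An alpha this high would fall in the "highly desirable" range described below for reliability coefficients.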
in reliability of measuring instruments, what is "Equivalence" (interrater reliability)?
degree to which two or more independent observers agree on scoring. This is how you find the score:

number of agreements / (number of agreements + number of disagreements)
what is the "Correlation Coefficient?"

explain it:
it is a tool for quantitatively describing the magnitude and direction of a relationship.

Possible values range from -1.00 through .00 to + 1.00.
.00 – variables are unrelated
.00 to -1.00 – variables are inversely (negatively) related
.00 to +1.00 – variables are positively related
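A Pearson correlation coefficient can be computed by hand as a sketch; the paired scores below are invented to show a strong positive relationship.

```python
# Pearson r: covariance of x and y divided by the product of their
# standard deviations; ranges from -1.00 through .00 to +1.00.
from math import sqrt

def pearson_r(xs, ys):
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sqrt(sum((x - mx) ** 2 for x in xs))
    sy = sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (sx * sy)

hours_studied = [1, 2, 3, 4, 5]
test_score = [52, 60, 61, 70, 75]   # rises with hours -> r near +1.00
print(round(pearson_r(hours_studied, test_score), 2))  # 0.98
```

Reversing the second list yields about -0.98, i.e. an inverse (negative) relationship.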
what are some reliability coefficient Interpretations for the following:
.70 – may be adequate for subscales
.80 or greater – highly desirable
.90 or better – decisions about individuals <----THIS HAS THE HIGHEST PREDICTIVE VALUE OF ALL
what are some factors that affect reliability?
Length of scale

Definition of categories


what is validity?
Degree to which an instrument measures what it is supposed to measure.

A measuring device that is unreliable cannot be valid.
what are some types of validity? define:

Face - ex: "it just looks like it measures the concept." this is the lowest form.

Content - the degree to which the items in an instrument represent the concept being measured

Criterion related - the degree to which scores represent an external criterion

Predictive - the degree to which an instrument can predict a criterion observed at a future time

Concurrent - the degree to which scores correlate with an external criterion, measured at the same time.

Construct - the degree to which an instrument measures the underlying construct; the validity of inferences drawn from observed behavior. Approaches include:
Known groups - you know this group, so you can compare it to other populations
Multitrait-multimethod: convergent and discriminant validity
what is sensitivity?
the ability to identify a case correctly
what is specificity?
the ability to identify non-cases correctly
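The two definitions above reduce to simple ratios from a 2x2 classification table. A minimal sketch with made-up screening counts:

```python
# Sensitivity = true positives / all true cases
# Specificity = true negatives / all non-cases
tp, fn = 45, 5    # true cases: correctly identified vs. missed
tn, fp = 90, 10   # non-cases: correctly ruled out vs. false alarms

sensitivity = tp / (tp + fn)   # ability to identify cases correctly
specificity = tn / (tn + fp)   # ability to identify non-cases correctly
print(sensitivity, specificity)  # 0.9 0.9
```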
describe these Criteria for Assessing Quantitative Measures:
Efficiency - short tools are better.
Comprehensibility - subjects and researchers should be able to understand the tool and its instructions
Precision - you have true calibration
Speededness - the quicker the better
Reactivity - you need to make sure that patients respond according to how they are actually feeling.
what is a "research problem?"
a situation that a researcher wants to address through disciplined inquiry.
what is the "problem statement?"
a statement that articulates the problem to be addressed and indicates the need for a study.
what is the "statement of purpose?"
it is the summary of the overall goal of a study.
what is the "research question?"
it is a specific query to answer in addressing the research problem
what is a "hypothesis?"
it is a specific prediction about answers to a research question.
what are some sources of Research Problems?

(places nurses can get cues to discovering a problem.)
Researcher interest

Clinical experience



Social issues


Ideas from external sources
when Developing and Refining a Research Problem, where do you begin with the topic?
you select a topic. Ideas are sorted with regard to interest, knowledge, perceived feasibility.

you then narrow the topic.
what are some things you have to think about when evaluating a Research Problem?
Time and timing
Availability of study participants
Cooperation of others
Facilities and equipment
Researcher experience, interest
Ethical considerations
what are the components to a Problem Statement in a Quantitative Study?
Problem identification

Background - previous research

Scope of the problem

Consequences of the problem - what will happen if i don't do this research

Knowledge gaps

Proposed solution - what will we do to solve this?
what do "Problem Statements" do in Qualitative Studies?
they express the nature of the problem, its context, scope, and information needed to address it.

they usually incorporate terms and concepts that foreshadow their tradition of inquiry.
what are some examples of Problem Statements in Qualitative Studies?
Phenomenological study – include ‘need to know more about people’s experiences’.

Ethnographic study – 'desire to describe how cultural forces affect people's behavior'
what is a research hypothesis?
A prediction about the relationship between two or more variables.
describe some types of Wording of Hypotheses:

Simple (1 independent variable, 1 dependent variable)

Complex – multiple independent and/or dependent variables.

Directional - a hypothesis statement that makes a specific prediction about the direction of a relationship

Nondirectional – one that does not

Research hypothesis statement - a researcher wishes to test (as opposed to the null), stating the anticipated relationship between two or more variables.

Null hypothesis statement – the wording states that there is an absence of a relationship. used in statistical testing as the hypothesis to be rejected.
what is hypothesis testing?
statistical procedures used to determine whether a hypothesis has a high probability of being correct.
can hypotheses be proven?
No. Hypotheses are not proved; they are accepted or supported.
what do you try to look at when you are Critiquing Research Problems, Research Questions and Hypotheses?
Adequately communicated to reader: significance for nursing, has potential to produce evidence to improve nursing practice.

Compatibility of research problem with methods. It should be congruent. ex: wt and exercise: how do you measure exercise? how do you measure wt? They should fit.
describe some basic sampling concepts:

population (target, accessible)

eligibility (inclusion) criteria

exclusion criteria
Population - the entire set of cases in which a researcher is interested.
Target - ex: everyone who has DM
Accessible - the cases actually available; maybe you have to go to the hospital

Eligibility (inclusion) criteria - the characteristics that define the population
Exclusion criteria - characteristics that disqualify a case from the study
what is sampling?
the process of selecting a portion of the population to represent the entire population so inferences can be made.
what is a sample?
a subset of a population
what is Representativeness?
when key characteristics closely approximate the population
what is probability sampling?
the random selection of units (participants) from the population using random procedures

this is the best kind.
what is non probability sampling?
non-random selection of elements
what are "strata" in a population?
subdivisions of a population according to some characteristic

A mutually exclusive segment of a population established by one or more characteristics.
what is sampling bias?
a systematic over-representation or under-representation of a population in terms of a characteristic relevant to the research question.
how valuable is non-probability sampling?

discuss the 4 types:

It is rarely representative of the population.

Convenience - use of the most readily available persons

Quota - convenience sampling within prespecified strata

Snowball - subjects invite friends

Purposive - you ask those you think will fit
what is a simple random sample?
a sample drawn randomly from a sampling frame, i.e. a list of all the elements from which the sample will be drawn.
what is stratified random sampling?
the population is first divided into 2 or more strata based on demographic attributes, then elements are randomly sampled from each stratum.
what is cluster (multistage) sampling?
large groups (clusters) are selected first, then smaller units are sampled from within them.
what is systematic sampling?
there is a system. ex: every 5th person chosen
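The three probability designs above can be sketched with Python's standard library; the numbered "sampling frame" and the sample sizes are invented for illustration.

```python
# Simple random, stratified random, and systematic sampling sketches.
import random

random.seed(1)                          # reproducible illustration only
frame = list(range(1, 101))             # sampling frame: 100 numbered elements

# Simple random sample: draw directly from the frame.
simple = random.sample(frame, 10)

# Stratified random sample: divide the frame into strata first,
# then draw randomly within each stratum (here, two equal halves).
strata = [frame[:50], frame[50:]]
stratified = [unit for stratum in strata for unit in random.sample(stratum, 5)]

# Systematic sample: random start, then every kth element.
k = 10
start = random.randrange(k)
systematic = frame[start::k]

print(len(simple), len(stratified), len(systematic))  # 10 10 10
```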
what is probability sampling?

what is an advantage of probability sampling?
the selection of sample participants from a population using random procedures.

it is the only viable method of obtaining representative samples.
what is a drawback of probability sampling?
it is inconvenient and complex
when considering sample size, what is "power analysis?"
a tool used to estimate sample size. how many people will you need to get reliable results?
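A rough sketch of a power analysis for comparing two group means, using the common normal-approximation formula n per group = 2 * ((z_alpha/2 + z_beta) / d)^2, where d is the expected effect size (Cohen's d). The function name and inputs are illustrative, not a substitute for proper power software.

```python
# Normal-approximation sample-size estimate for a two-group mean comparison.
from math import ceil
from statistics import NormalDist

def n_per_group(effect_size, alpha=0.05, power=0.80):
    z = NormalDist().inv_cdf
    z_alpha = z(1 - alpha / 2)     # two-sided significance criterion
    z_beta = z(power)              # desired power
    return ceil(2 * ((z_alpha + z_beta) / effect_size) ** 2)

print(n_per_group(0.5))   # medium effect -> about 63 per group
print(n_per_group(0.2))   # small effect  -> far larger sample needed
```

Note how a smaller expected effect size drives the required sample sharply upward, which is why effect size appears among the factors affecting sample-size requirements below.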
are large samples always best? explain:
Large samples are no assurance of accuracy.

Sample size is limited by practical constraints (time, subject availability, resources).
what if your sample size is too small?
the data may not support the hypothesis, even if it is correct.
discuss the following Factors Affecting Sample Size Requirements in Quantitative Studies:

Homogeneity of the population

Effect size

Cooperation and attrition

Subgroup analysis

Sensitivity of measures
Homogeneity of the population - the more alike the subjects are, the smaller the sample can be.

Effect size - how large a difference is expected to be?

Cooperation and attrition - this can create problems with sample size

Subgroup analysis - how groups compare. Looking at secondary variables that could have made a difference

Sensitivity of measures
what are the steps to implementing a sampling plan in quantitative studies?
Identify the population.

Specify eligibility criteria.

Specify sampling plan.

Recruit the sample
what are the steps to sample recruitment?
Identifying eligible candidates (ex: going to a diabetic clinic.)

Persuading eligible candidates to participate

Using a screening instrument to determine eligibility, like a questionnaire
what are some factors that may affect sample recruitment?
Recruitment method


Persistence - try calling again

Incentives - $

Research benefits - what is in it for them?

Sharing results


also, try to assure the participant and personally endorse the study.
what do you do when you are generalizing from samples?
you enhance inferences about the generalizability of findings by comparing sample characteristics with population characteristics
discuss sampling in qualitative research:
samples are usually small and nonrandom.

there is less concern with measuring attributes and relationships in a population. the question becomes:
Who would be an information-rich data source?

Goal of most qualitative studies – to discover meaning, uncover multiple realities.
what are some sampling types used in qualitative research?

Maximum variation
Extreme case
describe theoretical sampling:
The process of data collection for generating theory, to discover categories and their properties, and to offer interrelationships that occur.

it is used in grounded theory
what decides the sample size in qualitative research?
it is based on informational needs. They tend to be small.

You are looking to attain data saturation (hearing the same thing over and over.)
what are the 3 main qualitative traditions in sampling?
ethnography

phenomenology

grounded theory
describe "ethnography" in qualitative traditions in sampling:
mingling with and having conversations with as many members of the culture under study as possible. Researchers decide whom to sample and what to sample.

Key informants can help researchers determine what to sample.
describe "phenomenological" in qualitative traditions in sampling:
we will ask about a phenomenon

There is a very small sample size – 10 or fewer.

All participants must have experienced the phenomenon.

Participants must be able to articulate what it is like to have lived the experience.
describe "grounded theory" in qualitative traditions in sampling:
Sample size usually 20 to 30 people, using theoretical sampling.
Goal – select informants who can best contribute to the evolving theory.
Sampling, data collection, data analysis, theory construction occur concurrently and are adjusted in an ongoing fashion.
when critiquing sampling plans, what do you want to look for in description of sampling strategy?
the Sampling approach e.g. convenience.

look at the population, eligibility criteria.

look at the Number of participants, rationale for sample size.

look at the Description of main characteristics of participants.

in a Quantitative study – look at the number and characteristics of potential subjects who declined participation
when critiquing sampling plans, what do you want to look for in adequacy?
sufficiency and quality of the data the sample yielded.
when critiquing sampling plans, what do you want to look for in appropriateness?
methods used to select a sample resulting from the identification and use of participants who can best supply information on the conceptual requirements of study.
when critiquing sampling plans, what do you want to look for in "fittingness?"
Fittingness – degree of congruence between the adequacy of descriptions provided by participants and the context in which the study was done.
what are some other things to consider when critiquing a sampling plan?
whether the sample is representative of population.

the response rates - who responded and who did not?

Nonresponse bias - when people invited to participate in a study fail to do so

Attrition bias in longitudinal studies - bias introduced when participants drop out over the course of the study
Attrition bias in longitudinal study -