139 Cards in this Set

  • Front
  • Back
Reason for Measurements in PT
to provide the best clinical intervention possible
Assessments
"Tests" - provide the data for measurements - use them to evaluate changes in your patients for conditions (i.e. balance, strength, coordination...etc)
Measurements
numerical data - “Measurement is the act or process of quantifying some variable such as cognition, pain, blood pressure, force associated w/strength, liver enzyme levels, ROM, etc.”
Big Picture for tests and measures?
What you want to evaluate -> assessments (tests) -> measurements (data) -> statistical analysis -> evaluation & decide what to do
What are you ACTUALLY measuring?
Pounds, kilograms, ounces - NOT strength, force, or ROM
Critical Thinking
The art of analyzing and evaluating thinking with a view to improve it
Do journals and articles contain facts?
No - it is what we know at this time, and at this point of our research
Critical Thinking is?
Self-directed
Self-disciplined
Self-monitored
Follows a scientific method
Egocentrism
The tendency to perceive, understand & interpret the world in terms of the self
Sociocentrism
The tendency to perceive, understand & interpret the world in terms of your society, culture or profession
Robert Fuller?
was a fool - did not use critical thinking in his alternative medicine study - body repairs itself - "healing by coincidence" - has no way to prove cause and effect
Scientific Method
1- empirical data is generated (objective)
2 - generation of hypothesis (null and alternative)
3 - experiments
4 - data is statistically analyzed
5 - reject or accept hypothesis
6 - generate theories
Model
a series of steps or events that explains a process
Garrison's quote on models?
"Models are paradigms waiting to change & thus they are outdated once they are born"
Biggest problems with models?
you are trained to use certain models and you try to make all your thoughts and reasoning fit into those models (even though not everything always fits)
The 3 models for this course?
International Classification of Impairments, Disabilities, & Handicaps
Nagi Scheme
ICF Model
International Classification of Impairments, Disabilities, & Handicaps (ICIDH)
Disease -> Impairment -> Disability -> Handicap
Nagi Scheme
Active Pathology -> Impairment -> functional limitation -> disability
ICF Model
health conditions -> activities -> participation -> body structures/function -> environmental factors -> personal factors (circular in nature, everything relates)
Research
an objective, systematic investigation
analysis and interpretation of the data is done to?
gain new knowledge or add to existing knowledge
Null Hypothesis
there is no difference between the 2 conditions (hypothesis of no effect)
Alternate Hypothesis
there is a difference between the 2 conditions
Quantitative Research
considered a higher level of research than qualitative since the results can be generalized to the general population
Qualitative Research
descriptive of a population or sample being tested
Basic/pure/bench research
establish new knowledge in the development or refinement of theory
*quantitative/implies a laboratory situation
Clinical research
involve human subjects
conduct clinical trials of new programs, products, drugs, and techniques
Applied Research
quantitative research
designed to answer practical problems
Ex: development of MRI machine
Methodological Research
develop or design new ways to measure variables
*all tests and assessments are this type
Descriptive Research
QUALITATIVE research
describe systematically a condition, observation, or area of interest
Epidemiological Research
QUALITATIVE research
study the incidence, distribution, cause of disease, or impairment
may describe 2 types of conditions
Research Methodology
determines how you set up your experiment/research to evaluate the Null hypothesis
3 important pieces of research methodology
1. Manipulation
2. Control
3. Randomization
Manipulation
the researcher changes (manipulates) 1 or more variables in connection with the subject or condition
Variable
anything that can vary or change about the condition for the subject
Independent Variable
the variable which is manipulated; it can be the experimental intervention/treatment variable
EX: hot/cold; drug
Dependent Variable
this is the data (measurement) outcome, condition/appearance variable
EX: swelling of ankle; contusion
Control
refers to the ability of the researcher to control or eliminate interfering and irrelevant influences from the study
(need to be able to say that the results of the experiment came from the change in the variable, not from an outside source)
What is compromised if there is no control?
sensitivity, validity, reliability, and predictive value
How do you get around not being able to control everything?
Add a control group
Randomization
A process designed to reduce the risk of systematic bias from creeping into the study.
Internal Validity
the chance we are changing & measuring what we think we are changing & measuring
External Validity
the chance that results found in subjects can be applied to groups outside of the groups we are studying
3 categories of research protocols (methodologies)
True Experimental Designs
Quasi-Experimental Designs
Non-Experimental Designs (qualitative)
True Experimental Designs
must have: manipulations, randomization, and control
"cause and effect" research
Can be double-blind studies
Quasi-Experimental Designs
must have: manipulations, but not control or randomization
*opens study up to outside influence
Typically, case studies, groups of people
Non-Experimental Designs
no manipulations, randomizations, or control
generates questions for research
good correlational studies
Data Collection (measurements)
The numeric value (number) assigned to an object, event, interaction, observation or person according to rules (OPERATIONAL CRITERIA)
if you have rules...
you should be able to measure everything
Categories of Measurements
Fundamental
Derived
Change
Fundamental Measurements
obtained w/o the need for derivation (no math!)
EX: measuring ROM
Derived Measurements
measurements of a variable (dependent) that are obtained as a result of math applied to the existing measurements
EX: Femur is 18 inches on L, 18.5 on R = 0.5 difference
Change Measurements
mathematical difference between 2 of the same kinds of measurements taken on the same person at 2 points in time
EX: Pre and Post treatment data
3 types of purposes for measurements
Evaluative
Discriminative
Predictive
Evaluative Purpose
can evaluate the effect of an intervention over time
"outcome measures"
EX: Berg Balance Scale
Discriminative Purpose
to discriminate some function, variable or activity among subjects or groups
EX: cognitive function among subjects - with a test
Predictive Purpose
using a measurement to say something about future events or creating a prognosis
EX: Berg Balance can predict balance in the future
Qualitative Data
alphanumeric
comprised of letters or characters which may be digits
"Character or Categorical"
does not support anything
descriptive stats - mean, mode, etc. for categories
Quantitative Data
always numbers with quantities
does NOT have to be whole numbers
measures should be standardized (reliable)
discrete/cardinal (whole numbers!)
continuous (any value along a continuum w/in a range)
Scales used to measure?
Nominal
Ordinal
Interval
Ratio
Nominal Scale
Qualitative
lowest level of refinement
descriptive stats
EX: person can stand, or not
NONPARAMETRIC DATA
Ordinal Scale
ranking scale
implies a greater or lesser degree of something
EX: hate a lot, hate a little, its OK, like a little, love it
*no equal increments!
Interval Scale
data ranked in a logical sequence
EQUAL increments in data
EX: ROM (degrees), height (inches)
there is NO absolute zero
Ratio Scale
highest level of scales
continuum of values (like interval)
has an ABSOLUTE ZERO
EX: if you get a zero, you don't know anything about the subject
PARAMETRIC DATA
Validity
refers to the degree to which a test (assessment), intervention, or instrument measures what it is supposed to be measuring
*a matter of degree/spectrum (not an all-or-none thing)
Stats Definition
to extract the maximum amount of information about a set of data (measurements)
External Validity
can we generalize the results of an assessment to a similar population
Internal Validity
concerned w/correctly concluding that an independent variable is, in fact, responsible for variation in the dependent variable
(needs good controls - randomization - manipulation)
Construct Validity
based on the knowledge & intellectual underpinnings, which are considered the CONSTRUCT, upon which the test & measurements are developed
Content Validity
related to the extent to which a measurement reflects the specific intended domain of content
Ex: testing for UE body strength: cannot say that you are testing for body strength b/c you only tested the UE; the content validity is not good
Criterion-Based Validity
"instrumental validity"
involves comparing the measurements being examined w/another measurement or a series of other measurements or procedures which have been demonstrated to be valid
Three types of criterion-based validity?
Concurrent
Predictive
Prescriptive
Concurrent Validity
when an inferred interpretation is justified by comparing a measurement w/supporting evidence that was obtained at approx. the same time as the measurement being evaluated (i.e. concurrently)
*more precise than criterion due to time frame
Predictive Validity
concerned w/using criterion to make predictions which are true
*used in many screening tests
Prescriptive Validity
concerned w/using the inferred interpretation of criterion (measurement) from a test to prescribe a treatment
Face Validity
how a measure or assessment appears
non-statistical variety of validity
"does this data seem reasonable?"
Convergent Validity
it refers to the degree to which a measure is correlated w/other measures that it is theoretically predicted to correlate with
EX: multiple testing for the same outcome - should align results for the patient/population
Reliability
the degree to which measurements of a test remain consistent over repeated tests of the same subject under identical conditions
(repeatability)
Inter-tester Reliability
consistency between different people measuring the same thing
indicates a correlation between testers
measures TESTERS - not test
Intra-tester Reliability
consistency or equivalence when one person repeats measurements over a period of time
Test-Retest Reliability
consistency of repeated measurements in time
indicates stability in test
measure TOOL/ASSESSMENT
Population
total number of individuals, measurements, or units from which data will be collected or generalized about
Sample of Population
- Rarely can you deal with a whole population, so you must use a sample.
- In general, when dealing with statistics you are dealing with a sample of the population.
Three Levels of Data Analysis
Descriptive
Correlative/Trend
Comparative
Descriptive Data Analysis
lowest level
mean/mode/frequency etc...
qualitative research
Correlative/Trend Data Analysis
Describes relationship of changes of one variable with changes of another variable
Middle level
*correlation of coefficients*
can extrapolate results to population (quantitative)
Comparative Data Analysis
Determines whether 2 or more groups of data are different or not
"cause and effect"
highest level
Parametric Statistical Test
Tests which are run when the data (measurements) comes from a normal distribution (Bell Shaped Curve)
"data clusters around the mean"
mode and median are values around the mean
standard deviations included in curve
Non-Parametric Statistical Test
Tests which are run when the data (measurements) DO NOT come from a normal distribution (Bell Shaped Curve)
- data is ordinal or nominal
- sample sizes are small
- used when normal distribution cannot be assumed
Both distributions can involve what type of stats?
descriptive and inferential
Descriptive Stats
(those which describe, organize & summarize data)
They only describe things, so you cannot compare groups or extrapolate the data to anyone else
Frequency
the # of occurrences of a repeating activity based on a unit of time
Percentages
a way of expressing a number as a fraction of 100
Percentiles
values such that a specified percent of the data falls above or below a value.
Prevalence
the total number of cases in the population or sample at a given time
usually a percentage
Incidence
a measurement of the number of new individuals who develop a disease or condition w/in a particular period of time
normally a percent*
Central Tendency
- Mean: arithmetic average
- Mode: the value which occurs most frequently
- Median: the value which separates the upper half of a sample from the lower half
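
A minimal Python sketch of these three measures using the standard library; the ROM values below are made up for illustration.

```python
import statistics

# Hypothetical sample of knee flexion ROM measurements (degrees)
rom = [110, 115, 115, 120, 125, 130, 135]

print(statistics.mean(rom))    # arithmetic average
print(statistics.mode(rom))    # most frequently occurring value
print(statistics.median(rom))  # value separating the upper half from the lower half
```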
Relative Position
Range
Standard Deviation
Standard Error of Mean
Standard Deviation
measure of the variability of a population, sample or probability distribution
*want them to be low (low is closer to the mean)
Standard Error of Mean
it quantifies the certainty with which the mean computed from a random sample estimates the true mean of the population from which the sample was drawn
*more accurate than SD (extra step)
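
A short illustrative sketch of both quantities under the usual definitions (sample standard deviation, and SEM = SD / √n); the grip-strength values are invented.

```python
import math
import statistics

# Hypothetical sample of grip-strength measurements (kg)
sample = [28.0, 31.5, 30.2, 27.8, 32.1, 29.4]

sd = statistics.stdev(sample)       # sample standard deviation (variability)
sem = sd / math.sqrt(len(sample))   # standard error of the mean: SD / sqrt(n)

print(f"SD  = {sd:.2f}")
print(f"SEM = {sem:.2f}")
```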
Inferential Stats Test
Test which uses data from samples drawn from a population to make inferences about the total population (3 types)
Student t-test
parametric test
need normal distribution
2 types: unpaired/paired
Unpaired t-test
one sample t-test
used to test whether the mean drawn from a normal population differs from a hypothesized value
Paired t-test
whether the means of 2 groups are different - samples drawn in pairs/ or are related
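
As a hedged illustration (assuming SciPy is available), the sketch below runs a one-sample t-test against a hypothesized mean and a paired t-test on pre/post data, matching the two cards above; all values are fabricated.

```python
from scipy import stats

# One-sample (the "unpaired" card above): does the sample mean differ
# from a hypothesized population value of 90 degrees?
rom_sample = [85, 92, 88, 95, 91, 87, 90, 93]
t1, p1 = stats.ttest_1samp(rom_sample, popmean=90)

# Paired: are the means of two related samples (pre vs. post treatment) different?
pre = [40, 42, 38, 45, 41]
post = [46, 45, 44, 50, 47]
t2, p2 = stats.ttest_rel(pre, post)

print(f"one-sample: t = {t1:.2f}, p = {p1:.3f}")
print(f"paired:     t = {t2:.2f}, p = {p2:.3f}")
```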
Analysis of Variance (ANOVA)
an extensive class of related statistical models (tests) & their associated procedures, in which the observed variance (SD & SEM) is separated into categories due to different independent variables
Statistical tests which normally involve 3 or more independent variables and only 1 dependent variable
ANOVA
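
A minimal sketch of a one-way ANOVA comparing three treatment groups with SciPy (assumed available); the group scores are made up.

```python
from scipy import stats

# Hypothetical outcome scores for three independent treatment groups
group_a = [23, 25, 21, 26, 24]
group_b = [30, 28, 31, 29, 27]
group_c = [22, 20, 24, 23, 21]

# One-way ANOVA: is there a difference among the group means?
f_stat, p_value = stats.f_oneway(group_a, group_b, group_c)
print(f"F = {f_stat:.2f}, p = {p_value:.4f}")
```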
Correlation Coefficients
an index of the degree of association between 2 variables or the extent to which the order of individuals on 1 variable is similar to the order of individuals on a 2nd variable
Linear regression analysis
establishes a mathematical relationship between 2 or more variables
• Strong correlations of data do not necessarily prove cause and effect
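
A brief sketch (assuming SciPy) fitting a line between two invented variables; as the card notes, a strong fit does not by itself prove cause and effect.

```python
from scipy import stats

# Hypothetical data: therapy sessions attended vs. gait speed (m/s)
sessions = [2, 4, 6, 8, 10, 12]
gait_speed = [0.6, 0.7, 0.75, 0.85, 0.9, 1.0]

result = stats.linregress(sessions, gait_speed)  # least-squares line
print(f"slope = {result.slope:.3f}, intercept = {result.intercept:.3f}")
print(f"r = {result.rvalue:.2f}, p = {result.pvalue:.4f}")
```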
Pearson's Correlation Coefficient
quantifies the strength of association between 2 variables that are normally distributed
- Parametric b/c it comes from a normal distribution
- Usually shown by ‘r’ in papers
- Used a lot with a true experimental design
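
Illustrative only (assuming SciPy): pearsonr returns the r reported in papers plus a p-value; the paired measurements below are fabricated.

```python
from scipy import stats

# Two normally distributed variables measured on the same subjects (hypothetical)
height_cm = [160, 165, 170, 175, 180, 185]
arm_span_cm = [158, 166, 171, 174, 182, 188]

r, p = stats.pearsonr(height_cm, arm_span_cm)
print(f"Pearson r = {r:.2f}, p = {p:.4f}")
```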
Intraclass Correlation Coefficient (ICC)
it demonstrates the consistency of measurements when 1 or more raters take the measurements
- consistency/conformity between multiple testers
Spearman Rank Correlation Coefficient
it is used to quantify the strength of association between 2 variables that are measured on an ordinal scale
- Want to see if 2 variables are correlated in a positive or negative manner
- Non-parametric test
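
A matching sketch for ordinal data (assuming SciPy); the rank-scale scores are invented.

```python
from scipy import stats

# Ordinal-scale data (hypothetical): pain rank vs. satisfaction rank per patient
pain_rank = [1, 2, 3, 4, 5, 6]
satisfaction_rank = [6, 5, 4, 3, 2, 1]

rho, p = stats.spearmanr(pain_rank, satisfaction_rank)
print(f"Spearman rho = {rho:.2f}, p = {p:.4f}")  # -1.0 here: perfect negative association
```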
Cronbach's Alpha
frequently used as a measure of the internal consistency reliability of an assessment
- How close the dependent variable is to the same value each time you measure
- Measure of internal consistency; how accurate
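
Cronbach's alpha is not in the standard library or SciPy; the sketch below implements the usual formula, alpha = k/(k-1) * (1 - sum of item variances / variance of the total score), with NumPy on fabricated questionnaire data.

```python
import numpy as np

# Hypothetical questionnaire: rows = subjects, columns = items of one scale
scores = np.array([
    [4, 5, 4, 4],
    [3, 3, 4, 3],
    [5, 5, 5, 4],
    [2, 3, 2, 2],
    [4, 4, 5, 4],
])

k = scores.shape[1]                          # number of items
item_vars = scores.var(axis=0, ddof=1)       # variance of each item
total_var = scores.sum(axis=1).var(ddof=1)   # variance of the summed scale score
alpha = (k / (k - 1)) * (1 - item_vars.sum() / total_var)
print(f"Cronbach's alpha = {alpha:.2f}")
```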
Cohen's Kappa Coefficient
it is a measure of inter-rater agreement for qualitative (categorical) items
QUALITATIVE only
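
A small sketch (assuming scikit-learn is installed) of inter-rater agreement on categorical ratings; the two raters' labels are made up.

```python
from sklearn.metrics import cohen_kappa_score

# Two raters classifying the same 10 patients into categories (hypothetical)
rater_1 = ["fall risk", "safe", "safe", "fall risk", "safe",
           "fall risk", "safe", "safe", "fall risk", "safe"]
rater_2 = ["fall risk", "safe", "fall risk", "fall risk", "safe",
           "fall risk", "safe", "safe", "safe", "safe"]

kappa = cohen_kappa_score(rater_1, rater_2)
print(f"Cohen's kappa = {kappa:.2f}")  # agreement beyond chance
```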
True Positives
sick subjects correctly identified as having the disease
False Positives
healthy subjects wrongly identified as having the disease
True Negatives
healthy subjects correctly identified as not having the disease
False Negatives
sick subjects incorrectly identified as not having the disease
Sensitivity
A value which indicates the proportion (percentage) of actual positives which are correctly identified as being positive (TRUE POSITIVES)
Formula: No. of True Positives / (No. of True Positives + No. of False Negatives)
Specificity
A value which indicates the proportion of negatives which are correctly identified as being negative (TRUE NEGATIVE)
Formula: No. of True Negatives / (No. of True Negatives + No. of False Positives)
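
The two formulas translate directly into code; this sketch uses invented confusion-matrix counts.

```python
# Hypothetical screening-test results against a gold standard
true_positives = 45
false_negatives = 5    # sick, but the test called them negative
true_negatives = 90
false_positives = 10   # healthy, but the test called them positive

sensitivity = true_positives / (true_positives + false_negatives)
specificity = true_negatives / (true_negatives + false_positives)

print(f"Sensitivity = {sensitivity:.2f}")  # 0.90
print(f"Specificity = {specificity:.2f}")  # 0.90
```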
Relative Standards
If you don’t want to use standardized tests you may choose to use relative standards, which means you use the subject as their own control & not some standardized chart
NOT relevant to a population - only to patient
P value
In accepting the Null Hypothesis (no difference exists) you need to know ahead of time what kind of chance you are willing to take of being wrong. This is the P value.
Type 1 Error
(Alpha Error/False Positive - synonyms): you reject the null hypothesis when the null hypothesis is true
Type II Error
(Beta Error/False Negative - synonyms): You fail to reject the null hypothesis when the null hypothesis is false
Cut-off Point
Can be anything that we determine to be indicative of a problem or a non-issue in terms of patient presentation.
Subjective points used to base decisions about whether or not a person has a condition, is eligible for a specified intervention, or needs to be referred for further testing.
Z-score
how many standard deviations a person is above or below the mean
Norm-referenced cut-off points are?
The BEST - then criterion, then arbitrary (i.e. pain)
Positive Predictive Value
A proportion of individuals identified by the cutoff as being abnormal who are classified as having a target condition by a criterion measure
Negative Predictive Value
A proportion of individuals identified by the cutoff as being normal who are classified as not having a target condition by a criterion measure
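
Using the same kind of counts, both predictive values can be computed as in this illustrative sketch (counts are made up).

```python
# Hypothetical counts, as in the sensitivity/specificity sketch above
true_positives = 45
false_positives = 10   # flagged abnormal by the cutoff, but do not have the condition
true_negatives = 90
false_negatives = 5    # classified normal by the cutoff, but do have the condition

ppv = true_positives / (true_positives + false_positives)   # positive predictive value
npv = true_negatives / (true_negatives + false_negatives)   # negative predictive value

print(f"PPV = {ppv:.2f}")  # ~0.82
print(f"NPV = {npv:.2f}")  # ~0.95
```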
Sources of instability in a study
verification bias
small samples
errors in criterion measure
construct irrelevant variance
Likelihood Ratio
- Incorporates sensitivity and specificity
- Provides a direct estimate of how much a positive or negative test result will change the likelihood of having a condition or disease.
Positive Result of Likelihood Ratio (LR+)
how much the likelihood of the condition increases when a test is positive.
LR+ = Sensitivity / (1 - Specificity)
Negative Result of Likelihood Ratio (LR-)
how much the likelihood of the condition decreases when a test is negative.
LR- = (1 - Sensitivity) / Specificity
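
Both ratios follow directly from sensitivity and specificity; a short sketch with assumed test characteristics:

```python
# Hypothetical test characteristics
sensitivity = 0.90
specificity = 0.80

lr_positive = sensitivity / (1 - specificity)   # LR+: how much a positive result raises the likelihood
lr_negative = (1 - sensitivity) / specificity   # LR-: how much a negative result lowers the likelihood

print(f"LR+ = {lr_positive:.1f}")   # 4.5
print(f"LR- = {lr_negative:.3f}")   # 0.125
```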
Minimal Detectable Change
Minimal change that could be attributed to intervention vs. error
Minimally Important Difference
Clinically relevant change; magnitude of change that is meaningful or change in function beyond natural progression.
Clinically Significant Change
A change that is recognizable to peers and others
- A proportion of persons who show improvement
- A proportion of elimination of the presenting problem
- Minimally clinically significant change
Z-score for the mean is always?
0 (zero)
Z-score for the standard deviation is always?
1 (one)
EX: Z-Score of -1.4 = 1.4 standard deviations below the mean
How do you calculate a Z-score?
z = (X - x) / sd (subtract the mean first, then divide by the standard deviation)
E.g. raw score (X) = 15, mean (x) = 10, standard deviation (sd) = 4: z = (15 - 10) / 4
• Z-Score = 1.25
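
The card's worked example in code form (same numbers):

```python
def z_score(raw, mean, sd):
    """Number of standard deviations a raw score lies above or below the mean."""
    return (raw - mean) / sd

# Example from the card above: X = 15, mean = 10, sd = 4
print(z_score(15, 10, 4))  # 1.25

# A negative result means the score is below the mean, e.g. -1.4 = 1.4 SD below the mean
```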