465 Cards in this Set
- Front
- Back
|
•Introduction and Background for Measurement and Evaluation
|
|
|
•Introduction of Measurement and Evaluation
|
|
Began
|
•Increased interest in fitness began in the late 1970s due to Dr. Kenneth Cooper
|
|
|
–Fitness craze may be linked to publication of Aerobics in 1968
|
|
no longer
|
•Kinesiology no longer for just teachers and coaches
|
|
need
|
•All kinesiology professionals need an understanding of testing
|
|
|
•Timeline of Measurement in Physical Education and Sport
|
|
dr. edward
|
•1861: Dr. Edward Hitchcock developed standards for age, height, strength of the upper arm, girths of the chest, arms, and forearms
|
|
dr. edward
|
–Considered father of measurement in kinesiology
|
|
dr. edward
|
–Field of study known as anthropometry
|
|
|
•Timeline of Measurement in Physical Education and Sport
|
|
dr. dudley
|
•1878: Dr. Dudley Sargent and William Brigham developed strength tests
|
|
dr. dudley and william brigham
|
–Developed 40+ different anthropometric measurements for exercise prescription
|
|
who developed the modified step test?
|
•1931: W.W. Tuttle developed modified block-stepping test for endurance and general training
|
|
|
•1952: Balke Treadmill test
|
|
|
•Timeline of Measurement in Physical Education and Sport
|
|
minimum strength
|
•1954: Kraus-Weber Test for Minimum Strength
|
|
school age
|
•1958: AAHPERD published first fitness test for school-age American kids
|
|
AAHPERD
|
–Initial test given to kids was the Physical Best test
|
|
FitnessGram
|
–California currently uses FitnessGram test
|
|
|
•Given to all 5th, 7th, and 9th grade students
|
|
Dr. Kenneth
|
•1968: Dr. Kenneth Cooper developed 12-minute walk-run test
|
|
|
•Current Happenings in Measurement in Physical Education and Sport
|
|
Current Happenings
|
•Physical activity is a complex, multifaceted behavior
|
|
7 components
|
–7 components: physical, social, occupational, environmental, intellectual, spiritual, emotional
|
|
Growing trend
|
•Growing trend to use physical performance for employment decisions
|
|
Legal
|
•Legal concerns when testing
|
|
Legal
|
–Title IX impact
|
|
Current Happenings
|
•Current Happenings in Measurement in Physical Education and Sport
|
|
Current Happenings
|
•Challenges of working with older adults
|
|
Current Happenings
|
•Healthy People 2010: National Health Promotion and Disease Prevention
|
|
Current Happenings
|
•Competency testing
|
|
Current Happenings
|
•Authentic assessment
|
|
Current Happenings
|
•ACSM recommendation for physical activity
|
|
ACSM
|
–30 minutes or more on most days of the week
|
|
|
•Measurement
|
|
definition of measurement
|
•Definition: systematic assignment of numerical values or verbal descriptors to the characteristics of objects or individuals
|
|
|
•Measurement (cont.)
|
|
|
•Measurement vs. Test
|
|
measurement vs test
|
–A test must be administered to obtain a measurement
|
|
|
•Measurement vs. Evaluation
|
|
measurement vs evaluation
|
–An evaluation compares the measurements
|
|
|
•Measurement (cont.)
|
|
Objective and subjective
|
•Types of measurements
|
|
types of measurements
|
–Objective: a measurement that cannot be interpreted differently
|
|
examples of types of measurements
|
•Example: a person ran the mile in 6:00
|
|
subjective def
|
–Subjective: a measurement that can be interpreted differently
|
|
subjective example
|
•Example: a person is a fast runner
|
|
|
•Measurement (cont.)
|
|
|
•4 Steps in the Measurement Process
|
|
4 steps in measurement
|
–Define the characteristics you want to measure
|
|
4 steps in measurement
|
–Select the appropriate test
|
|
4 steps in measurement
|
–Administer the test
|
|
4 steps in measurement
|
–Analyze the data
|
|
|
•Evaluation
|
|
Define evaluation
|
•Definition: obtaining information and using it to form judgments
|
|
Evaluation steps
|
•Steps involved
|
|
|
–Define objective of the test
|
|
|
–Measure the performance
|
|
|
–Find comparison values for the test
|
|
|
–Compare performance to standard
|
|
|
–Evaluate the comparison
|
|
|
•Tests
|
|
define test
|
•Definition: measures individual differences on a specific trait
|
|
purpose of test
|
•Use of tests
|
|
|
–Motivation
|
|
|
–Achievement
|
|
|
–Improvement
|
|
|
–Diagnosis
|
|
|
–Prescription
|
|
|
–Grading
|
|
|
–Classification
|
|
|
–Prediction
|
|
|
ØTest Administration
|
|
|
ØSteps for finding the best test
|
|
Steps to search
|
lRead and search the literature
|
|
steps to ask
|
lSeek advice from other professionals
|
|
step of possible
|
lUse the best tests possible
|
|
step
|
lPilot test
|
|
step to do
|
lAdminister the test
|
|
step to data
|
lScore the test
|
|
|
ØRisk Management
|
|
define risk management
|
ØDefinition: systematic analysis of the services offered for personal injury and financial loss
|
|
|
ØSteps to decrease risk
|
|
steps to decrease risk
|
lTest the equipment
|
|
steps to decrease risk
|
lMake sure testing environment is safe
|
|
steps to decrease risk
|
lParticipant readiness
|
|
steps to decrease risk
|
lAdministrator readiness
|
|
steps to decrease risk
|
lSafety equipment
|
|
|
ØTest Administrators
|
|
|
ØTest considerations before administering a test
|
|
test considerations
|
lCollect data in professional manner
|
|
test considerations
|
lShow an interest in all participants
|
|
test considerations
|
lPlan ahead and be organized
|
|
test considerations
|
ØAdministering tests
|
|
test to administer
|
ØTest considerations during administering a test
|
|
test to administer
|
lGive clear directions for the test
|
|
test to administer
|
lBe cautious of encouragement
|
|
test to administer
|
•Some tests may be altered by encouragement
|
|
test to administer
|
lSafety concerns
|
|
|
ØConsiderations for maximal effort tests
|
|
Maximal effort considerations
|
ØInstruct subject that the test can be stopped at any time if they feel pain or ill
|
|
Maximal effort considerations
|
ØHow does the test administrator know that the participant is giving their max effort?
|
|
example of maximal test administration
|
lSchool settings vs. non-school settings
|
|
|
ØScoring tests
|
|
multiple trial
|
ØAdministering multiple trial tests
|
|
|
lSuccessive trials vs. waiting between trials
|
|
scoring
|
ØScoring multiple trials
|
|
methods of scoring
|
lUse best trial
|
|
methods of scoring
|
lUse mean value of the scores
|
|
methods of scoring
|
lEliminate high and low scores and use middle scores
|
|
|
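The three multiple-trial scoring methods above can be sketched in Python. This is a minimal illustration with hypothetical helper names and scores; it assumes a higher score is better (as in a throwing or jumping test), so a timed event would use the minimum as the best trial instead.

```python
def best_trial(trials):
    # Method 1: use the best trial (max, assuming higher is better;
    # for timed events, min() would give the best trial)
    return max(trials)

def mean_trial(trials):
    # Method 2: use the mean value of all trial scores
    return sum(trials) / len(trials)

def middle_trials_mean(trials):
    # Method 3: eliminate the high and low scores, average the middle scores
    middle = sorted(trials)[1:-1]
    return sum(middle) / len(middle)

throws = [10.0, 12.0, 14.0, 20.0]  # hypothetical distance scores (4 trials)
print(best_trial(throws))          # 20.0
print(mean_trial(throws))          # 14.0
print(middle_trials_mean(throws))  # 13.0
```

Whichever method is chosen, it must be applied the same way to every participant, in line with the scoring consistency cards that follow.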
ØTiming of tests
|
|
|
ØWho should time?
|
|
who should time?
|
lNon-participating subject
|
|
who should time?
|
lAutomatic timing system
|
|
who should time?
|
lTester
|
|
who should time?
|
lTrained subject
|
|
|
ØSelection of timing depends on the situation, purpose of the test, and age of the participants
|
|
|
ØTiming of tests
|
|
|
ØWhen does the timer start the watch?
|
|
How to begin with a watch?
|
lOn a verbal command such as “go,” on a whistle, or on a hand signal
|
|
How to begin with a watch?
|
lOn the subject’s first movement
|
|
How to begin with a watch?
|
lOn the initial movement of the implement
|
|
Timing of test?
|
ØChoice of when to start the timing depends on the most appropriate method and should remain consistent for all subjects
|
|
|
ØTiming of tests
|
|
|
ØWhere should the timer stand?
|
|
where should it be located?
|
lClose to and perpendicular to the finish line
|
|
where should it be located?
|
lWhen calling out time to runners as they pass, face the runner.
|
|
example of timer location?
|
•Start calling out times several seconds before the runners cross in front of you
|
|
|
ØScoring tests
|
|
|
ØScore all participants the same
|
|
How to score?
|
lMust be fair and consistent in the manner used to score tests
|
|
How to score?
|
ØUse exact instructions for everyone involved in the test
|
|
|
ØModifying Tests
|
|
|
ØStandardized tests
|
|
standardized test?
|
lDO NOT MODIFY!!! Follow instructions exactly as they are written.
|
|
|
ØTests for your own purposes can be modified
|
|
modifying tests?
|
lSome tests do not give exact instructions, so the tester must determine appropriate instructions
|
|
modifying tests?
|
lChanging a test affects reliability and validity
|
|
|
ØModifying Tests (cont.)
|
|
|
ØModifying a test due to time constraints
|
|
Modifying examples?
|
lTest over 2 or more days
|
|
Modifying examples?
|
lHave more stations set up
|
|
Modifying examples?
|
lLook for a different test
|
|
Modifying examples?
|
lAs a last resort, modify the test
|
|
|
ØSport Skills Tests
|
|
|
ØProcedure for constructing skills tests
|
|
Procedure for constructing skill tests?
|
lDetermine the purpose of the skill test
|
|
Procedure for constructing skill tests?
|
lMake a list of essential skills or components necessary to perform the skill
|
|
Procedure for constructing skill tests?
|
lDetermine the tests that you want to use to measure each of the essential components
|
|
Procedure for constructing skill tests?
|
lCompare what you want to test with what you are actually testing
|
|
Procedure for constructing skill tests?
|
lEstablish reliability, validity, and objectivity of the tests
|
|
Procedure for constructing skill tests?
|
lAdminister the test and establish norms
|
|
|
ØSport Skills Tests
|
|
|
ØDetermining the validity of a sports skill test
|
|
|
lConstruct validity
|
|
definition of construct validity?
|
•Definition: Skilled people should score high and unskilled people should score low if the test is valid.
|
|
|
ØTest Bias
|
|
definition of bias?
|
ØDefinition: One group taking the test scores higher or lower because of a common characteristic of the group
|
|
assumption of bias
|
ØIt is assumed that the test itself is valid, reliable, and objective
|
|
types of bias?
|
ØTypes of test bias: race, gender, socioeconomic status, disability, sexual orientation, culture
|
|
|
ØTest Bias
|
|
when is bias tested?
|
ØTest bias becomes a concern when one group taking the test has an advantage.
|
|
Most common bias?
|
ØMost common causes of test bias in kinesiology are gender and age.
|
|
|
ØTest Bias
|
|
|
ØMethods to accommodate test bias
|
|
Methods of bias?
|
lProvide different norms for different groups
|
|
Methods of bias?
|
lAdminister different tests to different groups
|
|
Methods of bias?
|
lUse different testing procedures or different equipment
|
|
Methods of bias?
|
lGive directions verbally and in writing
|
|
|
ØAdministrative Concerns in Test Selection
|
|
Test selection concern
|
ØRelevance
|
|
Test selection concern
|
ØEducational value
|
|
Test selection concern
|
ØEconomic value
|
|
Test selection concern
|
ØTime
|
|
Test selection concern
|
ØNorms
|
|
Test selection concern
|
ØDiscrimination
|
|
Test selection concern
|
ØBias
|
|
Test selection concern
|
ØReliance on another person
|
|
Test selection concern
|
ØSafety
|
|
|
nTypes of Standards
|
|
types of standard
|
nNorm reference standard (NRT)
|
|
types of standard
|
nCriterion reference standard (CRT)
|
|
|
¨Uses of CRT
|
|
|
nEvaluation
|
|
|
nTypes of evaluation
|
|
types of evaluation
|
¨Formative
|
|
example of formative
|
nOccurs during the activity
|
|
types of evaluation
|
¨Summative
|
|
examples of summative
|
nOccurs after the activity
|
|
|
nNorms
|
|
Norms provide
|
nProvide basis for evaluation
|
|
types of norms
|
nTypes of norms
|
|
types of norms
|
¨Local
|
|
norms types
|
¨State
|
|
norms types
|
¨National
|
|
|
nGoals & Objectives
|
|
goals define?
|
nGoals: an endpoint for the future
|
|
objectives define
|
nObjectives: brief, clear statements that describe desired outcomes
|
|
relation of goals and objectives?
|
nBoth are important when administering a test
|
|
criteria of both?
|
nCriteria for effective goals/objectives
|
|
|
nCharacteristics of a Test
|
|
|
nFour components of a good test
|
|
four components of good test
|
¨Define the characteristics to be measured
|
|
four components of good test
|
¨Validity
|
|
four components of good test
|
¨Reliability
|
|
four components of good test
|
¨Objectivity
|
|
|
nSelecting a test
|
|
|
nConsiderations when administering a test
|
|
consideration when administering a test
|
¨Clear test directions and scoring
|
|
consideration when administering a test
|
¨Cost
|
|
consideration when administering a test
|
¨Time
|
|
consideration when administering a test
|
¨Ease of administration
|
|
consideration when administering a test
|
¨Ease of scoring
|
|
consideration when administering a test
|
¨Availability of norms
|
|
|
nValidity
|
|
validity
|
nNo test, scale, or inventory is 100% valid or valid for all circumstances
|
|
|
nNorm referenced test validity
|
|
|
nContent validity
|
|
content validity
|
¨Degree to which the sample of items, tasks, or questions on a test is representative of some defined area of content
|
|
content validity
|
¨Relies on subjective decision making
|
|
|
nNorm referenced test validity
|
|
|
nCriterion-related evidence of validity
|
|
definition of criterion-related evidence of validity
|
¨Definition: comparing test scores with one or more external variables that are considered direct measures of the characteristic or behavior
|
|
|
¨2 Types
|
|
types of criterion-related validity
|
nConcurrent validity
|
|
types of criterion-related validity
|
nPredictive validity
|
|
|
nNorm referenced test validity
|
|
|
nConstruct-related evidence of validity
|
|
definition of construct-related evidence of validity
|
¨Definition: degree to which a test measures an attribute or trait that cannot be directly measured
|
|
|
¨Methods to establish construct validity
|
|
methods construct validity
|
nGroup Differences Method
|
|
methods construct validity
|
nValidate each individual test for a battery of tests
|
|
methods construct validity
|
nCorrelational evidence
|
|
|
nCriterion referenced test validity
|
|
|
nDomain-referenced method
|
|
methods of domain-referenced method
|
¨Requires careful description of criterion behavior
|
|
methods of domain-referenced method
|
¨Requires evidence that the test adequately represents the domain
|
|
|
nCriterion referenced test validity
|
|
|
nDecision Method
|
|
decision of criterion referenced test validity
|
¨Can only be used when the ability to correctly classify a person as master or non-master is present
|
|
|
nTest Components
|
|
|
nReliability
|
|
reliability definition?
|
nDefinition: consistency of an individual when repeatedly performing the same test
|
|
reliability refers to?
|
nReliability refers to the dependability of test scores.
|
|
A test can be?
|
nA test can be reliable without being valid, but a valid test has to be reliable.
|
|
|
nFactors that Affect Reliability
|
|
factors affecting reliability?
|
nEquipment
|
|
factors affecting reliability?
|
nAdministrator
|
|
factors affecting reliability?
|
nSubjects
|
|
factors affecting reliability?
|
nTest environment
|
|
|
nTwo Types of Reliability
|
|
types of reliability?
|
1.Reliability of Norm-Referenced Tests
|
|
define norm-referenced reliability?
|
uDefinition: The test was administered on 2 different occasions to the same people and the same differences between people’s scores were detected.
|
|
give example of norm-referenced tests?
|
uExamples
|
|
|
nTwo Types of Reliability
|
|
the other reliability type?
|
2.Reliability of Criterion-Referenced Tests
|
|
define criterion-referenced
|
uDefinition: consistency of classification as master or nonmaster
|
|
|
nTypes of Reliability Seen in Kinesiology
|
|
types of reliability tests?
|
1.Single test administration: consistency of a test across a single administration
|
|
types of reliability tests?
|
2.Test-Retest: test shows consistency when an individual is tested twice within a short period of time
|
|
types of reliability tests?
|
3.Individual test score: estimating the standard error of measurement
|
|
|
nFactors Influencing the Reliability of a Test
|
|
factors influencing reliability?
|
nType of test
|
|
example of test type influence?
|
uReliability coefficients for different tests
|
|
factors influencing reliability?
|
nRange of ability
|
|
factors influencing reliability?
|
nLevel of ability
|
|
factors influencing reliability?
|
nTest length
|
|
factors influencing reliability?
|
nTest administration procedures
|
|
|
nTwo types of Objectivity
|
|
name an objectivity type?
|
1.Tester reliability
|
|
tester reliability define
|
uIntrajudge: consistency in scoring when the same person scores the same test the same way on two or more occasions
|
|
tester reliability define
|
uInterjudge: consistency between 2 or more independent judgments on the same performance
|
|
|
nTwo types of Objectivity
|
|
name an objectivity type?
|
2.Instrument reliability
|
|
define instrument reliability?
|
uDefinition: reliability of the equipment used during the test
|
|
what does instrument reliability not consider?
|
uDoes not consider the reliability of the person who is operating the equipment
|
|
|
|
|
|
►Reading and Understanding Research
|
|
|
►12 Step Guide to Understanding a Qualitative Research Report
|
|
citation, purpose, rationale, participants, context, steps in sequence, data, analysis, results, conclusions, cautions, discussion
|
►
|
|
|
►1. Citation
|
|
citation question?
|
►What study report is this?
|
|
citation requirement?
|
►Record a complete reference citation in APA format.
|
|
|
►2. Purpose and General Rationale
|
|
purpose questions?
|
►What was the purpose of the study?
|
|
purpose questions?
|
►How did the authors make a case for its general importance?
|
|
|
►3. Fit and Specific Rationale
|
|
questions of fit?
|
►How does the topic of the study fit into the existing research literature?
|
|
questions of fit?
|
►How does this study add to the current knowledge?
|
|
|
►4. Participants
|
|
participant ?
|
►Who was the author(s) and how was he or she related to the purpose, participants, and study site?
|
|
participant?
|
►Describe who was studied (give number and characteristics) and how they were selected.
|
|
|
►5. Context
|
|
context taken place?
|
►Where did the study take place?
|
|
importance of context?
|
►Describe important characteristics.
|
|
|
►6. Steps in Sequence
|
|
steps in sequence
|
►Describe the main procedural steps in the study in the order they were performed.
|
|
examples of steps in sequence
|
§Include time required and any important relationships among the steps.
|
|
|
►7. Data
|
|
data ?
|
►What constituted data?
|
|
data ?
|
►How was it collected?
|
|
data ?
|
►What was the role of the investigator in that process?
|
|
|
►8. Analysis
|
|
analysis?
|
►What form of data analysis was used and what was it designed to reveal?
|
|
|
►9. Results
|
|
results ?
|
►What did the author(s) identify as the primary results?
|
|
|
§Findings from the data analysis
|
|
|
►10. Conclusions
|
|
conclusions ?
|
►What did the author(s) conclude from how the results answered the purpose?
|
|
conclusions ?
|
►How did the events and experiences of the entire study contribute to the conclusions?
|
|
|
►11. Cautions
|
|
cautions?
|
►What cautions does the author(s) raise about the study or about interpreting the results?
|
|
cautions?
|
►Add any of your own reservations about the credibility of the methods.
|
|
cautions example
|
§Trustworthiness and believability
|
|
|
►12. Discussion
|
|
discussion?
|
►What interesting facts or ideas did you learn from reading the report?
|
|
discussion?
|
§Include anything that was of value, including results, research designs and methods, references, instruments, history, etc.
|
|
|
|
|
|
oStatistics
|
|
|
oQuantitative Statistics
|
|
QUANTITATIVE DEPENDENT
|
oDependent variable: variable that the researcher wants to analyze
|
|
QUANTITATIVE INDEPENDENT
|
oIndependent variable: variable that the researcher manipulates
|
|
|
oActivity
|
|
|
oName the independent and dependent variables in the examples.
|
|
|
1.The purpose of the study was to determine the weight gains of underweight adults who attended two different health classes.
|
|
|
2.The purpose of the study was to determine if the number of hours a college student studied per month could be used to predict their GPA.
|
|
|
3.The purpose of the study was to determine if the addition of a dietary supplement to an exercise program resulted in significant strength gains.
|
|
|
oNormal Curve
|
|
|
oStandard Scores
|
|
DEFINITION OF STANDARD SCORES
|
oDefinition: a score that is derived from a set of raw data
|
|
TYPES OF SCORES
|
oTypes
|
|
TYPES OF SCORES
|
nz-scores
|
|
TYPES OF SCORES
|
nT-scores
|
|
TYPES OF SCORES
|
nPercentiles
|
|
TYPES OF SCORES
|
nPercents
|
|
TYPES OF SCORES
|
nRanks
|
|
|
oz-scores
|
|
DEFINE Z SCORES
|
oDefinition: standard score with a mean of 0 and a standard deviation of 1
|
|
|
oEquation: z = (X − X̄) / s
|
|
X DEFINE
|
nX = an individual raw score
|
|
X̄ DEFINE
|
nX̄ = the mean of the set of scores
|
|
S DEFINE
|
ns = the standard deviation of the set of scores
|
|
|
oT-scores
|
|
t SCORES DEFINE
|
oDefinition: standard score used in many physical education skill tests with a mean of 50 and a SD of 10
|
|
|
oEquation: T = 50 + 10z
|
|
|
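A minimal Python sketch of both standard scores. The data set is hypothetical, and the population standard deviation is used so the numbers come out even; z = (X − X̄) / s follows from the stated mean of 0 and SD of 1, and T = 50 + 10z from the stated mean of 50 and SD of 10.

```python
import statistics

# Hypothetical raw scores from a skills test
raw = [2, 4, 4, 4, 5, 5, 7, 9]
mean = statistics.mean(raw)   # 5
sd = statistics.pstdev(raw)   # 2 (population SD, chosen for clean numbers)

def z_score(x):
    # z = (X - mean) / SD : rescales scores to mean 0, SD 1
    return (x - mean) / sd

def t_score(x):
    # T = 50 + 10z : rescales z-scores to mean 50, SD 10
    return 50 + 10 * z_score(x)

print(z_score(9))  # 2.0
print(t_score(9))  # 70.0
```

A raw score equal to the mean always gives z = 0 and T = 50, which is why T-scores avoid the negative values that make z-scores awkward to report.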
oPercentiles
|
|
|
oDefinition: percentage of individuals in a group who have achieved a certain quantity
|
|
|
oPercents
|
|
DEFINE PERCENTS
|
oDefinition: converting a score by making a fraction or ratio of the individual score divided by the total and multiplied by 100
|
|
|
oRanks
|
|
DEFINE RANKS
|
oDefinition: converting quantitative scores so that each person within the set knows how they stood in comparison to the group
|
|
|
oDescriptive Statistics
|
|
|
oMeasures of Central Tendency
|
|
Mode:
|
oMode: single number, within the data set, that appears most often
|
|
median?
|
oMedian: 50th percentile
|
|
mean?
|
oMean: arithmetic average
|
|
|
oMeasures of Variability
|
|
variability range
|
oRange: high score minus the low score
|
|
variability: standard deviation
|
oStandard deviation: a measure of the spread of the data set
|
|
variability variance
|
oVariance: measure of statistical dispersion, indicating how its possible values are spread around the expected value
|
|
|
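The measures of central tendency and variability above map directly onto Python's standard-library statistics module; the score list here is hypothetical.

```python
import statistics

# Hypothetical set of seven fitness-test scores
scores = [62, 65, 65, 70, 74, 78, 80]

mode = statistics.mode(scores)          # single most frequent score -> 65
median = statistics.median(scores)      # 50th percentile -> 70
mean = statistics.mean(scores)          # arithmetic average
spread = max(scores) - min(scores)      # range: high score minus low score -> 18
variance = statistics.variance(scores)  # sample variance: dispersion around the mean
sd = statistics.stdev(scores)           # standard deviation = square root of variance
```

Note that the standard deviation is just the square root of the variance, which is why both are reported in the same breath as measures of spread.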
oPlotting Raw Data
|
|
plotting raw/scatter plot
|
oScatter plot: plots paired scores to show the relationship between two variables; used with correlation and regression
|
|
|
oPlotting Raw Data
|
|
example of raw data/plotting
|
oPie charts
|
|
|
oPlotting Raw Data
|
|
plotting raw data?
|
oBar or column charts: show individual scores for a variety of variables
|
|
|
oPlotting Raw Data
|
|
plotting raw data
|
oLine charts: show improvement over time
|
|
|
oPlotting Raw Data
|
|
plotting histograms
|
oHistograms: individual raw data are converted to grouped data and plotted on an x-axis
|
|
|
oTerms used to describe distributions on a histogram
|
|
|
oModality
|
|
term modality
|
nUnimodal
|
|
term modality
|
nBimodal
|
|
term modality
|
nMultimodal
|
|
|
oSkewness
|
|
|
nPositive
|
|
|
nNegative
|
|
|
oTerms used to describe distributions on a histogram
|
|
|
oKurtosis
|
|
terms used kurtosis
|
nLeptokurtic
|
|
terms used kurtosis
|
nMesokurtic
|
|
terms used kurtosis
|
nPlatykurtic
|
|
|
oInterpreting numerical values associated with skewness
|
|
|
oSkewness
|
|
most common values
|
nValues between +1.00 and -1.00 are considered a normal distribution
|
|
|
n>+1.0 means positively skewed
|
|
|
n<-1.0 means negatively skewed
|
|
|
oInterpreting numerical values associated with kurtosis
|
|
|
oKurtosis
|
|
|
nValues between +1.00 and -1.00 are considered a normal distribution
|
|
leptokurtic
|
n>+1.0 means a leptokurtic distribution
|
|
platykurtic
|
n<-1.0 means a platykurtic distribution
|
|
|
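The interpretation rules on the skewness and kurtosis cards can be encoded directly as a pair of classifier functions (hypothetical names; the ±1.00 cutoffs are the rule of thumb the cards state).

```python
def classify_skewness(value):
    # Cards' rule: values between +1.00 and -1.00 are a normal distribution
    if value > 1.0:
        return "positively skewed"
    if value < -1.0:
        return "negatively skewed"
    return "approximately normal"

def classify_kurtosis(value):
    # Same +/-1.00 rule applied to kurtosis
    if value > 1.0:
        return "leptokurtic"
    if value < -1.0:
        return "platykurtic"
    return "mesokurtic (approximately normal)"

print(classify_skewness(1.4))   # positively skewed
print(classify_kurtosis(-2.1))  # platykurtic
```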
lBivariate Correlation
|
|
definition of bivariate correlation
|
lDefinition: describes the relationships between 2 variables
|
|
examples of bivariate
|
lCorrelation coefficients and scatter plots can be used
|
|
|
lScatter Plot of Relationships
|
|
|
lRelationships
|
|
positive scatter plot
|
lPositive: high scores on one variable are paired with high scores on the second variable and vice versa
|
|
|
–Also referred to as a direct relationship between the 2 variables
|
|
|
lRelationships
|
|
negative
|
lNegative: high scores on one variable are paired with low scores on the second variable
|
|
negative other name
|
–Also referred to as an inverse relationship
|
|
|
lRelationships
|
|
no relation
|
lNo relation: scores on one test have no bearing on scores from another test
|
|
|
lCorrelation Coefficient
|
|
coefficient
|
lCorrelation coefficient denoted as “r”
|
|
bivariate
|
lBivariate correlation coefficient determines direction and strength of the relationship between the 2 variables
|
|
bivariate direction
|
–+ or – sign indicates direction
|
|
bivariate strength
|
–numeric value indicates strength
|
|
|
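A minimal sketch of the bivariate (Pearson) correlation coefficient r, showing how the sign gives the direction and the magnitude gives the strength of the relationship; the function name and data are hypothetical.

```python
import math

def pearson_r(x, y):
    # r = sum of co-deviations / sqrt(product of squared deviations)
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    sxy = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sxx = sum((a - mx) ** 2 for a in x)
    syy = sum((b - my) ** 2 for b in y)
    return sxy / math.sqrt(sxx * syy)

print(pearson_r([1, 2, 3], [2, 4, 6]))  # 1.0  (perfect direct relationship)
print(pearson_r([1, 2, 3], [6, 4, 2]))  # -1.0 (perfect inverse relationship)
```

Scores with no relation produce an r near 0, matching the scatter-plot patterns described above.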
lInferential Stats
|
|
definition of inferential
|
lDefinition: making inferences about a population from a sample of subjects assumed to represent that population
|
|
|
lInferential Stats
|
|
sample of inferential
|
lSample: represents a portion of the population from which measurements are actually obtained
|
|
population in inferential stats
|
lPopulation: all units possessing a certain characteristic defined by the researcher
|
|
|
lSampling Techniques
|
|
|
lRandom sampling
|
|
simple random:
|
–Simple random: ensures that each element in the population has an equal chance of being selected
|
|
stratified random:
|
–Stratified random: ensures that all subsets of the population are represented in appropriate numbers
|
|
|
lSampling Techniques
|
|
|
lNon-random
|
|
non-random
|
–Volunteers
|
|
|
•Sample of convenience
|
|
|
lSampling Problems
|
|
sampling problems
|
lResponse rate
|
|
sampling problems
|
lRefusal to participate
|
|
sampling problems
|
lAttrition
|
|
|
uTailed Tests
|
|
one tailed test:
|
uOne-tailed test: sensitive to differences in only one direction
|
|
|
•Used when the direction of the difference between populations is known or when the researcher is concerned about a difference in one direction
|
|
two-tailed test:
|
uTwo-tailed test: sensitive to significant differences in either direction
|
|
|
uP-value
|
|
def P Value
|
uDefinition: the area under the tail or tails of a distribution beyond the value of the test statistic or the probability that the value of the calculated test statistic occurred by chance
|
|
|
uHypothesis Steps
|
|
first hypothesis
|
1.State the null hypothesis.
|
|
null hypothesis
|
•Ho: μ1 = μ2
|
|
|
uHypothesis Steps
|
|
second hypothesis
|
2.State the alternative hypothesis
|
|
hypothesis
|
•HA : μ1 < μ2
|
|
hypothesis
|
•HA : μ1 > μ2
|
|
hypothesis
|
•HA : μ1 ≠ μ2
|
|
|
uHypothesis Steps
|
|
third hypothesis
|
3.Determine the critical value and rejection regions
|
|
|
•Level of significance (denoted as α)
|
|
define level of significance
|
uDefinition: probability that defines how unlikely the event must be before the researcher can reject the null hypothesis
|
|
most common
|
uMost common value is 0.05
|
|
|
•Level of confidence
|
|
|
uHypothesis Steps
|
|
fourth hypothesis
|
4.Compute the calculated value
|
|
|
•Use t-test or z test statistic
|
|
|
uHypothesis Testing
|
|
5th hypothesis step
|
5.Come to a conclusion about Ho
|
|
|
•Either reject or fail to reject the null hypothesis
|
|
5th step, method 1
|
•Method 1: use calculated and tabled values
|
|
|
•obtained value < tabled value: fail to reject Ho
|
|
|
•obtained value ≥ tabled value: reject Ho
|
|
5th step, method 2
|
•Method 2: use p-value and α
|
|
reject
|
•p ≤ α : reject Ho
|
|
fail to reject
|
•p > α: fail to reject Ho
|
|
|
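Steps 4 and 5 can be sketched as a two-tailed z-test that uses Method 2 (compare the p-value with α) for the decision. The function name and example values are hypothetical, and the sketch assumes a known population standard deviation so the z statistic applies.

```python
from statistics import NormalDist

def two_tailed_z_test(sample_mean, pop_mean, pop_sd, n, alpha=0.05):
    # Step 4: compute the calculated value (z statistic)
    z = (sample_mean - pop_mean) / (pop_sd / n ** 0.5)
    # Area under both tails beyond |z| = the p-value
    p = 2 * (1 - NormalDist().cdf(abs(z)))
    # Step 5, Method 2: p <= alpha -> reject Ho; p > alpha -> fail to reject Ho
    decision = "reject Ho" if p <= alpha else "fail to reject Ho"
    return z, p, decision

# Hypothetical example: sample of 100 with mean 52 vs population mean 50, SD 10
z, p, decision = two_tailed_z_test(52, 50, 10, 100)
print(round(z, 2), round(p, 4), decision)  # 2.0 0.0455 reject Ho
```

Since 0.0455 ≤ 0.05, the null hypothesis is rejected at the 0.05 level of significance; the same data would fail to reject Ho at α = 0.01.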
uHypothesis Testing
|
|
|
uReport the conclusion in non-technical terms
|
|
|
uPossible Errors in Hypothesis Testing
|
|
type 1 error
|
uType I error
|
|
type 2 error
|
uType II error
|