
Standard Error of the difference between two scores

· Helps to determine whether a difference between two scores is significant; computed from the SEMs of the individual tests
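
A minimal sketch of how this combination is typically computed (the formula SEdiff = sqrt(SEM1² + SEM2²) and the example numbers are illustrative, not taken from the cards):

```python
import math

def se_of_difference(sem_1: float, sem_2: float) -> float:
    """Standard error of the difference between two scores,
    combining the SEMs of the two tests (or subtests)."""
    return math.sqrt(sem_1 ** 2 + sem_2 ** 2)

# Hypothetical example: two index scores with SEMs of 3.5 and 4.0 points.
# A difference much larger than about 1.96 * se_diff is unlikely to be
# explained by measurement error alone.
se_diff = se_of_difference(3.5, 4.0)  # ~5.32
```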



Standard Error of Measurement vs. Reliability Coefficient:

SEM is best for individual scores (think: if we gave this person a test multiple times, what would be the standard deviation of all those scores? Useful for determining possible measurement error, in that a person’s performance might vary day to day so think about the overall distribution of those hypothetical scores)


· Reliability Coefficient: the ratio of true score variance to the total variance of the test → used to compare reliability of different tests


· So, which of these tests is more reliable, versus how reliable is this score given possible standard error?

Validity

- The extent to which a test measures what it intends to measure


- Can be reliable without being valid, but cannot be valid without being reliable
- Begins with test construction but is an ongoing process


- Multiple types of validity

Content Validity

The degree to which the questions, tasks, or items on a test are representative of the construct being tested


Especially useful when there is already a lot known about the construct, so an expert can determine whether a test is really measuring the pertinent aspects of a given construct

Criterion-Related Validity

· A type of validity in which a test is effective in predicting performance on an outcome measure


· The outcome measure must be reliable also, and be appropriate for the test (for example: a measure of depression symptoms might predict a diagnosis of depression—they’re appropriately related) and can’t be contaminated by the test itself

Concurrent Validity

- Based on correlations between new test and existing test → so, think if you were measuring spelling with a new test, you also give an existing spelling measure (that’s been validated, of course!) and see if you get comparable scores


- Test scores and criterion information are obtained simultaneously


- Appropriate for achievement tests, personality measures, etc.

Predictive Validity

· Test scores estimate outcome measures to be obtained later (think SAT scores to later academic achievement)


· Commonly used with entrance exams or employment tests


· A regression equation describes the best-fitting straight line for estimating the criterion from the test
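
A minimal sketch of such a regression equation, with invented score/criterion pairs purely for illustration:

```python
def fit_line(test_scores, criterion):
    """Least-squares slope and intercept for estimating the criterion
    (e.g., later achievement) from the test score."""
    n = len(test_scores)
    mean_x = sum(test_scores) / n
    mean_y = sum(criterion) / n
    slope = (sum((x - mean_x) * (y - mean_y) for x, y in zip(test_scores, criterion))
             / sum((x - mean_x) ** 2 for x in test_scores))
    intercept = mean_y - slope * mean_x
    return slope, intercept

# Hypothetical entrance-exam scores and later GPAs
scores = [400, 500, 550, 600, 700]
gpas = [2.1, 2.6, 2.9, 3.2, 3.6]
b, a = fit_line(scores, gpas)
predicted_gpa = a + b * 650  # estimate the criterion for a new examinee
```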

What is a construct?

· A construct is a quality or trait (inferred from behavior) in which people differ.


· Test estimates of underlying characteristics are based on a limited sample of behavior
· Examples include: reading ability, working memory, etc. (basically all the subtests we’ve learned—each is designed to tap a particular construct) but also personality traits (such as extroversion) or emotional states, etc.


· Constructs are not directly observable, must be measured (this is often because they represent a tendency to think or feel in a certain way)

Construct Validity

· Does this test measure the construct it’s supposed to be measuring?


· Refers to the degree to which a test or other measure assesses the underlying theoretical construct it is supposed to measure (i.e., the test is measuring what it is purported to measure)


· Based on multiple types of evidence from various sources


· Do the relationships with non-test criteria support the test as a measure of the particular construct? (not sure what this means exactly)

Approaches to Construct Validity: Group Differences

do different groups score differently on this test?

Approaches to Construct Validity: Factor Analysis

· Describes variability; identifies interrelationships among items and groups items that are part of unified concepts.

Approaches to Construct Validity: Classification Accuracy

Does a test score provide an accurate classification according to the construct at hand?

Approaches to Construct Validity: Test Homogeneity

- The extent to which a test measures a single construct


- Is it internally consistent?


- Think: is this test measuring just one thing, like spelling ability?

Approaches to Construct Validity: Developmental Differences

- Age differentiation: does this test differentiate between different age groups or developmental levels?


- Sequential pattern of development

Correlation Coefficient

- Measures the strength and direction of a linear relationship between two sets of scores from the same people


- Expresses degree of correspondence between two sets of scores

Correlation of +1.00

top scoring individual in variable 1 is also top scoring individual in variable 2; 2nd best in variable 1 is also 2nd best in variable 2...

Perfect negative correlation (-1.00) from complete reversal of scores:

best in variable 1 is worst in variable 2, etc.

Pearson Product-Moment Correlation Coefficient:

Most common way of computing correlation coefficients
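
A minimal sketch of the computation (the paired scores are invented for illustration):

```python
import math

def pearson_r(x, y):
    """Pearson product-moment correlation between two sets of scores
    from the same people, paired by position."""
    n = len(x)
    mean_x = sum(x) / n
    mean_y = sum(y) / n
    cov = sum((a - mean_x) * (b - mean_y) for a, b in zip(x, y))
    ss_x = math.sqrt(sum((a - mean_x) ** 2 for a in x))
    ss_y = math.sqrt(sum((b - mean_y) ** 2 for b in y))
    return cov / (ss_x * ss_y)

# Hypothetical paired scores for five examinees on two measures
print(pearson_r([10, 12, 15, 18, 20], [11, 13, 14, 19, 21]))  # close to +1.00
```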

Statistical Significance (what is the probability that the relationship exists?)

· Tells us what the probability is that we would be making an error if we assume that we have found that a relationship exists
o If this probability of making an error is small, then we say our observation of the relationship is statistically significant


o Can evaluate differences between two or more means, differences between a score and the mean of the scale, and differences of correlations from zero


o Can never be 100% certain that a relationship exists because there are too many sources of error

Significance levels refer to the risk of error we’re willing to accept in drawing conclusions about our data. What levels do most psychologists use?

Most psychological research uses either 0.01 or 0.05 levels.


o p < .05 level = it would happen by chance only 5% of the time (5 or fewer times in 100)


o Decides how confident you can be about results

Confidence interval for coefficient

o The confidence level is equivalent to 1 – the alpha level. So, if your significance level is 0.05, the corresponding confidence level is 95%

Types of Reliability

the degree to which an assessment tool produces stable and consistent results.




Test-retest, Alternate forms, Inter-scorer, Internal Consistency

Internal Consistency

Correlation of items with one another


o Split Half


o Spearman-Brown (adjusts for length): Used to predict the reliability of a test after changing the test length. Ex. A test is made up of 10 items and has a reliability of .67. Will reliability improve if the number of items is doubled, assuming new items are just like the existing ones? (see the worked sketch after this list)


o Coefficient Alpha


o Kuder-Richardson
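
A worked sketch of the Spearman-Brown prophecy formula, applied to the 10-item example above (the function name is just illustrative):

```python
def spearman_brown(r: float, k: float) -> float:
    """Predicted reliability when test length is multiplied by k,
    assuming the added items behave like the existing ones."""
    return (k * r) / (1 + (k - 1) * r)

# The card's example: a 10-item test with reliability .67, doubled to 20 items
print(spearman_brown(0.67, 2))  # ~0.80, so reliability is predicted to improve
```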

Test-Retest Reliability (reliability over time)

· Repetition of the identical test on a second occasion; the reliability coefficient is the correlation between the scores obtained by the same person on the two administrations of the test


· Time interval must be specified


o Especially in test manuals


· Useful only for tests not greatly affected by repetition


· Practice effects could be a problem


o Some skills associated with practice effect more than others (e.g. jigsaw puzzle)

Alternate Forms Reliability (reliability over time)

Same people are tested with one form on the first occasion and tested with another equivalent form on the second occasion


· Assesses both temporal stability and consistency of response to different item samples


· Requires a statement of length of interval


· Forms must be truly parallel
· Immediate and delayed


o Immediate – the two test forms are administered back to back
§ Shows reliability across forms, but not across occasions


· Viewed to be more widely applicable than test-retest reliability


· Problem: Alternate forms will reduce but not eliminate practice effects

Split Half Reliability (internal consistency)

Two scores of a measure are obtained for each person during one administration by dividing the test into equivalent halves; the reliability estimate is based on only half of the test. Avoid splitting the items in a way that groups items dealing with a single problem or construct. Odd/even is customary but not always possible. Calculate r between the two halves, then correct with Spearman-Brown, since the length of a test changes the coefficient.
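
A minimal sketch of the whole procedure, reusing pearson_r and spearman_brown from the sketches above; the small item matrix is invented for illustration:

```python
def split_half_reliability(item_scores):
    """Correlate odd-item and even-item half scores from one administration,
    then correct the half-test correlation to full length (Spearman-Brown, k = 2)."""
    odd_totals = [sum(person[0::2]) for person in item_scores]
    even_totals = [sum(person[1::2]) for person in item_scores]
    r_half = pearson_r(odd_totals, even_totals)
    return spearman_brown(r_half, 2)

# rows = examinees, columns = items (1 = correct, 0 = incorrect)
data = [[1, 1, 0, 1], [1, 0, 0, 1], [1, 1, 1, 1], [0, 0, 0, 1], [1, 1, 1, 0]]
print(split_half_reliability(data))
```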

Coefficient Alpha and Kuder-Richardson (internal consistency)

· Indexes of homogeneity of test; degree to which all items measure the same construct
o Both use a single administration of a single form


· Coefficient Alpha: aka Cronbach’s Alpha
o In addition to use with dichotomous tests, can be used with tests containing nondichotomous items
· Kuder-Richardson: for tests with 2-choice responses
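
A minimal sketch of coefficient alpha from a single administration (with dichotomous 0/1 items this is equivalent to KR-20); the response data are invented for illustration:

```python
def cronbach_alpha(item_scores):
    """Coefficient (Cronbach's) alpha: rows = examinees, columns = items."""
    k = len(item_scores[0])

    def variance(values):
        m = sum(values) / len(values)
        return sum((v - m) ** 2 for v in values) / len(values)

    item_vars = [variance([person[i] for person in item_scores]) for i in range(k)]
    total_var = variance([sum(person) for person in item_scores])
    return (k / (k - 1)) * (1 - sum(item_vars) / total_var)

# Hypothetical dichotomous responses from five examinees on four items
print(cronbach_alpha([[1, 1, 0, 1], [1, 0, 0, 1], [1, 1, 1, 1], [0, 0, 0, 1], [1, 1, 1, 0]]))
```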

Inter-Scorer Reliability

· Used for observational measures or in some tests of creativity and personality, which involve scorer judgments for data collection
· Especially important when subjectivity of scoring may be a concern and when scoring is susceptible to “drift” (ex. behavioral coding)
· Reliability can be found by independent scoring of same material by 2 examiners


· Percentage of agreement is the percentage of intervals where both raters agreed behavior occurred
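
A minimal sketch of one common way to compute agreement, counting every interval on which the two raters gave the same code (the interval data are invented):

```python
def percent_agreement(rater_a, rater_b):
    """Percentage of observation intervals on which two independent raters
    agreed about whether the target behavior occurred (1) or not (0)."""
    agreements = sum(1 for a, b in zip(rater_a, rater_b) if a == b)
    return 100 * agreements / len(rater_a)

# Hypothetical interval-by-interval coding from two observers
print(percent_agreement([1, 0, 1, 1, 0, 1], [1, 0, 0, 1, 0, 1]))  # ~83.3
```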

Speed Tests

· Test in which time limit is set and there are too many items for anyone to get a perfect score—dependent on speed of individual performance


· Must be very low difficulty so that it is assumed everyone could get most items right and the only variance will be in speed


o As opposed to: Power test, in which items increase in difficulty, with some items too hard for anyone to solve…but long enough time limit to permit everyone to try everything to the best of their ability


· Reliability should be based on test-retest or two split-half administrations with Spearman-Brown correction


o Can administer 2 equivalent halves of the test with separate time limits, and compare these


· It is inappropriate to estimate reliability using internal consistency – this inflates reliability

Restriction of Range

· Results in spuriously low test-retest reliability


· Restriction of range of your testing group leads to low variability b/c scores become more homogenous when data range is limited
o Translates into a smaller proportion of variance explained by your testing instrument, ultimately deflating the reliability coefficient

Factors Affecting Reliability: Test length

More items on a test (longer the test) = higher internal consistency reliability

Factors Affecting Reliability: Homogeneity of items

More homogenous test items = higher reliability

Factors Affecting Reliability: Test-retest interval

The shorter the time interval between two test administrations, the less likely that changes will occur = higher test-retest reliability

Factors Affecting Reliability: Variability of scores

o Greater the variance of scores on a test = higher reliability
§ Small changes in performance have a greater impact on reliability of test when the range, or spread, of scores is narrow than when it is wide
§ Homogenous samples (small variance) will likely yield lower reliability estimates than heterogeneous samples (large variance)
o Since the reliability coefficient is a correlation coefficient, it is maximized when the range of scores is unrestricted
o The range is also affected by the difficulty level of the test items
§ When all items are either very difficult or very easy, all examinees will obtain either low or high scores, resulting in a restricted range
§ Best strategy is to choose items so that the average difficulty level is in the mid-range

Factors Affecting Reliability: Guessing

o Less guessing = higher reliability
§ Even guessing that results in correct answers introduces error into the score
o All other things being equal, a true/false test will have a lower reliability coefficient than a four-alternative multiple-choice test which, in turn, will have a lower reliability coefficient than a free recall test


Factors Affecting Reliability: Variation in test situation

(e.g., students misunderstanding or misreading test directions, noise level, daydreaming, distractions, sickness, examiner factors – scoring errors, misreading instructions)
o Fewer variations in test situation = higher reliability

Factors Affecting Reliability: Sample size

Larger samples = more dependable estimate of reliability

Standard Error of Measurement

SEM estimates how repeated measures of a person on the same instrument tend to be distributed around his or her “true” score
o The true score is always unknown because no measure can be constructed that provides a perfect reflection of the true score

Standard Error of Measurement: Index of measurement error

o SEM is directly related to the reliability of a test – the larger the SEM, the lower the reliability of the test and the less precision there is in the measures taken and scores obtained
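
A minimal sketch of the usual formula linking SEM to the test's standard deviation and reliability, SEM = SD * sqrt(1 - r) (the example values are illustrative):

```python
import math

def standard_error_of_measurement(sd: float, reliability: float) -> float:
    """SEM from the test's standard deviation and reliability coefficient."""
    return sd * math.sqrt(1 - reliability)

# Example: an IQ-style scale with SD = 15 and reliability .91
print(standard_error_of_measurement(15, 0.91))  # 4.5 points
```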


o Since all measurement contains some error, it is highly unlikely that any test will yield the same scores for a given person each time they are retested

Standard Error of Measurement: Confidence Interval

· Confidence interval places score within a range based on SEM


o Statements about an examinee’s obtained score (the actual score that is received on a test) are expressed in terms of a confidence interval

Confidence Intervals for Obtained Scores

Range of scores around the obtained score that indicates how certain we want to be that the “true score” falls in that range


Increased levels of confidence expand the range of scores included in the probability statements

Why do we report confidence intervals for scores?

Important so that the reader can be informed of the probability that the examinee’s true score lies within a given range of scores

What are the most typical confidence intervals?

68%, 90%, or 95% (the range within which a person’s “true” score can be found 68%, 90%, or 95% of the time)
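
A minimal sketch of building these intervals as obtained score ± z * SEM, with z of roughly 1.00, 1.64, and 1.96 for the 68%, 90%, and 95% levels (the example values are illustrative):

```python
def confidence_interval(obtained_score: float, sem: float, z: float = 1.96):
    """Band around the obtained score expected to capture the true score."""
    return obtained_score - z * sem, obtained_score + z * sem

# Example: obtained score of 110 with SEM = 4.5
print(confidence_interval(110, 4.5))        # 95% band, about (101.2, 118.8)
print(confidence_interval(110, 4.5, 1.0))   # 68% band, (105.5, 114.5)
```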

Is it possible to construct a confidence interval within which an examinee's true score is absolutely certain to lie?

No, b/c of measurement error

Scales of Measurement: Nominal

categories with no sequential order that allow for classification, e.g. demographic information.

Scales of Measurement: Ordinal

variables ordered along some dimension with no regard for the distance between scores, e.g. a Likert scale.

Scales of Measurement: Interval

has an arbitrary zero point and equal intervals between points, e.g. temperature

Scales of Measurement: Ratio

has a true zero point, equal intervals between adjacent units, and allows for ordering and classification. It is rarely used in psychology, because of the need for an absolute zero point.

Raw Scores

an unaltered measurement. It cannot be used or interpreted in isolation, but takes on meaning in relation to norms. Norms summarize a large number of scores.

Frequency Distribution

A table that displays the frequency of various outcomes in a sample. Each entry in the table contains the frequency or count of the occurrences of values within a particular group or interval, and in this way the table summarizes the distribution of values in the sample.

Measurements of Central Tendency

· Mean—the sum of scores divided by the total number of scores


· Median—the middle score in a distribution of scores arranged numerically


· Mode—the score that occurs most frequently in a distribution

Variance

measures the spread of and distance between scores within a set. For instance, a variance of zero indicates that all the scores are the same. Large variances indicate that scores are widely distributed within the set. Measures of variance vary widely depending on the type and range of possible scores measured.

Standard Deviation

square root of the variance. SD also measures spread of and distance between scores within a set, but has been standardized so that SD values can be compared between sets of scores. Can also be used to describe how far a given score is from the mean (e.g. an IQ score of 115 is 1 standard deviation above the mean.) 68.2% of scores in a normally distributed set fall within one standard deviation above or below the mean, 95.4% of scores fall within 2 standard deviations above or below the mean, and 99.7% within three standard deviations above or below.
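
A minimal sketch of computing both statistics for a set of scores (the scores are invented for illustration):

```python
import math

def variance_and_sd(scores):
    """Population variance and standard deviation of a set of scores."""
    mean = sum(scores) / len(scores)
    var = sum((s - mean) ** 2 for s in scores) / len(scores)
    return var, math.sqrt(var)

# Hypothetical scores; a wider spread of scores gives a larger variance and SD
print(variance_and_sd([85, 100, 100, 115, 130]))  # variance 234.0, SD ~15.3
```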

The Normal Distribution

symmetric, with a bell-shaped density curve and a single peak. It has a predetermined mean z score of 0 and SD of 1. Some traits, like intelligence, are thought to be normally distributed throughout the population. As such, IQ tests (and others!) construct their standardized scores to follow the normal distribution

Skewness in Psychological Testing

· Positive skew—The majority of scores fall at the low end of the distribution; indicates that a test has too few easy items


· Negative skew—The majority of scores fall at the high end of the distribution; indicates that a test has too few difficult items

Percentiles

Percentage of people in standardized sample who scored at or below a specific raw score. Advantages include that they are easy to compute and communicate.

Disadvantages are that they distort the measurement scale, especially at the extremes, because so few people score within those ranges. As such, raw scores at the 1st and 2nd percentile may be very different, as may scores at the 98th and 99th percentile. For this reason, it is important not to rely on percentiles to report scores even though it may be tempting.
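
A minimal sketch of computing a percentile rank against a norm sample, using the "at or below" definition from the card (the norm scores are invented for illustration):

```python
def percentile_rank(raw_score, norm_sample):
    """Percentage of the standardization sample scoring at or below raw_score."""
    at_or_below = sum(1 for s in norm_sample if s <= raw_score)
    return 100 * at_or_below / len(norm_sample)

# Hypothetical norm sample of ten raw scores
norms = [4, 7, 9, 10, 12, 12, 14, 15, 17, 20]
print(percentile_rank(12, norms))  # 60.0
```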

Test bias

objective statistical indices that examine the patterning of test scores for relevant sub populations


o considered biased when differentially valid for different subgroups


i.e., test score has meanings or implications for a relevant, definable subgroup of test takers that are different from the meanings or implications for the remainder of the test takers

Bias in IQ Tests: Criterion related validity

test does not predict criterion equally well for people from different groups. Results don’t cluster around a single regression line.


· bias in predictive/criterion-related validity: inference drawn from the test score is not made with the smallest feasible random error, or there is constant error in an inference or prediction as a function of membership in a particular group


o a different regression model is needed to predict performance based on SAT scores in African American students versus Caucasian students

Bias in IQ Tests: Construct validity

test measures different constructs for one group than another


· bias in construct validity: test is shown to measure different hypothetical traits for one group than for another; differing interpretations of a common performance are shown to be appropriate as a function of ethnicity, gender, or another variable of interest, one typically but not necessarily nominal


o factor structure shouldn’t change across relevant subpopulations


o rank order of item difficulties


o construct validity for most aptitude/achievement tests held up for the most part

Characteristics of unbiased IQ tests

· The same factor structure should apply to relevant subpopulations


· Rank order of item difficulties should be virtually the same, for all age groups


· Typically a test isn’t biased and that’s not the usual cause for unfairness


o Ex. bad schooling so kid is at a disadvantage – test may not be fair but it’s not biased if it still accurately predicts things


· an unbiased test may still be deemed unfair because of the social consequences of using it for selection decisions

Learning Disorder Federal definition (main points)

· 1) Disorder in one or more of the psychological processes involved in language, reading, writing and/or math


· 2) Excludes learning problems resulting from sensory or motor handicaps, MR, ED, or environmental causes


o Federal definition says intellectual ability must be at least average


o IDEA 2004 – last revision of federal law (in 2008) said schools can no longer require a discrepancy – must use response to intervention (RTI). Clinical significance / research / public policy and schools

DSM 5 Specific Learning Disorder

· Single diagnosis incorporating deficits that impact academic achievement


· Can’t require IQ/Achievement discrepancy as outlined in IDEA 2004


· 4 criteria:


o 1. Academic skills must be well below the average range in reading, writing, or math. Skills like: word decoding, reading fluency, reading comprehension, spelling, writing, number sense/facts, calculation, mathematical reasoning


o 2. Impaired functioning at school, work, activity given certain age/grade


o 3. Not to be diagnosed until school years.


o 4. Not better explained by other factors: absence of intellectual disability, visual/hearing impairments, mental disorders, neurological disorders, psychosocial difficulty, language differences, lack of access to adequate instruction


· Note: Diagnosis is different from eligibility at school! Different criteria


o Psychologists work completely separate but with an awareness of RTI process at school and where the child is in the process

What is “response to intervention”

· Should this be primary method of identifying students with learning disabilities?


· Interventions employed and kid moved to next level or back down to general classroom (pyramid diagram)


· Screening program can take most of a school year b/c of the weeks spent at each level


o Didn't use to take so long after finding a kid with a problem


o A kid who is terribly behind in school should not have to wait a year! There's no federal mandate that they can't, though


o If eligible, they then get an IEP
· Schools required to teach in LRE – least restrictive environment


· Inclusion – special ed teacher co-teaching


· FAPE = free and appropriate public education in LRE

Components of Battery for Learning Disorder

• Relevant history
• Intellectual assessment
• Academic assessment
• Other relevant assessment of cognitive processing, speed, memory, etc.
• Language, when needed
• Graphomotor skills
• Emotional and behavioral assessment
• Attentional assessment
• (note: used to think LD was bc of perceptual and motor difficulties – would make them do lots of motor tasks)

Assessment of Learning Disorder

• 1) A learning disorder can only be diagnosed when a battery of psychoeducational tests has been performed and interpreted by a qualified psychologist (and other causes have been ruled out)


• 2) This is only possible when the professional uses appropriate interpretation of the instruments along with a comprehensive history and medical evaluation.


• 3) Even if the child is accurately diagnosed with LD, they still might not qualify for services within the school system because of eligibility criteria :-/


• 4) Eligibility criteria vary not only from school system to school system, but also from state to state. Therefore if a child moves to a new state, they may lose their services.


• (Important to use age-based norms when testing for LD)

Reading disabilities– Areas of concern

• Phonological awareness
• Letter identification
• Sound symbol association
• Blending
• Structural analysis
• 80% of LDs

Reading disabilities: Description, Assessment, Remediation

• Description – problem with basic reading and comprehension.


• Assessment – look at all components separately and together (e.g. nonsense-word phonological processing task)


o Ex. CTOPP Comprehensive Test of Phonological Processing: Phonological awareness (elision, blending words, phoneme isolation), Phonological memory, Alternate Phonological Awareness (blending nonwords, segmenting nonwords), Rapid symbolic naming (rapid digit naming, rapid letter naming)


• Remediation – systematic and intense emphasis on phonics (e.g. teaching decoding skills) is a superior method to heavy emphasis on word acquisition

Mathematics disabilities – Areas of concern

• Counting


• Enumeration


• Calculation


o Basic operation


o Strategies


• Reasoning


• Related concepts


o Time, money, measurement

Mathematics disabilities: Description, Assessment, Remediation

• Description – problems with recognizing numbers and symbols, memorization, mathematical reasoning, etc. These skills are implicated in understanding time, money and measurement. Deficiency can be specific to math skills OR may be an underlying problem with a skill that affects math, like memory weakness for example.


• Assessment – Eval separate components (processing speed, graphomotor skills, memory, visual-spatial skills)


• Remediation – drill and practice are effective approaches based on learning theory. Core skills are taught through a careful sequence – reinforcing each skill until the child has mastered particular criteria. Training is also given in metacognitive skills (e.g. a checklist tailored to particular errors to self-monitor whether all steps were performed).



Writing disabilities – Areas of concern

• Mechanics


o Eye-hand coordination


o Directionality


o Fine motor coordination


o Visual discrimination of letters and words


• Production


• Conventions


o Punctuation


o Spelling


o Capitalization


• Linguistics


o Syntactic and semantic structures


• Cognition and organization


• Most commonly missed LD

Writing Disabilities: Description, Assessment, Remediation

• Description: Can involve any aspect of written communication (graphomotor skills, grammar, spelling, or limitations in putting one’s thoughts into words). Can solely be related to writing, or secondary to reading or language problems.
o Writing samples from kids with writing difficulties: are brief and have little detail; indicate little/no planning; have low richness of content; indicate that the child has thought about writing in its simplest terms (e.g. focusing more on artificial aspects like spelling and neatness rather than idea expression)


• Assessment: Most kids don’t get routine writing tests the way they get regular math and reading assessments (e.g. via the Iowa or Stanford Achievement tests). Therefore lots of kids with writing disorders are missed (and can go undetected for some years). Unfortunately, there aren’t many adequate standardized instruments to assess writing, so qualitative assessments of writing skills are an essential component of evaluation. Because of the limitations of these writing tests (e.g. not penalizing for errors in spelling), we can’t assume that an average score precludes a writing deficiency.


• Remediation: “Process approach” from Graham & Harris:
o based on the idea of meaningful writing in context versus explicit sentence-building skills (which have been shown to be ineffective)
o plenty of time is regularly devoted to developing writing skills
o kids pick topics of interest to them
o share their work with others & get feedback

Language Disabilities – Areas of concern

• Receptive


o Vocabulary


o Syntax


o Pragmatics


• Expressive


o Vocabulary


o Syntax


o Pragmatics

Test homogeneity

• The extent to which a single construct is measured: you want questions on a measure/test to correlate highly with each other and with your construct of interest. Want to make sure you are tapping into appropriate construct


• Internal consistency of test

Cattell-Horn-Carroll (CHC) Theory

• This theory represents the integration of 2 other theories


• Consists of 10 broad categories and more than 70 narrow abilities


• Potential theory that may be able to unify the 2 separate lines of research between intelligence theories and intelligence tests


• Most tests are now interpreted based on the CHC theory


• 3 levels: g, broad, narrow


• This theory is popular because it has the strongest empirical basis of intelligence theories


• Considerable evidence that both broad and narrow CHC cognitive abilities explain a significant portion of the variance in specific academic abilities, over and above the variance accounted for by g


• Level II factors change and are variously reported – classifications of cognitive and academic abilities


• Level I (the list of narrow abilities) is constantly being expanded


• When you think about narrow abilities, think about subtests


• When you think about broad abilities, you should gravitate toward crystallized vs. fluid intelligence

Test Development has been influenced by CHC and cross-battery methods

o If your test is based on CHC theory there is more opportunity to generate research in the future


o Now intelligence batteries encompass a wider range of broad and narrow abilities than previous editions


o Majority of tests published after 1998 measure 4-5 broad cognitive abilities compared to the 2-3 that former tests measured

Test interpretation also influenced by CHC Theory

o Less dependence on single test batteries


o More varied testing, to encompass multiple broad cognitive abilities as opposed to just relying on a single intelligence battery


o Psychometrically defensible evaluation of data/test integration


Cross battery approach introduces standard nomenclature for test interpretation across batteries – more comprehensive understanding and score reporting

CHC Broad Abilities: Crystallized Intelligence/Knowledge

o Refers to acquired skills and knowledge (i.e. vocabulary, general information)—influenced by culture and education
o Influenced by formal and informal education throughout the lifetime

CHC Broad Abilities: Fluid Intelligence/reasoning

o Nonverbal, culture-free mental capabilities (i.e. processing speed, working memory) that involve adaptive and new learning capabilities. More dependent on physiological structures and more sensitive to effects of brain injury than crystallized intelligence

CHC Broad Abilities

Crystallized Intelligence/Knowledge
Fluid Intelligence/reasoning
• Domain-specific knowledge
• Visual-spatial abilities
• Auditory Processing
• Broad retrieval (memory): LTM and STM
• Cognitive Processing Speed
• Decision/Rxn Time
Bias in IQ Tests

• Bias is a validity/scientific question--- differential validity for 1 population vs. another


• There is no real empirical evidence that IQ tests are biased


o But we should keep in mind cultural, social, and linguistic differences that may make certain items or subscales more or less difficult for a certain population compared with another


o Bias should not be confused with fairness – fairness accounts more for those social/ethical differences

Bias in IQ Tests: Content validity
Item or subscale is more difficult for members of one group than another, controlling for general ability

o Remember if an IQ test is legit biased then we may see a different factor structure for certain sub-populations of people after factor analysis. We don’t see stuff like this in the current IQ tests

Types of questions a psychologist might ask when assessing a preschooler?

-Kindergarten Readiness: Covers health and academic preparedness, should not exclude children, but should lead to appropriate programming in kindergarten


-Delays: Broad-scale screening for possible developmental delay. Depending on results, outcome may be either individual evaluation or ongoing observation and pre-referral intervention
-Advanced development: how will the child’s strengths be assessed, and what variables will be considered?


-Atypicality: are there any qualities in the child that are not representative of his/her stage/age/group. Could be socially, intellectually, mentally, etc.

Global delay

term used for children under 5 years when it is not possible to reliably and validly assess the severity of their intellectual disability. This diagnosis could pertain to a child who is not meeting expected developmental milestones in several areas of intellectual functioning and whose intellectual functioning cannot be assessed, as in the case of children who are too young to participate in testing.

Fine and/or gross motor

Fine motor skill —the abilities required to control the smaller muscles of the body for writing, playing an instrument, artistic expression and craft work. The muscles required to perform fine motor skills are generally found in the hands, feet and head. Gross motor skills —the abilities required to control the large muscles of the body for walking, running, sitting, crawling, and other activities. The muscles required to perform gross motor skills are generally found in the arms, legs, back, abdomen and torso.

Language and/or speech

delays or differences in patterns of language acquisition are sensitive indicators of developmental problems. Assessment of language abilities in preschoolers should involve an evaluation of more than one dimension of language (including measures of phonological short term memory).

Why is it challenging to assess language in preschool years?

It is challenging to assess language in the preschool years because of individual differences and variability (e.g. late bloomers). Preschool children who develop specific language impairment (SLI) are usually characterized by having language difficulties from the outset of the language learning process (slow in reaching language milestones from the beginning, unlike ASD’s language loss).

SLI difficulties in children

SLI children can also have difficulties in understanding what is said to them, e.g. following instructions. Children with SLI have more difficulty with talking/articulation (expressive language) than with understanding what is said to them (receptive language). The DSM-IV definition requires substantially worse performance of verbal abilities compared to non-verbal cognitive functioning.

Interpersonal and/or behavioral delays in preschool

developmentally appropriate emotion knowledge, affective perspective taking, and understanding of mental states to specific aspects of prosocial behavior (e.g., sharing, cooperation, and prosocial responses to others’ emotions) as well as global dimensions of social competence and peer acceptance

assessing cognitive delays in preschool yrs

delays in cognition are targeted by IQ testing (processing speed, working memory, reasoning, etc.); however, it is not a reliable measure of cognitive delay in preschoolers and should only be given when needed. Preschoolers may be unfamiliar with the procedures required by the testing situation. They may lack well-developed verbal skills, specifically when responding to unfamiliar adults, especially children with cognitive or language difficulties. Lastly, a preschooler’s expression of his/her ability may vary day to day.

In Depth Preschool Assessment

An in-depth preschool assessment covers a broad range of procedures used to gather information relevant to understanding the functioning of young children using an ongoing, multifaceted, collaborative approach including standardized testing, behavioral observations (in multiple settings), play, parent/teacher reports, and environmental considerations. An in-depth assessment involves collaboration with the child’s school, community, family, daycare, and other adults to assess the child’s: Attention, Memory, Visual-motor, Gross motor, Fine motor, Language, Phonological Processing, Self-help/adaptive functioning, Cognitive, and Social/interpersonal skills.

Assessment for Prediction and Prescription: Defining areas of need

Overall, considerations to remember when planning preschool assessments are: (1) what is your assessment question? (i.e. areas of concern), How will the results be used? (2) From what sources will information be obtained? (3) How comprehensive will the assessment be? (4) How will the children’s strengths as well as difficulties be assessed, and what variables will be considered? (5) Technical adequacy of assessor and ethical considerations (6) How will families be involved in the process?

Assessment for Prediction and Prescription: Prognosis

developmental delays, or areas of difficulty, should be screened/identified (by law before the age of 3). Some specialists raise important questions about the potential negative effects of labeling and the overall poor predictability of early childhood measures to later school achievement. Such problems include: (1) Mislabeling of some children as “disabled” due to assessor’s lack of knowledge regarding racial, cultural, and linguistic diversity (2) The irrelevance of labels to many children’s instructional needs (3) Reduced expectations for children placed in special education (4) Limited modifications of instructional programs to meet the diverse needs of children. There are two major reasons why a label is assigned: (1) To determine eligibility for preschool special education services provided for by IDEA 2004. (2) Identify children’s preparedness for kindergarten or first grade in order to place children into transitional classes or to hold them back or place them in classes for the gifted.

Assessment for Prediction and Prescription: Determining treatment recommendations

Determining treatment recommendations: Assessment and intervention need to be viewed as reciprocal activities and as an ongoing, collaborative process. Here is a list of treatment recommendations that should be considered in the development of assessment procedures:


- Intervene early, before persistent educational and/or emotional problems develop (pre-referral early intervention). In this case, observation and consultation with parents/teachers are used to develop a short-term pre-referral plan, to recommend modifications in instruction or responses to behavior, or to alter aspects of the physical environment. The outcomes are then evaluated and modified.


- Offer enrichment programs. For example, parent programs, workshops, etc. Enriched instructional opportunities can be provided for children whose environments may place them at risk.


- Focus on teacher’s beliefs and instructional interactions. Curriculum revision to foster excellent instruction, one to one in class tutoring support, parent support, regular reassessment. Example of this would be the Head Start program.


- Promote emotional and social competence. Curricula should be implemented as necessary.


- Develop strong parent-professional partnerships to support child development. The quality of parent-professional partnerships influences the ability of parents and professionals to work together


- Ensure the psychological and physical safety of children at home and in schools or daycares. Abuse, or neglect should not be tolerated. Staff training in conflict resolution, appropriate discipline techniques, behavior management, and stress/anger management will provide teachers and caregivers with the support and resources to address problematic interactions as they arise.

Assessment for Prediction and Prescription:Obtaining eligibility for services

preschool assessment serves multiple functions such as determining eligibility for special education, including the possible causes of behavior and specific recommendations for intervention. Screening/assessments (direct observation and reported information) are important to determine if a child is at risk and are critical to the development of IEPs and other services. Obtaining eligibility for services for children with Significant Developmental Delay (SDD):


1. Initial eligibility must be established, and IEP in place before child’s 7th birthday.


a. SDD eligibility is determined by assessing a child in each of the five skill areas of adaptive development, cognition, communication, physical development, and social/emotional development.


b. 2 std dev below the mean in one or more of the areas or 1.5 std dev below the mean in two or more areas




2. For kids in kindergarten or older, initial eligibility should also include documented evidence that the impact of educational performance is not due to: Lack of appropriate instruction, limited English proficiency, visual, hearing or motor disability, emotional disturbances; cultural factors; or environmental or economic disadvantage.



3. All five areas should be assessed using at least one formal measure (assessment). In areas where significant delay is suspected, at least one additional formal assessment must be utilized to determine extent. All assessments must be age appropriate, and all scores given in std dev.




4. For children eligible under SDD with hearing; visual; communication; or orthopedic impairments, a complete evaluation must be obtained to determine if the child is eligible for those services.

DSM-5 Criteria Symptom criteria: ATTENTIONAL, EMOTIONAL AND BEHAVIORAL PROBLEMS

Symptoms ≠ Criteria
- Diagnosis: 6 of 9 symptoms of inattention, and/or 6 of 9 symptoms of hyperactivity/impulsivity


- Inattention: can’t pay close attention to details, careless mistakes, can’t maintain attention, can’t organize tasks, doesn’t follow instructions/ listen, forgetful


- Hyperactivity: fidget, leaves seat in class, runs about/ climbs excessively, “on the go”


- Impulsivity: Difficulty waiting for his/her turn; intrudes on others


- #12 the problems mentioned are antisocial behaviors, substance abuse, peer rejection, depression, processing social info

DSM Age of Onset: Attentional, emotional, behavioral problems
ADHD begins in childhood. The requirement that several symptoms be present before age 12 years conveys the importance of a substantial clinical presentation during childhood. At the same time, an earlier age at onset is not specified because of difficulties in establishing precise childhood onset retrospectively. Adult recall of childhood symptoms tends to be unreliable, and it is beneficial to obtain ancillary information.

DSM-5 Criteria of attentional, emotional, and behavioral problems: Not explainable by another diagnosis

To give the ADHD diagnosis, other related and common comorbid disorders cannot explain the symptoms. In clinical settings, comorbid disorders are frequent in individuals whose symptoms meet criteria for ADHD

DSM-5 Criteria of attentional, emotional, and behavioral problems:More than one setting

Symptoms must be present in more than one setting (i.e. school, home, and other programs). This is important when considering environmental (physical) factors that may explain or rule out ADHD symptoms

DSM-5 Criteria of attentional, emotional, and behavioral problems: Impairment

- Cognitive deficits due to poor self-regulation
o Difficulty in planning; lower IQ scores (due to memory), learning disabilities (reading, math, spelling, etc.); memory difficulties (can’t memorize); impaired behavioral & verbal flexibility & creativity
- Social & adaptive functioning deficits
o Poor self-help skills, difficulty taking personal responsibilities, impulsive, externalizing blame, limited insight into own problems
- Motivational & emotional deficits
o Limited persistence, emotional reactivity, difficulty getting work done
- Motor, physical, and health deficits


o Poor motor coordination, prone to accidental injuries, minor physical anomalies, general health problems/ growth delay

In depth Parent Interview: Identification of problems and context

general descriptions of concerns by parents must be followed by specific questions from the examiner to elucidate the details of the problems and any apparent precipitants that can be identified. Such an interview probes for not only the specific nature, frequency, age of onset, and chronicity of the problematic behaviors, but also the situational and temporal variations in the behaviors and their consequences. If the problems are chronic, what prompted the referral at this time reveals much about parental perceptions of the children’s problems, current family circumstances related to the problems’ severity, and parental motivation for treatment.

In depth Parent Interview: Medical History

general history of any current/previous medical conditions, events (traumatic or causing medical care), concerns, nutrition. Sleep disturbances, previous medications, supplements given by parents, and prescribed by doctor (think about Cassie’s case). Any antenatal concerns, birth complications?

In depth Parent Interview: Developmental History

review with parents potential problems with domains of: motor, language, intellectual, thinking, academic, emotional, and social functioning. This would aid in the differential diagnosis (see other attachment for more details). Questions about inappropriate thinking, affect, social relations, and motor peculiarities may reveal a more seriously and pervasively disturbed child.

In depth Parent Interview: Family History of psychopathology and learning problems and School History

information on the school and family histories should be obtained. Including a discussion of possible psychiatric difficulties with parents, siblings, marital difficulties, and any family problems centered on chronic medical conditions, employment problems, and other potential stress events within family.

Parent and Teacher Behavior Rating Scales: ADHD symptom coverage

need to cover the symptoms of the major child psychiatric disorders likely to be seen in children with ADHD (as set forth in the DSM-5). Clinical judgment will always be needed in the application of DSM guidelines to individual cases. Clinicians must review in some systematic way with the parent of each referred child the symptom lists and other diagnostic criteria for various childhood mental disorders.

Parent and Teacher Behavior Rating Scales: Coverage of other psychopathology

Important to rule out other child pathologies aswell as parent-child interaction and parent management style.

Parent and Teacher Behavior Rating Scales: Adaptive functioning

often refers to the child’s development of skills and abilities that help him or her to become a more independent, responsible, and self-caring individual. It includes: Self-help skills (e.g., dressing, bathing, using time), Interpersonal skills (e.g. sharing, cooperation, and trust), Motor skills (e.g., fine/gross), Communication skills, Social responsibility (e.g. performing chores). A discrepancy between IQ scores and adaptive functioning has been argued to be a hallmark of ADHD. Instruments available: Vineland Adaptive Behavior Inventory, CBCL, and BASC-2.

ASSESSMENT OF PERSONS WITH PHYSICAL, SENSORY, OR MENTAL DISABILITIES: Adaptive Functioning

Must be evaluated when determining possible presence of mental retardation
Important to evaluate when CP or sensory impairment is present
Important to evaluate when considering diagnosis of ASD
Why? --- Capabilities vs. adaptability. Think: If the child is not capable (as in the case of some physical impairments), adaptability cannot be inferred.

Understanding the Role of Other Professionals: Physical Therapist
Pediatric physical therapists treat and examine children from birth to age 18 who have problems moving and performing other physical activities. Pediatric physical therapists help treat problems like injuries, pre-existing conditions and problems caused by illnesses or diseases.

Understanding the Role of Other Professionals:Occupational Therapist

Occupational therapy helps children to develop the underlying skills necessary for learning and performing specific tasks, but it also addresses social and behavioral skills. It can help with the child’s self-concept and confidence. Pediatric occupational therapy helps children develop the basic sensory awareness and motor skills needed for motor development, learning and healthy behavior

Understanding the Role of Other Professionals: Visually impaired, hearing impaired specialist

An audiologist identifies, diagnoses and treats children with hearing disorders. They also identify how hearing disorders affect communication. Services offered include: periodic hearing evaluations to monitor hearing sensitivity; hearing aid consultations, fittings, dispensing and repairs; assistive listening device (ALD) consultations, fittings, dispensing and repairs (ALDs help patients with their day-to-day communications); counseling regarding communication and educational options; counseling and aural habilitation/rehabilitation. The pediatric ophthalmologist has additional training, experience, and expertise in examining children, and has the greatest knowledge of possible conditions that affect the pediatric patient and his/her eyes. Neurologic development of vision occurs up until approximately age 12 years.

Understanding the Role of Other Professionals: Speech pathologist

Speech-language pathologists assess, diagnose, treat and help prevent speech, language and communication disorders. Speech-language pathologists work with people who cannot make speech sounds or cannot make them clearly. They also work with those who have: Speech rhythm and fluency problems, such as stuttering; Voice-quality problems, such as inappropriate pitch or harsh voice; Problems understanding and producing language; Cognitive communication problems, such as attention, memory and problem-solving disorders; Oral motor problems that cause eating and swallowing difficulties
