101 Cards in this Set

  • Front
  • Back
List initial stage assessment skills.
1. Assess client problem
2. Conceptualize problem
3. Implement treatment
4. Evaluate counseling
Why use formal assessment strategies?
1. Meets expectations of professionalism
2. Identifies problems
3. Accesses diverse information
Why use formal assessment strategies? (cont.)
4. Assists client decision making
5. Verifies client strengths and limitations
Why use formal assessment strategies? (cont.)
6. Can influence counselor credibility
7. Provides assessment and accountability
Two major themes of counselor competency
1. Knowledge of the test and its limitations
2. Responsibility for competent use of the test
Competent use of assessment tools involves two major responsibilities. Name them.
1. Determining what inferences can be made.
2. Interpreting test results appropriately. p. 12
How does the counselor make appropriate use of assessment tools and interventions?
1. Evaluate and select tools skillfully
2. Use instruments and interventions wisely.
Name six of the basic types of assessment tools. p. 13
1. Standardized vs. nonstandardized
2. Individual vs. group
3. Objective vs. subjective
4. Speed vs. power
5. Verbal vs. nonverbal
6. Cognitive vs. affective
Standardized vs. nonstandardized?
Fixed instructions for administering and scoring; content remains constant and is developed according to professional standards; an appropriate representative sample is used. An instrument that meets these criteria is standardized.
Individual vs. group drawbacks?
Concerns administration: group administration is convenient and takes less time, but nonverbal behaviors are lost.
Objective vs. subjective?
Concerns scoring: predetermined methods of scoring control bias and inconsistencies, vs. professional judgments that explore client issues.
Speed vs. power?
Concerns difficulty level: power tests vary the difficulty of the items, while speed tests examine the number of items completed within a time limit.
Verbal vs. nonverbal?
Concerns the influence of language and culture on assessment: nonverbal instruments are nonlanguage, performance tests. Nonverbal tests are difficult to design.
Cognitive vs. affective?
Cognitive assesses cognition: perceiving, processing, and remembering. Affective assesses noncognitive areas such as interests, attitudes, and values.
Define the measurement scales.
Nominal: classifies by name based on characteristics of person or object.
Ordinal: ranks individuals or objects against others.
Interval: units are in equal intervals
Ratio: properties of interval with meaningful zero
Norm-referenced?
How an individual's score compares with scores of others who have taken the same instrument.
Criterion-referenced?
How an individual's score compares with an established standard or criterion. Attention is paid to the domain being measured.
Problems with criterion-referenced tests?
No universal agreement within the field about which theories are most important. Mastery level is difficult to determine.
Frequency polygon?
Graph that charts scores on the x-axis (horizontal) and the frequencies of scores on the y-axis (vertical).
Problem with criterion-referenced instruments?
Determining the mastery level is a problem. Levels set high to minimize false positives.
Why is the fictional Counseling Aptitude Scale a norm-referenced instrument?
Because we are comparing scores of individuals and use statistics to help interpret the scores.
Why use a frequency polygon?
A graphic representation makes data easier to understand.
What happens when the range of numbers is too large to plot each individual score on a frequency polygon?
We group scores into class intervals (e.g., 1-50, 51-100, etc.).
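A minimal Python sketch (made-up scores, interval width of 50 chosen for illustration) of grouping raw scores into class intervals for plotting:

```python
# A minimal sketch (made-up scores, interval width 50): map each raw
# score to its class interval and count frequencies per interval.
from collections import Counter

scores = [12, 47, 52, 61, 88, 95, 120, 133, 47]
width = 50

counts = Counter((s - 1) // width for s in scores)
for k in sorted(counts):
    low, high = k * width + 1, (k + 1) * width
    print(f"{low}-{high}: {counts[k]}")  # e.g., 1-50: 3
```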
Name the three measures of central tendency.
Mode: most frequent score
Median: middle score
Mean: arithmetic average of scores
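A minimal Python sketch (made-up scores, standard library only) of the three measures:

```python
# A minimal sketch (made-up scores) of the three measures.
from statistics import mean, median, mode

scores = [70, 75, 75, 80, 85, 90, 95]
print(mode(scores))    # most frequent score -> 75
print(median(scores))  # middle score -> 80
print(mean(scores))    # arithmetic average -> 81.43 (rounded)
```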
Why measures of central tendency?
They give some indication of how individuals perform on an instrument.
With the CAS in mind, how do measures of central tendency assist you in understanding your score?
They help you understand where your score falls with regard to other individuals. Variations in scores affect how your score is interpreted.
Define range.
Range provides a measure of the spread of scores, indicating the variability between the highest and lowest scores (subtract the lowest from the highest to compute it).
Variance and Standard Deviation?
Precise measures to indicate how scores vary from the mean.
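A minimal Python sketch (same made-up scores as above) of range, variance, and SD; pvariance matches the "average squared deviation from the mean" definition used later in the deck:

```python
# A minimal sketch (made-up scores): range, variance, and SD.
# pvariance is the average squared deviation from the mean; pstdev is
# its square root, treating the scores as the whole population.
from statistics import pvariance, pstdev

scores = [70, 75, 75, 80, 85, 90, 95]
print(max(scores) - min(scores))  # range -> 25
print(pvariance(scores))          # variance
print(pstdev(scores))             # standard deviation
```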
In the area of assessment, SD is primarily used in two ways. Name them.
1. SD provides some indication of the variability of scores.
2. SD can provide an indication of whether a score is below the mean, close to the mean, or higher than the mean. P. 30 (under table 2.3)
What does the term "normal distribution" have to do with SD?
If the scores on an instrument fall into a normal distribution, SD provides even more interpretive information.
Tell the difference between positively skewed and negatively skewed distributions.
Positively skewed = most scores fall at the lower end of the range, with the tail extending toward higher scores; the mean is higher than the median or the mode.
Negatively skewed = most scores fall at the higher end of the distribution, with the tail extending toward lower scores; the mean is lower than the median or the mode.
Types of scores?
Raw score: simplest
Percentile rank: the percentage of people in the norm group who scored at or below a given raw score.
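A minimal Python sketch (made-up norm group) under the "at or below" convention; some texts compute percentile rank from scores strictly below instead:

```python
# A minimal sketch (made-up norm group), "at or below" convention.
def percentile_rank(norm_scores, raw):
    at_or_below = sum(1 for s in norm_scores if s <= raw)
    return 100 * at_or_below / len(norm_scores)

norm = [55, 60, 65, 70, 75, 80, 85, 90, 95, 100]
print(percentile_rank(norm, 85))  # -> 70.0
```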
Standard Scores?
They address the limitation of unequal units of percentiles. They provide a shorthand method of converting raw scores so there is always a set SD and Mean. They can be used with all types of instruments.
Z Score Formula?
z = (X - M) / SD, where X is the raw score, M is the mean, and SD is the standard deviation of the instrument.
Sample test: M = 75, SD = 5, X = 85
z = (85 - 75) / 5 = 2.00 (z score) p. 35
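A minimal Python sketch of the formula on this card, reproducing the worked example:

```python
# A minimal sketch of the formula on this card: z = (X - M) / SD.
def z_score(x, m, sd):
    return (x - m) / sd

print(z_score(85, 75, 5))  # -> 2.0, matching the worked example
```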
Describe a normal distribution with regard to its mathematical properties.
Normal curve reflects that the largest number of cases fall within the center range. Numbers decrease gradually in both directions.
What percent of cases fall between 1 SD below the mean and 1 SD above the mean on a normal distribution?
68%
What percent of the cases fall between 2 SD below the mean and 2 SD above the mean?
95%
What percent fall between 3 SD below the mean and 3 SD above the mean?
99.7%
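The percentages on these three cards follow the 68-95-99.7 rule; a minimal Python sketch (standard library only) recovers them from the standard normal CDF:

```python
# A minimal sketch: recover the percentages on these cards from the
# standard normal CDF, Phi(x) computed via erf.
from math import erf, sqrt

def phi(x):
    return 0.5 * (1 + erf(x / sqrt(2)))

for k in (1, 2, 3):
    pct = 100 * (phi(k) - phi(-k))
    print(f"within {k} SD of the mean: {pct:.1f}%")  # 68.3, 95.4, 99.7
```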
Describe a T score.
A T score has a fixed mean of 50 and an SD of 10. A z score can be converted to a T score by multiplying the z score by 10 and adding the result to 50 (T = 50 + 10z). The z score is considered the base standard score.
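A minimal Python sketch of the conversion on this card (values are illustrative):

```python
# A minimal sketch of the conversion on this card: T = 50 + 10z.
def t_score(z):
    return 50 + 10 * z

print(t_score(2.0))   # z of +2.00 -> T of 70
print(t_score(-1.5))  # z of -1.50 -> T of 35
```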
Describe a stanine score.
Another normalized standard score, with a mean of 5 and an SD of approximately 2. Stanines range from 1 to 9; raw scores are converted to stanines. Con: a single stanine number represents a whole range of scores. See p. 36-37.
Describe other standard scores.
Deviation IQs and CEEB (College Entrance Examination Board) scores. Deviation IQs (mean 100, SD 15) replaced the ratio method of IQ (MA/CA x 100).
Describe age or grade equivalent scales.
Two developmental scales widely used in educational settings. Each compares a raw score with the average raw score of individuals at the same developmental level. See p. 38 for warnings.
Why evaluate the norming group when using a norm-referenced instrument?
To determine whether or not the norming group is suitable for the clients.
What determines the adequacy of a norming group?
1. the clients being assessed
2. the purpose for the assessment
3. the way in which information is going to be used.
On whom does the determination of an adequate norming group rest?
The practitioner using the instrument.
From where is a sample drawn?
The larger population of the group being assessed.
Define simple random sample.
Every individual in the population has an equal chance of being selected.
Describe a stratified sample.
Often used in the area of appraisal. Individuals are selected based on demographic characteristics (e.g., African American).
Describe cluster sampling.
Involves using existing units rather than selecting individuals (e.g., all elementary schools in a state).
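A minimal Python sketch (hypothetical population with a made-up "group" label) contrasting the three sampling methods defined on the cards above:

```python
# A minimal sketch (hypothetical population): the three sampling
# methods. Each person carries a made-up demographic "group" label.
import random

population = [{"id": i, "group": random.choice("ABC")}
              for i in range(1000)]

# Simple random sample: every individual has an equal chance.
simple = random.sample(population, 100)

# Stratified sample: sample separately within each demographic group
# so each stratum is represented proportionately.
stratified = []
for g in "ABC":
    members = [p for p in population if p["group"] == g]
    stratified += random.sample(members, len(members) // 10)

# Cluster sample: select existing units (here, one whole group)
# rather than selecting individuals one by one.
chosen_group = random.choice("ABC")
cluster = [p for p in population if p["group"] == chosen_group]
```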
Define standard error of measurement.
An estimate of how much an individual's observed score would be expected to vary if he or she took the instrument repeated times; it defines a range around the observed score within which the true score is likely to fall.
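The card gives no formula; a common psychometric formula is SEM = SD * sqrt(1 - reliability). A minimal Python sketch under that assumption, with made-up values:

```python
# A minimal sketch using a common psychometric formula not given on
# the card: SEM = SD * sqrt(1 - reliability). Values are made up.
from math import sqrt

def sem(sd, reliability):
    return sd * sqrt(1 - reliability)

print(sem(10, 0.91))  # -> 3.0
```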
Define standard score.
A transformed raw score that has a set SD and Mean.
Define split-half reliability.
One of the internal consistency measures of reliability, in which the instrument is administered once and then split into two halves. The scores on the two halves are then correlated to provide an estimate of reliability. The Spearman-Brown formula is often used to correct the coefficient to what it would be if the original number of items were used.
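A minimal Python sketch (made-up half-scores; statistics.correlation requires Python 3.10+) of the odd-even split and the Spearman-Brown correction:

```python
# A minimal sketch (made-up half-scores): correlate the two halves,
# then apply Spearman-Brown: r_full = 2r / (1 + r).
from statistics import correlation  # Python 3.10+

odd_half = [10, 12, 9, 15, 11, 14]    # each person's odd-item score
even_half = [11, 13, 10, 14, 12, 15]  # same people, even-item score

r_half = correlation(odd_half, even_half)
r_full = 2 * r_half / (1 + r_half)  # corrected to full-length estimate
print(round(r_half, 2), round(r_full, 2))
```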
Define Test-retest reliability
A reliability coefficient that is obtained by correlating a group's performance on the first administration of an instrument with the same group's performance on the second administration of that same instrument.
Define validity coefficient.
The correlation between the scores on an instrument and the criterion measure.
Define Predictive Validity.
A type of criterion-related validity in which there is a delay between the time the instrument is administered and the time the criterion information is gathered.
Define projective technique.
A type of personality assessment that provides the client with an ambiguous stimulus, thus encouraging a nonstructured response (it is assumed that the individual will project his or her personality into the response).
Define learning disabilities
A general term referring to a group of disorders that result in difficulty in the acquisition and use of listening, speaking, reading, writing, reasoning, and mathematical abilities.
Define item analysis.
A set of procedures used to evaluate individual items on an assessment instrument. The most common item analysis techniques are item difficulty and item discrimination.
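A minimal Python sketch (made-up 0/1 response matrix) of both techniques; the discrimination index here uses the simple upper-group minus lower-group method, one of several in use:

```python
# A minimal sketch (made-up 0/1 data): item difficulty and item
# discrimination for a three-item instrument.
responses = [  # rows = examinees, columns = items; 1 = correct
    [1, 1, 0],
    [1, 0, 0],
    [1, 1, 1],
    [0, 0, 0],
]
n_people, n_items = len(responses), len(responses[0])
totals = [sum(row) for row in responses]

# Item difficulty: proportion of examinees answering the item correctly.
difficulty = [sum(row[j] for row in responses) / n_people
              for j in range(n_items)]

# Item discrimination (upper-lower method): proportion correct in the
# top half on total score minus proportion correct in the bottom half.
order = sorted(range(n_people), key=lambda i: totals[i], reverse=True)
upper, lower = order[: n_people // 2], order[n_people // 2 :]
discrimination = [
    sum(responses[i][j] for i in upper) / len(upper)
    - sum(responses[i][j] for i in lower) / len(lower)
    for j in range(n_items)
]
print(difficulty)      # -> [0.75, 0.5, 0.25]
print(discrimination)  # -> [0.5, 1.0, 0.5]
```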
Define concurrent validity.
A type of criterion-related validity in which there is no delay between the time the instrument is administered and the time the criterion information is gathered.
Define construct validity.
One of the three types of validity that is broader than either content or criterion-related validity. Construct validity is concerned with the extent to which the instrument measures some psychological trait or construct. This type of validation involves the gradual accumulation of evidence.
Define criterion-related validity
One of the three types of validity in which the focus is the extent to which the instrument confirms (concurrent validity) or predicts (predictive validity) a criterion measure.
Define content-related validity
One of the three major categories of validity in which the focus is on whether the instrument's content adequately represents the domain being assessed. Evidence of content-related validity is particularly important in achievement tests.
Define correlation
A statistic that provides an indication of the degree to which two sets of scores are related. A correlation coefficient can range from -1.00 (negative or inverse correlation) to +1.00 (positive correlation). Note: .00 = absence of relationship.
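A minimal Python sketch (illustrative data; statistics.correlation requires Python 3.10+) computing a coefficient in the range described:

```python
# A minimal sketch (illustrative data): a Pearson correlation
# coefficient between instrument and criterion scores.
from statistics import correlation  # Python 3.10+

instrument = [10, 12, 14, 16, 18]
criterion = [2.1, 2.4, 3.0, 3.2, 3.9]

r = correlation(instrument, criterion)
print(round(r, 2))  # a strong positive correlation, near +1.00
```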
Define aptitude test
A test that provides a prediction about the person's future performance or ability to learn.
Define crystallized abilities.
A factor theorized to be part of intelligence that includes acquired skills and knowledge. They are thought to be influenced by cultural, social, and educational experiences.
Define fluid abilities.
A factor theorized to be part of intelligence and related to the ability to respond to and solve entirely new kinds of problems. These abilities are thought to be influenced by genetic factors.
Name the three types of cognitive tests.
Intelligence, General ability, and Aptitude tests.
Define alternate form reliability
A reliability coefficient estimated by examining the relationship between two alternate, or parallel, forms of an instrument. The reliability coefficient indicates the extent to which the two forms are consistent, or reliable, in measuring that specific content.
Define reliability.
The degree to which a measure or score is free of unsystematic error. In classical test theory, reliability is the ratio of true variance to observed variance.
Define selected-response item format
A method of writing an item in which the individual is provided with a choice of answers and must select an answer from the provided alternatives.
Define variance
The average of the squared deviation from the mean. Variance is a measure of variability and its square root is the SD of the set of measurements.
True or False: To determine whether or not an instrument measures what it is intended to measure, the instrument is validated.
False. It is the uses of the instrument that are validated. p. 62. Be sure that your instrument is validated for the specific manner in which you choose to use it.
True or False: Reliability is a prerequisite to validity.
True. If an instrument has too much unsystematic error, it cannot measure anything consistently. The reverse is not true, however: an instrument can be reliable but not valid.
Give an example of the above.
An instrument may measure something consistently, but it may not be measuring what it is designed to measure.
Name the three categories of gathering validation information.
Content-related, criterion-related, and construct-related evidence. There are no rigid distinctions among the three. Validity is a unitary concept, but assessment is moving away from this traditional approach to validity.
Describe content-related validation.
Content-related validation evidence concerns the degree to which the evidence indicates that the items, questions, or tasks adequately represent the intended behavior domain.
Explain further. p. 64
With content-related validity, instrument developers must provide evidence that the domain was systematically analyzed and ensure that the central concepts are covered in the correct proportion.
Give an example.
Say a test was given on five chapters of a given text, but the questions were all taken from only one of those chapters. A person could argue that there was a problem with the content-related validity of the test.
What is this similar to?
It's similar to the procedure used for drawing a proportionate sample of the larger population you wish to assess.
Describe criterion-related validation evidence.
Criterion-related validity is the extent to which an instrument is systematically related to an outcome criterion.
Give some examples of these types of "predictive" instruments (predicts certain behaviors).
The SAT (predicts college performance) and the Armed Services Vocational Aptitude Battery (ASVAB), which predicts performance in training programs and occupations.
Name the two types of criterion-related validity.
Concurrent (no time lag) validity and predictive (time lag) validity. The difference lies in the period of time between taking the instrument and gathering the criterion information.
Give an example of the two.
"Bob is depressed," (concurrent), "Bob is likely to be depressed in the future," (predictive validity).
Define criterion.
The criterion is what the instrument is designed to predict (e.g., job performance, personality).
How can we determine that the criterion is reliable and relatively free from unsystematic error?
It should meaningfully predict what it is designed to predict. It should be free from bias and immune from criterion contamination (prior knowledge).
Describe validity coefficient.
The final step in establishing criterion-related validity is to correlate performance on the instrument with the criterion information, the result of that calculation being the validity coefficient.
Define Regression (another commonly used method for determining the criterion-related validity of an instrument).
A statistical technique that allows us to analyze the relationships among multiple variables.
Explain regression further.
Regression is closely related to correlation and is commonly used to determine the usefulness of a variable, or set of variables, in predicting another meaningful variable (e.g., whether an instrument predicts certain behaviors). Regression is based on the premise that a straight line (the regression line) can describe the relationship between scores on the instrument and scores on the criterion. The line that best fits the plotted points is the regression line, and this line of best fit is used to predict performance on the criterion from scores on the instrument. See fig. 4.2
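A minimal Python sketch (illustrative data; statistics.linear_regression requires Python 3.10+) of fitting a line of best fit and predicting the criterion from a new score:

```python
# A minimal sketch (illustrative data): fit the line of best fit
# y = intercept + slope * x, then predict the criterion.
from statistics import linear_regression  # Python 3.10+

instrument = [10, 12, 14, 16, 18]      # x: scores on the instrument
criterion = [2.1, 2.4, 3.0, 3.2, 3.9]  # y: criterion measure

slope, intercept = linear_regression(instrument, criterion)
predicted = intercept + slope * 15  # predicted criterion for a score of 15
print(round(predicted, 2))
```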
True or False: Just as no instrument can have perfect reliability, no instrument can have perfect criterion-related validity. p. 69.
True. As discussed in Chap. 3, we take the lack of perfect reliability into consideration and report scores as a range using the standard error of measurement. The same logic applies to criterion-related validity (standard error of estimate).
Define false negative.
In decision theory (p. 70-72), a false negative occurs when the assessment procedure is incorrect in predicting a negative outcome on the criterion.
Define false positive.
In decision theory (p. 70-72), a false positive occurs when the assessment procedure is incorrect in predicting a positive outcome on the criterion.
Explain construct validity, the third historical type of validation evidence.
Because there is considerable debate about a construct such as self-esteem, developing an instrument to measure it is more difficult than developing items to match a more clearly defined construct such as depression.
Explain further. p. 73
The process of establishing validity for these constructs is more complex and consists of the gradual accumulation of information.
True or False: an instrument's construct validity cannot be verified through one study.
True: an instrument's construct validity cannot be verified through one study; rather, construct validity is demonstrated by multiple pieces of evidence indicating that the instrument is measuring the construct or trait of interest.
Describe factor analysis, another method used to contribute to the construct validation evidence of an instrument, particularly the internal structure of the instrument.
A statistical technique used to analyze the interrelationships of a set or sets of data.
Explore reasons for using factor analysis.
1. Exploring patterns among variables
2. Analyzing clusters of variables (redundancy)
3. Reducing a large number of variables to a smaller number of statistically uncorrelated variables
Describe item analysis.
Item analysis focuses on examining and evaluating each item within an assessment.
Compare item analysis with validity evidence.
Validity evidence concerns the entire instrument; item analysis examines the qualities of each item.
Describe item discrimination.
Item discrimination provides an indication of the degree to which an item correctly differentiates among the examinees on the behavior domain of interest.
Give an example of item discrimination. p. 78
Two people are depressed, but only one endorses the item; the item fails to discriminate between them on the domain of depression.
Describe item response theory. p. 79
A system of analysis that models which items a person is likely to answer correctly based on underlying traits or abilities rather than on age or grade level.