73 Cards in this Set

  • Front
  • Back
What is the goal of construct representation from an information processing perspective?

a. Logical reasoning
b. Task decomposition
c. Nomothetic span
d. Pattern recognition
Task decomposition
_______ sets out to test hypotheses about factors that are already presumed to exist.

a. Confirmatory factor analysis
b. Exploratory factor analysis
c. Experimental factor analysis
d. Correlational factor analysis
Confirmatory factor analysis
Which of the following is true about concurrent validation?
a. It is used when indexes of the criteria that test scores are meant to assess are not yet available
b. It is often used as a substitute for predictive validation
c. Gathering evidence of it is often impractical
d. It is relevant for test scores that are meant to be used to make decisions based on estimates of future performance
It is often used as a substitute for predictive validation
Validity coefficients are higher in retrospective validation studies when
a. the samples are drawn from a homogenous population
b. the range is restricted
c. the samples are larger
d. scores undergo factor analysis
the samples are larger
The tripartite view of validity divides validity into these three types:
a. predictive, concurrent, and content
b. content, criterion-related, and predictive
c. criterion-related, content, and construct
d. construct, concurrent, and content
criterion-related, content, and construct
Nomothetic span refers to
a. the network of relationships of a test to other measures
b. examination of test responses
c. identifying theoretical mechanisms that underlie task performance
d. task decomposition
the network of relationships of a test to other measures
__________ refer to validation strategies that require the collection of data on two or more distinct traits by two or more different methods.
a. Multitrait-multimethod matrices
b. Factor matrices
c. Concurrent validations
d. Predictive validations
Multitrait-multimethod matrices
Multiple predictors, combined through techniques such as multiple regression equations, are used to deal with

a. the prediction of complex criteria
b. the prediction of relevant test scores
c. the retrospective or regressive prediction of earlier behaviors and abilities
d. the prediction of any characteristics
the prediction of complex criteria
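A minimal sketch in Python (NumPy only) of how multiple predictors can be combined in a multiple regression equation to predict a complex criterion; the predictor and criterion names (aptitude, interview, job_performance) and all numbers are hypothetical:

    import numpy as np

    rng = np.random.default_rng(0)
    n = 200

    # Two hypothetical predictors (e.g., an aptitude score and an interview rating)
    aptitude = rng.normal(size=n)
    interview = rng.normal(size=n)

    # A complex criterion influenced by both predictors plus error
    job_performance = 0.6 * aptitude + 0.3 * interview + rng.normal(scale=0.5, size=n)

    # Fit the multiple regression equation by ordinary least squares
    X = np.column_stack([np.ones(n), aptitude, interview])   # intercept + predictors
    weights, *_ = np.linalg.lstsq(X, job_performance, rcond=None)
    predicted = X @ weights

    # Multiple correlation R: correlation between predicted and observed criterion
    R = np.corrcoef(predicted, job_performance)[0, 1]
    print(f"regression weights: {weights.round(2)}, multiple R = {R:.2f}")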
Task decomposition refers to
a. the goal of construct representation
b. an idea from Embretson
c. the network of relationships of a test to other measures
d. strength, frequency, and pattern of test scores
the goal of construct representation
In terms of test validation, factor analysis is beneficial because it can

a. be generalized across populations
b. determine to what extent specific factors capture the purpose of the test
c. simplify score interpretation
d. examine one single measure rather than multiple measures
simplify score interpretation
Which of the following is a source of evidence for criterion-related validity?
a. Relevance and representativeness of test content
b. Correlations among tests and subtests
c. Multitrait-multimethod matrix
d. Accuracy of decisions based on concurrent validation
Accuracy of decisions based on concurrent validation
Which of the following are correlations between the original measures in the correlation matrix and the factors that have been extracted?
a. Factor loadings
b. Factor analysis
c. Factor variance
d. Factor error
Factor loadings
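A small illustration in Python/NumPy of the idea that loadings are the correlations between the original measures and an extracted factor; a principal-component extraction is used here as a simple stand-in for a full factor-analytic solution, and the data are simulated:

    import numpy as np

    rng = np.random.default_rng(1)
    n = 500

    # Three hypothetical measures driven by one common factor plus unique error
    factor = rng.normal(size=n)
    measures = np.column_stack([0.8 * factor + rng.normal(scale=0.6, size=n) for _ in range(3)])

    # Step 1: the correlation matrix of the original measures
    R = np.corrcoef(measures, rowvar=False)

    # Extract the first (principal) factor from the correlation matrix
    eigvals, eigvecs = np.linalg.eigh(R)
    loadings = eigvecs[:, -1] * np.sqrt(eigvals[-1])   # loadings on the first factor

    # The loadings match the correlations between each measure and the factor scores
    z = (measures - measures.mean(axis=0)) / measures.std(axis=0)
    scores = z @ eigvecs[:, -1]
    empirical = [np.corrcoef(measures[:, j], scores)[0, 1] for j in range(3)]
    print(np.round(np.abs(loadings), 2), np.round(np.abs(empirical), 2))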
Consistently high correlations between measures designed to assess a given construct are evidence of
a. discriminant validity
b. predictive validity
c. face validity
d. convergent validity
convergent validity
Typically assessed in economic terms, the _______ of tests and test scores refers to the benefits they bring to decision making.
a. selection
b. utility
c. analyses
d. generalization
utility
This criterion is one of the oldest sources of evidence for validating ability tests.
a. Experimental results
b. Age differentiation
c. Factor analysis
d. Face validity
Age differentiation
Accuracy of decisions or predictions based on predictive validation describes
a. Criterion-related validity
b. Content-related validity
c. Convergent validity
d. Discriminative validity
Criterion-related validity
Which of the following is true of the multitrait-multimethod matrix (MTMMM)?
a. It was first proposed in 1989 as a response to complaints of laborious data collection
b. It is useful and user-friendly for tests with a good deal of method variance
c. It is designed to investigate patterns of convergence and divergence among data gathered
d. It is still frequently used in its full-fledged form because it makes data collection easier
It is designed to investigate patterns of convergence and divergence among data gathered
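A rough sketch in Python/NumPy of assembling a small multitrait-multimethod matrix: two hypothetical traits (anxiety, sociability), each measured by two hypothetical methods (self-report, observer rating), so that convergent (same trait, different method) and discriminant (different trait) correlations can be compared:

    import numpy as np

    rng = np.random.default_rng(2)
    n = 300

    # Two hypothetical latent traits
    anxiety, sociability = rng.normal(size=(2, n))

    # Each trait measured by two methods, with method-specific noise added
    data = np.column_stack([
        anxiety     + rng.normal(scale=0.7, size=n),   # anxiety, self-report
        sociability + rng.normal(scale=0.7, size=n),   # sociability, self-report
        anxiety     + rng.normal(scale=0.7, size=n),   # anxiety, observer rating
        sociability + rng.normal(scale=0.7, size=n),   # sociability, observer rating
    ])

    mtmm = np.corrcoef(data, rowvar=False)

    # Convergent evidence: same trait, different method (should be relatively high)
    print("anxiety self vs. observer:", round(mtmm[0, 2], 2))
    # Discriminant evidence: different traits (should be lower)
    print("anxiety self vs. sociability self:", round(mtmm[0, 1], 2))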
The first step in exploratory factor analysis
a. is a correlation matrix
b. depends upon the specific choice of techniques
c. is a factor matrix
d. is a multitrait-multimethod matrix
is a correlation matrix
Which of the following is true of confirmatory factor analysis (CFA)?
a. It was the original approach of dealing with huge amounts of correlations and constructs
b. It sets out to discover which factors underlie the variables under analysis
c. It is less sophisticated than exploratory factor analysis, from a methodological standpoint
d. It sets out to test hypotheses about factors that are already presumed to exist
It sets out to test hypotheses about factors that are already presumed to exist
Replication of predictor-criterion relationships in a separate sample is a process known as
a. Synthetic validation
b. Cross-validation
c. Construct validity
d. Differential validity
Cross-validation
Incorrectly accepting the null hypothesis when it is false is a _______ error.
a. Type I
b. Type II
c. Type III
d. Type IV
Type II
The testing standards define _______ as “the degree to which all the accumulated evidence supports the intended interpretation of test scores for the proposed purpose.”
a. reliability
b. bias
c. validity
d. variability
validity
Validity is
a. a quality that characterizes tests in the abstract
b. an all or none distinction
c. the sole responsibility of the test developer
d. a matter of judgments that pertain to test scores
a matter of judgments that pertain to test scores
A consequence of range restriction is that the correlations between test scores and criteria are usually ____ than they would be if the sample were drawn from more heterogeneous populations.
a. much bigger
b. only slightly bigger
c. smaller
d. the same
smaller
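A quick simulation of this point in Python/NumPy; the cutoff used to restrict the range is arbitrary:

    import numpy as np

    rng = np.random.default_rng(3)
    n = 5000

    test = rng.normal(size=n)
    criterion = 0.6 * test + rng.normal(scale=0.8, size=n)

    full_r = np.corrcoef(test, criterion)[0, 1]

    # Restrict the range: keep only people scoring above the mean on the test
    keep = test > 0
    restricted_r = np.corrcoef(test[keep], criterion[keep])[0, 1]

    # The range-restricted correlation comes out noticeably smaller
    print(f"full-range r = {full_r:.2f}, range-restricted r = {restricted_r:.2f}")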
Which is a source of evidence for the content-related aspect of construct validity?
a. Face validity/superficial appearances
b. Correlations among tests and subtests
c. Exploratory factor analysis
d. Accuracy of decisions based on concurrent validation
Face validity/superficial appearances
Selected-response items : objective items :: constructed-response items : _______
a. fixed-response items
b. free-response items
c. subjective items
d. dichotomous items
free-response items
A test that has no time limits and whose difficulty is manipulated by increasing or decreasing the complexity of its items is called a
a. test that blends speed and power
b. pure speed test
c. free-response test
d. pure power test
pure power test
If the average percentage passing (p) for the items in a test is 85%, the average score on the test will be
a. 1 SD below p
b. 1 SD above p
c. 85%
d. cannot determine this from information given
85%
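A worked check of this rule in Python; the item p values are made up:

    import numpy as np

    rng = np.random.default_rng(4)
    n_people, n_items = 1000, 20

    # Hypothetical item p values averaging 0.85
    p_values = np.full(n_items, 0.85)

    # Simulate pass/fail responses for each person on each item
    responses = rng.random((n_people, n_items)) < p_values

    scores = responses.mean(axis=1)   # each person's proportion correct
    print(f"average p = {p_values.mean():.2f}, average score = {scores.mean():.2f}")  # both ~0.85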
Quantitative item analysis is designed to ascertain the psychometric characteristics of items based on
a. stylistic characteristics of items
b. responses obtained from samples
c. accuracy
d. fairness
responses obtained from samples
______ refers to the extent to which items elicit responses that accurately differentiate test takers in terms of the behaviors, knowledge, or other characteristics that a test is designed to evaluate.
a. Item difficulty
b. Item discrimination
c. Item validation
d. Item fairness
Item discrimination
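One common index of item discrimination is the corrected item-total correlation; a minimal sketch in Python/NumPy with simulated pass/fail responses:

    import numpy as np

    rng = np.random.default_rng(5)
    n_people, n_items = 500, 10

    # Simulate 0/1 item responses driven by a single underlying ability
    ability = rng.normal(size=n_people)
    difficulty = np.linspace(-1, 1, n_items)
    prob = 1 / (1 + np.exp(-(ability[:, None] - difficulty)))
    responses = (rng.random((n_people, n_items)) < prob).astype(int)

    total = responses.sum(axis=1)
    for j in range(n_items):
        rest = total - responses[:, j]                   # total score excluding item j
        d = np.corrcoef(responses[:, j], rest)[0, 1]     # corrected item-total correlation
        print(f"item {j}: discrimination = {d:.2f}")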
In any test that is closely timed, the p values and discrimination indexes of items are a function of their ________ rather than of their ________.
a. individual lengths; level of difficulty
b. position within the test; discriminant validity
c. level of difficulty; position within the test
d. discriminant validity; individual lengths
position within the test; discriminant validity
A test whose items cluster around a p value of 0.10
a. is designed to select the top 10% of individuals
b. would be an appropriate classroom achievement test
c. does not differentiate among test takers
d. would yield scores with an average of 90%
is designed to select the top 10% of individuals
Which of the following is an advantage of constructed-response items?
a. They can be scored with ease and reliability
b. They make efficient use of testing time
c. They yield responses that can be easily transformed into numerical scales
d. They elicit authentic samples of test takers’ behavior in specific domains
They elicit authentic samples of test takers’ behavior in specific domains
For tests of ability, the p value (or proportion) should cluster around what percentage in order to provide maximum differentiation among test takers?
a. 50%
b. 70%
c. 30%
d. 99%
50%
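The reason p values near .50 provide maximum differentiation is that the variance of a pass/fail item, p(1 - p), peaks at p = .50; a tiny check in Python:

    # Variance of a dichotomous (pass/fail) item is p * (1 - p),
    # which is largest when p = 0.50.
    for p in (0.10, 0.30, 0.50, 0.70, 0.90):
        print(f"p = {p:.2f}, item variance = {p * (1 - p):.2f}")
    # p = 0.50 gives the maximum variance (0.25), hence the most differentiation.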
Item discrimination refers to a measure of
a. reliability
b. homogeneity
c. heterogeneity
d. validity
validity
Because of vulnerability to distortion, many personality inventories use what special set of items to detect misleading or careless responding?
a. Discriminant scales
b. Validity scales
c. Distinction scales
d. Truth scales
Validity scales
Constructed-response items are also known as ______, and they are _______ items.
a. free-response; open-ended
b. free-response; multiple choice
c. selected-response; open-ended
d. selected-response; multiple choice
free-response; open-ended
Conducting additional trial administrations in order to check whether item statistics remain stable across different groups is called
a. cross analysis
b. cross-validation
c. checking for reliability
d. sample supplementation
cross-validation
In terms of item difficulty, the _______ the percentage passing an item, the _______ the item is.
a. Higher; easier
b. Higher; more difficult
c. Lower; easier
d. Percentage passing has no effect on item difficulty
Higher; easier
Which of the following is a disadvantage of selected-response items?
a. High scoring error
b. Objectivity
c. Correct guessing is possible
d. Open-endedness of questions
Correct guessing is possible
Item discrimination is
a. the incorrect alternatives that influence item difficulty
b. the extent to which items elicit responses that accurately differentiate test takers
c. the choice of criteria against which test items are validated
d. a statistical procedure that is used for tests of ability that are scored as pass/fail
the extent to which items elicit responses that accurately differentiate test takers
For psychological tests, the most important aspect of quantitative item analysis centers on statistics that address
a. item discrimination
b. item validity
c. item fairness
d. item difficulty
item validity
Item sequences can be individually tailored to the test taker’s ability levels on the basis of prior responses in this flexible and efficient test format called
a. Computerized adaptive testing (CAT)
b. Item discrimination
c. Item response theory (IRT)
d. Classical test theory (CTT)
Computerized adaptive testing (CAT)
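A toy sketch in Python of the basic adaptive idea (after each response, pick the unused item whose difficulty is closest to the current ability estimate); the item bank, step size, and stopping rule are all invented for illustration and are not a real CAT algorithm:

    import numpy as np

    rng = np.random.default_rng(6)

    item_difficulties = np.linspace(-2, 2, 21)   # hypothetical item bank
    used = set()
    ability_estimate = 0.0                        # start at an average ability

    for step in range(8):
        # Choose the unused item closest in difficulty to the current estimate
        candidates = [i for i in range(len(item_difficulties)) if i not in used]
        item = min(candidates, key=lambda i: abs(item_difficulties[i] - ability_estimate))
        used.add(item)

        # Simulate a response from a test taker with true ability 1.0 (Rasch-like model)
        p_correct = 1 / (1 + np.exp(-(1.0 - item_difficulties[item])))
        correct = rng.random() < p_correct

        # Crude fixed-step update: move the estimate up after a correct answer, down otherwise
        ability_estimate += 0.5 if correct else -0.5
        print(f"item difficulty {item_difficulties[item]:+.1f}, "
              f"{'correct' if correct else 'wrong'}, estimate {ability_estimate:+.1f}")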
In a pure speed test, difficulty is manipulated mainly through ________, whereas in a pure power test, difficulty is manipulated by _________.
a. Increasing or decreasing the complexity of the item; time
b. Time; increasing or decreasing the complexity of items
c. Time; the number of items completed by the test taker
d. Performance; item difficulty and item discrimination
Time; increasing or decreasing the complexity of items
Why do test developers transform p values of test items into z scores?
a. p values are ordinal numbers that represent unequal units
b. p values are interval numbers that represent unequal units
c. z scores can be used to compare scores obtained on different tests that measure similar abilities
d. z scores are ordinal numbers that represent equal units
p values are ordinal numbers that represent unequal units
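A short example in Python (using SciPy) of converting item p values to z scores through the inverse normal transformation; the sign convention here, where a higher z means a harder item, is one common choice:

    from scipy.stats import norm

    p_values = [0.95, 0.85, 0.50, 0.30, 0.10]   # hypothetical proportions passing each item

    for p in p_values:
        z = norm.ppf(1 - p)   # inverse-normal transform: higher z = harder item
        print(f"p = {p:.2f}  ->  z = {z:+.2f}")
    # Unlike raw p values, the resulting z scores lie on an (approximately) equal-unit scale.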
Why should items with extreme p values be avoided on tests of ability?
a. They are obviously too difficult
b. They are obviously too easy
c. An extreme p value usually indicates a lack of validity
d. They fail to differentiate among test takers
They fail to differentiate among test takers
Why is Classical Test Theory (CTT) used more often than Item Response Theory (IRT)?
a. Although IRT is more easily employed, CTT is more mathematically sophisticated
b. IRT is still evolving and is unfamiliar to many testing professionals
c. Using CTT is less expensive than using IRT
d. CTT requires more significant assumptions than IRT
IRT is still evolving and is unfamiliar to many testing professionals
Ipsative scores are interpreted in comparison to
a. the scores obtained by the normative groups
b. the scores obtained by the standardization sample
c. the other scores obtained by the same individual
d. the scores obtained by the reference group
the other scores obtained by the same individual
Another name for constructed-response items is
a. forced-choice items
b. objective-response items
c. free-response items
d. ipsative-response items
free-response items
Which of the following is a downside to constructed-response items?
a. Limited options
b. Forced choice
c. Scorer subjectivity
d. Ipsative validity
Scorer subjectivity
Which of the following is the standard way of communicating test results?
a. Overview of subscale performance
b. Printed report of test scores
c. Written scorecard containing g score
d. Written psychological report
Written psychological report
Test sophistication is a synonym for
a. test technicality
b. test organization
c. test taking skills
d. level of reliability in a test
test taking skills
Interviewing, questionnaires, and examination of existing records are methods of obtaining
a. heritage
b. biodata
c. family history
d. lifelogs
biodata
The best way to prevent the misuse of a test is
a. to ensure that the individuals involved are competent and qualified
b. to examine normative data
c. to ensure the test is the most advantageous
d. to minimize costs associated with testing
to ensure that the individuals involved are competent and qualified
In an attempt to combat the potential weaknesses inherent in interview data, current practices in most fields that use interviewing techniques stress either ______ or ______.
a. intensive training of interviewers; use of structured interviews
b. setting of clear time limits; intensive training of interviewers
c. collaborative interviews; setting of clear time limits
d. use of structured interview; collaborative interviews
intensive training of interviewers; use of structured interviews
A base rate of 0.10 indicates that accurate selection is
a. very easy
b. a good indicator of success
c. unreliable and thus unusable
d. very difficult
very difficult
The Validity Indicator Profile was designed specifically to
a. evaluate the possibility of malingering on cognitive tests done for forensic assessments
b. determine the usefulness of different tests with certain populations
c. establish the influences of racial bias within the use of intelligence tests
d. compare the validity of different tests assessing the same constructs
evaluate the possibility of malingering on cognitive tests done for forensic assessments
The most basic guideline in communicating test results is
a. detailing the way in which the test results will be used
b. explaining the score in relation to the normative sample
c. providing information on the reliability and validity of scores
d. reporting the test score results in an understandable language
reporting the test score results in an understandable language
If the base rate of a condition within a population is high, the likelihood of a false positive finding is
a. low
b. high
c. nonexistent
d. just as likely as a false negative finding
low
_______ scales are embedded in many personality inventories in order to detect various types of attempts at impression management or response sets such as defensiveness.
a. Reliability
b. Split-half
c. Validity
d. Fairness
Validity
Which term refers to the improvement contributed by a test in selection decisions?
a. Validity data
b. Base rate data
c. Incremental validity
d. Selection ratio
Incremental validity
What is the term for diagnosing a condition when it is not present?
a. Positive finding
b. Negative finding
c. False negative finding
d. False positive finding
False positive finding
As a general rule, when interscorer reliability coefficients can be computed, they should approach ________ and should not be much below _______.
a. 0.75; 0.6
b. +1.0; 0.90
c. 0.90; 0.80
d. +1.0; 0.85
+1.0; 0.90
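Interscorer reliability is often estimated as the correlation between two scorers' ratings of the same set of responses; a minimal sketch in Python/NumPy with simulated ratings:

    import numpy as np

    rng = np.random.default_rng(7)
    n = 60

    # Two scorers rating the same set of constructed responses (hypothetical data)
    true_quality = rng.normal(loc=5, scale=1.5, size=n)
    scorer_a = true_quality + rng.normal(scale=0.3, size=n)
    scorer_b = true_quality + rng.normal(scale=0.3, size=n)

    interscorer_r = np.corrcoef(scorer_a, scorer_b)[0, 1]
    print(f"interscorer reliability = {interscorer_r:.2f}")   # should approach +1.0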
Utility is closely related to
a. Reliability
b. Validity
c. Efficiency
d. Objectivity
Validity
The frequency with which pathological conditions, such as depression, occur in a given population refers to
a. validity data
b. selection ratio
c. base rates
d. selection ratio
base rates
_______ refers to the harmonious relationship that should exist between test takers and examiners.
a. Agreement
b. Collected harmony
c. Positive connection
d. Rapport
Rapport
In deciding whether or not to use psychological tests, what is the first issue to be decided?
a. Is the test needed?
b. How should the test be administered?
c. What sample population will be tested?
d. How reliable is the test?
Is the test needed?
If the base rate is 0.85, the probability of having a false positive finding when using a predictor that predicts at only a chance level is
a. >85%
b. 85%
c. 15%
d. <15%
15%
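The arithmetic behind this card, checked by simulation in Python; the predictor is assumed to flag people completely at random (a chance level of prediction):

    import numpy as np

    rng = np.random.default_rng(8)
    n = 100_000
    base_rate = 0.85

    has_condition = rng.random(n) < base_rate   # 85% of the population truly has the condition
    flagged = rng.random(n) < 0.5               # chance-level predictor: flags people at random

    # Among those flagged, the proportion who do NOT have the condition (false positives)
    false_positive_rate = np.mean(~has_condition[flagged])
    print(f"false positives among flagged cases ≈ {false_positive_rate:.2f}")   # ≈ 0.15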
The most significant advantages that psychological tests offer pertain to their
a. efficiency and objectivity
b. utility and validity
c. subjectivity and reliability
d. validity and efficiency
efficiency and objectivity
Validity scales designed to detect whether test takers attempted to present themselves in an unrealistically favorable or unfavorable fashion are most often embedded in
a. ability tests
b. personality inventories
c. occupational assessments
d. tests designed to detect giftedness
personality inventories
Even the most carefully developed and psychometrically sound instrument is subject to
a. construct relevant error
b. revision
c. misuse
d. criticism
misuse
The Validity Indicator Profile is an example of a tool that
a. identifies error due to the environment
b. identifies error due to the examiner/test taker rapport
c. identifies error due to test anxiety
d. identifies test taker dissimulation and unrealistic self-representation
identifies test taker dissimulation and unrealistic self-representation
In order to arrive at defensible answers to complex questions that professionals are asked, they need _______ and _______.
a. objective tests; favorable environments
b. multiple sources of data; informed judgment
c. uniform testing procedures; favorable testing environment
d. to eliminate test taker anxiety; to communicate results
multiple sources of data; informed judgment