64 Cards in this Set

  • Front
  • Back

What factors affect response bias?

testing content, testing context, testing format

Define Acquiescence Bias in terms of response bias

Occurs when an individual agrees with statements without regard for the meaning of those statements, e.g. when given the two statements "I enjoy my job" and "I hate my job", they respond "Strongly agree" to both

What issue can arise due to acquiescence bias?

Spurious correlations: if a questionnaire contains only positively keyed (or only negatively keyed) items, it becomes susceptible to acquiescence responding

What three factors increase acquiescence?

1. Ambiguous items (people can't be bothered spending time figuring out the item)


2. Long items (can't be bothered reading the whole question)


3. Large number of items (people get bored/tired along the way & start to acquiesce just to finish)

What method is used to measure acquiescence?

The "method of matched pairs": embed several pairs of polar-opposite items; from these matched pairs you can calculate an acquiescence index.


Each time a person answers "Agree"/"Strongly agree" to both items of a matched pair they score 1. Respondents with high scores can then be omitted.
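
A minimal Python sketch of how such an acquiescence index might be computed (the item names, matched pairs and responses are invented for illustration; only the scoring rule comes from the card):

AGREE = {"agree", "strongly agree"}

# Illustrative matched pairs of polar-opposite items (names are made up)
matched_pairs = [("enjoy_job", "hate_job"), ("like_crowds", "dislike_crowds")]

def acquiescence_index(responses, pairs=matched_pairs):
    # Score 1 each time the respondent agrees with BOTH items in a matched pair
    return sum(1 for a, b in pairs
               if responses.get(a, "").lower() in AGREE
               and responses.get(b, "").lower() in AGREE)

respondent = {"enjoy_job": "Strongly agree", "hate_job": "Agree",
              "like_crowds": "Disagree", "dislike_crowds": "Agree"}
print(acquiescence_index(respondent))  # 1; respondents with high totals could be omitted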




Define extreme and moderate responding in terms of response bias

Responses that sit either at the extremes of the scale or in the middle (to avoid making strong claims)

What is the limitation of extreme & moderate responding?

can generate artificial differences among respondent scores or mask true differences

Define a social desirability response in terms of response bias

tendency for a person to respond in a way that seems socially appealing, regardless of his/her true characteristics.


Can diminish validity of the measurement process

What are the three sources of Socially Desirable Responding (SDR)?

1. test content


2. test context


3. personality of the respondent

How does test content affect socially desirable responding?

- Items covering characteristics generally viewed as appealing are more susceptible to SDR


- Items with high face validity are more likely to be affected by SDR

How does test context affect socially desirable responding?

- Some contexts lend themselves to greater or lesser amounts of SDR; if testing is anonymous, answers may be more honest


- If the test has important implications, people may respond in a more socially desirable way

How might someone's personality affect the possibility of socially desirable responses?

- An individual's sense of personal autonomy may correlate negatively with SDR


- People with a high need for autonomy aren't affected by social disapproval

According to Paulhus' research there are two moderately correlated dimensions to socially desirable responding; what are they?

1. impression management


2. self-deceptive enhancement

Define Impression Management in terms of SDR

A conscious process whereby the participant intentionally attempts to appear socially acceptable


- Can fluctuate across time and circumstances

Define Self-Deceptive Enhancement in terms of SDR

A less conscious process whereby people believe their own exaggerations or claims


- Positively correlated with narcissism (people having inflated impressions of their own abilities)


- More trait-like; it's consistent across situations and contexts

Define Malingering in terms of response bias

Respondents attempt to exaggerate their psychological problems: "faking bad"


- Occurs in situations where the respondent perceives a benefit in appearing more injured or distressed than they really are

What are some example situations for malingering responses?

1. criminal competency hearings


2. disability evaluations


3. workers compensation claims


4. personal injury examinations

Does malingering affect the validity or reliability of a test?

validity

Define careless or random responding in terms of response bias

People respond the same way to all questions (irrespective of content), or are not paying attention and answer semi-randomly

How does guessing affect response bias?

It lowers internal consistency reliability (someone with low ability might guess a hard item correctly, reducing the correlations between items)

What are the 3 general methods for coping with response bias and the goals these strategies attempt to accomplish?

Methods:


1. manage the testing context


2. manage the test content and/or scoring


3. use specially designed 'bias' tests


Goals:


1. minimise existence of response bias


2. minimise effect of response bias


3. detect biased responses and intervene in some way

How does managing the test context help reduce response bias, and in what ways is this done (3 ways)?

- It prevents biases from happening in the first place


- By creating tests that seem anonymous (decreases SDR)


- By creating test situations that minimise respondent fatigue, stress, distraction and frustration


- The "bogus pipeline technique": telling participants that the test can detect lies and faked answers

What are the 3 strategies that test developers use to avoid response bias?

1. Writing clear, concise, unambiguous items


2. Writing items that are endorsable


3. Forced-choice formats (require the respondent to choose one alternative, reducing SDR, e.g. choosing between "calm" and "hardworking")



What are the problems with the forced-choice format when creating tests? (hint: related to RESPONSE BIAS)

- Difficult to calculate internal consistency reliability


- It's not valid to compare scores between people


- Difficult to meet the main criteria of test construction

What is one method used in managing the test content that helps minimise the effects of response bias?

'Balanced scales': a scale consisting of an equal number of positively keyed & negatively keyed items

What is the practical benefit of using a 'balanced scale' method when creating a test?

You can reverse-score the negatively keyed items to develop a more reliable assessment of scores
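
A minimal Python sketch of reverse-scoring on a balanced scale (assumes a 1-5 Likert scale; the items and responses are invented):

SCALE_MIN, SCALE_MAX = 1, 5

def reverse(score):
    # 5 -> 1, 4 -> 2, 3 -> 3, ...
    return SCALE_MAX + SCALE_MIN - score

# (response, negatively_keyed?) for each item -- illustrative values only
responses = [(4, False), (2, True), (5, False), (1, True)]
total = sum(reverse(r) if neg else r for r, neg in responses)
print(total)  # 18 out of a possible 20; high always means more of the trait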

One way of detecting response bias is to incorporate a validity scale. What are the 3 validity scales used by the self-report inventory MMPI?

1. L (Lie) Scale: assesses naive or unsophisticated attempts by people to present themselves in an overly favourable light


2. F (Infrequency) Scale: represents a deviant form of responding that is consistent with malingering, acquiescence or psychopathology


3. K Scale: attempts to assess more subtle response distortion, particularly clinical defensiveness

Define Response Sets in terms of response bias

are temporary and due to factors such as circumstances surrounding the testing or the test itself

Define response styles in terms of response bias

Are considered relatively stable and enduring traits that are observable across tests and testing situations

What does Digit Span Backward measure?

Working Memory Capacity


Note: Digit Span Forward does not



Define Fluid Intelligence:

The ability to perceive complex relationships, develop aids to learning, reason abstractly and solve novel problems for which a cultural context is not needed

How did Spearman originally define intelligence?

'Eduction of relations and correlates' (only talking about fluid intelligence)

What is the best method for testing non-verbal fluid intelligence?

Raven's Progressive Matrices

What is the internal Consistency Reliability associated with Raven's test?

>.80 & has good predictive validity

T/F: Raven's Progressive Matrices test is a good test of general intelligence?

False

Define Processing Speed

the notion that more intelligent people are quicker at processing

What are the two methods used to measure individual differences in processing speed?

Reaction Time & Inspection Time

Define Reaction Time:

the time (in milliseconds) it takes an individual to respond cognitively to a stimulus

What is the Choice Reaction Time test?

Participants must choose between two alternatives presented on the screen, e.g. two easy-to-understand words are presented and the participant must choose whether they are synonyms or antonyms

Define Reaction Time and Movement Time (2 different things)

Reaction Time (cognitive process): time between presenting the stimulus and the removal of the finger from the 'home key'


Movement Time (physical process): time between removing the finger from the home key and pressing either the left or right side button

T/F: Movement Time is not the variable of interest when looking at processing speed

True

What is the correlation between simple reaction time and intelligence? (According to Deary, Der & Ford, 2001)

-.30

What is the correlation between choice reaction time and intelligence? (According to Deary, Der & Ford, 2001)

-.50

Describe how the Inspection Time test works (in testing processing speed).

- A basic stimulus is presented on a computer screen


- A 'flash mask' is then presented on screen for approx. 300 msec

According to Kranzler & Jensen's 1989 review, what was the correlation between Inspection Time & Intelligence?

.50

How do Clinicians measure Processing Speed?

Coding or Trails A & B (they don't use reaction or inspection time)

Define Crystallised Intelligence

Represents an individual's acquired knowledge (e.g. knowledge of worldly facts and vocabulary size)

What is the heritability of Crystallised Intelligence?

Monozygotic twins: almost 100% of genes shared, r = .79


Dizygotic twins: 50% of genes shared, r = .25


Therefore it is highly heritable
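
One standard back-of-envelope check (not stated on the card) is Falconer's formula, applied here to the figures above:

# Falconer's formula: h^2 = 2 * (r_MZ - r_DZ)
r_mz, r_dz = 0.79, 0.25
h2 = 2 * (r_mz - r_dz)
print(h2)  # 1.08, i.e. effectively at the ceiling of 1.0 -- very high heritability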



What is the term given to the positive correlations between cognitive ability tests?

'positive manifold'

What is the correlation between cognitive abilities such as reasoning skills, vocabulary, processing speed, etc.?

The magnitude varies between .20 and .75

How does Spearman explain the occurrence of positive manifold?

Through the existence of a general factor of intelligence; he developed an analytic technique (similar to PCA) to test for and estimate the general factor of intelligence
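
A small illustration in Python/NumPy of the idea (the correlation matrix is invented, not Spearman's data; this is a PCA-style sketch rather than his original technique):

import numpy as np

# Invented correlations among three ability tests (reasoning, vocabulary, speed)
R = np.array([[1.00, 0.55, 0.40],
              [0.55, 1.00, 0.35],
              [0.40, 0.35, 1.00]])

eigvals, eigvecs = np.linalg.eigh(R)              # eigenvalues in ascending order
g_loadings = np.abs(eigvecs[:, -1]) * np.sqrt(eigvals[-1])
print(np.round(g_loadings, 2))                    # every test loads positively on "g"
print(round(eigvals[-1] / R.shape[0], 2))         # share of total variance explained by g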

Based on the WISC-V, what is the strongest indicator of g?

fluid intelligence: 0.93


crystallised intelligence: 0.86


processing speed: 0.52



T/F: g factor accounts for as much as 85% of the variability in test scores

TRUE

What is an alternative test that some researchers use to assess how well people rate their own intelligence?

self-report test

What are the two overall dimensions within Bloom's Taxonomy (taxonomy for learning, teaching & assessing)?

Knowledge dimension & Cognitive Processing Dimension

What is the difference between the old Bloom's Taxonomy and the revised version?

The introduction of a 'creating' level above the 'evaluating' level

What are the stages of Bloom's Taxonomy (from top to bottom)?

creating, evaluating, analysing, applying, understanding, remembering

What higher-order thinking skills are not used when undertaking a multi-choice test?

Creating & evaluating; these need essay-type assignments

What are some strategies to improve the reliability of essay marks?

1. All candidates write on the same topic


2. Remove names from essays


3. Have a detailed marking key


4. Train markers on the marking key


5. Have essays marked by at least 2 different markers

How could you assess the internal consistency reliability of a psychometrics essay?

By looking at the marks allocated for each section: someone who can write a good introduction should be able to write a good conclusion


- Consistency in marks across sections is a form of reliability (internal consistency reliability)
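
A sketch in Python/NumPy of one way this could be quantified, treating the essay sections as items in Cronbach's alpha (the section marks are invented):

import numpy as np

# Rows = essays, columns = section marks (intro, body, conclusion) -- invented data
marks = np.array([[7, 8, 7],
                  [5, 4, 5],
                  [9, 9, 8],
                  [6, 5, 6]])

k = marks.shape[1]
alpha = (k / (k - 1)) * (1 - marks.var(axis=0, ddof=1).sum()
                         / marks.sum(axis=1).var(ddof=1))
print(round(alpha, 2))  # high alpha = marks are consistent across sections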

How would you measure the inter-rater reliability of a Psychometrics Essay?

By getting the markers to mark each essay twice & looking at the correlation between the two marks awarded by the same marker


(a substantial & positive correlation would imply consistency between the marks)
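
A minimal Python/NumPy sketch of that correlation check (the marks are invented):

import numpy as np

first_marking  = np.array([65, 72, 58, 80, 70])
second_marking = np.array([68, 70, 60, 78, 74])

r = np.corrcoef(first_marking, second_marking)[0, 1]
print(round(r, 2))  # a substantial positive r implies consistent marking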

What is the inter-rater reliability of essay marks?

Between .50 and .60; not high enough to make important decisions

Why is the inter-rater reliability not high enough to make important decisions?

Because markers are able to agree on which essays are very good or very bad, but not on the intermediate essays

How can you increase the inter-rater reliability of essay marks?

By increasing the number of raters (markers)


- In this case the markers act as the items; more items = higher reliability
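
The usual way this "more markers = higher reliability" relationship is quantified is the Spearman-Brown prophecy formula; a small Python sketch, assuming a single-marker reliability of .55 taken from the range above:

def spearman_brown(r_single, n_raters):
    # Reliability of the average of n_raters markers, given single-marker reliability
    return (n_raters * r_single) / (1 + (n_raters - 1) * r_single)

for n in (1, 2, 4):
    print(n, round(spearman_brown(0.55, n), 2))  # 0.55 -> 0.71 -> 0.83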