116 Cards in this Set

Developing Measures

know the literature, modify commonly used measures or find new uses for old ones, and refine the constructs of interest in the study

Constructs

hypothetical attributes that cannot be observed directly; researchers must infer which behaviours will adequately reflect the construct

Habituation

a gradual decrease in responding to stimuli

Habituation Procedure

show a stimulus repeatedly until responding declines, then show a new stimulus; done to see whether young children know certain concepts (natural vs. unnatural test events)

Natural Test Event

show an image/video of something that naturally occurs in reality (e.g., a ball rolling down a hill, consistent with gravity)

Unnatural Test Event

show image/video that does not naturally occur in reality (a ball rolling up a hill)

Mental Rotation

mentally rotating objects to see whether they match others; experiments on this measured reaction time

Reliability

the extent to which results are repeatable when behaviours are remeasured

Measurement Error

unwanted variability in scores; if a great deal of measurement error is present, reliability is low (instruments are used to test for it)

Validity

if a behavioural measure measures what it is supposed to, not another construct

Content Validity

whether or not the actual content of the items on a test makes sense in terms of the construct being measured (also if it includes items that assess each of the attributes)

Face Validity

whether the measure seems valid to those taking it; important so that subjects take the task seriously

Criterion Validity

whether the measure can a) accurately forecast some future behaviour or b) is meaningfully related to some other measure of behaviour

Construct Validity

concerns whether a test adequately measures some construct and it connects directly with the operational definitions (gets better with more supportive research)

Convergent Validity

scores on a test measuring 1 construct should relate to scores on other tests that are theoretically related to the construct

Discriminant Validity

shouldn't relate to unrelated constructs

Measurement Scales

assign numbers to events: type being used helps determine appropriate statistical analysis to be completed

Nominal Scales

classify into one group or another (categories)

Ordinal Scales

sets of rankings showing the relative standing of objects/individuals

Interval Scales

include in rankings the concept of equal intervals between ordered events

Mean

average

Median

middle

Mode

most frequent

Outliers

scores far removed from others in data sets

Range

the difference between high and low scores (estimate variability)

Standard Deviation

for a set of sample scores, an estimate of the average amount by which the scores deviate from the mean score

Variance

standard deviation squared

Interquartile Range

used when there are outliers (median of scores above and below the median of the entire data set)
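The descriptive statistics above can be sketched with Python's standard `statistics` module; the scores below are hypothetical and include one outlier (55) to show why the interquartile range is the more robust spread estimate:

```python
import statistics

scores = [4, 7, 7, 8, 9, 10, 10, 10, 12, 55]  # hypothetical sample, one outlier

mean = statistics.mean(scores)           # average
median = statistics.median(scores)       # middle score
mode = statistics.mode(scores)           # most frequent score
score_range = max(scores) - min(scores)  # high minus low; inflated by the outlier
sd = statistics.stdev(scores)            # sample standard deviation
variance = statistics.variance(scores)   # standard deviation squared

# Interquartile range: spread of the middle 50%, robust to the outlier
q1, q2, q3 = statistics.quantiles(scores, n=4)
iqr = q3 - q1
```

With the outlier, the range (51) wildly overstates the spread of the typical scores, while the IQR (3.5) does not.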

Histogram

graph showing the number of times each score occurs (normal bell curve)

Frequency Distribution

a table that records the number of times each score occurs

Inferential Statistics

statistics used to draw conclusions about a wider population from a small sample

Null Hypothesis

assume there's no difference in performance between conditions you're studying

Research/Alternate Hypothesis

outcome you're hoping to find

Alpha Level

the probability of making a Type 1 error that the researcher is willing to accept; results with a probability below alpha (typically .05) are called statistically significant

Type 1 Error

rejecting the null hypothesis (H0) when it is actually true (suspected when a study fails to be replicated)

Type 2 Error

when you fail to reject the null hypothesis (H0) but you're wrong (when there are unreliable measures/not sensitive enough)
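The relation between the alpha level and Type 1 errors can be illustrated with a small simulation (a stdlib-only sketch; the z-test and sample sizes are arbitrary choices, not from the cards): when the null hypothesis is true, a test run at alpha = .05 still rejects about 5% of the time.

```python
import math
import random

random.seed(1)  # fixed seed so the sketch is reproducible

def z_test_rejects(n=30):
    """Draw two samples from the SAME normal population (so H0 is true)
    and check whether a two-tailed z-test at alpha = .05 still rejects."""
    a = [random.gauss(0, 1) for _ in range(n)]
    b = [random.gauss(0, 1) for _ in range(n)]
    diff = sum(a) / n - sum(b) / n
    se = math.sqrt(2 / n)         # standard error, sigma = 1 assumed known
    return abs(diff / se) > 1.96  # critical z for alpha = .05

trials = 10_000
rate = sum(z_test_rejects() for _ in range(trials)) / trials
# rate hovers near .05: roughly 5% of the tests commit a Type 1 error
```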

Statistical Determinism

establish laws about behaviour and make predictions with probability greater than chance

Inferential Analysis

analyze 2 types of variability (systematic variance and error variance)

Systematic Variance

the result of an identifiable factor, either the variable of interest or some factor you've failed to adequately control

Error Variance

non systematic variability due to individual differences and random events

File Drawer Effect

studies finding no difference are less likely to be published

Effect Size

provides an estimate of the magnitude of the difference among sets of scores while taking into account the amount of variability in the scores
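One common effect size measure is Cohen's d, which expresses the mean difference in standard-deviation units (a sketch; the memory-score data below are made up for illustration):

```python
import statistics

def cohens_d(group1, group2):
    """Cohen's d: difference between group means divided by the pooled
    standard deviation, so the effect is expressed in SD units."""
    n1, n2 = len(group1), len(group2)
    v1, v2 = statistics.variance(group1), statistics.variance(group2)
    pooled_sd = (((n1 - 1) * v1 + (n2 - 1) * v2) / (n1 + n2 - 2)) ** 0.5
    return (statistics.mean(group1) - statistics.mean(group2)) / pooled_sd

# Hypothetical memory scores for a treatment and a control group
treatment = [14, 16, 15, 17, 18, 16]
control = [12, 13, 14, 12, 15, 13]
d = cohens_d(treatment, control)  # well past the ~0.8 "large effect" convention
```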

Meta Analysis

uses effect size analysis to combine results from several experiments that use the same variables even if they have different operational definitions

Confidence Intervals

a range of values expected to include a population value with a certain degree of confidence
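A minimal sketch of a 95% confidence interval for a sample mean (hypothetical scores; the z critical value 1.96 is used for simplicity, though a t value would give a slightly wider interval for n = 10):

```python
import math
import statistics

scores = [98, 104, 101, 97, 105, 100, 103, 99, 102, 101]  # hypothetical sample

n = len(scores)
mean = statistics.mean(scores)
sem = statistics.stdev(scores) / math.sqrt(n)  # standard error of the mean

# 95% of intervals built this way are expected to contain the population mean
lower, upper = mean - 1.96 * sem, mean + 1.96 * sem
```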

Power

the chance of rejecting the null hypothesis (H0) when it is false; as power increases, the chance of a Type 2 error decreases, and vice versa

Correlation

establishes whether or not there is a relation between or among 2 variables (no manipulation only naturally existing variables)

Experiment

involves random assignment to the different conditions being investigated, the manipulation of the IV, and the measurement of a DV

Independent Variable

manipulated and under the experimenters control

Dependent Variable

something expected to depend on/vary with the manipulation of the IV

Extraneous Variable

factors other than the IV that are held constant so they cannot influence the outcome (various IV levels are needed to compare)

Field Experiments

experiments that take place in a natural, real-world setting rather than the lab

Field Research

broader term for any empirical research outside the lab, including both experimental and non experimental studies/methods

Situational Variables

features in the environment that subjects might encounter

Task Variables

researchers vary the type of task performed by subjects (e.g., give groups of subjects different types of problems to solve)

Instructional Variables

manipulated by telling different groups to perform a specific task in different ways

Control Group

receives no experimental treatment (baseline for comparison)

Confounds

effects intermixed with IV, like third variable in correlational studies, they make the results difficult/impossible to interpret

Measuring DV

know prior research and use already established DVs that are reliable and valid

Ceiling Effect

occurs when average scores for the groups in the study are so high that no difference can be determined between conditions

Floor Effect

when all scores are extremely low, usually because a task is too hard for everyone, produces a failure to find any differences between conditions

Subject Variables

existing characteristics of the individual participating in the study. can't be manipulated, must select based on desired characteristics (quasi experiment)

Quasi Experiment

a study in which subjects are selected (rather than randomly assigned) based on existing characteristics, and the resulting groups are then compared

Drawing Conclusions with Subject Variables

can't draw causal conclusions because you are limited to making educated guesses about why something is true (subjects may differ from each other in ways unknown to you)

Validity of Experimental Studies

psych research is said to be valid if it provides the understanding of the behaviour it is supposed to

Statistical Conclusion Validity

concerns the extent to which the researcher uses stats properly and draws the appropriate conclusions from the analysis

External Validity

the extent to which results can be generalized beyond the study

Ecological Validity

research with relevance for everyday cognitive activities of people trying to adapt to their environment

Internal Validity

when the effect can be confidently attributed to the manipulation of the IV

Pretest

to judge whether change occurs, evaluate people prior to the experience (need a control group to compare with)

Posttest

measure taken after experience to see if change occurs

History

an event occurs between pre and posttest that produces large changes unrelated to the treatment program itself (use control group to account for this)

Maturation

developmental changes that occur with the passage of time and impact studies that extend over time (control group can account for this)

Testing

the mere fact of taking the pretest has an effect on the posttest (a control group accounts for this)

Instrumentation

when the measurement instrument changes from pretest to posttest

Subject Selection Effects

if the groups aren't equivalent there is a confound

History x Selection Confound

some historical events might affect one group but not the other; likewise, the groups may mature at different rates, respond to testing at different rates, or have different degrees of regression to the mean

Attrition

people leave the study, so a subject selection problem arises because the group starting the experiment is not equivalent to the group completing it

Between Subjects Design

if the subjects receive either A or B treatments but not both, comparison of conditions will be a contrast between 2 groups of individuals

Equivalent Groups

groups that are essentially the same on all important factors; needed in between-subjects designs, sometimes because experience at one level of the IV makes it impossible to test the other levels

Random Assignment

every person volunteering for the study has equal chance of being placed in any of the groups being formed

Blocked Random Assignment

a procedure ensuring that each condition of the study has a subject randomly assigned to it before any condition is repeated a second time. each block of the study contains all conditions in a randomized order
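The difference between simple and blocked random assignment can be sketched as follows (the subject names and conditions are hypothetical):

```python
import random

random.seed(42)  # reproducible sketch

conditions = ["A", "B", "C"]
volunteers = [f"subject_{i}" for i in range(1, 13)]  # hypothetical names

# Simple random assignment: each volunteer lands in any condition
# with equal probability, so group sizes can end up uneven
simple = {person: random.choice(conditions) for person in volunteers}

# Blocked random assignment: shuffle the full set of conditions within
# each block, so every condition is used once before any repeats
blocked = {}
remaining = iter(volunteers)
for _ in range(len(volunteers) // len(conditions)):
    block = random.sample(conditions, k=len(conditions))  # one shuffled block
    for condition in block:
        blocked[next(remaining)] = condition
```

With blocking, every consecutive trio of subjects covers all three conditions exactly once, guaranteeing equal group sizes.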

Matching

subjects are grouped together on some subject variable and then distributed randomly to different groups in the experiment (smaller groups)



Matching Criteria

use a reliable and valid measure of the characteristic, have reason to believe it will have a predictable effect on the outcome of the study, and be confident that the matching variable and DV are correlated

Within Subjects Design

each participant receives both levels A and B of IV, everyone is measured several times

Advantages to Within Design

fewer people needed; good when the population of interest is small; eliminates the equivalent groups problem; individual variance is not a problem

Order Effect

once a subject has completed the first part of the study, the experience/altered circumstances could influence performance in later parts of the study

Progressive Effects

it is assumed that performance changes steadily from trial to trial

Carryover Effect

some sequences might produce effects different from those of other sequences (if this exists usually a between design is chosen)

Counterbalancing

use more than one sequence (2 categories: Test Once Per Condition or Test More Than Once Per Condition)

Complete Counterbalancing

every possible sequence will be tested at least once (the number of sequences is X!, i.e., X factorial, where X is the number of conditions)

Problem with Complete Counterbalancing

as the # of levels increases the possible sequences increase dramatically
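Both complete counterbalancing and its problem can be sketched with the standard library (`itertools.permutations` enumerates every order; `math.factorial` gives the X! count):

```python
import itertools
import math

conditions = ["A", "B", "C"]  # 3 conditions

# Complete counterbalancing: every possible sequence is used
all_orders = list(itertools.permutations(conditions))
# 3! = 6 sequences: ABC, ACB, BAC, BCA, CAB, CBA
assert len(all_orders) == math.factorial(len(conditions))

# The problem: the number of sequences explodes as conditions increase
sequence_counts = {x: math.factorial(x) for x in range(2, 9)}
# e.g., 8 conditions already require 40,320 sequences
```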

Partial Counterbalancing

a subset of the total # of orders is used (either sample from complete set of possible orders or randomize order)

Latin Square

a form of partial counterbalancing that ensures a) every condition of the study occurs equally often in every sequential position and b) every condition precedes and follows every other condition exactly once
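One standard construction of a balanced Latin square (a sketch: it works directly only for an even number of conditions, and the numeric condition labels are placeholders):

```python
def balanced_latin_square(n):
    """Balanced Latin square for an EVEN number of conditions n.
    The first row is 0, 1, n-1, 2, n-2, 3, ...; each later row shifts
    every entry up by 1 (mod n). Rows are subjects, columns positions."""
    first_row = [((n - j // 2) % n) if j % 2 == 0 else (j + 1) // 2
                 for j in range(n)]
    return [[(c + i) % n for c in first_row] for i in range(n)]

square = balanced_latin_square(4)
# Property a: each condition occurs once in every sequential position
for position in range(4):
    assert sorted(row[position] for row in square) == [0, 1, 2, 3]
# Property b: each condition immediately follows every other exactly once
pairs = {(row[j], row[j + 1]) for row in square for j in range(3)}
assert len(pairs) == 12  # all 4 * 3 ordered pairs appear
```

For an odd number of conditions, the usual fix is to run each row both forward and reversed.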

Reverse Counterbalancing

the experimenter presents the conditions in one order, then presents them again in the reverse order

Error Bars

indicate the amount of variability that occurred within each condition (show central tendency and variability)

Cross Sectional Study

between subjects approach with different age groups

Longitudinal Study

within subjects approach with a single group studied over time

Problems with Longitudinal Studies

attrition; ethical problems when people change their minds about participating (informed consent is an ongoing process)

Cohort Effects

differences between cohorts (groups of people born at about the same time), who grow up with different environments and histories

Cohort Sequential Design

a group of subjects is selected and retested every few years; additional cohorts are selected every few years and also retested over time

Experimenter Bias

desire to confirm a strongly held hypothesis might lead an unwary but emotionally involved experimenter to behave (without awareness) in such a way as to influence the outcome of the study

Controlling for Experimenter Bias

mechanize procedures as much as possible

Protocols

train people well and give highly detailed descriptions of the sequence of steps that should be followed in each research session

Double Blind Procedure

neither researcher nor subject is aware of who has been placed in the treatment, placebo, and control groups

Subject Bias

can occur in several ways, depending on what subjects are expecting and what they believe their role should be in the study

Hawthorne Effect

when behaviour is affected by the knowledge that one is in an experiment and is therefore important to the study's success

Good Subject

a subject who cooperates through repetitive/boring tasks in the name of science; furthermore, if subjects figure out the hypothesis, they may try to behave in a way that confirms it

Demand Characteristics

aspects of the study that reveal the hypothesis being tested (reduce internal validity) more often occur in within designs

Evaluation Apprehension

subjects want to be evaluated positively so they may behave as they think an ideal person should

Controlling for Subject Bias

reduce demand characteristics to a minimum through methods like deception or placebo

Manipulation Check

ask subjects in a deception study what they think the true hypothesis is

Ratio Scale

numbers refer to quantities and intervals are assumed to be of an equal size; a score of zero denotes absence of the phenomenon being measured

Descriptive Statistics

stats which describe the sample data without drawing inferences about the larger population

Population

the complete set of events you're interested in; must be clearly defined

Sample

set of actual observations - subset of population

Regression to Mean

when subjects are selected based on an extreme score, their subsequent scores tend to be less extreme (closer to the mean)
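Regression to the mean can be demonstrated with a simulation (a sketch with made-up parameters): each test score mixes stable ability with random luck, so the top scorers on test 1 were partly lucky, and their luck does not repeat on test 2.

```python
import random

random.seed(7)  # reproducible sketch

# Each observed score = stable true ability + random luck that day
abilities = [random.gauss(100, 10) for _ in range(10_000)]
test1 = [a + random.gauss(0, 10) for a in abilities]
test2 = [a + random.gauss(0, 10) for a in abilities]

# Select the people with the most extreme (highest) test 1 scores
cutoff = sorted(test1)[-1000]  # top 10% cutoff
top = [i for i, score in enumerate(test1) if score >= cutoff]

mean1 = sum(test1[i] for i in top) / len(top)
mean2 = sum(test2[i] for i in top) / len(top)
# mean2 falls between mean1 and the population mean of 100: the group is
# still above average (real ability), but its good luck did not repeat
```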