102 Cards in this Set

  • Front
  • Back

scientific method

a method of procedure in which the problem is identified, data are gathered, a hypothesis is formed, and the hypothesis is empirically tested



empirical approach

information gained by experience, observation or experiment

parsimony

a preference for the simplest explanation

testability

the assumption that the explanations of behavior can be tested and falsified through observation; being able to prove yourself wrong

basic research

a type of research conducted with the goal of understanding fundamental processes of phenomena

applied research

type of research conducted with the goal of solving everyday problems

external validity

the degree to which you can apply your findings to the outside world/population

ethnocentrism

the belief that one's own culture is superior to others

independent variable

a factor the researcher believes affects the observed behavior

hypothesis

a proposed explanation based on limited evidence

descriptive or correlational hypothesis

a prediction that your experimental manipulation will have an effect on the variables; compares the means of groups

structure of an empirical journal article

has an introduction (literature review, why the topic is important, how it addresses current issues, and the hypothesis), followed by the method (including participants) and the results

construct

concept or trait that is not tangible, like an emotion

theory

explanation of behavior to be tested

causal inference

drawing a conclusion that one variable causes a change in another

operational definition

a research procedure that results in measurement of the concept; a description of how you defined the behavior. Used to define something (a variable) in terms of a process (set of validation tests) needed to determine its existence

pseudoscience

claims that pose as science but cannot be tested

external validity

the degree to which you can apply your findings back to the population

internal validity

the degree to which a study provides causal information about behavior

reliability

consistency over time; the extent to which a measure is free of error

operational definition

the definition of an abstract concept used by the researcher to measure/manipulate the concept in a study

inter-rater reliability

the measure of the degree to which different observers rate behaviors in similar ways

naturalistic observation

a type of observation done in a natural setting or the person's typical environment

quantitative data

numerical data

qualitative data

non-numerical participant responses/data

variable

a behavior, situation, or characteristic that differs between people

research design

the overall strategy you use to integrate the different components of the study in a logical way; examples include a case study, a correlational/descriptive study, and a quasi-experiment

correlational research

research method that allows you to examine the relationship between two variables

predictor variable

also known as the independent variable; does not change based on something else

outcome variable

also known as the dependent variable; what is being measured in the experiment

correlation coefficient

measures the strength of the relationship between two variables; for example, the Pearson r coefficient
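
A minimal, hand-rolled sketch of the idea behind this card, written in Python with made-up data (the function name and scores are hypothetical, not from the original deck):

    # Pearson r: sum of cross-products of deviations, divided by the
    # product of the deviation magnitudes for each variable.
    from math import sqrt

    def pearson_r(xs, ys):
        n = len(xs)
        mean_x = sum(xs) / n
        mean_y = sum(ys) / n
        cov = sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, ys))
        sx = sqrt(sum((x - mean_x) ** 2 for x in xs))
        sy = sqrt(sum((y - mean_y) ** 2 for y in ys))
        return cov / (sx * sy)

    # Hypothetical example: hours studied vs. exam score.
    print(pearson_r([1, 2, 3, 4, 5], [55, 60, 70, 72, 90]))  # strong positive r

A value near +1 or -1 indicates a strong relationship; a value near 0 indicates little or no linear relationship.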

experimental research

research where the scenario is contrived and highly controlled

quasi-experiment

a type of research design where a comparison is made but the groups aren't randomly assigned

treatment group

the individuals in an experiment that receive a treatment

control group

subjects in an experiment that don't receive the treatment

random assignment

assigning individuals to conditions at random

pre-test/post-test design

allows you to assess the DV level or score before the IV is introduced

time sampling

type of sampling that occurs at set, systematic times

event sampling

type of sampling in which observations are recorded when a behavior or event actually happens

situation sampling

type of sampling that happens at different locations/circumstances

observation with intervention

type of observation that allows you to intervene while observing

observation without intervention

type of observation in a natural setting that examines relationships between variables

structured observation

type of observation used when the event isn't likely to occur on its own; for example, the change-blindness studies where the person asking for directions or a painting is switched

qualitative types of measurement

types of measurement that include narratives, video recordings, etc.

quantitative types of measurement

types of measurement that include a frequency checklist or rating scale. numerical

measurement scales

four levels of measurement that are expressed differently numerically

the four measurement scales

nominal, ordinal, interval, ratio

Likert scale

a scale of responses that measures a participant's agreement or disagreement with different types of statements; for example, 1 = strongly disagree

rating scale

type of interval scale that allows people to express feelings. for example, attractiveness on a 10-point scale

semantic differential

a rating scale that identifies the connotative meaning of objects/words/concepts. can measure an individual's unique, perceived meaning of something

reactivity

this happens when participants know they are being observed and change their behavior

demand characteristics

this happens when participants try to figure out what is being asked of them and adjust their behavior accordingly

observer bias

type of bias introduced when researchers' expectations influence how they observe or record behavior

construct validity

the extent to which a measured variable actually measures the concept it's designed to measure

content validity

to what degree the measure covers the entire domain of the construct

face validity

the degree to which a measure appears, on its face, to measure what it is supposed to measure

internal validity

the degree to which a study provides a good test of the hypothesis

field experiment

type of experiment that is in the natural setting but is manipulated. for example, re-arranging toys in a classroom

internal consistency (alpha)

the degree to which items in the measure are internally correlated and consistent with how the group would score them; items in a scale are consistent with each other

probability sampling

a type of sampling in which each individual has a known, specific probability of being chosen

sample

a group of people chosen from the population to be in a study

sampling bias

type of bias caused by choosing non-random data for analysis so that members of the intended population are less likely to be included

simple random sampling

each person has an equal chance of being selected in this type of sample

stratified sampling

type of sampling where the proportion of a group in the sample is equal to the proportion of that group in a population

systematic random sampling

type of sampling where members are chosen from the larger population according to a random start point and a fixed interval
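
A minimal sketch of the procedure this card describes (a random start point, then a fixed interval), with a hypothetical roster; the function name and data are made up for illustration:

    import random

    def systematic_sample(population, interval):
        # Random start point in [0, interval), then every interval-th member.
        start = random.randrange(interval)
        return population[start::interval]

    # Hypothetical roster of 20 people, sampled at a fixed interval of 5.
    roster = [f"participant_{i}" for i in range(1, 21)]
    print(systematic_sample(roster, 5))  # 4 members, evenly spaced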

haphazard/convenience sampling

a sample made up of whoever happens to be available where they are chosen; for example, sign-up sheets

social desirability

a bias in which participants present themselves in the best possible light, changing their answers unintentionally

debriefing

participants are told the purpose of the study and its benefits

informed consent

telling participants about the study and getting their approval to participate

IRB

Institutional Review Board; oversees research to make sure it is conducted ethically

confidentiality

keeping participants' information private

risk to benefit ratio

weighing the risks against the benefits to make sure the benefits outweigh the risks

mean

average

measures of central tendency

mean, median, mode

descriptive stats

measures of central tendency and dispersion

median

midpoint of scores

mode

most common response, most frequent

normal distribution

a distribution in which the measures of central tendency are all the same; it looks like the bell curve

range

highest score - lowest score

variance

a measure of dispersion; the average of the squares of the deviations of the numbers in the list from their mean

measure of dispersion

variability of scores around a midpoint (usually the mean)

standard deviation

the average amount the scores differ from the mean; the square root of the variance
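
A minimal Python sketch tying together the descriptive-statistics cards above (mean, median, mode, range, variance, standard deviation); the scores are made up, and the population versions pvariance/pstdev from the standard library match the "average squared deviation" definition used here:

    import statistics

    scores = [4, 8, 6, 5, 3, 8, 7]  # hypothetical data

    print(statistics.mean(scores))       # mean: the average
    print(statistics.median(scores))     # median: midpoint of the sorted scores
    print(statistics.mode(scores))       # mode: most frequent score
    print(max(scores) - min(scores))     # range: highest score - lowest score
    print(statistics.pvariance(scores))  # variance: average squared deviation from the mean
    print(statistics.pstdev(scores))     # standard deviation: square root of the variance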

skewed distribution

a distribution with more scores piled up on one side and a tail of fewer scores on the other

kurtosis

how spread apart or close together items are distributed; the degree to which the tails of the distribution contain few scores or many scores

random error

error in measurement caused by factors that vary from one measurement to another

systematic error

error having a non-zero mean, so the effect is not reduced when observations are averaged

standard score

an individual test score expressed as the deviation from the mean score of the group, in units of standard deviation
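
Following the definition above, a standard (z) score is just a score's deviation from the group mean divided by the group's standard deviation; a minimal sketch with hypothetical numbers:

    import statistics

    group = [70, 75, 80, 85, 90]  # hypothetical group scores
    score = 85
    z = (score - statistics.mean(group)) / statistics.pstdev(group)
    print(z)  # how many standard deviations the score lies above the mean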

null hypothesis

the hypothesis that an effect or relationship does not exist in the population; statistical tests result in a decision to reject or not reject the null hypothesis

alpha

a way to measure internal consistency; how internally consistent the items are with how the group would score them (for example, correlating the even items with the odd items)

type 1 error

error made when you find results that aren't there, rejecting the null hypothesis when you shouldn't have

type 2 error

error made when you fail to reject the null hypothesis and miss significant results that are there
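
To make the two error types concrete, a minimal sketch of the reject/fail-to-reject decision; the 0.05 threshold and the p-values are hypothetical choices for illustration:

    def decision(p_value, significance_level=0.05):
        # Reject the null hypothesis only when p falls below the threshold.
        return "reject null" if p_value < significance_level else "fail to reject null"

    # If the null is actually true, "reject null" would be a type 1 error;
    # if the null is actually false, "fail to reject null" would be a type 2 error.
    print(decision(0.03))  # reject null
    print(decision(0.20))  # fail to reject null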

types of sampling techniques

time sampling, event sampling, situation sampling

types of reliability

interrater reliability, split-half reliability, test-retest reliability, Cronbach's alpha

interrater reliability

the degree to which the raters agree

test-retest reliability

type of reliability in which scores from the same measure given at two different times are similar

split-half reliability

type of reliability where you correlate the odd items with the even items

Cronbach's alpha

method of testing scores' internal consistency that indicates the average correlation between scores on all pairs of items on a survey
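
The card above describes Cronbach's alpha in terms of average inter-item correlation; one common way to compute it in practice is the item-variance formula, sketched here in Python with a made-up 3-item, 4-respondent survey:

    import statistics

    def cronbach_alpha(item_scores):
        # item_scores: one list of scores per item, respondents in the same order.
        # alpha = k/(k-1) * (1 - sum(item variances) / variance of total scores)
        k = len(item_scores)
        totals = [sum(vals) for vals in zip(*item_scores)]
        item_var = sum(statistics.pvariance(item) for item in item_scores)
        return k / (k - 1) * (1 - item_var / statistics.pvariance(totals))

    # Hypothetical survey: 3 items answered by 4 people.
    items = [
        [4, 3, 5, 4],  # item 1
        [5, 3, 5, 4],  # item 2
        [4, 2, 5, 3],  # item 3
    ]
    print(cronbach_alpha(items))  # close to 1 = highly internally consistent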

construct validity

the fit between the operational definition of a concept and the conclusions drawn from the data

content validity

to what degree the measure covers the whole domain of the construct

predictive validity

type of validity reflecting how well a measure predicts future performance

probability sampling techniques

survey methods, simple random samples, cluster samples, stratified random samples

validity is based on

reliability is based on