57 Cards in this Set

  • Front
  • Back
Define generalizability
The same as external validity/applicability. Degree to which the conclusions based on one research sample are applicable to another, often larger, population.
What are the five steps scientists take when conducting empirical research?
1. State the problem
2. Design a research study
3. Measure variables
4. Analyze the data
5. Draw conclusions from the research
Define theory
A statement that proposes to explain relationships among phenomena, e.g. a theory of why individuals are attracted to each other.
What is the 'inductive' method of science?
A research process in which conclusions are drawn about a general class of objects or people based on knowledge of a specific member of the class under investigation.
Define 'deductive method'
A research process in which conclusions are drawn about a specific member of a class of objects or people based on knowledge of the general class under investigation.
Define research design
A plan for conducting scientific research for the purpose of learning about a phenomenon of interest
Define internal validity
The extent to which the relationships evidenced among variables in a particular research study are accurate or true; in other words, the extent to which a study speaks to causality. Internal validity means you have evidence that what you did in the study (e.g., the program) caused what you observed (i.e., the outcome).
External validity
Generalizability. The extent to which findings from a research study are relevant to individuals and settings beyond those specifically examined in the study.
Primary research method
A class of research that generates new information on a particular research question.
Laboratory experiment
A type of research method in which the investigator manipulates IVs and assigns subjects to experimental conditions.
Quasi experiment
Research method used for conducting studies in field situations where the researcher may be able to manipulate some IVs. (The people in the study do not perceive the setting as having been created to conduct the research).
Questionnaire
A type of research method in which subjects respond to written questions posed by the investigator
Observation
Research method in which investigator monitors subjects for the purpose of understanding their behavior and culture
Secondary research method
Class of research methods that look at existing information from research studies that used primary methods.
Meta analysis
A quantitative secondary research method for summarizing and integrating the findings from original empirical research studies.
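A minimal illustration (not from the card) of how a "bare-bones" meta-analysis might combine findings: a sample-size-weighted mean of study correlations. The study values in the sketch below are hypothetical, and real meta-analyses usually add corrections (e.g., for sampling and measurement error) that are omitted here.

```python
# Minimal sketch of a "bare-bones" meta-analysis: a sample-size-weighted
# mean of correlations from several primary studies. Study values are
# hypothetical, purely for illustration.
studies = [
    {"n": 120, "r": 0.31},  # study 1: sample size and observed correlation
    {"n": 85,  "r": 0.22},  # study 2
    {"n": 240, "r": 0.18},  # study 3
]

total_n = sum(s["n"] for s in studies)
weighted_mean_r = sum(s["n"] * s["r"] for s in studies) / total_n

# Sample-size-weighted variance of the observed correlations around the mean
weighted_var = sum(s["n"] * (s["r"] - weighted_mean_r) ** 2 for s in studies) / total_n

print(f"Weighted mean r = {weighted_mean_r:.3f}, observed variance = {weighted_var:.4f}")
```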
Qualitative research
A class of research methods in which the investigator takes an active role in interacting with the subjects he or she wants to study.
What three kinds of purposes did Maxwell (1998) propose for conducting a scientific study?
Personal: e.g., to further one's own career in research
Practical: focused on accomplishing something
Research: focused on understanding something
Ethnography
A research method that utilizes field observation to study a society's culture
Criterion variable
A variable that is a primary object of a research study; it is forecasted by a predictor variable.
Predictor variable
A variable used to predict or forecast a criterion variable
Variability
The dispersion of numerical values evidenced in the measurement of an object or concept
Standard deviation
A statistic that shows the spread or dispersion of scores around the mean in a distribution of scores
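A note on the formula (not given on the card): assuming n scores x_1, ..., x_n with mean x-bar, the usual sample standard deviation is

```latex
% Sample standard deviation of n scores x_1, ..., x_n with mean \bar{x}
s = \sqrt{\frac{1}{n-1}\sum_{i=1}^{n}\left(x_i - \bar{x}\right)^{2}}
```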
Participants in psychological research are granted which 5 rights as specified in the APA code of ethics?
1. right to informed consent
2. right to privacy
3. right to confidentiality
4. right to protection from deception
5. right to debriefing
What 3 features distinguish empirical research conducted in industry?
1. Research questions in industry often arise from organizational problems, e.g., absenteeism
2. The way results are used (e.g., in industry, if research results are positive, they are used to 'sell' the findings within an organization)
3. Research is done to enhance an organization's efficiency
Define criteria
Standards used to help make evaluative judgments
Define empirical research
Arriving at conclusions based on observations.
Define hypothesis
Speculation about the relationship between variables
Define low reactivity
A condition in which people are more likely to respond honestly, e.g., when measuring something sensitive such as workplace theft.
Define 'reactivity'
The extent to which people change their behavior, or their reports of their behavior, because they know they are being studied or observed.
List three disadvantages of case studies
Small sample size
Low generalizability
No cause-effect conclusions
List one advantage of a case study
Provides in-depth, detailed information
List 1 advantage & 3 disadvantages of observational studies
Rich source of information
Reactivity (people might modify their behavior if being observed)
Can't draw conclusions regarding cause and effect (don't know direction of causality + findings could be due to a 3rd variable)
Personal biases and expectations of the observer might influence what is recorded
Laboratory experiment
A research method in which the investigator manipulates IVs and assigns subjects to experimental and control conditions
Advantage and disadvantage of lab experiments
Adv: more control over extraneous variables
Disadv: unrealistic, artificial conditions
Advantage / disadvantage of field (quasi) experiments
Adv: More realistic, natural conditions
Disadv: less control over extraneous variables
Example of a threat to internal validity
Rosen (1970) study in which foremen were replaced with actors. The actors were told to use different management styles (autocratic, democratic, or no particular style) over 10 weeks. Nobody changed their productivity no matter what the management style was. What emerged instead was a 'demand characteristics' effect: in week 10 the employees found out they were in a study, and only then did they change their productivity. That is a good example of a threat to internal validity, because the change was caused by knowing about the research study rather than by the management styles themselves.
Example of a threat to external validity
Gordon, Slade & Schmitt (1986) compared studies that used college students as subjects with studies that did not, and found big differences in results.
Ecological validity
For a research study to possess ecological validity, the methods, materials and setting of the study must approximate the real-life situation under investigation. Unlike internal and external validity, ecological validity is not necessary to the overall validity of a study.
Example of threat to ecological validity
A study using 'paper people': researchers took fake resumes with a gender-neutral name and manipulated the attached picture, looking at the extent to which race, gender, and attractiveness affected people's evaluations. They found a variety of results, but one issue was the use of 'paper people': to what extent are judgments about a resume with a photo valid compared with judgments about real applicants?
Other example: mock-jury research is designed to study how people might act if they were jurors during a trial, but many mock-jury studies simply provide written transcripts or summaries of trials, and do so in classroom or office settings. Such experiments do not approximate the actual look, feel and procedure of a real courtroom trial, and therefore lack ecological validity. However, the more important concern is that of external validity: if the results from such mock-jury studies generalize to real trials, then the research is valid as a whole, despite its ecological shortcomings. Nonetheless, improving the ecological validity of an experiment typically improves the external validity as well.
Advantages of surveys
Inexpensive
Efficient
Everybody gets the same stimulus (the survey itself), unlike an interview, where the interviewer might be tired, use a different tone, etc.
Low reactivity: people are more likely to respond honestly, e.g., when measuring something sensitive such as workplace theft
Define reactivity
The extent to which people change their behavior, or their reports of their behavior, because they know they are being studied or observed.
Disadvantages of surveys
People often only give responses when they feel strongly (e.g. maybe only the people dissatisfied with their jobs will respond to a job satisfaction survey)
Tend to get low response rate – a response rate of 50% is considered good
People sometimes don’t understand questions
Some people don’t read
Advantages of interviews
Good for non-literate populations
Better response rates
Less bias in responses
Disadvantages of interviews
Time-consuming
Expensive
Stimulus may differ from interview to interview
Reactivity
Define non-reactive measure
Measures that don't require any cooperation from the population being studied; they look at traces of behavior. For example, assessing whether a program designed to help employees be healthier is working might involve looking at vending machine sales (water vs. Coke).
Advantages and Disadvantages of non-reactive measures
Adv: no change in subjects' behavior
Disadv: open to misinterpretation (e.g., worn carpet near a museum exhibit could just mean the exhibit is on the way to the bathroom)
Correlation
A measure of the relationship between variable X and variable Y.
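A note on notation (not on the card): later cards refer to Pearson's r, whose standard formula for paired scores (x_i, y_i) is

```latex
% Pearson correlation between paired scores (x_i, y_i), i = 1, ..., n
r = \frac{\sum_{i=1}^{n}\left(x_i - \bar{x}\right)\left(y_i - \bar{y}\right)}
         {\sqrt{\sum_{i=1}^{n}\left(x_i - \bar{x}\right)^{2}}\;
          \sqrt{\sum_{i=1}^{n}\left(y_i - \bar{y}\right)^{2}}}
```

It ranges from -1 (perfect negative linear relationship) through 0 (no linear relationship) to +1 (perfect positive linear relationship).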
Zero correlation means....? What would scatterplot look like?
No relationship between the variables.
The scatterplot would show dots scattered all over, with no discernible pattern.
Positive correlation means....?
Variables changing in the same direction (as one variable increases, the other variable increases)
Negative correlation (give an example)
As one variable increases, the other decreases.
Example: people with higher salaries have shorter distances to the front door.
Reliability
Consistency in measurement. In order for us to measure something, we need to be measuring it consistently. If I measured the height of every student every single day, it would fluctuate slightly, e.g. due to the person slouching or due to the tester not measuring correctly. This is termed unreliability.
Validity
Are we measuring what we're supposed to be measuring? A standard for evaluating tests that refers to the accuracy or appropriateness of drawing inferences from test scores. For example, to check the validity of a selection test, we can give the test to job applicants, hire them, and after a while measure their job performance, then compute the correlation between selection-test scores and job-performance scores. If the test scores predict job performance, we can say the test has validity.
Give an example of ‘Restriction of range’
A study showed that a spelling test was not a good predictor of job performance for directory-assistance operators. If you don't have really high or really low performers, you don't have the whole range of scores on variable X and variable Y, and because of this the correlation coefficient between X and Y will be lower than what might exist in reality. The really bad spellers weren't hired in the first place and the really good spellers had been promoted, so what was left was a whole bunch of people in the moderate range.
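A small simulation can show the effect. The sketch below uses hypothetical, randomly generated data (not the directory-assistance data): it creates a predictor and a correlated criterion, then recomputes the correlation after keeping only the middle of the predictor's range.

```python
# Illustrative simulation of restriction of range with hypothetical data:
# the correlation in a middle-range subsample is smaller than in the full sample.
import numpy as np

rng = np.random.default_rng(0)
n = 5000
spelling = rng.normal(0, 1, n)                        # predictor X
performance = 0.6 * spelling + rng.normal(0, 0.8, n)  # criterion Y, correlated with X

r_full = np.corrcoef(spelling, performance)[0, 1]

# Keep only the "moderate" spellers, mimicking a workforce where poor spellers
# were never hired and excellent spellers were promoted away.
mask = (spelling > -0.5) & (spelling < 0.5)
r_restricted = np.corrcoef(spelling[mask], performance[mask])[0, 1]

print(f"r in full sample:       {r_full:.2f}")
print(f"r in restricted sample: {r_restricted:.2f}")
```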
Problems with Pearson's R correlation coefficient
Doesn't capture non-linear relationships (see the sketch after this card)
Restriction of range
Correlation does not mean causation
Direction of causality not known
Possible 3rd variable
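The first problem (missing non-linear relationships) is easy to demonstrate with a hypothetical example: a perfect U-shaped relationship for which Pearson's r comes out near zero.

```python
# Hypothetical illustration: a perfect non-linear (U-shaped) relationship
# that Pearson's r almost completely misses.
import numpy as np

x = np.linspace(-3, 3, 201)
y = x ** 2                     # y is completely determined by x, but not linearly

r = np.corrcoef(x, y)[0, 1]
print(f"Pearson r = {r:.3f}")  # close to 0 despite the perfect relationship
```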
Example of a 3rd variable
The correlation between infant mortality and tons of cement poured; a third variable, such as a country's level of industrialization, could drive both.
What 4 APA ethical issues relate to I/O psychology?
Responsibility: psychologists have to accept the responsibility to ensure their services are used appropriately. It’s not enough for someone to provide services but you have to ensure they are used appropriately.
Competence: psychologists have to recognize the boundaries of their training. You can’t do things that you haven’t been trained to do.
Confidentiality: information psychologists obtain about individuals in the course of their work must be kept confidential.
Assessment techniques: in psychology, we’re trying to measure people: skills, abilities, etc. When we are trying to assess these things, we are not perfect at it. We are never perfect at selecting people for jobs.
What are 'demand characteristics'?
An experimental effect where participants form an interpretation of the experiment's purpose and unconsciously change their behavior accordingly.