92 Cards in this Set

What are empirical quantitative approaches?
Test theories in a <b>controlled setting</b>

Collect <b>numerical data</b>

Uses large samples

Analyze data via <b>statistical methods</b>

With the quantitative method, understanding behavior involves the scientific method: developing theories; generating hypotheses to test those theories; designing experiments or non-experiments to test those hypotheses, including operationally defining variables; collecting numerical data from many participants; and analyzing those data using statistics.
What are interpretive qualitative approaches?
In-depth description of <b>individuals in their natural setting</b>

Data = words rather than numbers

Small number of participants

Data are <b>content analyzed for major themes</b>

The goal of the qualitative approach is to obtain an in-depth description of people’s behavior in their natural settings. The data collected are non-numerical. Instead of focusing on numbers, the<b> focus here is on words</b>. Also, with qualitative methods, only a <b>small number of participants </b>is required. In terms of analyzing the data, researchers read through what participants have said, trying to identify major themes or ideas that emerge from the discussions.
Example of quantitative and qualitative?
Let’s suppose you wanted to know what sorts of things women worry about when they have breast cancer. You might have some ideas, but in order to be sure, it would be best to ask the women directly. One way you could do this is to administer a questionnaire to a large group of women who have had breast cancer. This would be a quantitative study. When you do your literature search, however, you discover that no such questionnaire exists. That means you’ll have to develop your own. In order to develop a questionnaire, you need to know what to ask your participants. What questions or items are you going to put in your questionnaire when you don’t know what it is these women worry about? One solution to this problem is to conduct a qualitative study, in which you interview the women and ask them to describe their experience with breast cancer. In the interviews, you might notice that certain themes come up again and again. For example, the women might tell you they worry about their families, or disfigurement, or
What is naturalistic observation?
Observations made in a natural setting<b> over a period of time</b>, using a variety of techniques

Describe and understand how people live, work, and experience their particular setting

Researchers become immersed in the situation

Goal = accurate description & interpretation

Strategies = observation, interviewing, and surveying documents


Advantages/disadvantages of qualitative methods?
It is most useful when investigating complex social settings both to understand the settings and to develop theories based on the observations.

It is useful for gathering data in real-life settings and generating hypotheses for later experiments.

The inability to control the setting, however, makes it challenging to test well-defined hypotheses under precisely specified conditions.
What is systematic observation?
Careful observation of <b>specific behaviors in a particular setting</b>

Less global than naturalistic observation

Observations are <b>quantifiable</b>

Testing specific a priori hypotheses

Observations are summarized using a <b>coding system </b>

Should be simple & easy to apply

Pre-established coding systems available

Examples of systematic observation?
Example: The “Just for Laughs” gags are examples of contrived situations designed to elicit specific behaviors.

Example: Let’s say you were interested in finding out who someone is more likely to help, a man or a woman. You could devise a scenario where you have a man or woman confederate drop a bag of groceries just as they are leaving the grocery store. You would then record who comes to their assistance.

This research method is much less global than naturalistic observation research, and is used more often from a quantitative rather than qualitative perspective. The researcher is interested in only a few very specific behaviors, the observations are quantifiable, and the researcher typically has developed prior hypotheses about the behaviors.
What type of reliability is systematic observation concerned with?
<B>inter-rater reliability</b>

When conducting systematic observation, two or more raters are usually used to code behavior.

Reliability is indicated by high agreement. Researchers strive to attain 80% agreement or higher among raters. Achieving high inter-rater reliability can be difficult with live coding. In fact, the coding system needs to be quite simple (e.g., counting the number of times a participant touches her face during an interview) for high inter-rater reliability to occur.
What is reactivity?
Just as in naturalistic observation, the <B>presence of the observer</b> (be it a video camera or a live person) can affect people’s behaviors. As mentioned earlier, reactivity can be reduced by concealed observation.

Example: The use of one-way mirrors and hidden cameras can conceal the presence of an observer in the laboratory.
What is sampling of behaviours?
Researchers must decide <b>how long to take observations for</b>. For many research questions, samples of behavior taken over a long period provide more accurate and useful data than single, short observations.

Example: If you were interested in studying aggression in <B>hockey</b>, it would be wise to follow a team throughout an entire season rather than just a few games. Aggression is likely to increase as the games become more important, e.g., playoffs as opposed to season openers.
Why conduct surveys?
Surveys are a common and important method of studying behaviour.

Surveys provide us with a methodology for asking people to tell us about themselves.

We tend to think of survey data as providing a “snapshot” of how people think and behave at a given point in time. However, the survey method is also an important way for researchers to study relationships among variables and ways that attitudes and behaviours change over time or among different groups of people.
What are response sets?
<B>Tendency to answer all questions in a particular manner</b>

a pattern people use when responding to items

e.g., an end-of-course questionnaire - all 5's, all 1's, a zigzag, etc.

<B>Social desirability: “faking good”</b>

if you ask people about social behaviours - particularly discrimination and racism - they tend to give the politically correct response because they want to be viewed as socially desirable

<B>Scales available to detect this response set</b>
How does one construct questions to ask?
Define the research objectives:

<B>Attitudes and beliefs? </b>- something like a Likert scale (e.g., 0-7: agree, neutral, disagree, etc.)

<B>Facts and demographics? </b>- want to create groupings

e.g., is your household income between $0-30,000, $30,000-59,000, etc.?

<B>Behaviors? </b>- Likert scale - does the behaviour happen frequently, some of the time, rarely, or never?

problem: context (the behaviour might only happen in certain situations for that individual)
What's "rarely"? What's "frequently"? People might not all agree
What are double-barrelled questions?
asks more than one question at a time
What are loaded questions?
has some type of personal slant
What is negative wording?
wording the statement in the negative

e.g., "Don't run!"

the reverse would be "Please walk"

people get confused
What is yea-saying/nay-saying?
the tendency to agree (or disagree) with every item regardless of its content

true/false and agree/disagree formats don't give a wide range of options and invite this response set
What approach favours closed-ended questions? open-ended?
<B>Closed-ended</b> questions tend to be favoured by those using a <B>quantitative approach</b>. It is a more structured approach; the responses are easier to code and the response alternatives are the same for everyone.

<B>Open-ended</b> questions, on the other hand, tend to be favoured by those adopting a <B>qualitative approach</b>. The questions are less structured and thus more time is required to code the responses.
What is a graphic rating scale?
requires a mark along a continuous 100-millimetre line that is anchored with descriptions at each end. A ruler is then placed on the line to obtain the score on a scale that ranges from 0 to 100 or 0 to 10.
What is a semantic differential scale?
Respondents rate any concept—people, objects, behaviours, ideas—on a series of opposite adjectives using 7-point scales.

Concepts rated using semantic differential scales are <B>rated along three basic dimensions</b>: The <B>first and most important is evaluation</b> (e.g., adjectives such as good–bad, wise–foolish, kind–cruel); the <B>second is activity</b> (active–passive, slow–fast, excitable–calm); and the <B>third is potency</b> (weak–strong, hard–soft, large–small).
What are interview surveys?
Face-to-face interviews
Telephone interviews
Focus group interviews

Problem: Interviewer bias
What is a panel study?
Panel studies can be done in <b>multiple waves</b>. In a “two-wave” panel study, people are surveyed at two points in time; in a “three-wave” panel study, there are three surveys; and so on. Panel studies are particularly important when the research question addresses the relationship between one variable at “time one” and another variable at some later “time two.”

Example: Arim and colleagues (2011) used survey data from the Canadian National Longitudinal Survey of Children and Youth (NLSCY) to examine children’s aggressive behaviours. The NLSCY surveys new parents—and the children themselves when they are old enough—every two years, on a variety of variables relating to family life, health, and development. Arim et al. showed that children who reported feeling more nurtured and loved by their parents right before puberty (10 years old for girls, 12 years old for boys) engaged in less direct aggression (e.g., fist fights) and less indirect aggression (e.g., spreading rumours) two years later.
What are confidence intervals?
<B>Level of confidence that the true population value lies within an interval of the obtained sample</b>

Sampling error or margin of error

if you only measure university students, you are not getting a good idea of what the entire population thinks
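As a rough sketch of how the margin of error relates to a confidence interval for a survey percentage (the numbers and the `margin_of_error` helper are made up for illustration, assuming simple random sampling):

```python
import math

def margin_of_error(p, n, z=1.96):
    """Margin of error for a sample proportion p with sample size n
    (z = 1.96 corresponds to a 95% confidence level)."""
    return z * math.sqrt(p * (1 - p) / n)

# Suppose 52% of 1,000 survey respondents favour a policy:
p, n = 0.52, 1000
moe = margin_of_error(p, n)   # about 0.031, i.e. +/- 3.1 percentage points
ci = (p - moe, p + moe)       # 95% confidence interval around the sample value
```

With 95% confidence, the true population value would lie between roughly 49% and 55%; note that quadrupling the sample size only halves the margin of error.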
What are sources of bias?
<B>The sampling frame </b>

only looking in one place

e.g., surveying people at a gym about whether they like to exercise - obviously the people there do

<B>Response rate </b>

how many people responded?

maybe the people who didn't respond work 12-hour days, while those who did respond were stay-at-home moms - you'd only be getting one group
What are self-report measures?
Typically questionnaires or scales

Answer on a continuum anchored by polar opposites (e.g., Likert scale)

Psychologists often use self-report measures to study psychological attributes, such as self-esteem and attitudes.
What is reliability of measures?
Consistency or stability of a measure

True score + measurement error

- true score is the person’s real score on the variable
The most common correlation coefficient when discussing reliability is _____________
the Pearson product-moment correlation coefficient. The Pearson correlation coefficient (symbolized as r) can range from -1.00 to +1.00. A correlation of 0.00 tells us that the two variables are not related at all.
How high should a correlation be before it is accepted as reliable?
0.80
What is alternate forms reliability?
Given that test-retest reliability involves administering the same test twice, the correlation might be artificially high because the individuals remember how they responded the first time.

<B>Alternate forms reliability</b> involves administering two different forms of the same test to the same individuals at two points in time.
What is internal consistency reliability?
Internal consistency reliability <b>assesses how well a certain set of items relate to each other</b>. Because all items measure the same variable, they should yield similar or consistent results. Importantly, <b>responses are gathered at only one point in time. </b>


<b>Split-half reliability</b> - we can divide a test into smaller parts (two, for example) and see if the different parts correlate well with one another. If they correlate highly with one another, then the test is reliable.


<b>Cronbach’s alpha</b> - the researcher calculates how well <b>each item relates to every other item</b>. This procedure generates a large number of inter-item correlations. The value of Cronbach’s alpha is based on the average of all the inter-item correlations and the number of items in the measure.


<b>Item-total correlations</b> - it is also possible to examine the correlation of each item score with the total score based on all items.
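Cronbach's alpha can be sketched from scratch with the usual formula, alpha = k/(k-1) * (1 - sum of item variances / variance of total scores). The item scores below are invented for illustration:

```python
def variance(xs):
    """Population variance of a list of numbers."""
    m = sum(xs) / len(xs)
    return sum((x - m) ** 2 for x in xs) / len(xs)

def cronbach_alpha(items):
    """items: one inner list of scores per item, all answered by the same people."""
    k = len(items)
    totals = [sum(scores) for scores in zip(*items)]  # each person's total score
    item_var = sum(variance(item) for item in items)
    return k / (k - 1) * (1 - item_var / variance(totals))

# Hypothetical 3-item scale answered by 5 respondents:
items = [
    [4, 5, 3, 4, 2],
    [4, 4, 3, 5, 2],
    [5, 4, 2, 4, 3],
]
alpha = cronbach_alpha(items)   # roughly 0.86 for these made-up data
```

Values near or above 0.80 are conventionally taken as acceptable internal consistency, mirroring the 0.80 reliability guideline above.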
What is interrater reliability?
Interrater reliability is the extent to which the raters<B> agree in their observations. </b>

High interrater reliability is obtained when most of the observations result in the same judgment; in other words, when the two raters code the same behaviours the same way (e.g., as cooperative).

A commonly used indicator of interrater reliability is called Cohen’s kappa.
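Percent agreement and Cohen's kappa can be sketched as follows; kappa corrects raw agreement for the agreement expected by chance. The ratings are invented for illustration:

```python
from collections import Counter

def percent_agreement(r1, r2):
    """Proportion of observations on which the two raters agree."""
    return sum(a == b for a, b in zip(r1, r2)) / len(r1)

def cohens_kappa(r1, r2):
    """Cohen's kappa: agreement corrected for chance agreement."""
    n = len(r1)
    po = percent_agreement(r1, r2)                     # observed agreement
    c1, c2 = Counter(r1), Counter(r2)
    pe = sum((c1[c] / n) * (c2[c] / n)                 # chance agreement
             for c in set(r1) | set(r2))
    return (po - pe) / (1 - pe)

# Two raters coding ten behaviours as cooperative (C) or not (N):
r1 = list("CCNCNCCNCC")
r2 = list("CCNCCCCNNC")
agree = percent_agreement(r1, r2)   # 0.8, i.e. 80% raw agreement
kappa = cohens_kappa(r1, r2)        # about 0.52 once chance is removed
```

Note how kappa is noticeably lower than raw agreement: with only two categories, the raters would agree fairly often by chance alone.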
What is reliability and accuracy of measures?
Reliability indexes indicate amount of error but not accuracy

A measure can be highly reliable but not accurate
What is a nominal scale?
Categories with<B> no meaningful numeric value</b>

Ex.: males/females; experimental condition/control condition

Impossible to define any quantitative values or differences across categories
Sometimes nominal variables are called ________________.
categorical variables
What is an ordinal scale?
<B>Rank ordering</b> with numeric values

Ex.: restaurant ratings; birth order; Olympic medals

Has magnitude; values are smaller or larger than the next

Interval between items not known
What is an interval scale?
Has magnitude; values are smaller or larger than the next

Interval between items is known and is meaningful

No true zero point

Ex.: <B>Intelligence score, temperature</b>
What is a ratio scale?
Has magnitude; values are smaller or larger than the next

Interval between items is known and is meaningful

Has a true zero point

Ex.: <B>reaction time, duration of response</b>
What are the three approaches of analyzing research results?
Comparing group percentages

Correlating individual scores

Comparing group means
What is a frequency distribution?
<B>indicates the number of participants who receive or select each possible score on a variable</b>. There are several ways to graph frequency distributions.

For our purposes, we will focus on four types of <B>graphs</b>: pie charts, bar graphs, frequency polygons, and histograms.
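Tabulating a frequency distribution is a one-liner with the standard library; a sketch with made-up quiz scores:

```python
from collections import Counter

# Hypothetical quiz scores for 15 participants:
scores = [3, 4, 4, 5, 2, 4, 3, 5, 4, 3, 2, 4, 5, 3, 4]
freq = Counter(scores)        # maps each score to the number of participants
dist = sorted(freq.items())   # ordered pairs, ready for a table or histogram
```

Here `dist` is `[(2, 2), (3, 4), (4, 6), (5, 3)]`: two people scored 2, four scored 3, and so on.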
What are bar graphs used for?
Bar graphs are commonly used for comparing group <I>means </i>but can also be used for comparing group <I>percentages</i>.
What is a frequency polygon?
A frequency polygon <B>uses a line to represent frequencies</b>.

This is most useful when the data represent interval or ratio scales

In a graph of a modelling experiment, for example, a solid line might represent the no-model group and a dotted line the model group.

Such a graph <B>does not allow us to compare group means</b>, but it does help us to visualize the distribution of aggressive acts in each group.
What is a histogram?
A histogram uses bars to display a frequency distribution for a quantitative variable.

In this case, the scale values along the X axis are continuous and show increasing amounts of a variable such as age, blood pressure, or stress.

In a histogram, the bars are drawn next to each other, which reflects the fact that the variable on the X axis is measured in continuous values.
What are the two measures of descriptive statistics?
<B>Central Tendency</b>: mean, median, and mode

<B>Variability</b>: standard deviation and range
What is central tendency?
Mean: the average of all scores

Median: the score that divides the group in half

Mode: the most frequent score
What is variabillity?
Standard deviation: average deviation of scores from the mean

Closer to the mean = less variation

Range: spread of scores
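Python's `statistics` module computes all five of these directly; a sketch with made-up scores:

```python
import statistics

scores = [2, 4, 4, 4, 5, 5, 7, 9]

mean = statistics.mean(scores)      # average of all scores: 5.0
median = statistics.median(scores)  # score dividing the group in half: 4.5
mode = statistics.mode(scores)      # most frequent score: 4
sd = statistics.pstdev(scores)      # population standard deviation: 2.0
rng = max(scores) - min(scores)     # spread of scores: 9 - 2 = 7
```

Note that mean, median, and mode need not agree; they coincide only when the distribution is symmetric and unimodal.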
What is a common trick that is sometimes used by scientists and advertisers?
they exaggerate the distance between points on the measurement scale to make the results appear more dramatic than they really are.
What is Pearson <I>r</i>?
the correlation coefficient


Strength of relationship
Direction of relationship
Values range from -1.00 to +1.00
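A minimal from-scratch sketch of the Pearson correlation (the hours/score data are invented; in practice you would use a statistics package):

```python
import math

def pearson_r(xs, ys):
    """Pearson product-moment correlation between two score lists."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = math.sqrt(sum((x - mx) ** 2 for x in xs))
    sy = math.sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (sx * sy)

# Hypothetical hours-studied vs. exam-score data for five students:
hours = [1, 2, 3, 4, 5]
score = [52, 58, 63, 70, 77]
r = pearson_r(hours, score)   # close to +1: a strong positive relationship
```

The sign of `r` gives the direction of the relationship and its absolute value gives the strength.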
What is the restriction of range?
The problem of restriction of range occurs when the individuals in your sample are very similar or homogeneous on the variable you are studying.

Example: If you are studying age as a variable, for instance, testing only 6- and 7-year-olds will reduce your chances of finding age effects, compared to testing people aged 6 through 65.
What is effect size?
Refers to the<i> <B>strength of association</b></i> between variables

Pearson r is one indicator of effect size

Reporting effect size provides a scale of values that is consistent across all types of studies
What are the differences in effect size?
Small effects near r = .15
Medium effects near r = .30
Large effects above r = .40
Effect size - What is squared correlation coefficient?
Squaring the coefficient (r²) and multiplying by 100 transforms the value of r into a percentage

Percent of shared variance between the two variables

If you are studying factors that contribute to people’s weight, you would want to examine the relationship between weights and scores on the contributing variable. One such variable might be sex: In actuality, the correlation between sex and weight is about .70 (with males weighing more than females). That means that 49 percent (squaring .70 and multiplying by 100) of the variability in the weights is accounted for by variability in sex.
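The squaring step in this example can be checked directly (using the .70 correlation given in the text):

```python
r = 0.70                          # correlation between sex and weight, per the text
shared_variance = r ** 2 * 100    # percent of variability in weight accounted for
# shared_variance is 49 percent, matching the worked example
```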
What is Cohen's <I>d</i>?
Used in experiments with <B>two or more treatment conditions</b>

Describes the magnitude of the effect of the IV on the DV

Cohen’s d expresses effect size in terms of <B>standard deviation units</b>
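A sketch of Cohen's d using a pooled standard deviation for two independent groups (the group scores are invented for illustration):

```python
import math

def cohens_d(g1, g2):
    """Cohen's d for two independent groups, using the pooled SD."""
    n1, n2 = len(g1), len(g2)
    m1, m2 = sum(g1) / n1, sum(g2) / n2
    ss1 = sum((x - m1) ** 2 for x in g1)   # sum of squared deviations, group 1
    ss2 = sum((x - m2) ** 2 for x in g2)   # sum of squared deviations, group 2
    pooled_sd = math.sqrt((ss1 + ss2) / (n1 + n2 - 2))
    return (m1 - m2) / pooled_sd

# Hypothetical treatment vs. control scores:
treatment = [7, 8, 6, 9, 7, 8]
control = [5, 6, 5, 7, 6, 5]
d = cohens_d(treatment, control)   # close to 2 standard deviations
```

A d near 2 would mean the treatment mean sits about two standard deviations above the control mean, which is a very large effect by conventional guidelines.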
What is construct validity?
The degree to which the operational definition of the measure reflects the construct

<B>Are you measuring what you say you are measuring?</b>

For example, you can’t see self-esteem. It is an abstract concept. So, how do you know that the self-esteem measure you have developed is in fact measuring self-esteem? You do this by assessing different types of construct validity.
What is face validity?
The simplest way to argue that a measure is valid is to suggest that <B>the measure appears to accurately assess the intended variable</b>. This is called face validity.

Face validity is not very sophisticated; it involves only a judgment of whether, given the theoretical definition of the variable, the content of the measure appears to actually measure the variable.

Example: A measure of social support that asks people if they feel they have someone they can turn to in times of need has good face validity because it asks a question that one would see as relevant to the construct of social support.
What is content validity?
is based on <B>comparing the content of the measure with the theoretical definition of the construct</b>.

Example: The standard IQ test (the Wechsler Adult Intelligence Scale) has good content validity because it assesses skills associated with intelligence, such as vocabulary, arithmetic, reasoning, and comprehension.


Both face validity and content validity focus on assessing whether the content of a measure reflects the meaning of the construct being measured.
What is predictive validity?
Research that uses a measure to <B>predict some future behaviour</b> is using the predictive validity approach. Thus, with predictive validity, the criterion is some future behaviour.

Example: One can argue that future convictions are exactly what one would expect from criminals who are truly psychopathic; therefore, a scale that intends to measure psychopathy among criminals should be able to predict future convictions.
What is concurrent validity?
is assessed by research that examines the <B>relationship between the measure and a criterion behaviour at the same time </b>(i.e., concurrently).

A common method is to study whether two or more groups of people differ on the measure in expected ways.

Example: If you are studying helping behavior, you would expect that those high in empathy should show greater helping behavior than those low in empathy. If this is what you find, you have demonstrated high concurrent validity.
What is convergent validity?
is the <B>extent to which scores on the measure in question are related to scores on other measures of the same construct or similar constructs</b>.

Measures of similar constructs should “converge”—for example, one measure of psychopathy should correlate highly with another psychopathy measure or a measure of a similar construct.

Example: Perhaps you are interested in studying social support and online networks. You find a measure of social support but you realize that the questions need to be modified to suit your particular research question. Whenever you modify a questionnaire in such a manner, there is a chance that the validity is affected. In order to be certain that your modifications have not had an adverse effect on validity, you can have your participants complete both questionnaires, the original version and your modified version. If the scores on both questionnaires converge, that is, they are highly correlated, then convergent validity is high. This means that your version still measures social support.
What is discriminant validity?
If you wanted to further demonstrate the validity of your modified questionnaire, you could add in another scale that measures an unrelated construct, such as neuroticism. This allows you to establish discriminant validity. The scores from your social support measure should be unrelated (show a low correlation) to the score obtained from the measure of neuroticism. If this is what you observe, then discriminant validity is high. This means that your social support measure discriminates between the construct being measured and other unrelated constructs.
What is evidence for construct validity?
The more methods used, the better

Sometimes a measure is valid in one context but not another

Must be tested and reassessed
What is reactivity of measures?
Measure is reactive if awareness of being measured <B>changes an individual’s behavior</B>

Measures of behavior vary in terms of their potential reactivity

Can allow for familiarization & use unobtrusive measures
What is quasi-experimental design?
Used when the control (e.g., random assignment) of true experiments cannot be achieved

Lower internal validity than true experiments

Use only when<B> true experimentation is not possible</b>
What is the one-group posttest?
The simplest of the quasi-experimental designs is the one-group posttest only. In this design, a researcher compares the responses of a number of individuals exposed to the same event.

Example: Suppose you want to investigate whether sitting close to a stranger will cause the stranger to move away. You might try sitting next to a number of strangers and measure the number of seconds that elapse before they leave.

This design is straightforward and simple, but it lacks a crucial element of a true experiment: a control or comparison group. Without a basis for comparison, it is difficult to tell whether the stranger moved away because of how close you were sitting or for some other reason. For this reason, internal validity is compromised in a one-group posttest-only design.
What is the one-group pretest posttest design?
Add a baseline measure to provide a basis for comparison
Quasi-experimental design - what are threats to internal validity?
History effects

Maturation effects

Testing effects

Instrument decay

Regression toward the mean

Can be overcome by introducing a control group into the experiment

An equivalent control group is preferable but not always possible in quasi-experimental designs
What does <I>WEIRD</i> stand for?
Western, Educated, Industrialized, Rich, and Democratic
What are characteristics of volunteers?
more highly educated, more in need of approval, and more social.
What is generalization as statistical interaction?
The problem of generalization can be thought of as a statistical interaction

Include the subject variable as another independent variable in the study

No interaction = generalizability

Example: Suppose a study examines the relationship between crowding and aggression among males and reports that crowding is associated with higher levels of aggression. You might then question <B>whether the results are generalizable to females</b>.
What role do experimenters play with regards to generalization?
The person who actually conducts the experiment is the source of another generalization problem. One precaution we need to take is to ensure that the influence the experimenter has on participants is constant throughout the experiment. Even when the experimenter remains consistent across groups, there is still the possibility that experimenter characteristics, such as friendliness or experience, and even gender can influence research results.

Example: Rabbits learn faster when trained by experienced experimenters.

Example: Participants seem to perform better when tested by an experimenter of the opposite sex.
Should a pretest be given?
Helps assess possible mortality effects

Can use a Solomon four-group design to assess any interaction between the IV and the pretest variable
What are the two types of study replications?
exact replications and conceptual replications.

What is an exact replication?
An exact replication is an attempt to replicate precisely the procedures of a study to see whether the same results are obtained. A researcher who obtains an unexpected finding will frequently attempt a replication to make sure that the finding is reliable and not simply a Type I error. Often, exact replications occur when a researcher builds on the findings of a prior study—this can be called a <i>replication with extension.</i>
What are conceptual replications?
The use of different procedures to replicate a research finding

The IV is manipulated in different ways than in the original study

The DV can also be measured differently

Conceptual replications are extremely important in the social sciences because the specific manipulations and measures are usually operational definitions of complex variables. A crucial generalization question is whether the relationship holds when other ways of manipulating or measuring the variables are studied. For this reason, conceptual replications are even more important than exact replications in furthering our understanding of behavior.
What is a literature review?
a researcher reads a number of studies that address a particular topic and then writes a paper that summarizes and evaluates the literature. This is considered a narrative approach.

The literature review provides information that:

(1) summarizes what has been found,
(2) tells the reader what findings are strongly supported and those that are only weakly supported in the literature,
(3) points out inconsistent findings and areas in which research is lacking, and
(4) discusses future directions for research
What is a meta-analysis?
Method for determining the <b>reliability</b> of a finding <B>by examining the results from many different studies</b>

Researcher pools actual results from other studies, which are then analyzed statistically

One of the most important features of meta-analysis studies is the focus on <b>effect size</b>. A researcher will pool the results from other studies and then calculate the effect size using statistical analysis procedures.
What is the impact of psychological research?
health, law and criminal justice, education, work environments
What is a sample size?
A larger sample size reduces the size of the <B>confidence interval</b>

Must consider the cost / benefit of increasing sample size
What is the sampling frame?
refers to the <B>members of the population that are accessible to participate in your study</b>.

This will vary depending on the recruitment method used.

Example: Let’s say you wish to sample all the residents of Ottawa. You may decide to access them using a telephone book. That would be your sampling frame. The problem is that not everyone in Ottawa may have a land line; they may be using their cell phones instead. Also, of those that have land lines, some may have private numbers that do not appear in the telephone directory.

When evaluating the results of the survey, you need to consider how well the sampling frame matches the population of interest. Often the biases introduced are quite minor; however, they could be consequential.
What is the response rate?
The response rate in a survey is simply <B>the percentage of people in the sample who actually completed the survey</b>.

Thus, if you mail 1,000 questionnaires to a random sample of adults in your community and 500 are completed and returned to you, the response rate is 50 percent.

Response rate is one indicator of how much <B>bias </b>there might be in the final sample of respondents.
Sampling Techniques - What is probability sampling?
<B>Simple random sampling</b>: every member of the population has an equal probability of being selected

<b>Stratified random sampling</b>: population divided into subgroups (strata) and random samples taken from each stratum

<b>Cluster sampling</b> – identify clusters (e.g., schools or city blocks), randomly sample clusters, then sample individuals from within the selected clusters
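Simple random and stratified random sampling can be sketched with the standard library. The population, strata, and proportional allocation below are all made up for illustration:

```python
import random

random.seed(0)  # fixed seed so the example is reproducible
population = [f"person_{i}" for i in range(1000)]

# Simple random sampling: every member has an equal chance of selection.
simple = random.sample(population, 50)

# Stratified random sampling: draw from each stratum in proportion to its size.
strata = {
    "age_18_34": population[:400],
    "age_35_54": population[400:750],
    "age_55_plus": population[750:],
}
stratified = []
for name, members in strata.items():
    k = round(50 * len(members) / len(population))  # proportional allocation
    stratified.extend(random.sample(members, k))
```

Stratifying guarantees each subgroup appears in the sample in its population proportion, which a simple random sample only approximates.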




What is non-probability sampling?
In non-probability sampling , we don’t start out with a complete list of the members of the population and we don’t know the probability of any particular member of the population being chosen.
What are convenience samples?
<B>Less expensive; less time-consuming</b>

Suitable for measuring relationships between variables.

BUT findings may not be generalizable – further replication with different samples is required.
What are case studies?
Description of an individual or event

<B>Psychobiography</b> –the use of psychological theory to explain the life of an individual

Methods include library search, interviews, and sometimes direct observation
Why are case studies valuable?
Case studies are valuable for informing us of conditions that are rare or unusual and thus providing unique data about some psychological phenomena. Insights gained through a case study may also lead to the development of hypotheses that can be tested using other methods. Extreme caution must be taken when interpreting the results of a case study, however, as it has limited generalizability. In other words, the information gained from examining a unique case may not be applicable to a larger population.
What is archival research?
Involves using <B>previously compiled information</b> to answer research questions

Data can be <B>qualitative or quantitative</b>

Three major sources of data: <B>statistical records, survey archives, and written records</B>
Archival Research - What are statistical records?
Public & private organizations

Public records

Major sports leagues
Archival Research - What are survey archives?
Consortium for Political & Social Research

World Values Survey
General Social Survey
What is the Consortium for Political and Social Research (ICPSR)?
makes survey archive data available. Other very useful datasets intended as resources for social scientists include the World Values Survey and the General Social Survey (GSS). The GSS is a series of surveys funded in Canada by Statistics Canada and in the United States by the National Science Foundation.
Archival Research - What are written and mass communication records?
Written: diaries, letters, speeches

Mass comm.: books, newspapers, magazine articles, TV programs
How is archival research analyzed?
When analyzing archival data, a <B>coding system</b> is used, just as with systematic observation.

When applied to archival data, these coding systems are referred to as <B>content analysis</b>.
What are issues with archival data?
<B>the desired records may be difficult to obtain</b>: They may be placed in long-forgotten storage places, or they may have been destroyed.

<B>we have no control over what data were collected and the way they were recorded</b>; we can never be completely sure of the accuracy of information collected by someone else.