32 Cards in this Set


What must you include in your research about your participants?

Demographic information: age, sex, gender, height, weight, anything important to the study, but nothing that isn't essential (personal info is touchy).

What's an "item"?

An item is any statement or question that calls for a response. It doesn't have to be phrased as a question; it just has to be something the participant can answer.

Describe the process in making a questionnaire.

You start with open-ended questions that scope out the area, giving you good topics or points.


The actual questionnaire, though, is typically going to consist of closed-ended items.

Explain what positively/negatively keyed means.

Positively and negatively keyed refers to scoring direction: answering 5 on a positively keyed item says "yes" to the construct, while answering 5 on a negatively keyed item says "no." Negatively keyed items get reverse-scored before totaling (sketch below).


"Do you hate Jews? 1-5"


"Are Jews amazing people? 1-5"

What is a double-barrelled question? Should we encourage or discourage them?

Double-barrelled questions are questions that ask two things at once.


"Do you support the increase of health care through the liberal tax raising?"


... Well, I like health care... I'm not a liberal though...


Discourage them: a participant can't answer both halves with a single response.

How do you calculate a score? When do you use one of either method or when are you forced to use the latter?

You can calculate the score as either the sum or the average of all the items, GIVEN that every participant has answered every question. If even 1 participant chooses not to answer 1 question, everyone's score must be an average, not a sum: sums aren't comparable when people answer different numbers of items, but the mean of the answered items stays on the same scale (sketch below).
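

A quick sketch of the sum-vs-average point with invented data; None stands for a skipped item:

```python
# Score participants by the mean of the items they answered. A raw sum
# would unfairly penalize anyone who skipped an item; the mean stays on
# the same 1-5 scale for everyone. Data invented for illustration.
def total_score(answers):
    answered = [a for a in answers if a is not None]
    return sum(answered) / len(answered)

complete  = [4, 5, 3, 4]        # sum = 16, mean = 4.0
with_skip = [4, 5, None, 4]     # sum = 13 (misleading), mean = 4.33
print(total_score(complete), total_score(with_skip))
```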

What methods are used in a "verification or lie scale"?

You can tell the participant to answer a specific way on a given item, and if they don't, they aren't actually reading the questions (a quick sketch of this check follows below).


You can also ask the same question twice, keyed negatively the second time; an attentive participant should give mirrored answers, so two identical raw answers are a red flag.
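

A minimal sketch of the instructed-item check, with a hypothetical item name and instructed answer:

```python
# Flag participants who fail an instructed-response ("lie scale") item.
# The item name and instructed answer are invented for illustration.
ATTENTION_ITEM = "item_7"    # hypothetical: "Please select 2 for this item."
INSTRUCTED_ANSWER = 2

participants = [
    {"id": 1, "item_7": 2},  # passed the check
    {"id": 2, "item_7": 5},  # wasn't reading the questions
]
flagged = [p["id"] for p in participants
           if p[ATTENTION_ITEM] != INSTRUCTED_ANSWER]
print(flagged)  # [2]
```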

Define the Marlowe-Crowne Social Desirability scale

You throw on a bunch of items about improbably flawless behavior, and if a participant always answers "flawlessly," then you know they're just trying to look good rather than answering honestly.

Define error choice method. When is it used?

The error choice method is used when trying to discern whether someone is, say, homophobic or racist; they wouldn't volunteer that info.


Error choice is literally that: you ask a factual question and offer two possible answers, BOTH WRONG, typically one too low and one too high. Which wrong answer the participant picks may show that they feel a certain way.


ex. ___% of illegal immigrants are rapists


a) 8% b) 24%


If they answer 24%, they probably feel a certain way about illegal immigrants.

What's the "Bogus Pipeline"? How does the community feel about it

The Bogus Pipeline is "hooking up" a participant to a "lie detector" that they think is active. They are much more likely to express undesirable attitudes under this condition.


The psych community is hesitant to use the bogus pipeline because of the active deception that you place your participant under.

Define the Likert Scale

The Likert Scale is the typical rating scale you see: a statement with a small set of numbered options running between two labeled anchors, e.g.


1 = Not like me   2 = I dunno   3 = Very like me

What's the issue with having odd numbers in the Likert Scale?

People say that when a scale has a central point, most respondents gravitate toward it, especially on sensitive or loaded questions. Removing that safe middle ground forces participants to commit to one side or the other.

What's the Visual Analog Scale? (VAS) When would you opt for VAS?

VAS is putting a statement above a line with two extreme anchors and having the participant mark the line at the point they feel best describes them.


VAS is really helpful on questionnaires with loaded questions, where even a marginal difference can be picked up (a .5 doesn't exist on a Likert scale, but on a continuous line it does).

What's a "Semantic Differential Scale?"

You put pairs of antonyms at opposite ends of a scale


Ugly |_|_|_|_|_|_|_|_|_|_|_|_|_|_|_|_| Pretty


and the participant ticks the space they feel comfortable with.

Define the Conceptual Definition as well as Operational Definition of a Construct (what is a construct?)

A construct is something like hunger, intelligence, or energy: stuff we can't actually observe, but can infer exists through other, observable factors.


Conceptual Definition: the actual definition of the construct, kinda like what you'd find if you looked "hunger" up in the dictionary.


Operational Definition: the actual physical factors you look at to measure the construct (time since last meal, saliva production, blood sugar levels).

Define Validity!

Validity is the idea that your measurement actually measures your construct, i.e., that your operational definition genuinely captures it!

Convergent and Discriminant Validity

Convergent validity: a poop ton of experiments are done on the same construct using different methods, and if all the operational definitions converge on the same result, that's good evidence the measure is valid.


Discriminant validity: evidence that your construct is genuinely different from other constructs, so you can actually separate it from them (your measure shouldn't correlate with measures of unrelated things).

Content Validity is?

When the measurement covers all corners of the construct under research (the content of the measure matches the content of the construct).

Face validity is?

Face validity is the weakest kind, and a lot of measures don't rely on it.


It's the quality that, at face value, the research method LOOKS like it's measuring the right thing. You often don't actually want this, because once participants know what's being measured, they can answer a certain way to get a certain result.

What's reliability?

Reliability is the ability to reproduce the same results by replicating the study. Measurement error reduces reliability, as do chance factors like how tired the participant is, the time of day, the weather, and such.

Is a reliable measure always valid?

No. "Foot size is a measure of IQ."


Measured two weeks apart, foot size stays perfectly consistent, but that consistency doesn't validate foot size as a genuine measure of IQ.


HOWEVER if a measurement were unreliable it would automatically NOT be valid.

Interrater Reliability

The degree to which different raters agree: if one rater observes something and grades it, 10 other observers should grade it the same way. You want agreement of about .90 or higher (sketch below).
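

One common way to put a number on interrater agreement is a plain correlation between two raters' scores; a minimal sketch with invented ratings (real studies often use fancier agreement statistics):

```python
# Interrater reliability as the correlation between two raters' scores
# over the same set of observations. Ratings invented for illustration;
# by this card's rule of thumb you'd want roughly .90 or higher.
from statistics import correlation  # Python 3.10+

rater_a = [4, 5, 3, 2, 5, 4]
rater_b = [4, 4, 3, 2, 5, 5]
print(round(correlation(rater_a, rater_b), 2))
```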

Test-Retest Reliability

The idea that administering a test one day will yield the same results as re-administering it later, say a year on. A form of temporal reliability.

Internal Consistency Reliability (Split half reliability)

Internal consistency is the idea that if a survey has 10 questions tapping the same "factor," then a participant should respond consistently across all of them. "Split-half" is one way to measure this: split the items into two halves, score each half, and correlate the two half-scores (sketch below).
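

A minimal sketch of the split-half check with invented data. The Spearman-Brown step at the end isn't on the card, but it's the standard correction for the halves being only half as long as the full scale:

```python
# Split-half reliability: score each half of the scale, correlate the
# half-scores, then step the correlation up with the Spearman-Brown
# correction. Data invented for illustration.
from statistics import correlation  # Python 3.10+

# Each row: one participant's answers to a 6-item scale.
data = [
    [4, 5, 4, 5, 3, 4],
    [2, 1, 2, 2, 1, 2],
    [3, 3, 4, 3, 3, 3],
    [5, 5, 5, 4, 5, 5],
]
first_half  = [sum(row[:3]) for row in data]
second_half = [sum(row[3:]) for row in data]

r_half = correlation(first_half, second_half)
full_test = (2 * r_half) / (1 + r_half)  # Spearman-Brown: full-test reliability
print(round(r_half, 2), round(full_test, 2))
```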

Cronbach's Alpha

Cronbach's alpha is effectively what you get by averaging over all the possible split-halves. You want an alpha of about .65 or higher.


The standardized formula is alpha = (k * r) / (1 + (k - 1) * r), where


k = number of items


r = mean inter-item correlation, averaged over all k(k - 1)/2 available pairings


A 3-item survey has 3 pairings; a 6-item survey has 15 (sketch below).
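

A minimal sketch of the standardized-alpha calculation from the k and r defined above, with invented data:

```python
# Cronbach's alpha via the standardized formula on this card:
# alpha = (k * r) / (1 + (k - 1) * r), with r the mean correlation
# over all k*(k-1)/2 item pairings. Data invented for illustration.
from itertools import combinations
from statistics import correlation  # Python 3.10+

# Each row: one participant; each column: one item (k = 3, so 3 pairings).
data = [
    [4, 5, 4],
    [2, 1, 2],
    [3, 3, 4],
    [5, 5, 5],
    [1, 2, 1],
]
items = list(zip(*data))  # transpose: one tuple of scores per item
k = len(items)
pairs = list(combinations(items, 2))
r = sum(correlation(a, b) for a, b in pairs) / len(pairs)

alpha = (k * r) / (1 + (k - 1) * r)
print(round(alpha, 2))  # want .65 or higher by this card's rule of thumb
```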

Measurement Reactivity

Measurement reactivity is when a participant realizes what's being measured and adjusts their behavior or answers in some undesirable way.

Biosocial characteristics

Factors like age, sex, race. Participants may be uncomfortable sharing this

Experimenter Bias

Rosenthal & Fode told some children they were working with dumb rats, some nothing, and others they were working with smart rats. None of the rats actually were researched, but nonetheless, the interaction with the experimenters pushed the kids with the "smart" rats to train their rats better

Pygmalion in the classroom (experimenter bias)

Teachers were led to believe that certain (randomly chosen) students were "smarter," and those students actually became smarter because the teachers treated them better.

Blinding is... (single and double?)

Keeping people unaware of the condition or hypothesis so expectations can't bias the results; an experimenter who doesn't know what to EXPECT can't leak that expectation.


Single-Blind: the participant has no clue which condition they're in.


Double-Blind: neither participant nor experimenter knows.

Demand Characteristics

Cues that tell the participant what the intended result is; almost as a "placebo" effect, they then feel or report it.

Floor and Ceiling effects.

Floor and ceiling effects are when the test is too hard or too easy, so everyone scores near 0 (floor) or near perfect (ceiling). Either way, the clustering obscures any true differences the study might've brought forward.