75 Cards in this Set

  • Front
  • Back
Best explanation for Clever Hans' (the horse) math ability
Cued by his owner
Which is not true of scholarly journal articles?
a) Manuscripts are peer reviewed
b) Reviewers do not know the author's identity
c) Published studies rarely identify their own strengths and weaknesses
d) Manuscripts contain extensive footnotes and/or references
e) Published articles contain the most recent information available
E
Nominal variable & rules for categorizing, maybe some examples
A variable that is identified by qualitative features. To categorize:
Categories used to classify attributes must be mutually exclusive
All categories used to represent data must be equivalent
Must be exhaustive
One example would be a repeated coin toss, and the number of heads versus number of tails
Rank order is associated with which type of data? (Ranking can be by numbers or by labels like "good" and "bad".)
Ordinal
Can a measure be valid without being reliable?
NO
Scale rating is associated with which type of data
Interval
Five sections to every published study
Rationale
Lit review
Methods
Results
Discussion
Things a published study has
A title
An abstract
Intro/rationale
Lit review
Hypothesis
Section on methods
Talk about the section on methods
Design statement - a single-sentence description of the type of experiment
Subjects/participants - description of sample group(s)
Method of selection
Demographics regarding subjects
Equipment/apparatus (description of equipment)
Procedures
Procedures - explain everything involved in this part (hint: think of which section lists the procedures)
In the methods section, the procedures part is a step-by-step description, which includes:
A cover story if needed and instructions for the subjects
Operationalization of dependent variables
Measurement of dependent variables
A debriefing
And the results section (which entails data treatment, data analysis, tables/graphs/charts, the significance or lack thereof of the data, discussion, and references)
Three pillars of experimental research (for the simplest type of experiment)
Manipulation, measurement, and control
Manipulation
Independent variable, usually conditions - involves varying participant attributes or experimental stimuli. Hypothesized to have a direct effect on the dependent variable (outcome), and analogous to the "cause" in a cause-effect relationship
Measurement
Focuses on dependent variable, involves measurement (this is where data comes from). Value changes based on influence of independent variable, analogous to effect in cause-effect relationship
Control
Involves holding all factors constant and controlled, including the confounding, extraneous, or intervening variables (note that confounding variables can't always be prevented, but they can be controlled)
Research
The studious inquiry or examination, especially the investigation or experimentation, aimed at the discovery and interpretation of facts, revision of accepted theories or laws in light of new facts, or practical application of such new or revised theories or laws
Epistemology (also its characteristics)
Way of knowing
Scientific/quantitative (seeks to be objective, often relies on large samples of people)
Humanism/qualitative (very subjective, or based on opinions)
Steps of the scientific method
Theories
Predictions/hypothesis
Observations
Empirical generalizations
Talk about theories
A theory is a proposed explanation for a set of natural phenomena, capable of predicting future phenomena
Should be able to explain or describe natural phenomenon in attempt to satisfy natural curiosity
Must be testable empirically (empiricism is the belief that science is accepted only insofar as phenomena can be "sensed" by the average person)
Go more in-depth on predictions/hypotheses
A hypothesis is a conclusion that occurs at the end of a series of propositions (proposition - statement that either confirms or denies something)
Antecedent - an "if" statement
Consequent - a "then" statement (this and antecedent make up a proposition)
Argument - hypothetical propositions help us get here; a set of propositions where one follows as a conclusion from the others
List some writing don'ts
Don't begin with a cliche ("In today's society ...")
Don't use "very" a lot - highly or extremely is much better
No big words or run-ons, go for short, concise sentences
Spell out numbers less than 10 - 10 and up are numerical
Avoid cliches
Avoid split infinitives ("To go boldly ..." vs. "To boldly go ...")
Don't end a sentence with a preposition
No slang or colloquialisms
How is most modern research done (groups and such)?
Subjects are assigned randomly to a control or treatment group - the treatment group gets exposed to the experiment, the control group doesn't, and the researcher studies the differences between the two groups
Self-selected groups
Groups based on choices already made by members (like democrats vs republicans)
Dyadic adjustment scale
Paper-pencil survey that asks respondents to rate quality of relationship with significant other
So statistical tests for significance are important - what are scientists usually content with?
A 5% risk of error (p < .05)
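As a sketch of what that 5% threshold means in practice, here is a hypothetical coin-guessing example (the 75 trials and 48 correct guesses are made-up numbers, not course data), computed with Python's standard library:

```python
# Hypothetical example: 48 correct guesses out of 75 coin flips.
# Under the null hypothesis (pure chance, p = 0.5), how likely is a
# result at least this extreme? If that probability is below .05,
# scientists typically call the result statistically significant.
from math import comb

n, k = 75, 48  # assumed numbers for illustration

def binom_pmf(n, k, p=0.5):
    """Probability of exactly k successes in n trials with success rate p."""
    return comb(n, k) * p**k * (1 - p)**(n - k)

# One-tailed p-value: probability of k or more successes by chance alone.
p_one_tailed = sum(binom_pmf(n, i) for i in range(k, n + 1))
print(f"p = {p_one_tailed:.4f}")  # compare against the .05 threshold
```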
Steps to beginning research project
Identifying a question/topic; clarifying the research question and generating a list of key terms; locating potential sources of information; organizing and evaluating the information; and citing sources
Clarifying research question and generating list of key terms - what do you need to do?
State topic in form of a question
Identify key terms and topics
Generate a list of key term synonyms to search for background information
Locating potential sources of information (best things to use, and criteria for world wide web)
Best to use handbooks and subject encyclopedias, as well as electronic databases
World wide web criteria for examining credibility: accuracy (lack of errors), authority (author with credentials, .org or .gov), currency (date it was created), and objectivity (perhaps a mission statement of the website, no hidden bias)
Organizing and evaluating information
Use a list of key terms, and review all reference sources
Complete source record card for each source
Review abstract and then each discussion
Read bibliographies for additional sources
Title page formatting and such
Goes on top of the document itself; has a running head (a series of words on every page, followed by the page number in the upper right-hand corner, half an inch from the top and right side); the title is in the very center; and all margins are 1 inch (but since everything is centered it doesn't matter)
Abstract
Second page of an APA paper; has 1-inch margins all around, except for the running head. "Abstract" is centered on the top line, and the text is left-justified
Reference page format
Running head; "References" is centered 1 inch from the top of the page, and then the references are listed with hanging indents, in APA style
First actual page of paper
Third overall page; contains text related to the paper, as well as the title centered and double-spaced below the running head (new paragraphs are always indented)
Variables
Ingredients of research product, any entity that can take on a variety of different values
Attributes
Specific categories of a variable
Value
Numerical designation assigned to each variable for statistical analysis (e.g., age; sometimes a number must be assigned to a non-numeric attribute)
Relationship
Connection or correspondence between two variables (can be positive or negative)
Difference types (there are 2)
Either in kind (football player vs cheerleader) or in degree (more interested in this when two groups have differing degrees of a variable)
Independent variable
Manipulated part of research (studied for how it impacts the dependent variable)
Dependent variable
Not manipulated, but recorded or measured
Intervening variable
Variable whose presence may impact the relationship between the dependent and independent variables
Antecedent variables
Something that already exists that can affect study, even if it doesn't directly come up
Nominal variable (three rules of categorizing included)
A variable level identified by qualitative features. Rules for categorizing:
Categories must be mutually exclusive (one person in only one category)
All categories used to represent data must be equivalent
Must be exhaustive
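The three categorizing rules above can be sketched as simple checks on a coding scheme. The category labels and responses below are hypothetical examples; the equivalence rule is a design judgment rather than a mechanical test:

```python
def check_nominal_coding(categories, responses):
    """Check a nominal coding scheme against the categorizing rules."""
    cats = list(categories)
    # Rule 1: mutually exclusive -- no duplicate or overlapping labels.
    assert len(cats) == len(set(cats)), "categories overlap"
    # Rule 3: exhaustive -- every observed response fits some category.
    assert all(r in cats for r in responses), "scheme is not exhaustive"
    # Rule 2: equivalence is a design judgment (each category should
    # represent the same kind of attribute); it has no mechanical check.
    return True

# The repeated coin toss example: every outcome is heads or tails.
check_nominal_coding(["heads", "tails"], ["heads", "tails", "heads"])
```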
Ordinal variable
Allows us to rank-order attributes with regard to which has more or less of an attribute (qualitative and quantitative - categories must be mutually exclusive, the researcher has to follow a logical ordering, and each category has to be balanced)
Interval variable (and the three types of scales)
Quantitative, in logical order representing equal distances between levels of each category.
Likert scale - participants presented with statements and then asked to respond based on pre-existing scale
Semantic differential - consists of a series of opposing adjectives (good/bad) and a continuum of possible choices between them; participants are asked to select the number that represents their perception between the two adjectives
Ratio data
Mostly like an interval variable, but it has an absolute zero starting point
Primary mission of research
The manipulation, measurement, and control of variables
Main units of analysis for communication
Individuals (temperament, personality, or communication traits, or specific behaviors)
Dyads (getting information on two people in an interpersonal relationship)
Groups (seeing conflicts and all go in groups)
Organizations (workplaces, anything that can study various groups)
Operationalization
Translating an abstract concept into a tangible, observable form in an experiment (includes variations in stimulus conditions, levels or degrees, variations based on standardized tests, or "intact"/self-selected groups)
Relate the terms concrete and abstract to dependent variables
Concrete - relatively fixed, unchanging
Abstract - dynamic, transitory (like mood, occupation)
Dichotomous variables
Like true/false, male/female
Ordered variables
Mutually exclusive categories with an order, sequence, or hierarchy (fall to winter to spring to summer)
Continuous variables
Includes constant increments or gradations that can be arithmetically compared (IQ scores, age, heart rate)
Unit of analysis
Specific entity being examined (the individual, dyad, group, organization, or culture)
Ecological fallacy
Drawing conclusions about individuals based on group data (a "sweeping generalization")
Mediating variable
A second or third variable that the experimenter knows about; can increase the relationship between the independent and dependent variables
Ethnocentrism
Concept that one's own nation is the center of the universe, and other nations are inferior
Measurement, plus its two most important considerations
The process of systematic observation and assignment of numbers to phenomena according to rules (a process is defined as a set of progressive, independent steps - we must find out what phenomenon we are trying to measure)
The two most important considerations: the procedures employed in observation and the rules employed in the assignment of numbers
Conceptualization
Development and clarification of concepts or your germinal idea (when you take germinal idea and determine what it is you want to measure and if you can realistically measure it)
Latent, or hypothetical, variable
Variable the researcher cannot directly observe, inferred from other observable phenomena
On constructing questions
Start with twice as many items as you need
Every item should reflect construct
Use concise, clearly worded, unambiguous items
Construct relatively short items
Pay attention to terminology in items
Avoid emotionally loaded items
Avoid leading and loaded items
Avoid double questions and questions with false premises
Avoid using "always" and "never"
Avoid double negatives/positives
Avoid hypothetical questions and ambiguous pronoun references
Consider recall issues for certain types of items
One-tailed vs. two-tailed hypothesis
Predicts specific nature of relationship or difference vs. one that predicts there is a significant relationship or difference without indicating specific nature
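As a sketch of how the two differ numerically, using a normal approximation and an assumed test statistic of z = 2.0 (purely for illustration):

```python
# One-tailed vs. two-tailed p-values for the same observed statistic.
# z = 2.0 is a hypothetical value; erf gives the standard normal CDF.
from math import erf, sqrt

def normal_cdf(z):
    """Cumulative probability of the standard normal distribution."""
    return 0.5 * (1 + erf(z / sqrt(2)))

z = 2.0  # assumed observed test statistic
p_one_tailed = 1 - normal_cdf(z)              # directional prediction
p_two_tailed = 2 * (1 - normal_cdf(abs(z)))   # nondirectional prediction
print(f"one-tailed p = {p_one_tailed:.4f}, two-tailed p = {p_two_tailed:.4f}")
```

The two-tailed p is exactly twice the one-tailed p here, which is why a directional hypothesis is easier to confirm at the same significance level.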
Null hypothesis vs directional research question vs nondirectional research question
The NH states zero differences or zero relationships (we test it because statistics can show a statement to be false rather than prove it true, so it's the opposite of the research hypothesis); it is written as H0
A directional RQ is when a researcher asks if there is a significant difference between two or more variables, or a positive or negative relationship between two or more variables
Nondirectional RQ - when a researcher asks if there is a difference or relationship between two or more variables
Difference between a hypothesis (define) and a research question
A hypothesis is a tentative prediction about nature of relationship between two or more variables that represents an educated guess about results of an experiment, always held tentatively
A research question is a hypothesis in question form
Experimental hypothesis
Prediction that there will be statistically significant findings, as in a significant correlation between the groups or variables
Phrasing a hypothesis
Avoid vague or nebulous wording, must be testable and falsifiable, should be fairly specific
Reliability, and scalar reliability
The accuracy with which a measure produces stable, consistent measurements
Scalar reliability is the reliability of individual research scales
Ways in which reliability can be increased
Item construction - use less ambiguous items
Length of instrument - include more items
Administration of test - needs to be administered under standard, well-controlled situations
Most commonly used type of reliability
Cronbach's alpha reliability
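Cronbach's alpha follows the standard formula: alpha = k/(k-1) * (1 - sum of item variances / variance of total scores). A minimal sketch in plain Python, using made-up Likert responses (three items, five respondents):

```python
def variance(xs):
    """Sample variance (divides by n - 1)."""
    m = sum(xs) / len(xs)
    return sum((x - m) ** 2 for x in xs) / (len(xs) - 1)

def cronbach_alpha(items):
    """items: one list of scores per item, aligned by respondent."""
    k = len(items)
    totals = [sum(scores) for scores in zip(*items)]  # per-respondent totals
    item_variances = sum(variance(item) for item in items)
    return k / (k - 1) * (1 - item_variances / variance(totals))

# Hypothetical data: three Likert items answered by five respondents.
scale = [[4, 5, 3, 4, 2],
         [4, 4, 3, 5, 2],
         [5, 5, 2, 4, 3]]
print(f"alpha = {cronbach_alpha(scale):.2f}")
```

Higher alpha means the items hang together as a single scale; values near 1 indicate high internal consistency.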
Validity, face/content validity, criterion, and predictive
Degree to which instrument measures what it is intended to measure
Face/content - subjective; examines the content to see whether on its "face" it appears to be related to what the researcher wants to measure
Criterion - how accurately a new measure can predict well-accepted criterion or previously validated concept
Predictive - whether or not person's score on new measure can predict future scores on another method
Concurrent, retrospective, and construct/factorial validity
Concurrent - researcher obtains score for new measure and score for criterion measure at same point and then determines correlation
Retrospective - occurs when researcher has previously measured criterion and then attempts to relate it to newly developed measure at later time
Construct/factorial - a validity test of a theoretical construct (using known groups is another approach; factorial validity is based on "factor analysis," a highly sophisticated statistical technique for examining how many items in an instrument correlate with each other)
Threats to validity - describe them
Inadequate preoperational explication of concepts - when the researcher hasn't determined what the scale is supposed to measure
Mono-operation bias - only measuring once
Interaction of different treatments - when subject is doing something else on the side to affect results
Interaction of testing and treatment - combined effect of being measured multiple times combined w/ receiving treatment that can alter scores as a result of test awareness
Restricted generalizability across constructs - how restricted measure is across different constructs
Confounding constructs & levels of constructs - whether or not you are measuring one or more constructs, or multiple levels of constructs
Social threats to validity
Hypothesis guessing - participants guess what research is attempting to measure and act accordingly
Evaluation apprehension - people get anxious when they know they're being evaluated
Experimenter expectancies - experimenter unknowingly influences score by encouraging certain responses
Social desirability bias - when participant changes answers/behavior to be seen in "better light"
Can a measure be reliable but not valid? What about the other way around?
A measure CAN be reliable but not valid, but it has to be reliable to be valid
Minor factors that can affect results
Faking, as in always saying yes or just acting differently to affect results
Social desirability
"Screw you" effect
Response set - any tendency that causes person to give different responses to test items than they would if presented in a different form
Bad items on measure
If I knew a really cool hot girl named Malyssa, I would...
ASK HER OUT!! :P