100 Cards in this Set
method of tradition/tenacity
|
tradition - in a culture (e.g., holidays)
tenacious person - very determined, persistent person; knowledge - we know it's true because it's always been true |
|
problems with everyday ways of knowing
|
illogical reasoning
inaccurate observation
overgeneralization
selective observation (only noticing certain things)
everyday ways of knowing can even lead to conflicting ideas about "truth" |
|
method of authority
|
parents, government, doctors
easy to find answers, but sometimes biased |
|
method of intuition/logic
|
common sense, reason it out
- your common sense may be different from others'
"Platonic idealism" - get to truth through logic, deep rational thought - hash it out in your mind; debate, play devil's advocate |
|
experience/observation
|
personal experience
- it happened to you - but could be an isolated event (BAD)
Baconian empiricism - systematically observing and testing things; seeing if it's true, asking others
EX: expired milk - smell it (experience/empiricism), taste it (empiricism), or throw it away (authority) |
|
Scientific Method
|
combines "Platonic idealism" with "Baconian empiricism"
- logic/intuition -> constructing theories
- observation/experience -> gathering data |
|
communication science
|
uses empirical observations to test theories about communication processes
|
|
unique characteristics of science
|
scientific research is public - peer-reviewed journals; replicated studies
science is empirical - conscious, deliberate observations
science is "objective" - no bias; control/remove personal biases; explicit rules, standards, procedures
science is systematic and cumulative - building on prior studies/theory; literature shows what is already known |
|
goals of scientific research
|
description - what is
explanation - why it is
prediction - what will be
science CANNOT settle questions of moral value - right/wrong, good/bad, etc. |
|
the wheel of science
DEDUCTION |
start with theories -> hypotheses (predictions) -> observations (gathering the data)
this is DEDUCTION - traditional science; quantitative method |
|
the wheel of science
INDUCTION |
start with observations (start with a general topic area and start gathering data to see what is going on) -> empirical generalizations (do I have any commonalities? patterns?) -> theories (come out of what you have observed)
induction - start with data and build to theory; humanistic/interpretive QUALITATIVE methods; subjective |
|
quantitative methods
|
quantity
adhere strongly to scientific goals and principles (objectivity, etc.)
careful, precise measures - numbers, counting
employ numerical measures and data that we can analyze using statistics
ex: surveys, experiments, content analysis |
|
qualitative methods
|
qualities - what are the qualities you are looking for in the observations you make
interpretive research or field studies; humanistic form of social science
ex: the humanities - arts, film and media studies, English departments
NOT about precise counting - interpretations of what people are like
value SOME aspects of science - empiricism, tons of data
values researcher subjectivity, unlike quantitative methods
ex: participant observation, depth interviewing, conversation analysis |
|
humanistic research
|
critical studies - rhetorical criticism, feminist analysis, cultural studies
not a social science method |
|
complete wheel of science
|
theories -> hypotheses -> observations -> empirical generalizations -> theories
two different methods, but it is a wheel - theories had to come from someone's observations |
|
basic research
|
theoretical
testing/building theories/conceptual ideas; advancing what we know about a topic just because it is worth knowing
not about figuring out what to directly do with it - just looking at the process/theories |
|
applied research
|
using research to solve practical problems
- testing effects of a certain policy, school program, ad campaign/technique, etc.; data on specific issues |
|
blurring of basic and applied research
|
both use the same types of methods and are very rigorous
- even the most theoretical research has practical value, and the most applied research uses theoretical reasoning and arguments to form hypotheses |
|
theories
|
an attempt to explain some aspect of social life
a scholar's idea about how/why events/attitudes occur; includes a set of concepts and their relationships |
|
scientific theories should be falsifiable
|
testable
able to be tested empirically; able to be proven wrong |
|
theories are built on concepts
|
terms for things/ideas/parts for the theory
researchers must define them |
|
concepts are studied as variables
|
they have variations that can be measured
ex: gender - male or female; ex: motivation - rewarded vs. punished
must usually have at least two levels |
|
hypotheses
|
derive from prior findings/theory
a specific, testable prediction about the relationship between variables |
|
research question
|
if theory or previous research does not lead to a specific prediction, or if previous findings conflict or are inconclusive
use a research question instead of a hypothesis |
|
types of research questions or hypotheses
|
causal - state how one variable changes/influences another
correlational - state mere association between variables |
|
survey/correlational research
|
a survey is a method for the entire study - not just the questionnaire
tests correlational hypotheses - mere relationship/association; measure some variables and relate them, compare existing groups, etc.
great for external validity; poor for causality |
|
experimental research
|
tests causal hypotheses/predictions
- manipulate variables/groups, control everything else, and measure effects
great for internal validity - ability to establish that X causes Y (rules out other explanations)
poor generalizability |
|
content analysis
|
used to study media messages themselves
tests correlational hypotheses/research questions about media (or other comm) content |
|
variables in experimental research
|
causal hypotheses
one thing has an influence on something else: independent variable, dependent variable |
|
independent variable
|
variable manipulated by researcher
the "cause" in cause-effect relationship |
|
dependent variable
|
sometimes called the dependent measure
variable affected/changed by the IV; what happens to it depends on the independent variable - the effect or outcome |
|
variables in survey/correlation research
|
can't be cause-effect
IV = predictor variable; DV = criterion variable (the thing being predicted) |
|
conceptual definition
|
a working definition of what the concept means for purposes of investigation - usually based on theory/prior research
not a dictionary definition |
|
operational definition
|
how exactly the concept will be measured in a study
|
|
types of measures
|
physiological measures
behavioral measures self-report measures |
|
levels of measurement
|
nominal (categorical/discrete)
ordinal interval ratio |
|
nominal
|
categorical/discrete
named categories; variable is measured merely with different categories
categories must be mutually exclusive and exhaustive |
|
ordinal
|
variable is measured with rank ordered categories
not common in communication research |
|
interval
|
variable is measured with successive points on a scale with equal intervals
|
|
ratio
|
interval measurement with a true, meaningful zero point
-time in seconds, weight in lbs, etc. |
|
measures should
|
capture variation - use continuous variables for DVs where possible
address the specific variables in hypotheses/research questions
minimize order effects
minimize potential "social desirability" effects |
|
types of questions
|
open-ended
close-ended |
|
open-ended questions
|
respondents provide their own answers
allows in-depth responses and unforeseen types of answers; good in "pilot" studies
difficult to code and analyze; could misinterpret responses |
|
close-ended questions
|
respondents select from a list of choices
choices are exhaustive and mutually exclusive
easier to process; with continuous variables, can do more powerful statistical analyses
may miss other possible responses or more complex attitudes |
|
close-ended formats
|
likert-type items
semantic differential scaling |
|
likert-type items
|
respondents indicate their agreement with a particular statement
- ex: "parents should talk openly about sexuality with their children" - rated on a 5-point scale from strongly agree to strongly disagree
other response options are also possible |
|
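As an illustration beyond the card set, Likert responses are typically coded as numbers before analysis; the mapping and the reverse-coding helper below are hypothetical, assuming a standard 5-point scale:

```python
# Hypothetical sketch: coding 5-point Likert responses numerically,
# assuming strongly disagree = 1 ... strongly agree = 5.
LIKERT_CODES = {
    "strongly disagree": 1,
    "disagree": 2,
    "neutral": 3,
    "agree": 4,
    "strongly agree": 5,
}

def code_likert(response, reverse=False):
    """Convert a verbal response to its numeric code.
    reverse=True flips the scale for negatively worded items."""
    score = LIKERT_CODES[response.lower()]
    return 6 - score if reverse else score

print(code_likert("agree"))                # 4
print(code_likert("agree", reverse=True))  # 2
```

Reverse-coding keeps all items pointing in the same direction before they are combined into a scale.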
semantic differential scaling
|
respondents make ratings between two bipolar adjectives
ex: my best friend is: warm---cold |
|
composite measures
|
use multiple items for one variable; combine those items into an index (aka "scale")
all items added into one overall score -> uni-dimensional index
combine different items into different "sub-scales" -> multi-dimensional index |
|
uni-dimensional index
|
all items added (or averaged) into one overall score
|
|
multi-dimensional index
|
combine different items into different "sub-scales"
|
|
uni-dimensional credibility
|
add all items into one total credibility score
|
|
multi-dimensional credibility
|
knowledge + experience + competent = "expertise" dimension
trustworthy + honest + unbiased = "trustworthiness" dimension |
|
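A minimal sketch of the uni- vs. multi-dimensional distinction above, using six made-up 5-point credibility ratings (the item names and data are illustrative, not from a published scale):

```python
# Hypothetical ratings of a source on six 5-point credibility items.
ratings = {"knowledgeable": 4, "experienced": 5, "competent": 4,
           "trustworthy": 3, "honest": 4, "unbiased": 2}

# Uni-dimensional index: average every item into one overall score.
overall = sum(ratings.values()) / len(ratings)

# Multi-dimensional index: average items within each sub-scale.
expertise = sum(ratings[k] for k in ("knowledgeable", "experienced", "competent")) / 3
trust = sum(ratings[k] for k in ("trustworthy", "honest", "unbiased")) / 3

print(round(overall, 2))    # 3.67
print(round(expertise, 2))  # 4.33
print(trust)                # 3.0
```

The multi-dimensional version preserves information the single total hides: here the source scores high on expertise but lower on trustworthiness.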
assessing reliability
|
inter-item reliability
inter-coder reliability intra-coder reliability |
|
inter-item reliability
|
good with questionnaire items - refer to each question on a questionnaire as an item; not all are questions, some are statements - if there is an answer to circle, it is an item
administer items more than once - test-retest; split-half
look at internal consistency of items in a scale (Cronbach's alpha)
- use a bunch of different items that have the same idea; get an average and make a scale - want consistency
- different connotations/variations of words to see a pattern
- Cronbach's alpha gives us a numerical value for that (0-1); most scales accept a reliability above .7 - want high numbers |
|
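To make the Cronbach's alpha card concrete, here is a small sketch of the standard formula, alpha = k/(k-1) * (1 - sum of item variances / variance of total scores), applied to made-up data (three items, five respondents):

```python
def variance(xs):
    """Sample variance (divides by n - 1)."""
    m = sum(xs) / len(xs)
    return sum((x - m) ** 2 for x in xs) / (len(xs) - 1)

def cronbach_alpha(items):
    """items: one list of scores per questionnaire item (columns = respondents)."""
    k = len(items)
    totals = [sum(col) for col in zip(*items)]  # each respondent's total score
    item_var = sum(variance(col) for col in items)
    return k / (k - 1) * (1 - item_var / variance(totals))

# Hypothetical 5-point responses: three items answered by five respondents.
items = [[4, 5, 3, 4, 2],
         [4, 4, 3, 5, 2],
         [5, 4, 2, 4, 3]]
alpha = cronbach_alpha(items)
print(round(alpha, 2))  # 0.86 -- above the common .7 cutoff
```

When the items track each other across respondents, the total-score variance is large relative to the item variances and alpha rises toward 1.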
inter-coder reliability
|
consistency between different coders in how they are coding the data
compare multiple coders; good training of coders and good definitions make reliable coding more likely |
|
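The simplest way to compare multiple coders is percent agreement, sketched below with made-up category codes (chance-corrected indices such as Cohen's kappa or Krippendorff's alpha are the more rigorous alternatives, but this shows the basic idea):

```python
def percent_agreement(coder_a, coder_b):
    """Share of units on which two coders assigned the same category."""
    matches = sum(a == b for a, b in zip(coder_a, coder_b))
    return matches / len(coder_a)

# Hypothetical codes two coders assigned to the same five TV scenes.
coder_a = ["violent", "neutral", "violent", "comedic", "neutral"]
coder_b = ["violent", "neutral", "comedic", "comedic", "neutral"]
print(percent_agreement(coder_a, coder_b))  # 0.8
```

The two coders agree on four of five units, so agreement is .80; low values signal that coder training or category definitions need work.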
coders
|
people (researchers) making judgments about what they observe
|
|
intra-coder reliability
|
consistency within the same coder
compare multiple observations of the same coder on the same material |
|
assessing validity
|
subjective types of validation
criterion-related validation construct validation |
|
subjective types of validation
|
arguments about whether or not measures are good; how we evaluate them
face validity, content validity |
|
face validity
|
the measure looks/sounds good "on the face of it"
|
|
content validity
|
the measure captures the full range of meanings/dimensions of the concept
|
|
criterion-related validation
|
valid if it meets a certain criterion we are comparing it to
predictive validity - the measure is shown to predict scores on an appropriate future measure |
|
construct validation
|
the measure is shown to be related to measures of other concepts that should be related (and not to ones that shouldn't)
if you developed a scale or index, demonstrate construct validity by comparing it to other scales (e.g., a self-esteem scale should get similar scores to someone else's self-confidence scale, since they should be related) |
|
reliable but not valid?
|
yes - a measure can be reliable without being valid
|
|
valid but not reliable?
|
no - a measure needs to be reliable before it can be valid
reliability is the anchor - you have to get it first |
|
triangulation of measurement
|
use several different measures of one variable, then compare them
measuring something in different ways lets you look at it from different angles - different types of measures or differently phrased scales
can triangulate measures within one study or across different studies |
|
sample
|
a subset of the target population
|
|
sampling units
|
individual
groups social artifacts |
|
individual persons
|
measure each individual unit
person |
|
groups
|
married couples
unit is not the individual, unit would be the couple ex- couples, juries, organizations, countries |
|
social artifacts
|
ads, TV scenes/episodes
|
|
ecological fallacy
|
making unwarranted assertions about individuals based on observations about groups
|
|
representative sampling
|
intended to be a mini version of the target population
typical of surveys (especially polls) and content analyses representative because of random selection - everyone/thing in population has equal chance of being included in sample |
|
non-representative sampling
|
not intended to generalize
typical of experimental designs and qualitative research |
|
sampling error
|
sample data will be slightly different from the population because of chance alone
estimate this statistically (margin of error); the larger the sample size, the smaller the margin of error |
|
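The size/error relationship on this card can be sketched with the common 95% margin-of-error formula for a proportion, MOE = 1.96 * sqrt(p(1-p)/n), using p = 0.5 as the worst case (the sample sizes here are just examples):

```python
import math

def margin_of_error(n, p=0.5, z=1.96):
    """Approximate 95% margin of error for a sample proportion."""
    return z * math.sqrt(p * (1 - p) / n)

print(margin_of_error(400))   # ~0.049, i.e. about +/- 5 points
print(margin_of_error(1600))  # ~0.0245 -- quadrupling n halves the error
```

Because n sits under a square root, shrinking the margin of error gets expensive fast: halving the error requires four times the sample.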
systematic error (sampling bias)
|
BAD if you are trying to use representative sampling
systematically over- or under-represent certain segments of the population
caused by: improper weighting, very low response rate, wrong sampling frame, using non-representative sampling methods
only a bias when you meant for the sample to be representative |
|
representative sampling techniques
|
simple random sampling
systematic sampling stratified sampling |
|
simple random sampling
|
select elements randomly from population
-listed populations - random numbers table |
|
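A minimal sketch of simple random sampling from a listed population (the population and sample sizes are hypothetical; the standard library's `random.sample` plays the role of the random numbers table):

```python
import random

# Hypothetical listed population of 500 students.
population = [f"student_{i}" for i in range(1, 501)]

random.seed(42)  # fixed seed so the draw is reproducible
sample = random.sample(population, 25)  # 25 elements, each equally likely

print(len(sample))       # 25
print(len(set(sample)))  # 25 -- sampling without replacement, no repeats
```

Every element in the list has the same chance of selection, which is what makes the resulting sample representative (up to sampling error).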
systematic sampling
|
only works if you have a list
from a list of the population, select every "nth" element, with a random start; cycle through the entire list
similar results as simple random sampling
watch out for potential periodicity - happens if the list is ordered in a way that cycles in equal periods |
|
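The "every nth element with a random start" procedure can be sketched as follows (list and sample size are made up):

```python
import random

def systematic_sample(population, n):
    """Select every "nth" element from a list, starting at a random point."""
    interval = len(population) // n      # the step size ("nth")
    start = random.randrange(interval)   # random start within the first interval
    return population[start::interval][:n]

random.seed(1)
population = list(range(100))  # hypothetical listed population
sample = systematic_sample(population, 10)
print(len(sample))  # 10 -- elements are exactly 10 apart
```

The random start is essential: without it, elements at fixed positions could never be chosen, and if the list itself cycles in periods equal to the interval, the sample would be biased (periodicity).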
stratified sampling
|
divide population into subsets (strata), then select randomly from each
usually stratify on demographic variables; need prior knowledge of population proportions
increases representativeness b/c it reduces sampling error (for the stratified variable), but more costly and time consuming |
|
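A sketch of proportionate stratified sampling, assuming hypothetical strata of 60 freshmen and 40 seniors: each stratum contributes to the sample in proportion to its share of the population.

```python
import random

def stratified_sample(strata, total_n):
    """Randomly sample from each stratum in proportion to its size."""
    pop_size = sum(len(members) for members in strata.values())
    sample = []
    for name, members in strata.items():
        n = round(total_n * len(members) / pop_size)  # proportional quota
        sample.extend(random.sample(members, n))
    return sample

random.seed(7)
strata = {"freshmen": list(range(60)), "seniors": list(range(100, 140))}
sample = stratified_sample(strata, 10)
print(len(sample))  # 10 -- 6 freshmen + 4 seniors
```

Because the stratified variable's proportions are fixed by design rather than left to chance, sampling error on that variable is eliminated; this is also what distinguishes it from a quota sample, where selection within each group is non-random.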
multistage cluster sampling aka cluster sampling
|
first randomly sample groups (clusters), then randomly sample individual elements within each cluster
useful for populations not listed as individuals; reduces costs
SAMPLING ERROR in each stage - higher overall sampling error; use stratified sampling at each stage to reduce it |
|
non-representative sampling techniques
|
convenience sample
purposive sample volunteer sample quota sample network/snowball sample |
|
convenience sample
|
select individuals that are available/handy
can't use phrases like "students think" - that would be generalizing, and you can't generalize |
|
purposive sample
|
select certain individuals for special reason (their characteristics, etc.)
|
|
volunteer sample
|
people select themselves to be included
not "voluntary" - all studies are voluntary - but these people are volunteers
issue: volunteers are usually a certain type of person who likes to have their opinions heard, etc. |
|
quota sample
|
select individuals to match demographic proportions in the population
like stratified sampling, but non-random - just filling the quota |
|
network/snowball sample
|
select individuals, who contact other similar individuals, and so on...
|
|
guidelines for using human subjects
|
participation must be voluntary
must obtain informed consent
should protect subjects from harm
should preserve right to privacy
should avoid deception
should not withhold benefits from a control group
must get approval from the university's IRB |
|
must obtain informed consent
|
explain to participants
- purpose & procedures - possible risks & discomforts - ability to withdraw from study - how questions will be answered |
|
should protect subjects from harm
|
should not diminish self-worth or cause stress, anxiety, or embarrassment
|
|
right to privacy
|
anonymity
confidentiality |
|
avoid deception
|
outright deception or concealment must be justified
subjects must be debriefed |