117 Cards in this Set
- Front
- Back
Assign numbers to things: |
- lends precision - forms the basis of all statistical analysis - allows accurate discriminations - allows generalization with varying levels of confidence |
|
Numerals vs Numbers |
Numerals: labels for phenomena (2017 = a year) Numbers: assign value and relative standing to phenomena (1-5 is increasing) Main advantage of numbers: ease of processing answers |
|
Measurement is.. |
Process of finding out whether people or media content have more or less of an attribute we’re interested in. |
|
Measure requires answers to 3 questions |
What exactly shall we measure and record? Does the measure capture what we're interested in? Do our measures give the same results when repeated, and are we sure of this? |
|
Demographic info is not much more than labels -> low-level measurement (NOTE TO SELF: re-read and understand this in the book!) |
- |
|
NOIR |
Ordinal - rank ordering Indicates some level of progressions. Eg: first, second, third. But how much better is first than second? |
|
NOIR |
Nominal - essentially labels / classification. Don't measure anything. Eg: male/female. Only numbers we can generate are counts and percentages Coding: transforming into numbers for processing. Remains nominal. (female = 1, male = 2) |
|
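As a quick illustration (with made-up codes, not data from the book): counts and percentages are the only meaningful summaries for nominal measures, because the numbers are labels, not quantities.

```python
from collections import Counter

# Hypothetical nominal coding: 1 = female, 2 = male (the numbers are labels only).
codes = [1, 2, 1, 1, 2, 1, 2, 1]

counts = Counter(codes)                                  # count per category
percentages = {k: 100 * v / len(codes) for k, v in counts.items()}

# Counts/percentages are fine; a "mean code" of 1.375 would be meaningless,
# because nominal codes carry no quantity.
```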
NOIR |
Interval - allows statistical calculations. Common in quantitative research. Equal intervals between scale points. You can attach numbers to it and quantitatively analyse. Eg. Likert scale and semantic differential scale |
|
NOIR |
Ratio - more sophisticated statistical operation. Contains a true zero point. |
|
Easy example: |
Nominal: parent, child, grandma Ordinal: child, adolescent, adult Interval: age: 0-4, 5-9, 10-14 Ratio: age in years ... |
|
Discrete vs. Continuous |
Discrete: Nominal & Ordinal Can't calculate with it Jumps from category to category
Continuous: Interval & Ratio More precise measurements and therefore captures more subtle changes Change incrementally (age by day, year; income by cent, dollar) Discrete measures cannot be made continuous; continuous measures can be made discrete |
|
Credible |
Trustworthy or believable, need for confidence |
|
Reliability |
Produce the same results consistently |
|
Checks for reliability: |
Test-retest Intercoder/observer reliability Inter-item reliability Established measures |
|
Test-retest |
Does it measure the same results at two different times? Within 1-2 weeks. All other things being equal. |
|
Intercoder / observer reliability |
Check if 2 observers record the same results at the same time for the same phenomena, all other things being equal. 2 typical research scenarios that require this: - Observation of human interaction - Content analysis of news media 1: operationalize what must be observed 2: classify into two broad categories; observers then classify their observations 3: calculate the correlation between the observers' coding. Reliability coefficients: the name for these correlation scores. Range between 0 and 1.0. Anything over 0.95 is near perfect, < 0.75 is questionable reliability |
|
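A minimal sketch of step 3 above, assuming two hypothetical observers' numeric codes; the plain Pearson correlation stands in here for whatever reliability coefficient a study actually uses.

```python
import math

def pearson(x, y):
    """Pearson correlation between two equal-length lists of codes."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = math.sqrt(sum((a - mx) ** 2 for a in x))
    sy = math.sqrt(sum((b - my) ** 2 for b in y))
    return cov / (sx * sy)

# Hypothetical codes from two observers classifying the same 8 observations.
coder_a = [1, 2, 2, 1, 3, 3, 2, 1]
coder_b = [1, 2, 2, 1, 3, 2, 2, 1]

r = pearson(coder_a, coder_b)     # reliability coefficient between 0 and 1.0
questionable = r < 0.75           # below 0.75 -> questionable reliability
```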
Inter-item = internal reliability |
A check that individual questions in a question set are consistent in results and measure/operationalize the same concept. High correlation: if they have the same level of response. Split-half technique: the correlation of half of the results is compared to the other half. It helps researchers modify their question sets to the point where they can be sure that each question addresses the same concept but is not a duplicate. Can also compute correlations from randomly selected question pairs. |
|
Established measures |
Comparison of the results obtained from your own measure and a known, tested measure used for the same purpose. |
|
Improve reliability |
Rewording, adding or dropping questions Pretesting instructions for observers Training observers When repeating measures, keep conditions as similar as possible |
|
Validity |
A measure is valid if it measures what it is supposed to measure |
|
Content validity (looks OK) |
If the measure covers all aspects of the concept under investigation. It’s a judgement call. Eg: “television” for “viewing video content” not suitable. |
|
Expert validity / panel validity |
If done by an expert, measures been peer approved |
|
Face validity |
Questions / objects appear to measure what they should measure. Can vary per group |
|
Construct validity (theoretically OK) |
Demonstrable agreement between the concept or construct you're measuring and other related concepts. Convergent validity: measure shows correlation with a similar concept (eg.: different measures under the umbrella "organizational climate"). Divergent validity: measure shows no correlation with a dissimilar measure. |
|
Criterion validity (Tests OK) |
Relates your measure to other specific measures in 2 ways. 1. High concurrent validity: if scores on your measure correlate highly with other measures designed to measure the same construct. 2. High predictive validity: if the measure predicts "real world outcomes". Eg.: SAT scores should predict success in college. |
|
Likert scale |
Framed as a statement Commonly 5 points (may vary between 5 and 7), always with the same options (strongly (dis)agree) Each answer is given a numerical value (+ recorded as a score) |
|
Semantic differential scale |
Opposite ideas toward concept/object and invites respondents to decide where their opinion lies. May be multiple word scales for each concept. Semantic differential scale can be more difficult to construct because words have to be found and pretested for meaning before use. “How do people see opposite of expensive?” |
|
- The assumption of equal distances between points is just an assumption; psychologically the distances may differ. - |
It is questionable whether complex feelings can ever be adequately captured using these scales. |
|
- Both scales have “steps” to which we can assign numbers. Allows us to make summary claims about data. - |
Interval measures allow us for the first time to make summary statements. ( “The average score was 3.2.” ) |
|
Population |
Contains every one of the units the researcher has elected to study |
|
We combine sampling and inferential statistics to help us make intelligent estimates from a sample when the exact size and nature are unknown or the population is too large |
- |
|
Census |
Study of every member in the population |
|
Sample |
Selected segment of population to represent population |
|
Non-probability sampling |
Based on judgment of the researcher Advantages: convenience & providing insight Disadvantages: doesn’t permit generalizations to a wider population |
|
Convenience sampling |
Based on convenience to the researcher Useful when pretesting a study or when it’s not for scholarly publication Just want some basis for inquiries + speed, cost - can’t generalize from sample to population |
|
Purposive / Judgmental sampling |
Specific person or media content will meet specific criteria the researcher may have + meets specific need of researcher - may not represent population |
|
Quota sampling |
Attempts to replicate features of the population in the sample. More readily done than random sampling. - may not represent population |
|
Network / Snowball sampling |
Form of volunteer sampling when you rely on members of a network to introduce you to other members of the network. - depends on ability of researcher to network - sample may over- / underrepresent aspects of population - possible loss of diversity in sample |
|
Volunteer sampling |
Identifies willing participants the researcher might not otherwise be aware of. Note the difference between participants who simply agreed and participants who want to influence the research findings - Agendas & interests of volunteers may influence the research - Sample may over- / underrepresent aspects of the population |
|
Volunteer: humans only Convenience, Purposive/Judgment and Quota can be used with non-humans (media content) |
|
Probability sampling |
Generated by randomly selecting the sample units. Reduces researcher’s bias Mechanism selects the sample, researcher has no control. Every unit has equal chance of being selected. + permits us to make statistical generalizations |
|
Sampling frames |
List from which a probability sample is selected. We do this because we cannot identify every member of the population. |
|
Sampling units |
Units selected for the study. Can be individuals, couples, comics, teams, etc |
|
Postal sampling frames |
Postal addresses presuppose a residence, which not everyone has -> eliminates a specific group People may move and zip code demographics change -> may not reach the individuals we want to reach Address-based sampling (ABS) is back since phone/landline use decreases - relatively slow turnaround and can be problematic in rural areas and with P.O. boxes + helpful with in-person surveys and multimethod surveys |
|
Telephone sampling frames |
Random digit dialing (RDD): dialing computer-generated random numbers - many numbers not in use - fewer landline phones are used (particular group overrepresented) - unlisted phone numbers will not get into a sample from directory listings - mobile is problematic because location may be lost - many people filter out survey numbers and join "no-call" lists - not sure you're talking to the right person - sampling out lower incomes < phones need paying for - many people have moved from phone to internet |
|
Internet sampling frames |
+ can recruit large number of participants + can recruit globally + SM may be effective for snowball sampling + SM can reach hidden populations such as drug users - available info may not accurately represent its overall data - spam and "bots" may masquerade as humans on the internet - different behavior per online platform - cannot develop an internet sampling frame -> don't know what the internet population consists of - internet samples may skew younger, more educated and higher income - demographic differences between platforms. Large samples can help reduce sampling bias and the chance that a few abnormal findings might bias the results |
|
Design-based approach |
You add to your online sample by recruiting from outside the internet (and provide them internet access) |
|
Model-based approach |
Uses volunteers or opt-in panels of internet users and corrects such panels for any representational bias. With estimates of how the internet population differs from your research population, you can estimate what research results would have been had your actual research population been sampled |
|
Best (2014) suggests that... |
It may be possible to check findings against the results from analogous surveys that used probability sampling Generalizability may be improved using probability sampling as a complement to purposive sampling Internet sampling should be limited to circumstances where there is clear evidence that the hypotheses being tested are uniformly applicable across the entire population Much depends on your need to generalize |
|
Special population sampling |
Can be hard because lists don't exist or are confidential, or because list owners will only cooperate after seeing the research design. Internet-based snowball sampling can be effective for reaching special populations. The list-rental industry can provide specifically targeted mailing lists |
|
Future of survey sampling |
Due to costs, era of traditional probability sampling may be over Appears to be no generally accepted method of sampling from the web |
|
Sample matching |
New sampling approach. A target sample from a known sampling frame is selected and compared to different web panels. The closest match in the web panel is then selected for research. |
|
Random sampling |
No predicting which specific names/numbers will be sampled; a table of random numbers or a random number generator determines how the numbers are computed and presented. May not always be diverse + allows generalizing from sample to population - may eliminate individuals that should be in the sample |
|
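A minimal random-sampling sketch using Python's standard library (the frame and the fixed seed are made up, the seed only so the example is repeatable):

```python
import random

# Hypothetical sampling frame: 1,000 units we can actually list.
frame = [f"respondent_{i}" for i in range(1000)]

random.seed(42)                     # fixed seed so the example is reproducible
sample = random.sample(frame, 50)   # every unit has an equal chance of selection
```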
Stratified random sampling |
"Force" subgroups into the sample > set aside a number of places in your sample relative to the size of the groups in the population you are drawing from > fill those places by random sampling from the subgroups Need to identify and sample a sampling frame for each subgroup to be represented in the sample |
|
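The two steps above (set aside places proportionally, then fill them by random sampling) can be sketched like this, with a hypothetical two-stratum population:

```python
import random

# Hypothetical population: each unit carries a subgroup (stratum) label.
population = [(f"person_{i}", "rural" if i % 4 == 0 else "urban") for i in range(200)]

def stratified_sample(units, total):
    """Set aside places per stratum relative to group size, fill them randomly."""
    strata = {}
    for name, stratum in units:
        strata.setdefault(stratum, []).append(name)
    chosen = []
    for members in strata.values():
        places = round(total * len(members) / len(units))  # proportional allocation
        chosen.extend(random.sample(members, places))
    return chosen

random.seed(1)
sample = stratified_sample(population, 20)   # 15 urban + 5 rural places
```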
Systematic sampling |
Sampling every Nth person on a list (sampling interval). No control over which names get selected. - if a pattern in the population matches the sampling interval, it may over- or underrepresent features of the population + only one or two random starting points are needed to begin sampling |
|
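Systematic sampling in a few lines (hypothetical list; note how a single random starting point is enough):

```python
import random

# Hypothetical list of 100 names; target sample of 10 -> sampling interval of 10.
frame = [f"name_{i}" for i in range(100)]
interval = len(frame) // 10

random.seed(7)
start = random.randrange(interval)   # the single random starting point needed
sample = frame[start::interval]      # then every Nth person on the list
```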
Multistage cluster sampling |
First sample larger units, then smaller ones (states -> addresses) + Relative ease of identifying people - sample may over- or underrepresent aspects of the population |
|
How big does my sample have to be? Size depends on |
1. Constraints of research: limitations/resources/deadlines/money/time/reward/etc. 2. "Nature" of your study: sample size is less of an issue when piloting or when it's just an informal study that will not be published 3. Level of confidence: 100% confidence requires a census 4. Homogeneity of the population: the more homogeneous, the smaller your sample can be |
|
Standard error (the standard deviation of the sample), homogeneity and sample size are related. If you know two, the third can be calculated. Helps the researcher make trade-offs between sample size and level of confidence. |
- |
|
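The relationship can be seen in the standard-error formula, sketched here with hypothetical scores:

```python
import math

# Hypothetical sample of interval-level scores.
scores = [3, 4, 5, 4, 3, 4, 5, 4, 4, 4]
n = len(scores)
mean = sum(scores) / n
sd = math.sqrt(sum((x - mean) ** 2 for x in scores) / (n - 1))  # sample std deviation
se = sd / math.sqrt(n)  # standard error: shrinks as homogeneity or sample size grows
```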
Surveys |
Series of formatted Q's delivered to a defined sample with the expectation that their responses will be returned between immediately and a few days Questionnaire = Specific set of Q's the respondents answer |
|
Advantages and Disadvantages survey |
+ Participants can answer a lot of questions rapidly + Large numbers can be questioned at the same time + Ability to make generalizations - Limited question format - Doesn't allow assessing causal relationships - Unwillingness of consumers to participate - Validity must be judged |
|
Cross-sectional surveys |
Capture what is going on at one point in time. Expecting no (great) differences at other point. Problem? Use longitudinal studies which track changes in knowledge, attitude and behavior over time. |
|
Trend studies |
Measure the same items over time, but with different samples + Sample size stays the same as people can be replaced + Tracks shifts in public opinion - No assurance that people in different samples do not differ |
|
Panel studies |
A group of individuals is sampled & the same individuals are questioned over time + No variation in composition of the sample - The sample size changes |
|
Cohorts |
Defined groups, typically because they have an event in common. Aims at assessing broad shifts in the nature of a population Sampling takes place each time -> Individuals in each sample will vary |
|
Cross-lagged surveys |
Measure a dependent and an independent variable at two points in time. Allows conclusions about causality! (only survey design that does) |
|
A successful survey.. |
Captures (useful) information from the highest possible percentage of respondents. Most surveys seek to find out these 4 things: 1. Demographic data 2. Knowledge of issue 3. Attitude towards issue 4. Behavior towards issue |
|
Function question format |
Clarify both the question and the response options. Gives researchers relevant categories of answers to help analyze results |
|
Open ended questions |
+ Gives insight that you might not get through structured questions - Time consuming to code and analyse, compared to multiple choice |
|
Dichotomous questions |
Forces respondents to select one of 2 possible answers. Only appropriate when they provide a clear "either/or" option and respondents will not be looking for a "missing" third option. + Simplifies data coding and analysis - Life is rarely yes/no simple |
|
Multiple choice questions |
Provides several possible answers and asks to select one or more or to rank order* them. *gets around the problem of respondents checking every possible answer. |
|
Likert scale |
Asks respondents to locate their level of agreement. Varies between strongly agree - strongly disagree. Always presented as statements, never questions. Respondents' consistency can be checked using similar statements. |
|
Semantic differential scale |
Presents a concept followed by scales with opposite meanings (e.g. strong, weak). Considerable work can be involved in assuring that the words capture what you want to capture + important to assure that opposite words have opposite meanings. |
|
Misinterpretation |
Primarily because a question has been poorly worded and/or has not been pretested on possible misinterpretations. Minimize misinterpretation possibilities: check Q's at draft + Pilot study |
|
Common problems with wording |
Leading questions: lead the respondents to an answer that may not be true. "Why do you think campus is unethical?" Solutions: describe campus (open-ended); "Campus is ethical" (agree/disagree); which position describes it best (semantic differential). Double-barreled question: asks for one response while 2 questions are asked. Negative wording: phrasing questions as negatives (better not to do this). The double negative: negative wording AND double-barreled |
|
In a global world, languages and dialects need to be considered in question design. |
Local consultant can ensure that subtle shades of meaning are translated appropriately. |
|
Funnel & Inverted funnel |
Funnel: start from broad and move to specific questions Inverted: start from specific and move to broad questions With mail, people can choose their own order, but with phone and face-to-face you need to establish trust before going into sensitive topics, plus you need to get attention Questions with the same theme should be together. Sometimes a question is "sandwiched" between unrelated questions to see if questions on the same topic are answered consistently |
|
Problems of survey on the web |
+speed, low cost, geographic coverage, multimedia content, rapidly experiment with survey design - We don't know who the population is (no universal list from which to draw) - Internet users are different from non-internet users. 2 basic answers to this problem: recruit participants by traditional means, estimate what the results would have been had the defined research population been sampled - Web surveys may have a lower response rate than more traditional means |
|
Control |
Less control may increase dropping out. Web-based surveys may indirectly control the time spent on open-ended Q's by limiting the length of answers. Can also exert control by not letting respondents click away from a video before they've seen it all |
|
Improving response rates |
Need to be reminded More likely to respond to credible source than marketing Perceived sponsor can affect respondents' attitudes towards survey Prefer to complete survey at own time, own convenience Want to be assured that sensitive info is kept confidential Assured that not penalized when dropping out Might want to see results Small "thank you" and/or reward |
|
Mobile surveys (small screen) |
+ Users may feel less pressured to give socially desirable responses - Multitasking - Must have the ability to hear audio, pay sufficient attention - Make effort to complete - Take longer to complete surveys (doing other things in between) - Need to consider layout |
|
People might see the survey as junk mail |
- Use a preliminary postcard/letter to tell about the survey and ask for participants - Include a phone number that respondents can use to call and verify - When calling, send a letter first to announce your call - Follow-up reminder postcards |
|
Respondent's willingness depends on |
- Level of interest in the topic - Perceived benefit - Survey sponsor - Interviewer's characteristics - Any incentives - Ability (time, educational, cultural) |
|
PHONE |
+ Large samples in a short time + Most households have phones + Potential to assist respondents - Limited to a few short Q's - Consumer resistance - Barriers of "Caller ID" & "Do Not Call" lists - 'Cell-only' users differ demographically from 'landline' users - Respondents' multitasking |
|
MAIL |
+ Resp. have more time to consider answers and can answer in any order + Good for delivering Q's on complex issues that require thought + Suited to asking personal lifestyle Q's (confidentiality required) + May remain in front of respondents as a reminder + May be seen as more legitimate than phone or e-mail - Low response rate - Don't know who completed the survey - Can only target literate respondents |
|
WEB, EMAIL, SOCIAL MEDIA |
+ Administered Quickly, Inexpensively, Flexibly + Own time + Target special interests groups + Present multimedia content + Engage respondents in "real-time chat" + May elicit more sensitive info + Can be analyzed in real time as data comes in + Surveys can be e-mailed or posted to a website + Can detect respondents' patterns + Software makes design and collecting easy + Facilitates pretesting and experimenting (web) + Geographic and demographic coverage + High speed + Snowball sampling
- Results may not be generalizable - Can't control survey representation - May need other modes to drive to survey website - Don't know who completed - E-mail invitations can be seen as spam - Requires more effort than to phone call - Respondents' multitasking - Problematic sampling frames - Low response rates - Difficult for surveys to stand out from others - Decreased participation willingness - Different types of user per platform |
|
FACE-TO-FACE |
+ Respondents may be less likely to refuse + Potential to assist respondents + Able to assess nonverbal responses + No technology required + Some control over timing and pacing of interview - Time consuming - Expensive - Repeat-visits if resp. not home - Resp. may feel less confidential - Some geographical areas may be hazardous - Interviewers need to be trained and skilled |
|
Capturing and processing survey data |
Raw data can be typed into statistical software Op-scan forms (fill-in bubbles) can be read by optical scanners Web-based surveys: respondents enter the data themselves A hybrid version of the phone survey is "press one": respondents enter data themselves |
|
Using other people's surveys: wise to check whether or not some answers already exist before launching own survey (literature review) |
3 potential problems of using other people's surveys: - May be proprietary - May be from a source that has an agenda in the research - May not be able to process the data to meet your own needs An intermediate step between using a publicly available survey and designing your own is to hire a professional survey firm to do the work for you |