112 Cards in this Set
Reasons for Research in Public Relations
|
Demonstrate to clients that public relations efforts produce impact on audiences (efficacy)
Understand the expectations and needs of clients and audiences
Adjust public relations efforts that were not successful to perform better in the future |
|
What is research?
|
Collection of data
Data: points of observation/information units we collect via some methodology
Formal and informal methods; both types have advantages and disadvantages |
|
Formal research
|
Data Collection: controlled, objective
Data Assessment: systematic observation, reliable measurements, validity can be measured, deductive interpretation
Outcomes: description, understanding, prediction, control |
|
Informal research
|
Data Collection: uncontrolled, subjective, random/purposeful observation
Data Assessment: unreliable measurements, validity is assumed rather than measured, inductive interpretation
Outcomes: description, understanding |
|
Theoretical
|
seeks to provide the underlying framework for the study of public relations.
It creates a “body of knowledge” about public relations: the concepts of interest and importance, the relationships between those concepts, and the outcomes as they might be applied in actual practice |
|
Applied
|
seeks to use theory-driven research in business world situations.
It is strategic research to develop a campaign/program to be implemented in practice. |
|
Types of research questions
|
RQ of definition
RQ of fact
RQ of value
RQ of policy |
|
RQ of definition
|
seek to define what we will observe/study
|
|
RQ of fact
|
(quantitative, empirical questions) – seek to compare across or between groups.
|
|
RQ of value
|
(both quant/qual) ask “how good” or “how well” something is
|
|
RQ of policy
|
strategic; ask “what should be done?” They fall in the management domain, not the applied researcher’s.
|
|
Public Relations’ function
|
To identify avenues for survival and advancement of the entity (organization, group, or individual),
Establish communication programs or campaigns that enhance the organization’s advancement (and thus survivability), and
Maintain those programs against competitors. |
|
Types of publics
|
Belonging to organization: external, internal, intervening
Involvement: active, passive, ignorant |
|
public relations
|
management function that conducts research about an organization and its publics to establish mutually beneficial relationships through communication
|
|
Targets
|
Target population
Target public
Target audience |
|
Target population
|
demographics, lifestyle
|
|
Target public
|
shared self-interest and communication (who their gate-keepers are)
|
|
Target audience
|
activists or trend setters within the target public, opinion leaders
|
|
Identifying publics
|
Geographic location
Demographics and psychographics
Power, position, reputation
Organizational membership, role in decision-making
Behavior (latent, aware, or active) |
|
Programmatic research
|
On-going process
Continuing cycle of data gathering and analysis
Both formal and informal in nature
Starting point/benchmark is important
In practice, practitioners focus on pressing problems (budget/expertise constraints) |
|
PR Research Assumptions
|
The decision-making process is basically the same in all organizations.
All communication research should (1) set objectives, (2) determine a strategy that defines those objectives, and (3) implement tactics that bring those strategies to life.
Research can be divided into three general phases: (1) development (secondary research); (2) refinement; and (3) evaluation of the program.
Communication research is behavior-driven (applied) and knowledge-based (informed by theory). |
|
all communication research should
|
(1) set objectives,
(2) determine a strategy that defines those objectives, and
(3) implement tactics that bring those strategies to life |
|
research can be divided into three general phases:
|
(1) development – secondary research
(2) refinement
(3) evaluation of the program |
|
Establishing a Research Program
|
1. Defining public relations problems – Situation Analysis: What is happening now?
2. Planning and Programming – Strategy: What should we do and say, and why?
3. Taking action and Communicating – Implementation: How and when do we do and say it?
4. Evaluating the program – Assessment: How did we do?
Pre-evaluation – environmental monitoring/scanning to create a body of knowledge about an issue/client/organization that detects and explores potential concerns. |
|
Goals
|
General outcome expected upon completion
Long-term
Directional |
|
Objectives
|
The more specific, the better
Based on projected and actual program outputs (tactics)
Evaluated according to specific outcomes (effects of the tactics) |
|
Types of Objectives
|
Informational
Motivational
Behavioral |
|
Informational
|
establish what knowledge should be provided or is needed by the publics, specify informational tactics to be used
|
|
Motivational
|
test whether or not the information is having an effect and tactics are having an impact on future behavior
|
|
Behavioral
|
aim at a certain behavioral outcome, action by the publics
|
|
Writing Objectives
|
Be as specific as possible; remember the SMART criteria for well-formulated objectives:
Specific
Measurable
Attainable
Reasonable
Time-bound |
|
Evaluation Strategies
|
Success can be relative, but it is always measured against the objectives.
Monitor and track developments; design a research strategy that:
Has been pretested
Takes account of the relationships between outputs and outcomes
Continually monitors progress toward the goals and corrects when necessary |
|
Principles of Ethical Research
|
Overall principle: minimize harm, maximize benefits
Respect for persons’ autonomy
Beneficence (risk assessment, secure well-being)
Justice |
|
Researcher-subject relationships
|
Researchers must:
Provide accurate information
Establish trust
Respect the subjects |
|
Vulnerable subjects
|
Belmont Report; Title 45, Code of Federal Regulations, Part 46
Subpart A – “common rule” – capacity and voluntariness
Subpart B – fetuses, pregnant women, in-vitro fertilization
Subpart C – prisoners
Subpart D – children |
|
Informed Consent Procedure
|
Provide information on the study and its purpose
Disclose risks, benefits, alternatives, and procedures of the study
Answer questions
Enable the informed decision to participate
Must be received prior to the study, not in the aftermath |
|
Informed Consent Elements
|
Competent participants
Researcher discloses relevant information
Participant comprehends information
Participant agrees to take part in the study
Participant’s agreement is voluntary
Participant can withdraw at any time during the study |
|
Types of Risks for Research Participants
|
Physical (medical research)
Psychological
Social
Legal
Economic
In some countries and under some circumstances – political |
|
Historical and Secondary Research
|
Environmental scanning/monitoring; systematically searching out available research through a search strategy
No original data are produced
Gathering and analyzing data and findings produced and analyzed previously
Results cannot be generalized to a larger population – informal research |
|
Important Questions for Search Strategy
|
What am I looking for (question of definition)?
Where do I begin?
When do I end my search? |
|
Primary sources
|
actual documents as written by the researchers themselves, contain original data produced by the research study (research reports and articles)
|
|
Secondary sources
|
report on the findings of the primary source (textbooks)
|
|
Tertiary sources
|
summary of the secondary source’s report on the primary report; must be used with caution
|
|
Books
|
most trustworthy; in-depth analysis of a particular subject. May be outdated by the time they are published
|
|
Periodicals
|
published on a particular cycle (journals, magazines, annual reports, newsletters, newspapers)
|
|
Databases
|
sets of documents available through computers (Lexis/Nexis, EBSCO, government databases)
|
|
Unpublished papers
|
“white/position papers,” conference presentations
|
|
Websites
|
should be used with particular caution
|
|
Critical standards for document evaluation
|
Are the main issues/points clearly identified?
Are the writer’s underlying assumptions or arguments generally acceptable?
Is the evidence presented adequate, evaluated clearly, and supportive of the writer’s conclusions?
Is there a bias, and does the writer or publisher address that bias?
How well is the document written/edited? |
|
Content analysis
|
systematic, objective, and quantitative method for researching messages
What types of messages? Documents, speeches, media messages; video content and scripts, photographs; interviews, focus groups, etc. |
|
Qualitative analysis of texts
|
discourse/textual/rhetorical analysis
|
|
Advantages of Content Analysis
|
Ability to objectively and reliably describe a message through the use of statistics
Provides logical and statistical bases for understanding how messages are created
Fully controlled by the researcher |
|
Limitations of Content Analysis
|
Requires access to actual messages
Time-consuming
Requires reliability tests with multiple coders
Will never tell the researcher how the messages are perceived |
|
Manifest content
|
is what you actually see and count, direct meaning of the message (denotative meaning)
|
|
Latent content
|
more qualitative, interpretative (connotative), deals with the underlying or deeper meanings of the message. Implies judgment and requires a scale or another measuring system.
|
|
Units of analysis
|
Things you are actually counting/analyzing
Berelson’s five units of analysis:
Symbols/words (company name, logo)
Characters (race, occupational/stereotypical roles)
Time/space measures (story placement, airtime, size of pictures)
Items (advertisement, editorial, etc.)
Themes and frames (latent, must be operationally defined) |
|
Category Systems – Guidelines
|
The categories must reflect the purpose of your research
The categories must be exhaustive
All categories must be mutually exclusive
Placement of instances in one category must be independent of other categories
All categories must reflect one classification system |
|
Simple random sampling
|
random selection, similar to a lottery; must return the selected message into the pool after the drawing, may not be representative
|
|
Systematic random sampling
|
every nth message is selected; better representation but requires complete listing of items
|
|
Systematic stratified random sampling
|
every nth message in a subpopulation is selected; better representation from a known population
|
|
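The three random selection schemes above can be sketched in Python. This is a minimal illustration, assuming a hypothetical sampling frame of 100 numbered releases and an invented print/online split for the strata:

```python
import random

# Hypothetical sampling frame: 100 press releases identified by number.
frame = [f"release_{i}" for i in range(1, 101)]

# Simple random sampling: lottery-style draw from the full frame.
simple = random.sample(frame, k=10)

# Systematic random sampling: every nth item after a random start;
# requires a complete listing of items.
n = len(frame) // 10           # sampling interval
start = random.randrange(n)    # random starting point
systematic = frame[start::n]

# Systematic stratified sampling: every nth item within each known
# subpopulation (the print/online strata here are made up).
strata = {"print": frame[:50], "online": frame[50:]}
stratified = {name: items[random.randrange(5)::5][:5]
              for name, items in strata.items()}
```

Note the trade-off the cards describe: `simple` may under-represent a stratum by chance, while `systematic` and `stratified` spread the draw across the frame but depend on having the full list.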
Coding
|
Identification and placement of units of analysis into the category system; quantification of messages
Most coding involves “nominal” data that only identify differences between categories
Latent content suggests the use of ordered or ranked categories that require judgments about underlying themes and may create validity and reliability problems |
|
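Nominal coding and simple counting can be sketched in a few lines of Python; the tone categories and coded items below are hypothetical:

```python
from collections import Counter

# Hypothetical coded units: each news item placed into exactly one
# mutually exclusive tone category (nominal data).
coded_items = ["positive", "neutral", "negative", "positive",
               "neutral", "positive", "negative", "neutral"]

categories = {"positive", "neutral", "negative"}

# Exhaustiveness check: every coded unit fits a defined category.
assert all(item in categories for item in coded_items)

# Simple counting (descriptive statistics): frequency per category.
counts = Counter(coded_items)
print(counts)
```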
Validity
|
Are you really coding what you claim to be coding?
What are the units of analysis, and how are they defined? – operational definition
What is the category system?
Sampling may compromise validity if some types of messages are left out or if the messages were selected around unusual events – a spike in coverage |
|
Reliability
|
Amount of error coders make when placing content into categories
If only one coder is involved – intra-coder reliability (code twice after a period of time)
With multiple coders – inter-coder reliability
Simple reliability coefficient: Reliability = 2M/(N1 + N2), where M is the total number of coded items agreed upon and N1, N2 are the numbers of items coded by each coder |
|
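The simple reliability coefficient above translates directly into code; the 45-agreements-out-of-50-items figures below are hypothetical:

```python
def reliability(agreements, n1, n2):
    """Simple inter-coder reliability: R = 2M / (N1 + N2), where M is
    the number of coding decisions both coders agreed on and N1, N2
    are the numbers of items coded by each coder."""
    return (2 * agreements) / (n1 + n2)

# Two coders each coded 50 items and agreed on 45 of them.
print(reliability(45, 50, 50))  # 0.9
```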
Analysis
|
Simple counting – descriptive statistics
Inferential statistics – generalizations (ANOVA, t-tests, chi-square tests, logistic regressions, etc.)
Qualitative content analysis – NVivo, QDA Miner, and other specialized packages |
|
Simple counting
|
descriptive statistics
|
|
Inferential statistics
|
generalizations (ANOVA, t-tests, Chi-square tests, logistic regressions, etc.)
|
|
Qualitative content analysis
|
NVivo, QDA Miner, and other specialized packages
|
|
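As a sketch of the inferential side, a chi-square goodness-of-fit statistic can be computed by hand; the observed category counts are hypothetical:

```python
# Observed coverage counts per tone category vs. an expected
# equal distribution across the three categories.
observed = [30, 50, 20]
expected = [sum(observed) / len(observed)] * len(observed)

chi_square = sum((o - e) ** 2 / e for o, e in zip(observed, expected))

# The critical value for df = 2 at the .05 level is 5.991; a larger
# statistic suggests coverage differs from an even spread.
print(round(chi_square, 2))  # 14.0
```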
Measurement Levels
|
Categorical
Continuous |
|
Categorical
|
show classes to be counted
Nominal (indicate differences among classes)
Ordinal (differences + order) |
|
continuous
|
data are based on a continuum
Interval (order + equal distance between points)
Ratio (equal distance + true zero point) |
|
Nominal
|
indicate differences among classes
|
|
Ordinal
|
differences + order
|
|
Interval
|
order + equal distance between points
|
|
Ratio
|
equal distance + true zero point
|
|
reliability
|
Ability to measure the same thing comparably over time
Looks at the error found in measurement: instrument errors and application errors
Key: maximize systematic (known) error and minimize random (unknown) error |
|
Ways to increase reliability
|
Repeated use (test-retest reliability)
Split-half (internal consistency) reliability |
|
Face validity
|
obvious, surface meaning, based on knowledge, authority, and credibility
|
|
Content validity
|
involves other experts assessing the measure
|
|
Criterion-related validity
|
measure is related to other established measures and successfully predicts behavior
|
|
Construct validity
|
how the measure relates to an underlying concept (construct) and how it is used in statistical analysis (factor analysis)
|
|
Measuring what you cannot see
|
Behavior (the highest interest in public relations) can rarely be observed
Measuring attitudes – “predispositions to behavior” – and opinions (fleeting attitudes) based on:
Knowledge about the behavior
Feeling about the behavior
Possible behavior before actual behavior |
|
Attitude scales
|
General rules for construction of scales:
Assume interval level of measurement
Must have bipolar ends
Must have a neutral point
Must include at least two items per scale or subscale to ensure reliability |
|
Thurstone scale
|
requires multiple steps and a number of experts/judges, high on reliability; good for measuring new concepts
|
|
Likert-type scale
|
measure reaction to several items on a continuum (5/7-point scale)
|
|
Semantic differential scale
|
measure the meaning associated with attitudes and beliefs; there are no pre-designated responses to respond to.
|
|
Sampling
|
The science of systematically drawing a valid group of objects from a population reliably
|
|
Three ways to sample messages and people
|
Census
Non-probability (convenience) sample
Probability (scientific) sample – based on random selection; enables generalization to larger populations |
|
Universe
|
general concept of who or what will be sampled
|
|
Population
|
clearly specified and described part of the universe
|
|
Sampling frame
|
list of all messages/people to be surveyed
|
|
Sample
|
actual messages/people chosen for research
|
|
Completed sample
|
selected messages and people who responded to the survey
|
|
Coverage error
|
results from not having an up-to-date sampling frame
|
|
Sampling error
|
results when you do not sample all the members of the sampling frame; can be estimated only for scientific samples
|
|
Measurement error
|
found when people misunderstand or incorrectly respond to questions (in sampling people, not messages)
|
|
Reduce Coverage error
|
verify and assess the quality of the list; know how it is maintained and updated
|
|
reduce Sampling error
|
select proper sampling procedures and an appropriate sample size, and define the level of confidence
|
|
Reduce Measurement error
|
pre-test your instrument (questionnaire); make sure questions are understood by the subjects in a consistent way
|
|
Census
|
All elements in the population are included
Universe, population, sampling frame, and sample are the same
Actual conclusions are inferences
If at least one element has been missed, we can only infer conclusions to the population – this lack of confidence is called “bias”
The fewer the elements missing, the smaller the bias and the more accurate the results
In practice, possible only with small, known populations |
|
Convenience sampling
|
selecting available subjects
|
|
Quota sampling
|
selecting available subjects that meet a particular population distribution
|
|
Purposive sampling
|
selecting subjects based on researcher’s knowledge of the population and goals of research
|
|
Volunteer sampling
|
selecting subjects who agree to be a part of the study, often self-nominated
|
|
Snowball sampling
|
selecting participants based on other participants’ recommendations
|
|
Probability
|
Allows for generalizations to the population it was drawn from
Based on random selection – every element in the population has an equal chance of being chosen
In terms of time, can be of two types:
Cross-sectional sample (taken at a particular point in time; a snapshot sample)
Longitudinal sample (taken from the population over time) |
|
Trend sample
|
different people from the population at different points of time to track the dynamic developments (trends) in the population
|
|
Panel sample
|
follow a randomly selected sample from the population over time; high “mortality” rates
|
|
Cohort sample
|
sample of different people who meet certain characteristics taken over time
|
|
Normal Curve Features
|
Perfectly symmetrical (50% of all sample means are below the population mean, 50% above)
68% of all sample means are within 1 SD of the population mean
95% of all sample means are within 2 SDs
99.9% of all sample means are within 3 SDs
Z-scores are associated with probability levels: for a 95% confidence interval Z = 1.96, for 99% Z = 2.58, for 99.9% Z = 3.09 |
|
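Those z-scores drive margin-of-error calculations for sample proportions; a small sketch using the standard formula, with hypothetical survey numbers:

```python
import math

def margin_of_error(p, n, z=1.96):
    """Margin of error for a sample proportion p with sample size n,
    using the z-score for the chosen confidence level
    (1.96 for 95%, 2.58 for 99%)."""
    return z * math.sqrt(p * (1 - p) / n)

# Hypothetical survey: 55% favorable responses from 400 respondents.
moe = margin_of_error(0.55, 400)
print(round(moe * 100, 1))  # 4.9 (percentage points)
```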
Simple random sampling
|
similar to hat-drawing, must have a complete sampling frame
|
|
Systematic random sampling
|
(simple or stratified) – selection is based on a system (every kth element, known strata)
|
|
Cluster sampling
|
two waves: first, randomly select a number of clusters, then sample out from each cluster
|