140 Cards in this Set

  • Front
  • Back
research process
problem formulation/research question->research design development->collect data (sampling and survey methods)->process and analyze data->interpret the results->write it up.
philosophical paradigms
positivism
constructivism/relativism
positivism
nature of reality: there are universal, essential truths; reality/facts are prior to and independent of theory. Relationship of knower to known: they are independent, and the knower can objectively assess reality through observation and experiment.
what are the criticisms of positivism?
observation is fallible and theory-laden, and the role of culture cannot be ignored. Much of what was regarded as universal truth has been proven wrong, and it was used to justify colonialism. It also cannot really predict, because the world never operates under conditions of complete closure as in labs or hypothetical situations.
What are the strengths of positivism?
this approach may work relatively better in the field of "hard science," and it is difficult to deny that it has made significant contributions to the advancement of knowledge.
constructivism/relativism
nature of reality: there is no universal, essential truth; reality is socially constructed, and our observation is influenced by what we already know. Relationship between the knower and the known: they are never independent but highly dependent; what knowers know prior to observation influences what they observe and how they interpret it.
what are the criticisms of constructivism/relativism?
there are materials prior to human existence, and it cannot answer what the criteria/standards of truth are.
what are the strengths of constructivism/relativism?
it made "hard science" pause and provided a new way of thinking that contributed greatly to post-positivism, realism, pragmatism, and critical theory regarding the influence of culture on science. It can liberate members of society who feel inadequate by helping them understand that it is not them, but the society. This point of view helps social workers be non-judgmental.
what are specific research methods?
quantitative and qualitative
quantitative methods
objective; deductive (theory->hypothesis->observation->confirmation); forms hypotheses prior to data collection; aims to produce precise and generalizable findings.
qualitative methods
subjective; inductive logical process (observation->pattern->hypothesis->theory); seeks to generate hypotheses; more flexible than quantitative methods; seeks deeper understanding of the meanings of human experience.
deductive process
Deductive: theory->hypothesis->observation->confirmation.
inductive process
Inductive: observation->pattern->hypothesis->theory.
hypothesis
need to distinguish concepts, attributes, variables, and constants. A hypothesis should be value-free, narrow, specific, and clear; have more than one possible answer; be answerable by observable evidence (testable); address the decision-making needs of agencies or practical problems in social welfare; and have significance for guiding social welfare policy or social work practice.
types of variables
IV, DV, control, mediating, moderating
independent variable
exogenous variables: influence, cause, or affect the phenomenon being studied; precede the dependent variable in time; are not determined within the system under investigation (their causes lie outside that system). Classic examples: background characteristics of individuals, such as race/ethnicity, sex, age, region.
dependent variable
endogenous variables: the variables most interesting to researchers; influenced by independent variables; determined within the system under investigation. E.g., student achievement scores, patient blood pressure.
mediating variable
intervening variable: a variable that comes between the IV and DV in the causal chain. Conceptually, it is the transformation process/change mechanism through which a stimulus affects behavior.
moderating variable
not influenced by the IV, but can affect the strength or direction of the relationship between the IV and DV; there is an interaction between the IV and the moderating variable on the DV. When a variable moderates the relationship between two other variables, it is said to be a moderator; the moderator and the variable whose relationship with the DV it moderates are also called interacting variables.
control variable
"did you control for…?" Conclusions can be radically different when confounding or lurking variables are controlled (in other words, "held constant" or "partialled out").
types of relationships between variables
no relationship, correlational, causal, linear, non-linear, curvilinear
correlational v. causal
the correlation between the number of roads built in Europe and the number of births in the US is not causal; there could be a common cause for both. What could it be? The world economy?
linear v. non-linear
linear: positive and negative; non-linear: curvilinear.
limitations of correlational analysis
curvilinear relationships, truncated or restricted range, bivariate outliers.
degree of control
experimental, quasi-experimental, non-experimental
number of subjects in a study
single vs. group
nature of data
quantitative, qualitative, mixed
quantitative
used mostly in quantitative methods: things that are quantifiable (things you can count or rank, e.g. temperature, depression score, age).
qualitative
data used in qualitative methods. Data that are not easily reduced to numbers. Meanings. E.g. transcript of a focus group, face-to-face in-depth interviews.
mixed
a design for collecting, analyzing, and mixing both quantitative and qualitative data in a single study or series of studies to understand a research problem. Improves generalizability (representativeness) while providing deep, contextual understanding of the phenomenon of interest. (Classification according to emphasis: qual, quant, equal; according to sequence: qual first, quant first, concurrent.)
time-dimension
cross-sectional
longitudinal
purpose of study
exploratory, explanatory, descriptive, evaluative
exploratory
to provide beginning familiarity with a topic/area. Typical when a researcher is examining a new interest or when the subject is relatively new and unstudied; usually uses a small, non-representative sample (insufficient to provide conclusive answers to research questions). E.g., a survey via unstructured, open-ended interviews with recent alumni and their agency supervisors and administrators to obtain a preliminary sense of the extent to which graduates were utilizing research in their practice.
descriptive
to describe situations or events (US census).
explanatory
takes the description further and explains why (e.g., why a battered woman returns or does not return to a violent partner).
evaluative
to evaluate social policies, programs, and interventions. Could be done via exploration, description, and explanation.
construct an instrument
to develop and test measurement instruments that can be used by other researchers or by practitioners as part of the assessment or evaluation aspects of their practice. The question at hand is whether a measurement instrument is useful and valid enough to be applied in practice and research.
external validity
generalizability: to draw a general conclusion from particular instances. Dealt with via sampling. How do we know that what we learn from a sample of people can be applied to the whole population?
internal validity
logical validity of the causal inferences; dealt with via research design.
what is sampling
selecting a group of people/objects (the sample) from those we want to study (the population) in order to estimate population characteristics (parameters) based on sample characteristics (statistics).
when is a sample not necessary?
a sample is not necessary when we can ask everybody the question we want answered; then we are dealing with the whole population. Likewise, if everyone were alike, a sample would not be necessary.
population
those whom we want to study; those to whom we want to generalize the findings.
sample
a special subset of the population: those who are actually in the study. We observe them to make inferences about the nature of the total population. Chief criterion of quality: the degree to which it is representative, i.e., the extent to which the characteristics of the sample are the same as those of the population from which it was selected.
parameter
population characteristics; the summary description of a given variable in a population.
statistic
sample characteristics; the summary description of a variable in a sample.
sampling error
the difference between the true population parameter and the estimated population parameter. Even the most carefully selected sample won't provide a perfect representation of the population from which it was selected; there will always be some degree of sampling error. Even if you identify the population of interest perfectly, you may not have access to all of them; even if you do, you may not have a complete and accurate enumeration (sampling frame) from which to select; even if you do, you may not draw the sample correctly or accurately; and even if you do, respondents may not all come and may not all stay.
how to decrease sampling error
one way to decrease sampling error is to increase the n of your study: there is a negative relationship/association between sampling error and sample size.
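The negative association between sampling error and sample size can be shown with a small simulation. This is an illustrative sketch only; the population, sizes, and trial counts are invented, not from the card.

```python
# Sketch: average sampling error shrinks as sample size n grows.
# The population and sample sizes here are hypothetical.
import random
import statistics

random.seed(42)
population = [random.gauss(50, 10) for _ in range(10_000)]
true_mean = statistics.mean(population)

def mean_sampling_error(n, trials=500):
    """Average absolute gap between a sample mean and the population mean."""
    errors = [abs(statistics.mean(random.sample(population, n)) - true_mean)
              for _ in range(trials)]
    return statistics.mean(errors)

small_n_error = mean_sampling_error(25)
large_n_error = mean_sampling_error(400)
# The larger sample yields a smaller average sampling error.
```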
probability sampling
the most effective method for selecting study elements: avoids researcher bias in element selection and permits estimation of sampling error.
random selection
each element has an equal chance of selection, independent of any other event in the selection process. Procedure: use a table of random numbers, a computer random number generator, or a mechanical device.
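As a concrete sketch, random selection with a computer random number generator can look like this; the frame of 100 numbered elements is hypothetical.

```python
# Sketch of random selection: every element in the frame has an equal
# chance of being chosen, and draws are made without replacement.
import random

random.seed(1)
sampling_frame = list(range(1, 101))        # elements numbered 1..N
sample = random.sample(sampling_frame, 10)  # n = 10, no element repeated
```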
logic of probability sampling
probability theory enables researchers (1) to estimate population parameters and (2) to judge how likely it is that the estimated parameters accurately represent the actual parameters. Chief principle: every member of the total population must have a known, nonzero probability of being selected into the sample. Purpose: to select a set of elements from a population in such a way that descriptions of those elements accurately portray the total population from which they were selected. Random selection is the key to this process.
types of probability sampling
simple random, systematic, stratified, multi-stage cluster
simple random
random selection: each element has an equal chance of selection, independent of any other event in the selection process. Procedure: use a table of random numbers, a computer random number generator, or a mechanical device.
systematic
assign a number to every unit in the sampling frame (1 to N); decide the sample size you want or need (n); N/n = k, the interval size; randomly select a number from 1 to k, then take every kth unit. Assumes the population is randomly ordered; if not, we might by accident draw a non-probability sample. Not much different from simple random sampling. Advantage: easy.
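The procedure above can be sketched in a few lines; N, n, and the frame are made up for illustration.

```python
# Systematic sampling sketch: interval k = N / n, random start in 1..k,
# then every kth unit. The frame is simply the units numbered 1..N.
import random

random.seed(0)
N, n = 1000, 50
k = N // n                    # interval size
start = random.randint(1, k)  # random start between 1 and k
sample = list(range(start, N + 1, k))[:n]
```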
stratified
AKA quota RS. To ensure representation of each stratum (sometimes oversampling smaller population groups). Involves dividing the population into smaller subgroups, called strata, and then drawing separate random or systematic samples from each stratum. Eliminates sampling error on the variable used for stratification: e.g., in a sample stratified by ethnicity, the sampling error on that variable is zero.
multi-stage cluster
AKA multi-stage RS, area sampling. The final units included in the sample are obtained by first sampling larger units (clusters) that contain the smaller units: divide the population into clusters, randomly sample clusters, then randomly sample individual units from the selected clusters.
1. first stage: select a sample of areas, e.g. census tracts
2. intermediate stage(s): select sample of smaller areas within each area selected in first stage.
3. final stage: select sample of units from each area selected in previous step.
non-probability sampling
the use of procedures to select a sample that do not involve random selection. Main issues: likely to misrepresent the population, and it may be difficult or impossible to detect this misrepresentation.
reliance on available subjects
"man on the street," college psychology majors, available or accessible clients, volunteer samples. Problem: risky; we have no evidence of representativeness, so generalization suffers. Still useful.
purposive or judgmental
AKA judgmental sampling: the investigator uses judgment and prior knowledge to choose people for the sample who will best serve the purpose of the study, e.g. deliberately sampling an extreme group (deviant case sampling: studying cases that don't fit into fairly regular patterns of attitudes and behaviors).
quota
quota (or stratified) sampling without random selection: dividing a population into various categories and setting quotas on the number of elements to be selected from each category. Once a quota is reached, no more elements from that category are put in the sample; interviewers select the sample, interviewing people until they have met all of the quotas on each variable.
snowball
one person recommends another, who recommends another, and so on. A good way to identify hard-to-reach populations, e.g. homeless persons or undocumented immigrants.
survey research methods
self-administered questionnaires, online surveys, interview surveys, telephone surveys
self-administered questionnaires
mail distribution and return, cover letters, follow-up mailings, response rates, and ways to improve them. Return rate (= response rate): the higher the response rate, the less significant the response bias (50% is acceptable, 60% good, 70% very good).
drop-off example: to learn how dorm dwellers at your university feel about the lack of vegetarian cuisine in their dorm dining halls, where you expect a low response rate, you might tell residents about your research in person, explain the benefits that might come from their responses, and answer their questions about the survey. You would pick a time of day when the majority of dorm residents are home and then work your way from door to door.
improving response rates: a well-written cover letter, follow-up mailings, offering to share the survey results, paying respondents, raffles; incentives should not coerce participation.
online surveys
distributed via email with a link or through a website (e.g. SurveyMonkey).
interview surveys
higher response rates than mail surveys; minimizes "don't know" or "no answer" responses; allows interviewers to observe respondents while asking questions. Interviewing doesn't automatically mean collection of qualitative data.
advantages of online surveys
quick and inexpensive (no need for manual data entry); ideal for some populations.
disadvantages of online surveys
no representativeness; technological problems; could be regarded as spam (lowering the response rate).
the role of the survey interviewer
to ask questions orally and record/mark the answers.
telephone surveys
computer-assisted telephone interviewing (CATI): the interviewer uses a computer to read the questions and to enter responses (data).
strengths of survey research
useful in describing the characteristics of a large population; makes large samples feasible; flexible, since many questions can be asked on a given topic (many variables); flexibility in analysis; the same questions are asked of all respondents.
weaknesses of survey research
often relies on self-reported answers; standardization may yield superficiality; doesn't deal with context; many respondents are forced to select categories that are not the most appropriate; often cannot be modified in the field; cannot establish causality beyond doubt (a difference from experiments).
what to consider when deciding a survey method
surveys are the best method for describing a population too large to observe directly (attitudes and orientations, e.g. public polls). Also consider: cost, time, and geographic restrictions; sensitivity of the research topic (confidentiality and anonymity); complexity of the questionnaire; respondents' ability to read and understand the questionnaire; possibility of interviewer bias; expected response rate.
measurement instrument types
open-ended and closed-ended questions, contingency questions, and when to use them.
sources of data
i. Secondary Analysis (Advantages & limitations of secondary analysis)
ii. Historical records
iii. Systematic observations (structured, unstructured)
iv. Surveys
measurement levels
nominal, ordinal, interval, ratio
quality of measurement (measurement error)
i. Systematic measurement error
ii. Random measurement error
reliability
closely related to the concept of consistency: does a measurement produce the same results under various conditions? Is it dependable, repeatable, consistent? If the person/object hasn't changed, a reliable measure produces the same results. The more reliable, the less random error.
types of reliability
test-retest, interobserver, coefficient alpha, split half, understanding SPSS output of reliability analysis
test-retest reliability
to assess the consistency of the measure from one time to another: administer the instrument at two times to multiple persons and compute the correlation between the two measures. Assumes there is no change in the underlying trait between time 1 and time 2.
inter observer reliability
to assess the degree to which different raters/observers give consistent estimates of the same phenomenon. Are different observers consistent? Can be established outside of your study in a pilot study; can look at percent agreement (especially with category ratings) or use correlation (with continuous ratings).
coefficient alpha
to assess the consistency of results across items within a test.
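A hedged sketch of the usual computation (Cronbach's alpha) on invented item scores: alpha = k/(k-1) * (1 - sum of item variances / variance of the total scores).

```python
# Coefficient (Cronbach's) alpha on made-up data: three items answered
# by five respondents. Higher alpha = more internal consistency.
import statistics

def cronbach_alpha(items):
    """items: one list of scores per item, same respondents in order."""
    k = len(items)
    item_var_sum = sum(statistics.pvariance(scores) for scores in items)
    totals = [sum(resp) for resp in zip(*items)]  # total score per respondent
    return k / (k - 1) * (1 - item_var_sum / statistics.pvariance(totals))

items = [[3, 4, 3, 5, 4],
         [2, 4, 3, 5, 4],
         [3, 5, 3, 4, 4]]
alpha = cronbach_alpha(items)  # about 0.90 for these invented data
```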
split half reliability
to assess the consistency of the results of two subtests of one test.
validity
the degree of fit between the construct and the instrument/method we are using to measure it; the extent to which an empirical measure adequately reflects the real meaning of the concept under consideration. A measure can be reliable without being valid, but it cannot be valid without being reliable.
types of validity
face, content, criterion, construct
face
does it seem reasonable? The measure seems valid "on its face"; a judgment call, and probably the weakest form of measurement validity.
content
check the measure against the relevant content domain (e.g. math skills); also a judgment call. Check whether the measure covers the entire range of meanings within a concept; this can be made relatively rigorous with a systematic check. What constitutes the content domain? (e.g., self-esteem, parent attentiveness).
construct
convergent validity and divergent (discriminant) validity: how much a measure relates to other variables as expected within a system of theoretical relationships, as reflected by the degree of its convergent and discriminant validity. Convergent validity: how well the measure correlates with previously validated measures of related constructs (e.g., the correlation of a new depression scale with the BDI), based on the principle that measures of constructs that are related to each other should be strongly correlated.
criterion
concurrent validity and predictive validity: check the measure against a relevant criterion (e.g., predictive validity, concurrent validity, known-groups validity). Concurrent: the degree to which a measure corresponds to an external criterion that is known concurrently (cross-sectional). Predictive: the measure's ability to predict something it should be able to predict in the future (longitudinal), e.g., high SAT scores -> success in college.
discriminant
based on the principle that measures of different constructs should not correlate highly with each other.
relationship between reliability and validity
a measure can be reliable without being valid, but it cannot be valid without being reliable.
predictive validity
looks at the measure's ability to predict something it should be able to predict in the future (longitudinal), e.g., high SAT scores -> success in college.
what are the types of statistics
descriptive and inferential
descriptive stats
describe the data collected for a study, i.e. the sample.
inferential stats
allow us to draw conclusions concerning a population based only on sample data.
sample stats
we use sample statistics to make inferences about (draw conclusions about) a larger population. Inferential statistics are drawn from samples; they estimate population parameters. We use Latin letters (x, s, b) to describe sample statistics and Greek letters to describe population parameters.
descriptive univariate analysis
categorical data: frequency distributions. Continuous data: measures of central tendency and variability/dispersion (deviation, variance, standard deviation).
relationships among variables
bivariate, multivariate
effect size
a statistic that portrays the strength of association between variables, thus enabling us to compare the effects of different interventions across studies using different types of outcome measures. May refer to the difference between the means of two groups divided by the standard deviation.
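The "difference between the means of two groups divided by the standard deviation" version is Cohen's d; a minimal sketch using a pooled standard deviation follows, with invented group scores.

```python
# Effect size sketch (Cohen's d): mean difference / pooled standard deviation.
import statistics

def cohens_d(group1, group2):
    n1, n2 = len(group1), len(group2)
    v1, v2 = statistics.variance(group1), statistics.variance(group2)
    pooled_sd = (((n1 - 1) * v1 + (n2 - 1) * v2) / (n1 + n2 - 2)) ** 0.5
    return (statistics.mean(group1) - statistics.mean(group2)) / pooled_sd

treatment = [14, 15, 13, 16, 15]  # hypothetical outcome scores
control = [11, 12, 10, 13, 12]
d = cohens_d(treatment, control)
```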
substantive significance
practical significance, clinical significance: how significant, meaningful, or important a finding is from a practical standpoint.
inferential analysis (confidence interval)
sample statistics (known) -> inference -> population parameters (unknown, but can be estimated from sample evidence).
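A minimal sketch of one such inference, a 95% confidence interval for a population mean using the normal approximation (mean ± 1.96 × s/√n); the sample scores are hypothetical.

```python
# 95% confidence interval sketch for a population mean (normal approximation).
import math
import statistics

sample = [23, 27, 21, 30, 25, 28, 24, 26, 22, 29]  # hypothetical scores
mean = statistics.mean(sample)
std_err = statistics.stdev(sample) / math.sqrt(len(sample))
ci_low, ci_high = mean - 1.96 * std_err, mean + 1.96 * std_err
# The interval estimates the unknown population parameter from the sample.
```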
hypothesis testing
testing group mean differences; testing association using Pearson's r (bivariate correlation: the relationship between two variables; Pearson's r ranges from -1 to 1).
testing contributions of independent variables as predictors of a dependent variable using linear and multiple regression
multivariate with one dependent variable; combines different tests: the F test (variability) and the t-test (comparison of means). Calculates the linear relationship between independent variables and the dependent variable, estimating each relationship while controlling for the effects of the other variables.
testing group differences of categorical data
chi-square: categorical data, one independent variable and one dependent variable. Logistic regression: similar to multiple regression, but the dependent variable is categorical; multiple independent variables on one dependent variable.
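The chi-square statistic for a 2x2 table can be sketched in plain Python; the counts below are invented. Expected count E = row total × column total / grand total, and the statistic sums (observed − expected)² / expected over all cells.

```python
# Chi-square sketch for a hypothetical 2x2 table (categorical IV and DV).
observed = [[30, 20],  # group A: outcome yes / no (made-up counts)
            [15, 35]]  # group B
row_totals = [sum(row) for row in observed]
col_totals = [sum(col) for col in zip(*observed)]
grand_total = sum(row_totals)

chi_square = sum(
    (observed[i][j] - row_totals[i] * col_totals[j] / grand_total) ** 2
    / (row_totals[i] * col_totals[j] / grand_total)
    for i in range(2) for j in range(2)
)
```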
six primary approaches to qualitative research
(Phenomenology, Ethnography, Case Studies, Narrative, Participatory Action, Grounded theory)
phenomenology
focus on subjective experiences (the lived experience of a phenomenon); uses prolonged immersion. Study participants are individuals who share a particular life experience (e.g. cancer survivors, crime victims, adoptive parents). Analyses of interview data are conducted to find the "essence" or common themes in their experiences. Often uses interview data with a smaller number of participants (6-10); begins with broad, open-ended questions; multiple interviews with each participant are needed to achieve the needed depth. Readers should feel as if they have "walked a mile in the shoes of the participants."
ethnography
oldest qualitative approach, focus on an entire cultural group, researcher describes and interprets the shared and learned patterns of values, behaviors, beliefs, and language of a culture-sharing group, holistic perspective: the researcher views all aspects of the phenomenon under study as parts of an interrelated whole. cultural relativism: cultures must be understood on their own terms, not judged by the beliefs and values of other, more powerful cultures.
case studies
single-unit examination: information about an individual case is systematically organized and presented in the form of a narrative summary. The purpose of case studies in clinical education is to illustrate the application of clinical theories in individual cases; in qualitative research, it is to develop knowledge through systematic processes of data collection and analysis.
narrative
speaking and writing are forms of meaning-making. Focuses on the stories told by individuals: how something is said and what is said. Typically uses in-depth interviewing to encourage respondents to talk freely about their lives; data are analyzed by repeatedly listening to a tape of the interview and scrutinizing the transcript to identify "stories." From the stories, structural components are then delineated; the analyst may examine how respondents "voice" themselves and others, indicating social relationships and the meanings attached to them. Narrative analytic methods: conversation analysis and discourse analysis.
participatory action
action research and community-based participatory research: committed to community empowerment and egalitarian partnerships; a partnership of equals among researchers and community participants, with active participation by all parties. Has moved away from academic-based, controlled trials to "real-world" interventions in communities. A natural fit for qualitative researchers in social work.
grounded theory
popular and widely accepted across a wide range of disciplines. An inductive process of discovering theory from data: data are collected, then hypotheses grounded in empirical observations are developed; more observations are made to see if the hypotheses stand; hypotheses are refined, then more observations are made (a process of constant comparison). Typically involves interviews with a moderately sized sample of carefully selected persons (about 20-30). Begins with open coding of interview transcripts, then gradually creates a parsimonious conceptual framework; along the way, researchers employ constant comparative analysis to examine contrasts across respondents, situations, and settings.
conversation analysis
analyzing the aspects of conversation (e.g. sequencing, turn taking, holding the floor, interruption) that reveal how social roles and identities are manifested during the talk. e.g. analysis of audio-taped transcriptions of conversations between parents and children to offer clues to how interpersonal communication both shapes and reflects social interaction.
discourse analysis
a technique to identify the social meanings reflected in talk and text. meaning can be ascertained from a variety of indices (e.g., choice of words and idioms, speaking rhythm and cadence, inflection, intonation, gestures, nonverbal utterances).
sampling qualitative studies
random selection is rarely used; getting a representative sample is NOT a goal. Choosing is a recursive process: the choices of whom to study are a product of what is being found, not of the initial plan.
theoretical sampling
select similar cases until the theme is exhausted, then a different type of case (related to grounded theory). Selection of new interviewees should continue until a saturation point is reached, that is, until new interviews yield little additional information.
critical incidents sampling
sampling the best successes or the worst failures.
maximum variation sampling
sampling all possible variations.
data collection
i. detailed open-ended interviews (not highly structured or limited responses)
ii. direct observation (or essentially direct, via video)
iii. focus groups
iv. written documents (working with words and visual data, not numbers)
v. tape recordings
vi. various notes by the researcher
vii. client logs
focus group
1. goal: to learn about beliefs, attitudes, and opinions about social/psychological/cultural characteristics.
2. capitalizes on the "synergistic group effect."
3. inexpensive: staffing and equipment vs. paper and pencil.
4. format: groups (no more than 7) of unrelated individuals; group discussion of a topic; 1-2 hours.
5. facilitator: asks specific questions and guides the discussion.
6. low external validity (analytical generalization: connecting case study findings to a particular theory).
7. subjectivity: biases and points of view.
8. dependent on the researcher's personal attributes and skills.
9. participation can change the social situation.
various notes of researcher
field notes (a running account of what happens, or transcriptions of video or audio tapes); personal notes (personal reactions, how you feel, self-reflection, memories, and impressions); methodology notes (descriptions of methods used, reasons for using those methods, ideas for possible changes in methodology); theoretical notes (emergent trends, hypotheses).
various roles of the observer
i. complete participant: an insider; becomes a full participant in the activity, which helps minimize the distinction/difference between researcher and participants.
ii. complete observer: an outsider's view, a fly on the wall.
iii. variations in between: start as an outsider and move to membership, or change to the outsider role at the end to verify hypotheses generated as a participant.
emic perspective
trying to adopt the beliefs, attitudes, and other points of view shared by the members of the culture being studied.
etic perspective
maintaining objectivity as an outsider and raising questions about the culture being observed that wouldn’t occur to members of that culture.
recording observations
written documents (work with words and visual data, not numbers), tape recording, various notes by researcher-field notes, personal notes, methodology notes, theoretical notes.
qualitative vs. quantitative research
i. qualitative: inductive, holistic, subjective/insider-centered, process-oriented; relativistic paradigm; goal is to understand the actor's view; discovery-oriented; relative lack of control; exploratory and explanatory.
ii. quantitative: hypothetical/deductive, particularistic, objective/outsider-centered, outcome-oriented; positivistic paradigm/worldview; goal is to find facts and causes; verification-oriented; attempts to control variables; confirmatory.
qualitative data analysis
introspection: examining your own thoughts and feelings, a natural and crucial process; critical reasoning.
coding
categorizing behaviors into a limited number of categories (computer-assisted analysis is possible): open coding, development of a codebook, axial coding, developing categories and subcategories.
logical pitfalls of coding
provincialism; going native (losing one's own sense of identity); emotional reactions; hasty conclusions (are there alternatives?); suppressed evidence.
strengths of coding
depth of understanding (superficial vs. in-depth); understanding the worldview of respondents; attempts to avoid pre-judgments; flexibility and openness; inexpensive (staffing and equipment vs. paper and pencil).
weaknesses of coding
generalizability is questionable (analytical generalization: connecting case study findings to a particular theory).
subjectivity of coding
biases and points of view; dependent upon the researcher's personal attributes and skills; participation can change the social situation.
trustworthiness of coding
reactivity (the researcher's presence in the field distorts the naturalism of the setting and consequently the things being observed); researcher biases (what researchers perceive, or how they selectively observe); respondent biases (social desirability).
minimizing the threats of coding
strategies for rigor: prolonged engagement (addresses reactivity and respondents' biases), triangulation (corroboration between two or more sources), peer debriefing and support, auditing, member checking, negative case analysis.
similarities between coding and social work practice
begin where the client is; focus on the perceptions of the informant; attempt to understand the environmental context; try to avoid imposing preconceived ideas or theories.
creating codes
analysis began with open coding, the examination of minute sections of text made up of individual words, phrases, and sentences. The language of the participants guided the development of code and category labels, which were identified with short descriptors, known as in vivo codes, for survival and coping strategies. These codes and categories were systematically compared and contrasted, yielding increasingly complex and inclusive categories.
memoing
use of analytic memos (questions, musings, and speculations about the data and emerging theory); analytic memos were compiled, and an analytic journal was kept for cross-referencing codes and emerging categories.
discovering patterns
axial coding, selective coding, creating codes and categories
axial coding
puts data "back together in new ways by making connections between a category and its subcategories." From this process, categories emerged and were assigned in vivo category labels.
selective coding
the integrative process of “selecting the core category, systematically relating it to other categories, validating those relationships [by searching for confirming and disconfirming examples], and filling in categories that need[ed] further refinement and development.”
codes and categories
codes and categories were sorted, compared, and contrasted until saturated (i.e., until analysis produced no new codes or categories and all of the data were accounted for in the core categories).
grounded theory method
a qualitative research approach that begins with observations and looks for patterns, themes, or common categories.