123 Cards in this Set


quantitative approaches

research methods that emphasize numerical precision; a detached, aloof stance on the researcher's part (i.e., the avoidance of overidentification); and, often, a hypothetico-deductive approach. Quantitative researchers tend to prefer gathering similar structured data across large samples, as this will facilitate their ability to engage in statistical analysis of their data to identify broader patterns across individuals. (p. 3)

qualitative approaches

research methods characterized by an inductive perspective, a belief that theory should be grounded in the day-to-day realities of the people being studied, and a preference for applying phenomenology to the attempt to understand the many "truths" of reality. Such approaches tend to be constructionist. Qualitative researchers tend to be cautious about numbers, believing that the requirements of quantification distance us even further from the phenomenological understanding we should embrace. Qualitative researchers tend to engage most commonly in case study analysis. (p. 3)

mixed methods approaches

defined by Johnson et al. (2007) as "the type of research in which a researcher or team of researchers combines elements of qualitative and quantitative research approaches (e.g., use of qualitative and quantitative viewpoints, data collection, analysis, inference techniques) for the broad purposes of breadth and depth of understanding and corroboration" (123). (p. 3)

epistemology

deals with the question of how we know what we know and what criteria we bring to the evaluation of whether something is “true” or not. (p. 4)

positivism

a school of thought marked by a realist perspective, an emphasis on quantitative precision, the belief that effective research requires avoiding overidentification, and a search for general truths unearthed through the gathering and analysis of aggregated data. Chapter 1 focuses on "orthodox" or classic positivism, which developed in the 19th and early 20th centuries. (p. 4)

realist perspective

the idea that a reality exists out there independently of what and how we think about it. Contrasts with constructionism and idealism. (p. 12)

direct realism

is the epistemological position that holds that there is a (i.e., one) reality out there that exists independent of us that can be understood and awaits our discovery. An implication of this view is that, if reality involves a singular truth that exists independent of the observer, it should be able to be understood by different observers in exactly the same way. (p. 4)

naive realism

is the epistemological position that holds that there is a (i.e., one) reality out there that exists independent of us that can be understood and awaits our discovery. An implication of this view is that, if reality involves a singular truth that exists independent of the observer, it should be able to be understood by different observers in exactly the same way. (p. 4)

metaphysical

speculation about the nature of truth and being that goes beyond directly observable truths and into the realm of speculation and abstraction. Positivists, for example, would eschew metaphysics. (p. 4)

theory

a set of concepts and a description of how they're interrelated that, taken together, purport to explain a given phenomenon or set of phenomena. The word "theory" is also sometimes used more broadly to refer simply to abstractions; thus, when we look at your behaviour and call you studious, we are making the jump from the concrete (your observable behaviour, i.e., the number of hours per week you spend studying) to the theoretical (the concept of studiousness). (p. 5)

go native

a term used by positivists to mean a researcher's taking on the values and perspectives of the group being studied, so that he or she cannot maintain the detached, analytical stance required, according to positivists, for effectively studying the world. Positivists believe that getting too "involved" with the people we study will destroy our objectivity. (p. 5)

overidentify

a term used by positivists to mean a researcher's taking on the values and perspectives of the group being studied, so that he or she cannot maintain the detached, analytical stance required, according to positivists, for effectively studying the world. Positivists believe that getting too "involved" with the people we study will destroy our objectivity. (p. 5)

aggregated data

data from more than one case that have been combined for analysis. For example, suppose the students in your class each receive a score on the final exam. If we combine ("aggregate") all those scores, we can investigate their distribution, for example, their mean, variability, and so on. If we have other aggregated data on the same people (e.g., how many hours they spent studying or how much emphasis they place on grades), we can look for patterns in the relationships among these bits of information. (p. 6)

social facts

life's "big" realities (e.g., the legal and economic system), which wield significant influences on people and are beyond people's control in any direct sense. They are important to positivists because such social realities are believed to exert their effects no matter what we think about them. (p.

rate data

data that are expressed as a frequency per some unit of population, for example, birth rates, crime rates, death rates, unemployment rates, and infant mortality rates. For example, Vancouver's murder rate is currently about 6 per 100,000 per year. Specifying rates rather than raw numbers allows researchers to compare rates in a single location over time or to compare two or more locations despite differences in their population size. (p. 6)
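
The arithmetic behind a rate is simple enough to show directly; the count and population below are invented, chosen only to reproduce the "about 6 per 100,000" figure from the definition.

```python
# Converting a raw count into a rate per 100,000 of population.
# The murder count and population figure are illustrative, not official data.
def rate_per_100k(count: int, population: int) -> float:
    """Express a raw frequency as a rate per 100,000 of population."""
    return count / population * 100_000

# e.g., 39 murders in a city of 650,000 works out to 6 per 100,000
print(rate_per_100k(39, 650_000))  # → 6.0
```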

hypothetico-deductive method

the long name given to the process of deduction in social science research. The prefix "hypothetico" points to the role that the a priori (before-the-fact) specification of hypotheses plays in this brand of inquiry. Contrasts with inductive approaches. (p. 7)

deductive method

the long name given to the process of deduction in social science research. The prefix "hypothetico" points to the role that the a priori (before-the-fact) specification of hypotheses plays in this brand of inquiry. Contrasts with inductive approaches. (p. 7)

classic experiment

a type of research design, often conducted in a controlled environment such as a laboratory, where the researcher is able to isolate causal relationships between the dependent variable and one or more independent variables. (p. 7)

hypotheses

an unambiguous statement about the results that you expect to occur in a situation if the theory that guides your work is true. Hypotheses are generally associated with deductive inquiry, which believes that “good research” should begin with theory and should be directed toward testing theory. Stating your hypothesis before beginning your research is a bit like placing your bets ahead of time, so that you can’t come back later and say, “Oh yes, I knew that was going to happen.” If you knew, you should have said so. (p. 7)

phenomenologism

an approach to understanding whose adherents assert that we must "get inside people's heads" to understand how they perceive and interpret the world. According to theorists such as Weber, phenomenological understanding is a virtual prerequisite for achieving verstehen. (p. 8)

verstehen

a German word, first used in the social sciences by Max Weber, that refers to a profound understanding evidenced by the ability to appreciate a person's behaviour in terms of the interpretive (i.e., phenomenological) meaning he or she attaches to it. (p. 9)

rapport

the development of a bond of mutual trust between researcher and participant that is considered to be the foundation upon which access is given and valid data are built. (p. 10)

inductive approaches

Research perspectives characterized by the belief that research should begin with observation, since it is only on that basis that grounded theory will emerge. Thus, researchers observe, induce empirical generalizations based on their observations, and then, through analytic induction, attempt to develop a full-blown theory that adequately reflects the observed reality. Sometimes known as "bottom-up" approaches; contrasts with deductive (or "top-down") approaches. (p. 11)

case study analysis

analysis of a single case. (p. 11)

constructionism

the view that we actively construct reality on the basis of our understandings, which are largely, though not completely, culturally shared. It thus becomes important to understand people's and society's constructions of things, because those constructions will have implications for how we study and make sense of the world. For example, men and women have been "constructed" as active and passive, respectively, for many years. This construction has even spread to our understandings of sexual intercourse and conception. We once envisioned active spermatozoa, released when the man ejaculates, swimming to the woman's ovum, which passively awaits fertilization. As our conceptions of women have changed in recent years, so, too, has our conception of conception. Now researchers bring a more egalitarian perspective to their understanding of the fertilization process: although the spermatozoa are still characterized as swimming to the ovum, the ovum is now considered to play a more active role in "send[ing] out messages to the sperm, participating actively in the process, until sperm and egg find each other and merge" (Flint 1995: D8). (p. 12)

de-construct

taking apart a piece of text or image or a concept in order to expose hidden meanings or assumptions. (p. 13)

critical realism

critical realism, or the critical realist perspective, can be seen as a midway resolution that acknowledges some truth in both realist and social constructionist perspectives. Like the constructionists, critical realists acknowledge that "reality" is indeed constructed and negotiated, but they also assert that reality is not completely negotiable; that is, all explanations are not equally viable. In other words, we can be "wrong." But if we can be "wrong," there must be a reality out there that exists independent of our opinions of it. (p. 20)

consensus model

consensus theoretical models assume that there is general agreement in society about right and wrong, the boundary between criminal and non-criminal activity, the role of government, and so on. They are generally distinguished from conflict models, which assume that views about these issues are not shared and in fact are often hotly contested, which raises questions about just whose interests are being served by government, the laws that are created, and so on. (p. 20)

academic freedom

The Canadian Association of University Teachers defines academic freedom as "the right to teach, learn, study and publish free of orthodoxy or threat of reprisal and discrimination," as long as one doesn't violate the ethical standards of one's discipline. Free inquiry is seen as the foundation upon which innovation, creativity, and theoretical development can grow. Concern about academic freedom is growing these days as the intervention of third parties in the research process (government, corporations, special interest groups) seems to be increasingly placing constraints on what researchers can do, and university and college administrations themselves seem to be engaging more frequently in micromanagement of their faculty.

ceteris paribus

A Latin phrase meaning "all else being equal." This phrase, often explicitly and always implicitly, underlies theoretical statements; that is, variables X and Y are related to each other, ceteris paribus. It's also the cornerstone of the experimentalist methods discussed in Chapters 9 and 10. The true experiment embodies the ceteris paribus assumption by testing the effects of certain variables on other variables under conditions in which, overall, all other variables are equalized. (p. 45)

convergent validity

The degree to which your measure is related to other measures to which it is supposed to be related. For example, a measure of how "in love" people are should be related to other measures of affection, intimacy, and commitment. If we show that our measure is related to those other indicators, we've demonstrated convergent validity. (p. 56)

concurrent validity

A type of validity that involves correlating responses on our measure with some other criterion. Suppose our measure of romantic love involves asking two people the question "Are you in love?" We must show that responses to that question are tied to some other independent measure of "love." For example, Zick Rubin's (1973) research on this topic shows that people who say they're in love tend to gaze into each other's eyes more often and for longer periods than do couples who do not say they are in love. If we take those measures at approximately the same time, we're engaging in concurrent validation. "Concurrent" means at the same time; temporal closeness between two measures defines concurrent validation. Contrasts with predictive validity.

conceptual mapping

The act of graphically diagramming the relationships between several theoretically relevant concepts. (p. 58)

confidentiality

The ethical right of people to keep information about themselves private or to share it only with those whom they trust to safeguard it. (p. 71)

confidentiality certificates

A certificate issued by the National Institutes of Health in the United States that provides statutory protection for researchers to ensure that they cannot be forced to disclose identifying research information about participants to any civil, criminal, administrative, legislative, or other proceeding. See also privacy certificates. (p. 72)

common law

law based on legal precedents as established through judicial decisions as opposed to legislative decree. (p. 74)

Anonymize

The process of taking research data and deleting all names and other identifying information that could be used to identify the source of the data.

class privilege

A relationship, such as that between a lawyer and his or her client, that is recognized and protected under law so that the normal obligation we have to testify to what we know when subpoenaed is set aside. A relationship of class privilege means that the court is prepared to assume it exists without the need for the parties to the relationship to prove they deserve to have the confidence of their communications protected every time their confidences are challenged. In such relationships the onus of proof is on those challenging the protected nature of the communications to demonstrate what compelling reasons exist for the privilege to be set aside. (p. 76)

adaptive questioning

Where answers to specific questions influence the subsequent questions asked. For example, a question early in a survey might ask you to identify your favourite sport. If you respond that it is "hockey," then instead of subsequent questions asking you about "your favourite sport," an adaptive questioning strategy would simply refer to "hockey," given that you have already identified that as your favourite sport. (p. 148)

case-by-case privilege

A common law method of claiming privilege whereby the researcher argues that the specific relationship that he or she forms with his or her participants within the context of the research is one that demands special protection of communications between researcher and participant, and that s/he should not be obliged to give evidence in court that would identify individual research participants or the data a specific individual supplied. In such cases the researcher must demonstrate that confidentiality was crucial to their specific research project. See also class privilege. (p. 77)

categorical response items

Questionnaire response options where respondents may place themselves in predefined categories. The simplest type of categorical question is the dichotomous item, a type of categorical response option with only two response alternatives (e.g., the categories of pass or fail). (p. 166)

caveat emptor ethic

Caveat emptor is a Latin phrase that means "let the buyer beware." The classic example is when you buy a used car: as long as the seller does not mislead you, it is up to you to do whatever testing or assessment needs to be done to ensure you are not buying a lemon. In the realm of research ethics, it refers to those who believe that by warning people about what they might encounter by participating in research, you have fulfilled your ethical obligations, and hence that, if problems arise, you can simply say, "Gee, that's too bad, but I told you that could happen." As such, it downloads responsibility to the participant rather than the researcher trying to ensure that nothing adverse happens to them as a function of their participation and taking responsibility in the event anything does happen. (p. 82)

computer assisted telephone interviewing

A form of telephone interviewing where the researcher uses a computerized script to aid him or her in the administration of the interview and the recording of participant responses. (p. 146)

computer-assisted social research (CASR)

The practice of using digital or network technologies in order to enhance conventional approaches to observation and data collection. (p. 117)

contingency questions

A question or a subset of questions in an interview or questionnaire that the respondent is required to answer only if he or she has answered previously asked questions in a particular way. (p. 145)

definitional operationism

A play on the term “operational definition” that refers to a type of mono-operationism in which researchers engage in the tautology of operationally defining their variables of interest by definition. For example, a researcher may develop a measure of stupidity but instead of showing that the measure possesses convergent and divergent validity with respect to other measures, may simply state that the measure is its own definition; that is, what we mean by stupidity is whatever our test measures. Needless to say, that’s stupid. (p. 57)

demographic variables

Information about an individual or social group that helps contextualize the person or group, usually in relation to social facts. It includes variables such as age, sex, race, socioeconomic status, education, and gender. (p. 52)

dichotomous item

a type of categorical response item that contains only two response alternatives. For example, the question "What is your sex?" has only two possible responses for most situations: you can be male or female. (p. 166)

Dichotomy

Any division into two parts; an especially popular and appropriate term with respect to classification. A sample might be dichotomized into males and females or into an experimental and a control group. But dichotomies can also be less concrete; qualitative and quantitative approaches to science, for example, represent a dichotomy of research perspectives. (p. 166)

digital technologies

the full range of computer software, hardware, and architecture that comprises the digital universe. Examples of digital technologies include cellphones, personal computers, digital recorders, digital audio players, gaming systems, wireless access points, and LCD projectors. (p. 22)

disproportional stratified random sample

A probability sampling technique that is used when the researcher is primarily interested in comparing results between strata rather than in making overall statements about the population or when one or more of the subgroups are so small that a consistent sampling ratio would leave sample sizes in some groups too small for adequate analysis. The researcher begins by stratifying the population into subgroups of interest and taking a random sample within each stratum, so that equal numbers of units of analysis end up in each of the strata samples. (p. 107)

divergent validity

Involves showing that your measure is not related to other measures to which it’s not supposed to be related. For example, a measure of how “in love” people are should not be related to independent and different concepts like respect or tolerance (since each of these can exist without love being present). If we show that our measure is independent of (i.e. not correlated with) measures of those other indicators, we’ve demonstrated divergent validity. (p. 56)

double barrelled item

When two questions are presented as a single item or question in a questionnaire or interview. (p. 176)

Empirical

An approach to the generation of knowledge that maintains that our understanding of the world should come not from philosophizing or speculation, but from data that comes from interacting with and observing the world we seek to understand. (p. 17)

epistemic relationship

How well your nominal definition and your operational definition demonstrate goodness of fit; that is, how well your operational definition gets at or assesses what your nominal definition says you're interested in. (p. 50)

exhaustive

Covering all possible alternatives. A term often used in relation to response options provided in a questionnaire or structured interview such as a CATI. (p. 167)

experimental operational definitions

creating an operational definition of a construct through the implementation of an experimental condition. (p. 51)

fallibilist realism

a philosophical position that maintains that human beings can be wrong in our beliefs and understandings of the world; as a result, we must be open to evidence that contradicts our beliefs and understandings. (p. 20)

funneling

A technique used in questionnaires and interviews when open ended and closed or structured questions on the same issue are mixed together. Funnelling involves starting with more general questions and gradually becoming more and more specific. (p. 170)

gatekeeper

A person who controls access to research participants or other research data such as archival content. Gatekeepers often hold a position of power or status relative to the individuals, groups, organizations, or social artifacts that the researcher is interested in accessing. (p. 86)

generalizability

The ability to extend the results or findings of the research beyond its original context (i.e., sample) to a more general context (e.g., the population), other people, situations, or times. (p. 97)

information sheet

A clearly written and understandable document that is presented to a research participant prior to their participation in a study and that outlines the nature of what is expected of them as participants, any risks involved, and any promises and safeguards the researcher offers. (p. 69)

informed consent

an ethical principle that suggests you should not do things to people unless they say it's all right to do so, and then only when their consent is given on the basis of knowing all aspects of the situation and the possible outcomes that might affect their willingness to participate. Consent cannot be considered binding unless it's given on an informed basis. (p. 69)

inter-rater reliability

The degree to which two or more people, using the same coding scheme and observing the same people, produce essentially the same results. Inter-rater agreement must normally be higher than 80 percent in order to be considered acceptable. Compare test-retest reliability. (p. 56)
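
A percent-agreement check, the simplest form of inter-rater reliability, can be sketched as follows; the two raters and their codes are hypothetical.

```python
# Percent agreement between two raters coding the same ten observations
# with the same scheme ("agg" vs. "coop"). All codes are invented.
def percent_agreement(rater_a, rater_b):
    matches = sum(a == b for a, b in zip(rater_a, rater_b))
    return matches / len(rater_a) * 100

rater_a = ["agg", "coop", "agg", "coop", "agg", "agg", "coop", "agg", "coop", "agg"]
rater_b = ["agg", "coop", "agg", "agg", "agg", "agg", "coop", "agg", "coop", "coop"]

# 8 of 10 codes match, i.e., exactly the conventional 80 percent threshold
print(percent_agreement(rater_a, rater_b))  # → 80.0
```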

intranets

Refers to privately constructed and maintained computer networks that can be accessed only by authorized persons within the company, organization, or institution. Often connected to the Internet, with security maintained by the use of firewalls. Because of their universality of access within the organization, intranets can be used by authorized researchers to gain access to the entire population of the organization, or to representative or targeted samples thereof. (p. 118)

limited confidentiality

When the guarantee of confidentiality that the researcher provides his or her participants is limited by law as opposed to disciplinary standards. Under limited confidentiality the researcher is saying s/he will divulge information provided by a participant in confidence if s/he is compelled by a court or other legal body to do so. Even in such circumstances, however, the researcher cannot simply wash his/her hands of the situation, because our obligation to minimize harm to our participants remains. To do otherwise would be to engage in caveat emptor ethics. (p. 81)

manipulation checks

a common element in experimental designs that involve manipulated independent variables. Because the question being addressed is a validity question, the issue is whether you have indeed created the variable you thought you were creating and/or whether you are assessing the effect of the variable you think you are assessing. (p. 51)

measured operational definitions

These are contrasted with experimental operational definitions, which are created by an experimenter. In the case of measured operational definitions, the researcher might develop an instrument that simply assesses the extent to which some construct of interest is present. For example, we might operationalize authoritarianism by developing an attitude scale that measures authoritarianism, as, for example, Altemeyer (1981) did. (p. 50)

mono-operationism

Two potential problems that can be produced when researchers become overly reliant on, respectively, a particular measure of a construct or a particular way of measuring it. Mono-operationism (sometimes called mono-operation bias) refers to the problem that develops when we use only one operational definition of a variable (e.g., if we use only IQ tests to measure intelligence, never trying to measure it any other way). Mono-method bias refers to reliance on only one method (e.g., self-report interviews) instead of investigating a phenomenon in a number of different ways (e.g., also incorporating observational and/or archival research). See also definitional operationism. (p. 52)

multistage cluster sampling

A probability based sampling technique that is employed when no sampling frame is available. This technique involves randomly sampling clusters within clusters until one reaches the desired unit of analysis. (p. 108)
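
The clusters-within-clusters idea can be sketched in Python, assuming a hypothetical school system in which no frame of individual students exists; all cluster names and sizes are invented.

```python
# Multistage cluster sampling: randomly sample schools (clusters), then
# classes within the chosen schools, then students within the chosen
# classes, the desired unit of analysis.
import random

random.seed(7)  # fixed seed so the sketch is reproducible
schools = {
    f"school_{s}": {
        f"class_{c}": [f"student_{s}_{c}_{i}" for i in range(30)]
        for c in range(4)
    }
    for s in range(10)
}

sample = []
for school in random.sample(sorted(schools), 3):               # stage 1: schools
    for cls in random.sample(sorted(schools[school]), 2):      # stage 2: classes
        sample.extend(random.sample(schools[school][cls], 5))  # stage 3: students

print(len(sample))  # 3 schools x 2 classes x 5 students = 30
```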

mutually exclusive

The property of a questionnaire item where the categories of response provided do not overlap with one another so that a participant can realistically select only one response option. (p. 167)

nominal definitions

A statement of what a concept means to the researcher; much like a dictionary definition. Expressing nominal definitions for the key concepts or variables involved in your research allows other researchers to consider whether they would agree with your definition of the term. See also epistemic relationship. (p. 48)

non-probabilistic sampling

A set of sampling techniques in which the probability of selecting each sampling unit is unknown or unknowable. These techniques are optimal when a sampling frame is unavailable, when creative means must be used to locate closer samples, and/or when the research objectives would be best fulfilled by a strategically chosen sample. Contrasts with probabilistic sampling. (p. 98)

operational definitions

the way we actually define the variables of interest within the confines of the research project. Suppose you're interested in looking at romantic love. How will you determine whether any two people in your research are actually in love? If they say yes, you will consider them in love. Their response to the question "Are you in love?" has become the operational definition of the concept of love in your research. Contrasts with nominal definition. (p. 48)

panel studies

A type of longitudinal research in which you identify a particular group (or panel) of people and return to those very same people again and again over time. This contrasts with a trend study, for example, where you'd return to the same population to take a new sample each time. (p. 150)

periodicity

A phenomenon produced by the cyclical nature of some lists. Periodicity causes a problem in systematic sampling with random start when the list's cyclical nature becomes confounded with the sampling ratio or interval. (p. 105)

pilot study

A study that takes place prior to the actual study in which the researcher is able to test out features of her or his design, such as sampling and recruitment strategies or research instruments. (p. 174)

population

An aggregation of all sampling elements, that is, the total of all the sampling units that meet the criterion for inclusion in a study. The sampling frame, if available, defines the population. (p. 100)

predictive validity

A type of validity that involves assessing the extent to which your measure does in fact predict whatever it's supposed to predict. For example, if you develop tests like the LSAT (Law School Admission Test) or GRE (Graduate Record Exam), which are supposed to predict success in law school and graduate school, respectively, then demonstrating the test's predictive validity would require you to show that scores on the test do indeed relate to later success in law school or graduate school, respectively. See also concurrent validity. (p. 57)

privacy certificates

A certificate issued by the National Institute of Justice in the United States that provides statutory protection for researchers to ensure that they cannot be forced to disclose identifying research information about participants to any civil, criminal, administrative, legislative, or other proceeding. See also confidentiality certificates. (p. 72)

privilege

A term describing a state of the relationship that exists between researcher and participants whereby the persons in the relationship are exempt from the normal requirement that all of us have to testify when asked to do so in a court of law when and if information discussed in the context of that relationship becomes of interest to the court. The lawyer-client relationship is protected by a privilege, for example, so that you can go and talk freely to your lawyer and seek legal advice without fearing that s/he will get subpoenaed and be on the witness stand the next day giving evidence against you. (p. 74)

probabilistic sampling

A group of sampling techniques that meet two criteria: the probability of sampling a given individual is known (or at least is theoretically knowable), and each sampling element in the population has an equal probability of being selected. Contrasts with non-probabilistic sampling. (p. 98)

proportional stratified random sampling

A type of stratified random sampling where the number of elements sampled in each stratum is proportionate to their numbers in the wider population. For example, if women account for 65 percent of students enrolled in your research methods class and males account for 35 percent, drawing a proportionate stratified random sample of students from your class would ensure that 65 percent of your sample are female and 35 percent are male. (p. 107)
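The proportional allocation described above is simple arithmetic. A minimal Python sketch, assuming a hypothetical class roster with the 65/35 split from the example and a desired sample of 20:

```python
import random

# Hypothetical roster: 65 women and 35 men, mirroring the example above.
roster = [("woman", i) for i in range(65)] + [("man", i) for i in range(35)]

def proportional_stratified_sample(elements, key, sample_size):
    """Sample randomly within each stratum, in proportion to the
    stratum's share of the whole population."""
    strata = {}
    for e in elements:
        strata.setdefault(key(e), []).append(e)
    sample = []
    for members in strata.values():
        n = round(sample_size * len(members) / len(elements))
        sample.extend(random.sample(members, n))
    return sample

sample = proportional_stratified_sample(roster, key=lambda e: e[0], sample_size=20)
# With a 65/35 split and a sample of 20, the strata contribute 13 and 7 elements.
```

The rounding step is one common allocation choice; real survey software handles fractional allocations in more sophisticated ways.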

Pseudonyms

A fictitious name used in order to conceal the real source of an interview or other research data. (p. 75)

purposive or strategic sampling

A general class of sampling techniques that are based on an acknowledgement that the parameters of the population are unknown. Instead of attempting to acquire a statistically representative sample, purposive samples are drawn to achieve a particular theoretical, methodological, or analytical purpose. (p. 113)

quota sampling

The non-probability equivalent of stratified random sampling, where the researcher purposively selects participants to fill predetermined quotas of substantively relevant groups. (p. 116)

random digit dialing

A technique for selecting and contacting participants for telephone survey research where the unique identifier portion of the phone number (i.e., the numbers not associated with the area code or exchange) is dialled at random. (p. 146)
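As a rough illustration of the idea (the area code and exchange below are made up for the example), generating random four-digit suffixes might look like:

```python
import random

def random_digit_numbers(area_code, exchange, count):
    """Hold the area code and exchange fixed, and generate the remaining
    four digits (the unique identifier portion) at random."""
    return [f"{area_code}-{exchange}-{random.randint(0, 9999):04d}"
            for _ in range(count)]

numbers = random_digit_numbers("604", "555", 5)
# e.g., ["604-555-0382", "604-555-9917", ...]
```

Because the suffixes are generated rather than drawn from a directory, unlisted numbers have the same chance of being dialled as listed ones, which is the main appeal of the technique.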

random error

Refers to errors that have no systematic biasing effect on a study's results. (p. 102)

random selection

A type of probabilistic sampling that meets two criteria: (1) nothing but chance governs the selection process, and (2) every sampling element has an equal probability of being selected. If those criteria are met, the resulting sample will be representative of the population included in the sampling frame, within the limits of sampling error. (p. 102)

reactive bias or reactivity

The degree to which (if at all) the researcher's presence causes research participants to react by changing from their usual or normal behaviour patterns because they know they're being observed. (p. 151)

Reflexive

A process whereby the researcher remains consciously and critically aware of the multiple influences s/he has on the research process while also acknowledging how the research process also influences her or him. (p. 42)

reliability

The degree to which repeated observation of a phenomenon - the same phenomenon at different times, or the same instance of the phenomenon by two different observers - yields similar results. Underlying this concept are scientific beliefs about the importance of stability and repeatability in the generation of understanding. Two types of reliability are inter-rater reliability and test-retest reliability. (p. 55)

Replication

The efforts of one researcher to repeat the procedures another researcher has followed in order to ensure that the other researcher's findings are reliable. (p. 32)

representative samples

A sample is considered representative when the distribution of characteristics in the sample mirrors the distribution of those characteristics in the population. The priority attached to achieving formal representativeness is influenced by your research objectives. (p. 98)

researcher participant privilege

The state of the relationship that exists between researcher and participants whereby the researcher is exempt from the normal requirement that all of us have to testify when asked to do so in a court of law when and if information discussed in the context of that relationship becomes of interest to the court. (p. 73)

rival plausible explanation

Alternative factors that might also have accounted for the results you observe. Threats to internal validity, for example, are all rival plausible explanations. Sound research design aims to minimize these. (p. 17)

sample

A subset of the population that the researcher wishes to study. (p. 97)

sampling elements

Units or elements about which information will be gathered; they are the things you wish to study. Sampling elements can include individuals, groups, organizations, or social artifacts. (p. 100)

sampling error

The degree to which the distribution of characteristics in a sample deviates from the distribution of those characteristics in the population from which the sample is drawn. Estimates of the sampling error can be computed only when random sampling has been done. The two types of sampling error are systematic error and random error. (p. 100)

sampling frame

A complete list of all the sampling elements of the population we wish to study. For example, if we want to sample voters in an upcoming election, the voters' lists represent a sampling frame of all eligible voters who have been enumerated. The availability of a sampling frame can influence a researcher's choice of sampling techniques. (p. 101)

sampling ratio

A way of expressing what proportion of a population is actually sampled. For example, if the population numbers 1,000 people and you sample 200 of them, your sampling ratio is 200:1,000 (i.e., 200 out of 1,000), or 1:5 (i.e., 1 in 5). (p. 104)

secondary data

Primary research data collected by one researcher for her or his own purposes that are subsequently made available to other members of the research community for theirs. (p. 58)

selective deposit

A term referring to the fact that some people, groups, and processes have a higher likelihood than others of having their views, lives, and so on made a part of the historical record. Historians can study history based only on what's in the record; thus, our understanding of history is influenced by the factors that influence selective deposit into that record. (p. 157, 219)

self anchoring scale

Refers most commonly to a type of scale used by Hadley Cantril (see Chapter 6) in which the end points of the scale are defined by the participant. For example, in his famous international study about people's perceptions of their quality of life, he would first ask participants to think of the worst life situation they could imagine for themselves, and to call that a 1, and then the best situation they could imagine themselves in, and to call that a 10, after which he would ask them to place themselves on the scale as their life is right now. (p. 169)

semantic differential type item

A type of rating scale questionnaire item originally developed by Osgood, Suci, and Tannenbaum (1957) to assess the meaning associated with particular attitude objects using a set of bipolar adjectives (or dimensions) upon which any given attitude object could be described (e.g., respondents are asked to assess the fairness (fair vs. unfair) of being punished for a variety of different behaviours). (p. 170)

simple random sampling

The probability sampling technique that allows a researcher to minimize sampling error and allows him or her to calculate the degree of sampling error that probably exists. To perform simple random sampling, in most cases you must have a sampling frame in which every sampling element is listed once and only once. Simple random sampling is then accomplished by merely choosing sample elements at random from the list. This can be done by putting all the names (or whatever) into a hat or drum and pulling them out at random, by numbering all elements in the sampling frame and then using a table of random numbers or random number generator to guide your selection, or by using a computer program such as Microsoft Excel or OpenOffice Calc to select randomly from the sampling frame. (p. 101)
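The random-number-generator route mentioned above takes only a few lines in Python; the sampling frame here is a hypothetical list of 500 students:

```python
import random

# Hypothetical sampling frame: every element listed once and only once.
sampling_frame = [f"student_{i}" for i in range(1, 501)]

# Draw 25 elements at random, without replacement, so that nothing but
# chance governs selection and every element has an equal probability
# of being chosen.
sample = random.sample(sampling_frame, 25)
```

`random.sample` draws without replacement, which matches the "once and only once" requirement for the frame carrying over to the sample.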

single response item

A survey response item format that requires the participant to indicate their response to a question in an empty space provided (e.g., In what year were you born? ______). (p. 166)

snowball sampling

Also referred to as network, chain referral, respondent-driven, or multiplicity sampling, this is a purposive sampling technique that involves starting with one or two people and then using their connections, and their connections' connections, to generate a larger sample. This technique is especially useful if your target population is a deviant or closet population, or isn't particularly well-defined or accessible. (p. 115)

sociology of knowledge

The study of the social origins and social consequences of the processes in a society by which knowledge is constructed. (p. 64)

Stakeholders

In relation to social and health research, stakeholders can be any individuals who have a direct interest in or concern with a particular aspect of the design, implementation, or outcome of the research. Stakeholders can include research participants, research ethics board members, the individuals or organizations that sponsor or fund the research, and the researchers themselves. (p. 42)

statute based protection

Legislative protections for the confidentiality of identifiable information that ensure that research data cannot be used in any legal proceeding without the permission of the participant. In Canada, the only research participants whose information enjoys statute-based protection are those who participate in research conducted by Statistics Canada. The Statistics Act gives Statistics Canada employees a privilege to ensure that identifiable information unearthed by research by one portion of government (Statistics Canada) cannot be used by any other branch of government or in any court or other proceedings in a manner that would violate the confidence of any individual respondent. (p. 76)

stratified random sample

A probabilistic sampling technique where the researcher divides the population into groupings (or strata) of interest and then samples randomly within each stratum. This technique is used when there is some meaningful grouping variable on which the investigator wishes to make comparisons and where the probabilities of group membership are known ahead of time. (p. 105)

systematic error

One of two types of sampling error. Systematic error occurs when aspects of your sampling procedure act in a consistent, systematic way to make some sampling elements more likely to be chosen for participation than others. For example, Chapter 4 describes a 1992 call-in survey in which an American TV program's viewers expressed their opinions about a presidential speech. To participate, respondents had to, among other things, own a touch-tone phone, be interested in and able to understand a TV show dealing with political analysis, and be motivated enough to take the time to express their opinion. These factors created a systematically biased sample: wealthier people, more educated people, and people who wanted to complain were more likely to be represented in that sample than poorer people (who might be unable to afford a touch-tone phone), less educated people (who might be less able to follow the political analysis), and apathetic, uncertain, or contented people (who might have less motivation to call). Contrasts with random error. (p. 102)

systematic sample with random start

A probabilistic sampling technique where the researcher selects a randomly determined starting point within a sampling frame and samples every nth element (based on a sampling ratio) until the desired sample size is reached. (p. 104)
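A minimal sketch of the procedure, assuming a hypothetical frame of 1,000 elements and a 1:5 sampling ratio (so the interval n is 5):

```python
import random

def systematic_sample(frame, sample_size):
    """Choose a random start within the first interval, then take every
    nth element until the desired sample size is reached."""
    n = len(frame) // sample_size        # sampling interval from the sampling ratio
    start = random.randrange(n)          # randomly determined starting point
    return frame[start::n][:sample_size]

frame = list(range(1000))
sample = systematic_sample(frame, 200)   # every 5th element after a random start
```

Note that only the starting point is random; once it is chosen, the rest of the sample is fixed by the interval, which is what distinguishes this technique from simple random sampling.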

test-retest reliability

The degree to which a measure shows reliability (i.e., consistency) by producing similar results when a test is administered on two successive occasions; that is, where the same group of people are tested and retested. If the test (or scale or other type of measure) is reliable, the two sets of scores should correlate highly: people who score high (or low) on one occasion should also score high (or low) on the second occasion. Compare inter-rater reliability. (p. 56)

units of analysis

The units or elements about which information will be gathered; they are the things you wish to study. Units of analysis can include individuals, groups, organizations, or social artifacts. (p. 100)

universe

In the context of sampling, a theoretical aggregation of all possible sampling elements. Contrasts with population. (p. 100)

usability

A term employed by graphic and web designers to refer to the importance of understanding the user interface instead of the system upon which the interface is run. In CASR, usability refers to how user-friendly a particular data collection interface, such as a browser-based survey, is. (p. 147)

validity

A term that refers, in the most general sense, to whether research measures what the researcher thinks is being measured. This text discusses many kinds of validity, including predictive validity, internal validity, external validity and ecological validity. All relate to whether you are indeed accomplishing what you think you are. (p. 56)

Variables

Stated most simply, anything that varies. For example, in your methods class, sex is probably a variable, since the students in your class (unless you go to a sex-specific university or college) probably include both males and females. Variables contrast with constants, which are things that do not vary. (p. 42)

Wallace's wheel

A circular diagram in which Wallace attempted to go beyond the debates about whether inductive or deductive approaches were best and to show how the two were actually complementary parts of the same underlying empirical practice of working back and forth between theory and data. (p. 27)

Wigmore criteria

A set of four criteria used by Canadian and U.S. courts to evaluate whether communications in certain relationships (e.g., researcher-participant, doctor-patient, priest-penitent) should be considered privileged and exempt from the normal requirement to testify in a court of law. The criteria require that (1) there be a shared expectation of confidentiality by those in the relationship; (2) confidentiality be essential to the relationship; (3) the relationship be an important and socially valued one; and (4) the damage that would be done to the relationship by disclosure be greater than the damage to the case at hand by nondisclosure. (p. 74)