30 Cards in this Set

  • Front
  • Back
Hypotheses consist of
independent variables (the postulated explanatory variables) and dependent variables (the variables being explained).

Relationships between variables can be

positive, negative, or curvilinear.
Extraneous, or control, variables may be examined

to see if the observed relationship is misleading.
A spurious relationship is one that

no longer exists when a third variable is controlled
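
To make the idea concrete, here is a minimal sketch (not part of the card set; the data are simulated and the variable names are invented) of how a spurious relationship can be checked: x and y correlate only because both depend on a third variable z, and the association disappears once z is statistically controlled.

```python
# Minimal sketch (simulated data, invented names): a "spurious" association
# between x and y that exists only because both are driven by a third variable z.
import numpy as np

rng = np.random.default_rng(0)
n = 10_000
z = rng.normal(size=n)               # the lurking third variable
x = 2 * z + rng.normal(size=n)       # x depends on z, not on y
y = 3 * z + rng.normal(size=n)       # y depends on z, not on x

def partial_corr(a, b, control):
    """Correlation of a and b after regressing the control variable out of each."""
    residual = lambda v: v - np.polyval(np.polyfit(control, v, 1), control)
    return np.corrcoef(residual(a), residual(b))[0, 1]

print(round(np.corrcoef(x, y)[0, 1], 2))   # sizable raw correlation (about 0.85)
print(round(partial_corr(x, y, z), 2))     # close to 0 once z is controlled
```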

Mediating variables are

intervening mechanisms by which independent variables affect dependent variables.
Moderating variables influence

the strength or direction of relationships between independent and dependent variables.
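
As an illustration (this modeling choice is my assumption, not something stated on the cards), a moderating variable is commonly represented as an interaction term in a regression; the sketch below simulates data in which the moderator m changes the strength of the x-to-y relationship.

```python
# Minimal sketch (simulated data, invented names): a moderator m that changes
# the strength of the x -> y relationship, modeled as an interaction term.
import numpy as np

rng = np.random.default_rng(1)
n = 5_000
x = rng.normal(size=n)
m = rng.binomial(1, 0.5, size=n)                  # moderator, e.g. a group indicator
y = 1.0 * x + 2.0 * (x * m) + rng.normal(size=n)  # effect of x is stronger when m == 1

# Ordinary least squares with intercept, main effects, and the interaction term.
X = np.column_stack([np.ones(n), x, m, x * m])
coefs, *_ = np.linalg.lstsq(X, y, rcond=None)
print(coefs.round(2))   # interaction coefficient near 2.0 signals moderation
```
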
Concepts are
mental images we use as summary devices for bringing together observations and experiences that seem to have something in common


It is possible to measure the things that our concepts summarize.

Conceptualization

is the process of specifying the vague mental imagery of our concepts, sorting out the kinds of observations and measurements that will be appropriate for our research.
Operationalization

is an extension of the conceptualization process.

In operationalization,

concrete empirical procedures that will result in measurements of variables are specified.
Operationalization

is the final specification of how we would recognize the different attributes of a given variable in the real world.
In determining the range of variation for a variable, be sure to consider the opposite of the concept. Will it be sufficient to measure religiosity from very much to none, or should you go past none to measure antireligiosity as well?


Operationalization begins in study design and continues throughout the research project, including the analysis of data.


Additional ways to operationalize variables involve the use of direct behavioral observation, interviews, and available records.


Qualitative studies, rather than predetermining specific, precise, objective variables and indicators to measure,
begin with an initial set of anticipated meanings that can be refined during data collection and interpretation.


Measurement error can be systematic or random. Common systematic errors pertain to social desirability biases and cultural biases.


Random errors have no consistent pattern of effects, make measurement inconsistent, and are likely to result from difficulties in understanding or administering measures.
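
A small simulated sketch (illustrative only; the numbers and names are invented) shows the contrast: systematic error shifts every score in the same direction, while random error leaves the average roughly intact but makes individual scores inconsistent.

```python
# Minimal sketch (simulated scores, invented numbers): systematic error biases
# every score in the same direction; random error has no consistent pattern.
import numpy as np

rng = np.random.default_rng(2)
true_scores = np.full(1_000, 50.0)                     # the quantity we want to measure

biased = true_scores + 5.0                             # systematic error, e.g. social desirability bias
noisy  = true_scores + rng.normal(0.0, 5.0, 1_000)     # random error, e.g. misunderstood items

print(biased.mean() - true_scores.mean())              # consistently off by +5.0
print(round(noisy.mean() - true_scores.mean(), 2))     # near 0 on average ...
print(round(noisy.std(), 2))                           # ... but individual scores are inconsistent
```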





Alternative forms of measurement include
written self-reports, interviews, direct behavioral observation, and examining available records. Each of these options is vulnerable to measurement error.

The principle of triangulation:

by using several different research methods to collect the same information, we can use several imperfect measurement alternatives and see if they tend to produce the same finding.

Reliability concerns the amount of random error in a measure and measurement consistency. It refers to the likelihood that a given measurement procedure will yield the same description of a given phenomenon if that measurement is repeated. For instance, estimating a person’s age by asking his or her friends would be less reliable than asking the person or checking the birth certificate.

Different types of reliability include interobserver reliability, test-retest reliability, parallel forms reliability, and internal consistency reliability.
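
For the quantitative forms of reliability, here is a minimal sketch (assuming simulated respondents and items, not data from the text) of two common checks: test-retest reliability as the correlation between two administrations, and internal consistency estimated with Cronbach's alpha.

```python
# Minimal sketch (simulated respondents and items): test-retest reliability as a
# correlation across two administrations, and internal consistency via Cronbach's alpha.
import numpy as np

def test_retest(time1, time2):
    """Correlation between the same measure administered at two points in time."""
    return np.corrcoef(time1, time2)[0, 1]

def cronbach_alpha(items):
    """items: 2-D array with rows = respondents and columns = scale items."""
    items = np.asarray(items, dtype=float)
    k = items.shape[1]
    item_variances = items.var(axis=0, ddof=1).sum()
    total_variance = items.sum(axis=1).var(ddof=1)
    return (k / (k - 1)) * (1 - item_variances / total_variance)

rng = np.random.default_rng(3)
trait = rng.normal(size=200)                                  # underlying construct
items = trait[:, None] + rng.normal(0.0, 0.5, size=(200, 5))  # five related scale items
t1 = trait + rng.normal(0.0, 0.5, size=200)                   # first administration
t2 = trait + rng.normal(0.0, 0.5, size=200)                   # second administration

print(round(test_retest(t1, t2), 2))     # high correlation -> stable over time
print(round(cronbach_alpha(items), 2))   # high alpha -> internally consistent scale
```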


Validity concerns systematic error in measurement: the extent to which a specific measurement provides data that relate to commonly accepted meanings of a particular concept. There are numerous yardsticks for determining validity: face validity, content validity, criterion-related validity, and construct validity. The latter two are empirical forms of validity, whereas the former two are based on expert judgments.


Two subtypes of criterion-related validity are predictive validity and concurrent validity.


The difference between these subtypes has to do with whether the measure is being tested according to its ability to predict a criterion that will occur in the future or its correspondence to a criterion that is known concurrently.
Construct validation

involves testing whether a measure relates to other variables according to theoretical expectations. It also involves testing the measure’s convergent validity and discriminant validity.
A measure has convergent validity

when its results correspond to the results of other methods of measuring the same construct.
A measure has discriminant validity when

its results do not correspond as highly with measures of other constructs as they do with other measures of the same construct, and when its results correspond more highly with the other measures of the same construct than do measures of alternative constructs.
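
The pattern described above can be checked with simple correlations; the sketch below (with made-up scale names and simulated data) shows a new measure correlating highly with another measure of the same construct (convergent) and only weakly with a measure of a different construct (discriminant).

```python
# Minimal sketch (made-up scale names, simulated data): a new depression scale
# should correlate highly with another depression measure (convergent validity)
# and much less with a measure of a different construct (discriminant validity).
import numpy as np

rng = np.random.default_rng(4)
n = 500
depression  = rng.normal(size=n)
self_esteem = rng.normal(size=n)

new_scale       = depression + rng.normal(0.0, 0.5, n)    # measure being validated
other_dep_scale = depression + rng.normal(0.0, 0.5, n)    # established same-construct measure
esteem_scale    = self_esteem + rng.normal(0.0, 0.5, n)   # different-construct measure

convergent   = np.corrcoef(new_scale, other_dep_scale)[0, 1]
discriminant = np.corrcoef(new_scale, esteem_scale)[0, 1]
print(round(convergent, 2), round(discriminant, 2))       # expect roughly 0.8 vs. near 0
```
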
Factorial validity

refers to how many different constructs a scale measures and whether the number of constructs and the items making up those constructs are what the researcher intends.
Reliability and validity are defined and handled differently in qualitative research than they are in quantitative research. Qualitative researchers disagree about definitions and criteria for reliability and validity, and some argue that they are not applicable at all to qualitative research. These disagreements tend to be connected to differing epistemological assumptions about the nature of reality and objectivity.


The ability to detect subtle differences between groups or subtle changes over time within a group

is termed the sensitivity of an instrument.
Known-groups validity is another subtype of criterion-related validity.

It assesses whether an instrument accurately differentiates between groups known to differ with respect to the variable being measured.