34 Cards in this Set

basic science
accumulates empirically verifiable facts about the world and its inhabitants
applied science
puts a premium on knowledge that is more immediately useful
3 basic ethical principles
respect for persons, beneficence, and justice

The commission found a way to apply all three:
1. the requirement of full and informed consent
2. a formal assessment of the risk and benefits of research participation
3. a system of distributive justice in the selection of research participants (which includes types of groups)
respect for persons (ethical principle 1)
people are capable of forming their own opinions, making choices, and carrying out actions... treat everyone as an autonomous agent with freedom of action

Investigators should provide additional protection to individuals with special circumstances
beneficence (ethical principle 2)
investigators are obliged to avoid harming individuals who participate in research
In cases where the research involves some level of risk, it is the investigator's duty to minimize these risks and to maximize all possible benefits for the participant
justice (ethical principle 3)
investigators must take care to distribute the advantages and disadvantages of research participation in a balanced manner, so that no particular group reaps greater benefits or suffers greater burdens
Validity
How well a measure or design does what it says it does.
Validity is about the logic of the study: does it make sense?
Reliability
Whether a measure is stable or consistent.
Reliability is more numerical: do measures meet established standards in the field?
Construct validity
are the variables operationalized appropriately?

measures what it’s supposed to measure
statistical validity
are the data analyzed properly?
internal validity
is the independent variable actually the cause of differences in the dependent variable?
external validity
does this study generalize to other people, situations, locations, etc.?
test-retest reliability
temporal stability

the variation in measurements taken by a single person or instrument on the same item and under the same conditions. A less-than-perfect test-retest reliability causes test-retest variability. Such variability can be caused by, for example, intra-individual variability and intra-observer variability. A measurement may be said to be repeatable when this variation is smaller than some agreed limit.

an estimate of the degree of fluctuation of the instrument, or of the characteristic it is designed to measure, from one administration to another.
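A minimal sketch in Python, assuming two hypothetical lists of scores from the same respondents tested one week apart, of how test-retest reliability is commonly estimated as the correlation between the two administrations:

```python
# Test-retest reliability: correlate scores from two administrations of the
# same measure given to the same people (hypothetical data).
from statistics import correlation  # Python 3.10+

time1 = [12, 15, 9, 20, 17, 11, 14]   # first administration
time2 = [13, 14, 10, 19, 18, 12, 15]  # same respondents, retested later

# A high positive r indicates temporal stability: high scorers stay high,
# low scorers stay low.
r = correlation(time1, time2)
print(f"test-retest reliability (Pearson r): {r:.2f}")
```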
alternate-form reliability
equivalence

Alternate-form reliability is established by administering two different forms of the same test to the same individuals. This method is convenient for avoiding the problems that come with the test-retest method. With the alternate-form method, an individual is tested on one form of the test and then again on a comparable second form, with about one week in between. This method is used more than the test-retest method because it has fewer associated problems, including a reduction in practice effects.

In this experiment, participants were assigned one of two roles, either prisoner or prison guard. If the same participant had been given both roles, that would have been an example of alternate-form reliability, but since the roles were given to different participants, the term does not apply to the experiment. Used correctly, a participant would be given the roles of both prison guard and prisoner, and that individual's behavior in one role would later be compared with their behavior in the other. Because the two roles are not expected to produce similar behavior, alternate-form reliability was not used in this experiment.
convergent validity
matches

A parameter often used in sociology, psychology, and other behavioral sciences; it refers to the degree to which two measures of constructs that theoretically should be related are, in fact, related. Convergent validity, along with discriminant validity, is a subtype of construct validity. Convergent validity can be established if two similar constructs correspond with one another, while discriminant validity applies to two dissimilar constructs that are easily differentiated.
discriminant validity
different

tests whether concepts or measurements that are supposed to be unrelated are, in fact, unrelated
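A minimal sketch, using hypothetical scores, of how convergent and discriminant validity are often checked as patterns of correlation: two measures of the same construct should correlate strongly, while a measure of a theoretically unrelated construct should not:

```python
# Convergent vs. discriminant validity as correlation patterns (hypothetical data).
from statistics import correlation  # Python 3.10+

anxiety_scale_a = [10, 14, 8, 19, 16, 12]  # two measures of the same construct
anxiety_scale_b = [11, 13, 9, 18, 17, 11]
shoe_size       = [7, 10, 8, 9, 11, 8]     # a theoretically unrelated variable

# Convergent validity: similar constructs should correspond.
print("convergent r:", round(correlation(anxiety_scale_a, anxiety_scale_b), 2))
# Discriminant validity: dissimilar constructs should be easily differentiated.
print("discriminant r:", round(correlation(anxiety_scale_a, shoe_size), 2))
```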
content validity
samples relevant material, e.g. quizzes

refers to the extent to which a measure represents all facets of a given social construct. For example, a depression scale may lack content validity if it only assesses the affective dimension of depression but fails to take into account the behavioral dimension. An element of subjectivity exists in determining content validity, which requires a degree of agreement about what a particular personality trait such as extraversion represents. Disagreement about what the trait represents will prevent a measure from attaining high content validity.
criterion validity
correlated with outcome criteria

a measure of how well one variable or set of variables predicts an outcome based on information from other variables, and will be achieved if a set of measures from a personality test relate to a behavioral criterion on which psychologists agree.
Concurrent validity
present

is demonstrated where a test correlates well with a measure that has previously been validated. The two measures may be for the same construct, or for different, but presumably related, constructs
Predictive validity
future

the extent to which a score on a scale or test predicts scores on some criterion measure
Known groups validity
the extent to which a measure can distinguish between groups already known to differ on the construct (e.g., a depression scale should score clinically depressed patients higher than non-depressed controls)
face validity
does it appear to measure what it should?

the term simply means whether the test seems on the surface (or "face") to be measuring something relevant. It should not be confused with content validity, as face validity refers not to what the test measures but only to how it looks. The idea of face validity is that if a test does not appear to be relevant, some respondents may not take it seriously.
internal-consistency reliability
how well all the items in the test hang together; the reliability of its components
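A minimal sketch of one common way to quantify internal-consistency reliability, Cronbach's alpha, computed here from a small hypothetical respondent-by-item score matrix:

```python
# Cronbach's alpha as an index of internal-consistency reliability
# (hypothetical data: rows = respondents, columns = test items).
from statistics import pvariance

scores = [
    [4, 5, 4, 3],
    [2, 2, 3, 2],
    [5, 4, 5, 5],
    [3, 3, 2, 3],
    [4, 4, 4, 5],
]

k = len(scores[0])                                   # number of items
items = list(zip(*scores))                           # item-wise columns
item_vars = sum(pvariance(col) for col in items)     # sum of item variances
total_var = pvariance([sum(row) for row in scores])  # variance of total scores

alpha = (k / (k - 1)) * (1 - item_vars / total_var)
print(f"Cronbach's alpha: {alpha:.2f}")
```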
systematic error
the name for fluctuations that are not random but are slanted in a particular direction (thus, another name for systematic error is bias)
random error
(often described as noise) is the name for chance fluctuations, or haphazard errors.
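A minimal simulation, with assumed values, contrasting the two kinds of error: a constant bias (systematic error) does not average out over repeated measurements, while random noise largely does:

```python
# Systematic error (bias) vs. random error (noise) in repeated measurements.
import random

true_value = 100.0
bias = 2.5        # systematic error: a constant slant in one direction
noise_sd = 1.0    # random error: haphazard, chance fluctuation

readings = [true_value + bias + random.gauss(0, noise_sd) for _ in range(1000)]
mean = sum(readings) / len(readings)

print(f"average reading:  {mean:.2f}")                # ~102.5
print(f"systematic error: {mean - true_value:.2f}")   # the bias remains
```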
temporal stability
or dependability: those who scored high initially also score high on retest, and those who scored low initially also score low on retest.
item to item reliability
Think of this value as the estimate of the reliability of any single item on average.
questionnaire
the physical instrument used for data collection
survey
a type of study designed to collect information from a sample in order to generalize to a population
The quality of a survey is defined by its sampling
What makes a good sample?
How important is representativeness?
How important is sample size?
simple random sample
a subset of individuals (a sample) chosen from a larger set (a population) in such a way that each individual is chosen by chance and has the same probability of being selected.
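A minimal sketch, assuming a hypothetical population of 500 ID numbers, of drawing a simple random sample in which every member has the same chance of selection:

```python
# Simple random sample: every member of the population is equally likely
# to be chosen (hypothetical population of student ID numbers).
import random

population = list(range(1, 501))          # e.g. 500 student IDs
sample = random.sample(population, k=50)  # 50 members, chosen without replacement
print(sample[:10])
```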
stratified random sample
when subpopulations within an overall population vary, it is advantageous to sample each subpopulation (stratum) independently. Stratification is the process of dividing members of the population into homogeneous subgroups before sampling. The strata should be mutually exclusive: every element in the population must be assigned to only one stratum. The strata should also be collectively exhaustive: no population element can be excluded. Then simple random sampling or systematic sampling is applied within each stratum. This often improves the representativeness of the sample by reducing sampling error. It can produce a weighted mean that has less variability than the arithmetic mean of a simple random sample of the population.
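A minimal sketch, assuming hypothetical class-year strata, of proportional stratified sampling: the population is split into mutually exclusive, collectively exhaustive subgroups, and a simple random sample is drawn within each one:

```python
# Stratified random sample: divide the population into homogeneous strata,
# then sample randomly within each stratum (hypothetical strata and sizes).
import random

strata = {
    "freshman":  [f"F{i}" for i in range(200)],
    "sophomore": [f"S{i}" for i in range(150)],
    "junior":    [f"J{i}" for i in range(100)],
    "senior":    [f"R{i}" for i in range(50)],
}

sample_fraction = 0.10  # proportional allocation: 10% of each stratum
sample = []
for members in strata.values():
    n = max(1, round(len(members) * sample_fraction))
    sample.extend(random.sample(members, n))

print(len(sample), "people sampled across", len(strata), "strata")
```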
cluster sampling
used when "natural" but relatively homogeneous groupings are evident in a statistical population. It is often used in marketing research. In this technique, the total population is divided into these groups (or clusters) and a simple random sample of the groups is selected. Then the required information is collected from a simple random sample of the elements within each selected group.
response order effects: auditory
recency
response order effects: visual
primacy