35 Cards in this Set

  • Front
  • Back
Alternative forms reliability?
Involves using two instruments, both measuring the same variables, and relating or correlating the scores of the same group of individuals on the two instruments.
Alternative forms and test-retest reliability?
is an approach to reliability in which the researcher administers the test twice and also uses an alternate form of the test from the first administration to the second.
Attitudinal measure?
seeks to assess affect or feelings toward educational topics (e.g., assessing positive or negative attitudes toward giving students a choice of school to attend).
Behavioral observations?
Consist of selecting an instrument to record a behavior, observing individuals for that behavior, and checking points on a scale that reflect the behavior (e.g., behavioral checklists).
Factual information or personal documents?
consist of numeric data in public records of individuals. This data can include grade reports, school attendance records, student demographic data, and census information.
Instrument?
a tool for measuring, observing, or documenting quantitative data. Researchers identify these instruments before they collect data, and they may include a test, a questionnaire, a tally sheet, a log, an observational checklist, an inventory, or an assessment instrument.
Interrater reliability?
means that two or more individuals observe an individual’s behavior and record scores, and then the scores of the observers are compared to determine whether they are similar.
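As a rough illustration (not part of the original card), interrater agreement is often summarized as a percent agreement between the two observers' scores. A minimal Python sketch, assuming two hypothetical observers rating the same ten students:

```python
# Hypothetical ratings of the same 10 students by two observers (illustrative data only).
observer_a = [3, 4, 2, 5, 3, 4, 4, 2, 5, 3]
observer_b = [3, 4, 2, 4, 3, 4, 5, 2, 5, 3]

# Percent agreement: how often the two observers assign the exact same score.
agreement = sum(a == b for a, b in zip(observer_a, observer_b)) / len(observer_a)
print(f"Percent agreement: {agreement:.0%}")  # 80% for this made-up data
```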
Internally consistent?
means that scores from an instrument are reliable and accurate if they are consistent across the items of the instrument.
Interval Scales?
Provide “continuous” response options to questions that have presumably equal distances between options.
Convenience Sampling?
is a quantitative sampling procedure in which the researcher selects participants because they are willing and available to be studied.
Modifying an instrument?
means locating an existing instrument, obtaining permission to change it, and making changes to fit the participants.
Multistage Cluster Sampling?
is a quantitative sampling procedure in which the researcher chooses a sample in two or more stages because the populations cannot be easily identified or they are extremely large.
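A minimal sketch of the two-stage idea, assuming a hypothetical dictionary mapping schools (clusters) to student lists: first randomly select clusters, then randomly select individuals within the chosen clusters.

```python
import random

# Hypothetical clusters: schools mapped to their students (illustrative data only).
schools = {
    "School A": [f"A-{i}" for i in range(1, 101)],
    "School B": [f"B-{i}" for i in range(1, 81)],
    "School C": [f"C-{i}" for i in range(1, 121)],
    "School D": [f"D-{i}" for i in range(1, 61)],
}

# Stage 1: randomly select clusters (schools).
chosen_schools = random.sample(list(schools), k=2)

# Stage 2: randomly select individuals within each chosen cluster.
sample = [s for school in chosen_schools for s in random.sample(schools[school], k=10)]
print(chosen_schools, len(sample))  # 2 schools, 20 students
```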
Nominal Scale?
Provides response options where participants check one or more categories that describe their traits, attributes, or characteristics.
Nonprobability Sampling?
is a quantitative sampling procedure in which the researcher chooses participants because they are available, convenient, and represent some characteristic the investigator seeks to study.
Operational definitions?
are the specification of how variables will be defined and measured (or assessed) in a study.
Ordinal scales?
are response options in which participants rank order some trait, attribute, or characteristic from best (or most important) to worst (or least important).
Performance measures?
assess an individual’s ability to perform on an achievement test, an intelligence test, an aptitude test, an interest inventory, or a personality assessment inventory.
Population?
is a group of individuals who share the same characteristics. For example, all teachers would make up the population of teachers, and all high school administrators in a school district would make up the population of administrators.
Probability Sampling?
is a quantitative sampling procedure in which the researcher selects individuals from the population so that each person has an equal probability of being selected from the population.
Ratio scales?
are response scales in which participants check a response option with a true zero and equal distances between units.
Reliability?
means that individual scores from an instrument should be nearly the same or stable on repeated administrations of the instrument and that they should be free from sources of measurement error and consistent.
Representative Sample?
refers to the selection of individuals from a population for a sample such that the individuals selected are typical of the population under study, enabling the researcher to draw conclusions from the sample about the population as a whole.
Sample?
is a subgroup of the target population that the researcher plans to study for the purpose of making generalizations about the target population.
Sampling error?
is the difference between the sampling estimate and the true population score.
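To make the definition concrete, a small worked example with made-up numbers: if the true population mean were known, the sampling error of one sample's estimate is simply the difference between the two.

```python
# Illustrative numbers only: a population mean test score vs. one sample's mean.
population_mean = 72.0   # the "true" population score (usually unknown in practice)
sample_mean = 69.5       # the estimate computed from one sample

sampling_error = sample_mean - population_mean
print(sampling_error)    # -2.5
```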
Scales of measurement?
are response options to questions that measure (or observe) variables in nominal, ordinal, or interval/ratio units.
Simple Random Sampling?
is a quantitative sampling procedure in which the researcher selects participants (or units, such as schools) for the sample so that any sample of size N has an equal probability of being selected from the population. The intent of simple random sampling is to choose units to be sampled that will be representative of the population.
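A minimal sketch of the idea using Python's standard library, assuming a hypothetical list of population members; `random.sample` draws without replacement so that every sample of size N is equally likely.

```python
import random

population = [f"teacher_{i}" for i in range(1, 501)]  # hypothetical population of 500 teachers
sample = random.sample(population, k=50)              # every sample of size 50 is equally likely
print(len(sample))  # 50
```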
Spearman-Brown formula?
is a formula for calculating the reliability of scores using all of the questions on an instrument. Because the split-half estimate relies on information from only half of the test, a modification of this procedure is to use this formula to estimate full-test reliability.
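The card does not state the formula itself, but the standard Spearman-Brown correction for a split-half estimate is r_full = 2·r_half / (1 + r_half). A small sketch:

```python
def spearman_brown(r_half: float) -> float:
    """Estimate full-test reliability from a split-half correlation (standard Spearman-Brown correction)."""
    return (2 * r_half) / (1 + r_half)

print(round(spearman_brown(0.70), 2))  # 0.82: the full test is estimated to be more reliable than either half
```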
Snowball Sampling?
is a sampling procedure in which the researcher asks participants to identify other participants to become members of the sample.
Stratified sampling?
is a quantitative sampling procedure in which the researcher stratifies the population on some specific characteristic (e.g., gender) and then samples, using simple random sampling, from each stratum of the population.
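A minimal sketch of stratifying on one characteristic and then sampling randomly within each stratum; the data and the equal allocation of 15 per stratum are hypothetical choices for illustration.

```python
import random

# Hypothetical population with a gender attribute (illustrative data only).
population = [{"id": i, "gender": "F" if i % 2 else "M"} for i in range(1, 201)]

# Stratify on the characteristic, then draw a simple random sample from each stratum.
strata = {"F": [p for p in population if p["gender"] == "F"],
          "M": [p for p in population if p["gender"] == "M"]}
sample = [p for stratum in strata.values() for p in random.sample(stratum, k=15)]
print(len(sample))  # 30
```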
Test-retest reliability?
examines the extent to which scores from one sample are stable over time from one test administration to another.
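In practice this stability is usually summarized as the correlation between the two administrations. A small sketch with made-up scores, computing a plain Pearson correlation:

```python
# Hypothetical scores for the same 6 students on two administrations (illustrative data only).
first = [85, 78, 92, 70, 88, 95]
second = [83, 80, 90, 72, 86, 97]

n = len(first)
mean1, mean2 = sum(first) / n, sum(second) / n
cov = sum((x - mean1) * (y - mean2) for x, y in zip(first, second)) / n
sd1 = (sum((x - mean1) ** 2 for x in first) / n) ** 0.5
sd2 = (sum((y - mean2) ** 2 for y in second) / n) ** 0.5
print(round(cov / (sd1 * sd2), 2))  # ~0.97; a value near 1 indicates stable scores over time
```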
Systematic Sampling?
is a quantitative sampling procedure in which the researcher chooses every “nth” individual or site in the population until the desired sample size is achieved.
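A minimal sketch: pick a random start within the first interval and then take every nth member of a hypothetical population list until the desired sample size is reached.

```python
import random

population = [f"site_{i}" for i in range(1, 401)]  # hypothetical list of 400 sites
sample_size = 40
interval = len(population) // sample_size           # "every nth" individual or site

start = random.randrange(interval)                  # random starting point within the first interval
sample = population[start::interval][:sample_size]
print(len(sample))  # 40
```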
Validity?
is the development of sound evidence to demonstrate that the intended test interpretation (of the concept or construct that the test is assumed to measure) matches the proposed purpose of the test. This evidence is based on test content, response processes, internal structure, relations to other variables, and the consequences of testing.
Target population?
sometimes called the sampling frame, is a group of individuals with some common defining characteristic that the researcher can identify with a list or set of names.
Unit of analysis?
refers to the unit (e.g., individual, family, school, school district) the researcher uses to gather the data.
What steps are involved in the process of data collection in quantitative research?
-Identify participants
-Get permissions
-List options for collecting info
-Locate, select and assess instrument
-Collect data