28 Cards in this Set


CHAPTER 1

INTRODUCTION


BACKGROUND OF THE STUDY


RRL (REVIEW OF RELATED LITERATURE)


THEORETICAL FRAMEWORK


CONCEPTUAL FRAMEWORK


SOP (STATEMENT OF THE PROBLEM)


HYPOTHESIS


SIGNIFICANCE OF THE STUDY


SCOPE AND DELIMITATION


DEFINITION OF TERMS

CHAPTER 2

RESEARCH DESIGN


SETTING


RESPONDENTS


SAMPLE SIZE and SAMPLING TECHNIQUES


INSTRUMENTATION


DATA GATHERING


STATISTICAL TREATMENT

Data Gathering Procedure (Qualitative)

OBSERVATION


INTERVIEW


FOCUS GROUP DISCUSSION (FGD)


ASSESSMENT OF PERFORMANCE

Data Gathering Procedure (Quantitative)

SURVEY QUESTIONNAIRES


TEST QUESTIONNAIRES


OBSERVATION


INTERVIEW

, as defined by Barrot (2017), refers to the degree to which an instrument measures what it is supposed to measure.

Validity

Types of Validity

FACE VALIDITY


CONTENT VALIDITY


CONSTRUCT VALIDITY


CRITERION VALIDITY

- An instrument has face validity when it “appears” to measure the variables being studied. Hence, checking for it is a subjective process. It does not ensure that the instrument has actual validity.

Face Validity

refers to the degree to which an instrument covers a representative sample (or specific elements) of the variables to be measured. Similar to face validity, assessing it is a subjective process which is done with the help of a list of specifications. This list of specifications is provided by experts in your field of study.

Content Validity

It is the degree to which an instrument measures the variables being studied as a whole. A construct is often an intangible or abstract variable such as personality, intelligence, or moods. If your instrument cannot detect this intangible construct, it is considered invalid.

Construct Validity

refers to the degree to which an instrument predicts the characteristics of a variable in a certain way. This means that the instrument produces results similar to those of another instrument in measuring a certain variable, so a correlation between the two sets of results is expected. Hence, it is evaluated through statistical methods.

Criterion Validity
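
Because criterion validity is evaluated through statistical methods, the usual evidence is a correlation between the results of the instrument and those of an established one (e.g., an admission test against the NAT, as in the cards below). A minimal Python sketch, assuming two hypothetical score lists; the `pearson_r` helper and all values are illustrative, not from the source:

```python
# Minimal sketch: criterion validity as a correlation between two instruments.
# All scores are hypothetical illustration values.
import math

def pearson_r(xs, ys):
    """Pearson correlation coefficient between two equal-length score lists."""
    n = len(xs)
    mean_x, mean_y = sum(xs) / n, sum(ys) / n
    cov = sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, ys))
    sd_x = math.sqrt(sum((x - mean_x) ** 2 for x in xs))
    sd_y = math.sqrt(sum((y - mean_y) ** 2 for y in ys))
    return cov / (sd_x * sd_y)

new_instrument = [78, 85, 62, 90, 71, 88]  # scores from the instrument being validated
criterion      = [75, 88, 60, 93, 70, 85]  # scores from an established instrument
print(f"criterion validity r = {pearson_r(new_instrument, criterion):.2f}")
```

An r close to 1 suggests the instrument tracks the criterion; what counts as acceptable depends on the field.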

TYPES OF CRITERION VALIDITY

CONCURRENT VALIDITY


PREDICTIVE VALIDITY

Is the test able to predict results similar to those of a test already validated in the past? Example: Admission Test vs. NAT

1. Concurrent Validity

Does the test produce results similar to those of another instrument that will be employed in the future? Example: college admission tests vs. future performance in Math

2. Predictive Validity

refers to the consistency of the measures of an instrument; it concerns the accuracy of the measurement.


Reliability

Types of Reliability

1. Test-retest reliability
2. Equivalent forms reliability
3. Internal consistency reliability
4. Inter-rater reliability


- is achieved by administering an instrument twice to the same group of respondents/participants and then computing the consistency of scores. It is often ideal to conduct the retest after a short period of time (e.g., two weeks) in order to record a high correlation between the variables tested in the study.

Test-retest reliability
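
"Computing the consistency of scores" here is typically done by correlating the two administrations. A minimal sketch with NumPy, using hypothetical scores:

```python
# Minimal sketch: test-retest reliability as the correlation between two
# administrations of the same instrument. Scores are hypothetical.
import numpy as np

test   = np.array([12, 18, 15, 20, 9, 14])   # first administration
retest = np.array([13, 17, 15, 19, 10, 15])  # same group, two weeks later

r = np.corrcoef(test, retest)[0, 1]  # Pearson r between the two score sets
print(f"test-retest reliability r = {r:.2f}")
```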

- is measured by administering two tests identical in all aspects except the actual wording of items. In short, the two tests have the same coverage, difficulty level, test type, and format. An example procedure involving equivalent forms reliability is administering a pre-test and post-test.

Equivalent forms reliability

- is a measure of how well the items in an instrument measure the same construct.

Internal consistency reliability
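
The source does not name a statistic for this, but Cronbach's alpha is one common measure of internal consistency. A minimal NumPy sketch with hypothetical item scores:

```python
# Minimal sketch: internal consistency via Cronbach's alpha.
# Rows are respondents, columns are items; all scores are hypothetical.
import numpy as np

scores = np.array([
    [4, 5, 4, 4],
    [3, 3, 2, 3],
    [5, 5, 5, 4],
    [2, 2, 3, 2],
    [4, 4, 4, 5],
])

k = scores.shape[1]                         # number of items
item_vars = scores.var(axis=0, ddof=1)      # variance of each item
total_var = scores.sum(axis=1).var(ddof=1)  # variance of respondents' total scores
alpha = (k / (k - 1)) * (1 - item_vars.sum() / total_var)
print(f"Cronbach's alpha = {alpha:.2f}")    # values near 1 indicate consistency
```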

- measures the consistency of scores assigned by two or more raters on a certain set of results.

Inter-rater reliability
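
One common way to quantify this consistency (again, not prescribed by the source) is Cohen's kappa for two raters, which corrects raw agreement for chance. A minimal sketch with hypothetical ratings:

```python
# Minimal sketch: inter-rater reliability via Cohen's kappa for two raters.
# Ratings are hypothetical category labels for the same set of results.
from collections import Counter

rater_a = ["pass", "pass", "fail", "pass", "fail", "pass", "pass", "fail"]
rater_b = ["pass", "fail", "fail", "pass", "fail", "pass", "pass", "pass"]

n = len(rater_a)
p_o = sum(a == b for a, b in zip(rater_a, rater_b)) / n  # observed agreement

# Agreement expected by chance, from each rater's marginal frequencies
count_a, count_b = Counter(rater_a), Counter(rater_b)
p_e = sum(count_a[c] * count_b[c] for c in count_a) / (n * n)

kappa = (p_o - p_e) / (1 - p_e)
print(f"observed agreement = {p_o:.2f}, Cohen's kappa = {kappa:.2f}")
```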

Overview of the design used for the study. The plan or structure for conducting a study, whether it is experimental, quasi-experimental, correlational, case study, exploratory, descriptive, phenomenology, ethnography, etc. Summarizes the set of procedures that the researcher will use to obtain data to answer the research problems.

Research Design

Included only if the setting is of particular significance or importance


Setting

Includes the number and relevant characteristics of the respondents as well as the sampling plan and technique. The term “Respondents” is more appropriate when the method to be used is a survey; “Participants” if interview or FGD.

Respondents/ Participants

The term is more appropriate when the method to be used is a survey.

Respondents

The term is more appropriate when the method to be used is an interview or FGD.

Participants

This section is where the researcher discusses the process used in coming up with the specific number of respondents for the study and how the individual respondents will be selected from the population.

Sample Size and Sampling Technique
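
As an illustration of this process (the cards do not commit to a specific formula or technique), one approach often seen in undergraduate research is Slovin's formula for the sample size, followed by simple random sampling. All inputs below are hypothetical:

```python
# Minimal sketch: sample size via Slovin's formula, n = N / (1 + N * e^2),
# then a simple random sample. Population size and margin of error are hypothetical.
import math
import random

N = 1200   # population size
e = 0.05   # margin of error (5%)

n = math.ceil(N / (1 + N * e ** 2))  # Slovin's formula -> 300 for these inputs
print(f"sample size n = {n}")

respondents = random.sample(range(1, N + 1), n)  # simple random sampling by ID
```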

This section discusses the data gathering tool that is used in the study. Discuss how many sections or parts it has, what the parts are, how many questions there are, etc.

Instrumentation

Contains the process used when conducting the actual study. Includes the step-by-step “recipe” beginning with how the subjects were contacted all the way to how the data were collected. Should also contain the Ethical Considerations applied in the study (e.g., informed consent, debriefing procedures, and so forth).

Data Gathering Procedure

Describes the procedure on how the data are to be (or were) analyzed:
  • for Quantitative: Statistical Treatment/Analysis
  • for Qualitative: Thematic/Content Analysis
  • for Mixed: Statistical and Thematic or Content Analysis

Statistical Treatment/Data Analysis
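
For the quantitative case, a minimal sketch of a simple statistical treatment using only Python's standard library; the survey scores are hypothetical, and the actual treatment depends on the study's research problems:

```python
# Minimal sketch: descriptive statistics for hypothetical 5-point Likert responses.
import statistics

responses = [4, 5, 3, 4, 2, 5, 4, 3, 4, 5]

print(f"mean = {statistics.mean(responses):.2f}")
print(f"sd   = {statistics.stdev(responses):.2f}")
print(f"mode = {statistics.mode(responses)}")
```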