Indices, Scales, and Typologies

Quantitative data analysis often requires the construction of two types of composite measures of variables -- indices and scales. These measures are frequently used and are important because social scientists often study variables that possess no clear and unambiguous single indicators (unlike, say, age or gender). Researchers also often center their work on the attitudes and orientations of a group of people, which requires several items to provide a full indication of the variables. Finally, researchers seek to establish ordinal categories from very low to very high (or vice versa), which single data items cannot ensure, while an index or scale can.


Typologies - the classification (typically nominal) of observations in terms of their attributes on two or more variables. The classification of newspapers as liberal-urban, liberal-rural, conservative-urban, or conservative-rural would be an example.

More general information

Indices and scales have several features in common.
Both:


* are ordinal measures of variables
* can order the units of analysis in terms of specific variables
* are composite measures of variables (measurements based on more than one data item)

Index

Index - summarizes and rank-orders several specific observations and represents some more general dimension. An index is an accumulation of scores from a variety of individual items.

Example of an Index

Let’s say we are interested in measuring job satisfaction, and one of our key variables is job-related depression. This might be difficult to measure with just one question. Instead, we can create several different questions that deal with job-related depression and create an index from the included items. Let’s say we have four questions to measure job-related depression, each with the response choices of "yes" or "no":

* "When I think about myself and my job, I feel downhearted and blue."
* "When I’m at work, I often get tired for no reason."
* "When I’m at work, I often find myself restless and can’t keep still."
* "When at work, I am more irritable than usual."

To create our index of job-related depression, we would simply add up the number of "yes" responses for the four questions above. For example, if a respondent answered "yes" to three of the four questions, his or her index score would be 3, meaning that job-related depression is high. If a respondent answered “no” to all four questions, his or her job-related depression score would be 0, indicating that he or she is not depressed in relation to work.
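To make the arithmetic concrete, here is a minimal Python sketch of this index, using made-up respondents and the four yes/no items above:

```python
# Minimal sketch of the job-related depression index described above.
# Each respondent's answers to the four yes/no items are recorded as
# "yes"/"no" strings; the index score is simply the count of "yes" responses.

respondents = {
    "A": ["yes", "yes", "yes", "no"],  # index score 3 -> high job-related depression
    "B": ["no", "no", "no", "no"],     # index score 0 -> not depressed at work
}

for name, answers in respondents.items():
    score = sum(1 for a in answers if a == "yes")
    print(f"Respondent {name}: index score = {score}")
```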

Scale

A scale is a type of composite measure that is composed of several items that have a logical or empirical structure among them. That is, scales take advantage of differences in intensity among the indicators of a variable. The most commonly used scale is the Likert scale, which contains response categories such as "strongly agree," "agree," "disagree," and "strongly disagree." Other scales used in social science research include the Thurstone scale, Guttman scale, Bogardus social distance scale, and the semantic differential scale.

Similarities

They are both ordinal measures of variables. That is, they both rank-order the units of analysis in terms of specific variables. For example, a person’s score on either a scale or an index of religiosity gives an indication of his or her religiosity relative to other people.


Both scales and indexes are composite measures of variables, meaning that the measurements are based on more than one data item. For example, a person’s IQ score is determined by his or her responses to many test questions, not simply one question.

Differences

First, they are constructed differently. An index is constructed simply by accumulating the scores assigned to individual items. For example, we might measure religiosity by adding up the number of religious events the respondent engages in each month. A scale, on the other hand, is constructed by assigning scores to patterns of responses, with the idea that some items suggest a weak degree of the variable while other items reflect stronger degrees of it. For example, if we are constructing a scale of political activism, we might score "running for office" higher than simply "voting in the last election." "Contributing money to a political campaign" and "working on a political campaign" would likely score in between. We would then add up the scores for each individual based on the items they participated in and assign them an overall score for the scale.
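A minimal Python sketch of this kind of scale scoring, with illustrative (assumed) weights for the political-activism items:

```python
# Hypothetical scoring for the political-activism scale sketched above.
# The weights are illustrative assumptions: items that indicate stronger
# degrees of activism receive higher scores than weaker ones.

item_weights = {
    "voted in the last election": 1,
    "contributed money to a political campaign": 2,
    "worked on a political campaign": 3,
    "ran for office": 4,
}

def scale_score(activities):
    """Sum the weights of the items a respondent reported participating in."""
    return sum(item_weights[a] for a in activities)

print(scale_score(["voted in the last election",
                   "worked on a political campaign"]))  # -> 4
```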

Types of Scales

The Likert scale is one of the most commonly used scales in the research community. It assigns a numerical value to the intensity (or neutrality) of feeling about a specific topic, and then attempts to standardize these response categories to provide an interpretation of the relative intensity of items on the scale. Responses such as “strongly agree,” “moderately agree,” “moderately disagree,” and “strongly disagree” are typical of a Likert scale, or a survey based upon it.


The semantic differential scale is similar to Likert scaling; however, rather than allowing varying degrees of response, it asks the respondent to rate something in terms of two completely opposite adjectives.


An example of a scale used in real-life situations is the Bogardus Social Distance Scale. This scale, developed by Emory Bogardus, is used to determine people’s willingness to associate and socialize with people who are unlike themselves, including those of other races, religions, and classes.


Thurstone scaling is quite unlike Bogardus or Likert scaling. Developed by Louis Thurstone, this format uses respondents both to answer survey questions and to determine the importance of the questions. One group of respondents, a group of “judges,” assigns weights to the different items, while another group actually answers the questions on the survey.


Guttman scaling, developed by Louis Guttman, is the type of scaling used most today. Guttman scaling, like the Thurstone scale, recognizes that different questions provide different intensities of indication of preferences. It is based upon the assumption that the agreement with the strongest indicators also signifies agreement with weaker indicators. It uses a simple “agree” or “disagree” scale, without any variation in the intensities of preference.
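A short sketch of the Guttman (cumulative) logic, assuming items are listed from weakest to strongest indicator:

```python
# Sketch of the Guttman cumulative pattern: agreeing with a stronger item
# should imply agreement with all weaker items. Responses are True (agree)
# or False (disagree), ordered from weakest to strongest indicator.

def is_cumulative(responses):
    """True if the pattern fits a Guttman scale: no agreement appears
    after the first disagreement in the weak-to-strong ordering."""
    seen_disagree = False
    for agrees in responses:
        if agrees and seen_disagree:
            return False  # scale error: strong item endorsed but a weaker one was not
        if not agrees:
            seen_disagree = True
    return True

print(is_cumulative([True, True, False, False]))  # fits the scale; score = 2
print(is_cumulative([True, False, True, False]))  # scale error
```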

Validity

Accurately measuring what you say you are measuring.


Face validity


Criterion validity


Predictive validity


Face Validity

Do your questions make logical sense in regard to your research question's concepts and definitions?


Face validity is a measure of how representative a research project is ‘at face value,' and whether it appears to be a good project.

Criterion Validity

Criterion Validity assesses whether a test reflects a certain set of abilities.


* Concurrent validity measures the test against a benchmark test; a high correlation indicates that the test has strong criterion validity.
* Predictive validity is a measure of how well a test predicts abilities. It involves testing a group of subjects for a certain construct and then comparing them with results obtained at some point in the future.

Construct Validity

Construct validity defines how well a test or experiment measures up to its claims. A test designed to measure depression must measure only that particular construct, not closely related constructs such as anxiety or stress.

Reliability

Yielding the same or compatible results in repeated experiments or trials.


For maintaining reliability internally, a researcher will use as many repeat sample groups as possible, to reduce the chance of an abnormal sample group skewing the results.

More on Reliability

Reliability and validity are often confused, but the terms actually describe two completely different concepts, although they are often closely inter-related. This distinct difference is best summed up with an example:


A researcher devises a new test that measures IQ more quickly than the standard IQ test:

* If the new test delivers scores for a candidate of 87, 65, 143, and 102, then the test is neither reliable nor valid; it is fatally flawed.
* If the test consistently delivers a score of 100 when checked, but the candidate's real IQ is 120, then the test is reliable but not valid.
* If the researcher's test consistently delivers a score of 118, then that is pretty close, and the test can be considered both valid and reliable.

Reliability is an essential component of validity but, on its own, is not a sufficient measure of validity. A test can be reliable but not valid, whereas a test cannot be valid yet unreliable.


Reliability, in simple terms, describes the repeatability and consistency of a test. Validity defines the strength of the final results and whether they can be regarded as accurately describing the real world.


How do we handle issues with reliability?

* Use vetted questions; use other people's surveys.
* Be clear with definitions.
* Get lots of responses.
* Train surveyors.

Testing Reliability

In the social sciences, testing reliability is a matter of comparing two different versions of the instrument and ensuring that they are similar. When we talk about instruments, it does not necessarily mean a physical instrument, such as a mass-spectrometer or a pH-testing strip.

Test-Retest Method

The Test-Retest Method is the simplest method for testing reliability, and involves testing the same subjects at a later date, ensuring that there is a correlation between the results. An educational test retaken after a month should yield the same results as the original.
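A minimal sketch of a test-retest check in Python, with made-up scores; statistics.correlation (Pearson's r) assumes Python 3.10+:

```python
# Minimal sketch of a test-retest reliability check: correlate the scores
# from the two administrations. The scores below are illustration data only.

from statistics import correlation

test = [85, 92, 78, 88, 95]    # original administration
retest = [83, 94, 80, 86, 96]  # same subjects, one month later

r = correlation(test, retest)  # Pearson's r
print(f"test-retest correlation r = {r:.2f}")  # close to 1 suggests reliability
```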


The difficulty with this method is that it assumes nothing has changed in that time period. Staying with education, if you administer exactly the same test, the student may perform much better simply because they remember the questions and have thought about them since.

Variables

1. Continuous or Quantitative Variables


2. Discrete or Qualitative Variables

Quantitative Variables

Three Categories


Interval scale data has order and equal intervals. Interval scale variables are measured on a linear scale, and can take on positive or negative values. It is assumed that the intervals keep the same importance throughout the scale. They allow us not only to rank order the items that are measured but also to quantify and compare the magnitudes of differences between them. We can say that the temperature of 40°C is higher than 30°C, and an increase from 20°C to 40°C is twice as much as the increase from 30°C to 40°C. Counts are interval scale measurements, such as counts of publications or citations, years of education, etc.


Ordinal Variables - These occur when the measurements are continuous, but one is not certain whether they are on a linear scale, the only trustworthy information being the rank order of the observations. For example, if a scale is transformed by an exponential, logarithmic, or any other nonlinear monotonic transformation, it loses its interval-scale property. Here, it would be expedient to replace the observations by their ranks.


Ratio-


These are continuous positive measurements on a nonlinear scale. A typical example is the growth of a bacterial population (say, with a growth function Ae^(Bt)). In this model, equal time intervals multiply the population by the same ratio (hence the name ratio scale).


Ratio data are also interval data, but they are not measured on a linear scale. With interval data, one can perform logical operations, add, and subtract, but one cannot multiply or divide. For instance, if a liquid is at 40 degrees and we add 10 degrees, it will be 50 degrees. However, a liquid at 40 degrees does not have twice the temperature of a liquid at 20 degrees, because 0 degrees does not represent "no temperature" -- to multiply or divide in this way we would have to use the Kelvin scale, which has a true zero point (0 K = -273.15 °C). In the social sciences, the issue of a "true zero" rarely arises, but one should be aware of the statistical issues involved.
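A small worked example of this point, using only the standard Celsius-to-Kelvin conversion:

```python
# Worked example of the interval-vs-ratio distinction above: ratios of
# Celsius temperatures are not meaningful because 0 °C is not "no
# temperature", but ratios on the Kelvin scale (true zero) are.

def to_kelvin(celsius):
    return celsius + 273.15

print(40 / 20)                        # 2.0 -- but NOT "twice as hot"
print(to_kelvin(40) / to_kelvin(20))  # ~1.07 -- the meaningful ratio
```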

Qualitative Variables

1. Nominal Variables


2. Ordinal Variables


3. Preference Variables


4. Multiple Response Variables

Nominal Variables

Nominal variables allow for only qualitative classification. That is, they can be measured only in terms of whether the individual items belong to certain distinct categories, but we cannot quantify or even rank order the categories: Nominal data has no order, and the assignment of numbers to categories is purely arbitrary.


e.g., Gender: 1 = Male, 2 = Female


Ordinal

A discrete ordinal variable is a nominal variable whose different states are ordered in a meaningful sequence. Ordinal data have order, but the intervals between scale points may be uneven. Because of this lack of equal distances, arithmetic operations are impossible, but logical operations can be performed on ordinal data. A typical example of an ordinal variable is the socio-economic status of families. We know 'upper middle' is higher than 'middle,' but we cannot say 'how much higher.' Ordinal variables are quite useful for subjective assessments of quality, importance, or relevance. Ordinal scale data are very frequently used in social and behavioral research. Almost all opinion surveys today request answers on three-, five-, or seven-point scales. Such data are not appropriate for analysis by classical techniques, because the numbers are comparable only in terms of relative magnitude, not actual magnitude.


Consider for example a questionnaire item on the time involvement of scientists in the 'perception and identification of research problems'. The respondents were asked to indicate their involvement by selecting one of the following codes:


1 = Very low or nil
2 = Low
3 = Medium
4 = Great
5 = Very great

Preference Variables

Preference variables are specific discrete variables whose values are either in decreasing or increasing order. For example, in a survey, a respondent may be asked to indicate the importance of nine sources of information in his research and development work by using the code [1] for the most important source and [9] for the least important source.

Multiple Response Variables

Multiple response variables are those which can assume more than one value. A typical example is a survey questionnaire about the use of computers in research, in which respondents are asked to indicate the purpose(s) for which they use computers in their research work. The respondents can select more than one category.

Conceptualization and Operationalization


A conceptual definition tells you what the concept means, while an operational definition tells you only how to measure it. A conceptual definition tells what your constructs are by explaining how they are related to other constructs. This explanation and all of the constructs it refers to are abstract. On the other hand, your operational definitions describe the variables you will use as indicators for your constructs and the procedures you will use to observe or measure those variables.

Sampling

The two main methods used in survey research are probability sampling and nonprobability sampling. The big difference is that in probability sampling every person has a known, nonzero chance of being selected, so results are more likely to accurately reflect the entire population. While it would always be nice to have a probability-based sample, other factors need to be considered (availability, cost, time, what you want to say about the results). Some additional characteristics of the two methods are listed below.

Probability Sampling

You have a complete sampling frame. You have contact information for the entire population.

You can select a random sample from your population. Since all persons (or “units”) have an equal chance of being selected for your survey, you can randomly select participants without missing entire portions of your audience.

You can generalize your results from a random sample. With this data collection method and a decent response rate, you can extrapolate your results to the entire population.

Can be more expensive and time-consuming than convenience or purposive sampling.

Non-Probability Sampling - Non-probability sampling is a sampling technique where the samples are gathered in a process that does not give all the individuals in the population equal chances of being selected.

Used when there isn’t an exhaustive population list available. Some units are unable to be selected, therefore you have no way of knowing the size and effect of sampling error (missed persons, unequal representation, etc.).

Not random.

Can be effective when trying to generate ideas and getting feedback, but you cannot generalize your results to an entire population with a high level of confidence. Quota samples (males and females, etc.) are an example.

More convenient and less costly, but doesn’t hold up to expectations of probability theory.


Why is sampling important?

When we are interested in a population, it is often impractical and sometimes undesirable to try to study the entire population. For example, if the population we were interested in was frequent male Facebook users in the United States, this could be millions of users (i.e., millions of units). If we chose to study these Facebook users using structured interviews (i.e., our chosen research method), it could take a lifetime. Therefore, we choose to study just a sample of these Facebook users.

Representative Sampling

When conducting a study, a researcher selects a relatively small group of participants (a sample) from an entire population of all possible participants (for example, selecting college students at a couple of colleges from all college students in the world). Ideally, the researcher would have participants with characteristics that closely match the characteristics of the whole population - this is called having a Representative Sample.


This is important if you want to extend the findings of the study to a larger group of people, not just those in the study. For example, imagine you are at the supermarket picking out grapes. There are red, green, small, large, and globe grapes. In a representative sample you would have an equivalent number of each type of grape. You could then taste them all and make generalizations about all grapes just from tasting these few because your sample represents the larger population.

Bias

Bias = unknown or unacknowledged error created during the design, measurement, sampling, or procedure of a study, or in the choice of problem studied.

* Bias is pervasive because we want to confirm our beliefs.
* Science is organized around proving itself wrong, not right.
* Quantitative researchers attempt to eliminate bias.
* Qualitative researchers explicitly acknowledge bias.

Sampling

Probability Sampling -


1. Simple Random Sampling


2. Systematic Sampling


3. Stratified Sampling


4. Multistage Cluster Sampling


5. Probability Proportionate to Size


Non-Probability Sampling -


1. Purposive


2. Snowball


3. Quota


4. Informants

Probability Sampling

Probability sampling is a sampling technique wherein the samples are gathered in a process that gives all the individuals in the population equal chances of being selected.



In this sampling technique, the researcher must guarantee that every individual has an equal opportunity for selection, and this can be achieved if the researcher utilizes randomization.


The advantage of using a random sample is the absence of both systematic and sampling bias. If random selection was done properly, the sample is therefore representative of the entire population.


The effect of this is minimal or absent systematic bias, which is the difference between the results from the sample and the results from the population. Sampling bias is also eliminated since the subjects are randomly chosen.

Simple Random Sampling

Simple random sampling is the easiest form of probability sampling. All the researcher needs to do is ensure that all the members of the population are included in the list and then randomly select the desired number of subjects.


There are many ways to do this. It can be as mechanical as picking strips of paper with names written on them from a hat while the researcher is blindfolded, or as easy as using computer software to do the random selection for you.
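For illustration, a minimal Python sketch of simple random sampling from a hypothetical sampling frame:

```python
# Minimal sketch of simple random sampling using the standard library.
# The population list is hypothetical; every member has an equal chance of
# being drawn because random.sample selects uniformly without replacement.

import random

population = [f"member_{i}" for i in range(1, 101)]  # complete sampling frame
sample = random.sample(population, k=10)             # 10 subjects, chosen at random
print(sample)
```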


Advantages-


The ease of assembling the sample. It is also considered a fair way of selecting a sample from a given population, since every member is given an equal opportunity of being selected.


Another key feature of simple random sampling is its representativeness of the population. Theoretically, the only thing that can compromise its representativeness is luck. If the sample is not representative of the population, the random variation is called sampling error.


An unbiased random selection and a representative sample is important in drawing conclusions from the results of a study. Remember that one of the goals of research is to be able to make conclusions pertaining to the population from the results obtained from a sample. Due to the representativeness of a sample obtained by simple random sampling, it is reasonable to make generalizations from the results of the sample back to the population.


Disadvantages - One of the most obvious limitations of the simple random sampling method is its need for a complete list of all the members of the population. Please keep in mind that this list must be complete and up-to-date. Such a list is usually not available for large populations. In such cases, it is wiser to use other sampling techniques.

Stratified Sampling

Stratified random sampling is also known as proportional random sampling. This is a probability sampling technique wherein the subjects are initially grouped into different classifications such as age, socioeconomic status or gender.


Then, the researcher randomly selects the final list of subjects from the different strata. It is important to note that all the strata must have no overlaps.


Researchers usually use stratified random sampling if they want to study a particular subgroup within the population. It is also preferred over simple random sampling because it yields more precise statistical outcomes.
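A minimal sketch of proportionate stratified random sampling, assuming a hypothetical frame stratified by gender:

```python
# Sketch of proportionate stratified random sampling with a made-up frame.
# Subjects are drawn at random within each stratum; the strata do not overlap.

import random

strata = {
    "male":   [f"m_{i}" for i in range(60)],
    "female": [f"f_{i}" for i in range(40)],
}
sample_size = 10
total = sum(len(members) for members in strata.values())

sample = []
for name, members in strata.items():
    # Each stratum contributes its proportional share (rounding may need
    # adjustment for less convenient stratum sizes).
    k = round(sample_size * len(members) / total)
    sample.extend(random.sample(members, k))

print(sample)  # 6 males, 4 females, matching the population proportions
```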


Advantages-

* Stratified random sampling is used when the researcher wants to highlight a specific subgroup within the population. This technique is useful in such researches because it ensures the presence of the key subgroup within the sample.
* Researchers also employ stratified random sampling when they want to observe existing relationships between two or more subgroups. With a simple random sampling technique, the researcher is not sure whether the subgroups that he wants to observe are represented equally or proportionately within the sample.
* With stratified sampling, the researcher can representatively sample even the smallest and most inaccessible subgroups in the population. This allows the researcher to sample the rare extremes of the given population.

Systematic Sampling

Systematic random sampling can be likened to an arithmetic progression wherein the difference between any two consecutive numbers is the same. Say for example you are in a clinic and you have 100 patients.

1. The first thing you do is pick an integer that is less than the total number of the population; this will be your first subject, e.g. (3).
2. Select another integer, which will be the number of individuals between subjects, e.g. (5).
3. Your subjects will be patients 3, 8, 13, 18, 23, and so on.
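A short Python sketch of this clinic example, with a random starting point below the interval:

```python
# Sketch of the clinic example above: 100 patients, a random start no larger
# than the interval, then every 5th patient thereafter.

import random

patients = list(range(1, 101))
interval = 5
start = random.randint(1, interval)      # e.g. 3
subjects = patients[start - 1::interval]  # e.g. 3, 8, 13, 18, 23, ...
print(subjects)
```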

Advantages -

* The main advantage of systematic sampling over simple random sampling is its simplicity. It allows the researcher to add a degree of system or process to the random selection of subjects.
* Another advantage of systematic random sampling over simple random sampling is the assurance that the population will be evenly sampled. There is a chance in simple random sampling of a clustered selection of subjects; this is systematically eliminated in systematic sampling.

Disadvantage -

* The process of selection can interact with a hidden periodic trait within the population. If the sampling technique coincides with the periodicity of the trait, the sampling technique will no longer be random and the representativeness of the sample is compromised.

Cluster Sampling

Cluster random sampling is done when simple random sampling is almost impossible because of the size of the population. Just imagine doing a simple random sampling when the population in question is the entire population of Asia.

1. In cluster sampling, the researcher first identifies boundaries; in the case of our example, these can be countries within Asia.
2. The researcher randomly selects a number of the identified areas. It is important that all areas (countries) within the population be given equal chances of being selected.
3. The researcher can either include all the individuals within the selected areas or randomly select subjects from the identified areas.
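A minimal sketch of these two stages, with made-up countries and subjects:

```python
# Sketch of the two stages described above. Stage 1 randomly selects
# clusters (countries); stage 2 randomly selects subjects within them.

import random

clusters = {
    "country_A": ["a1", "a2", "a3", "a4"],
    "country_B": ["b1", "b2", "b3"],
    "country_C": ["c1", "c2", "c3", "c4", "c5"],
    "country_D": ["d1", "d2"],
}

chosen = random.sample(list(clusters), k=2)          # stage 1: pick 2 clusters
sample = [s for c in chosen
          for s in random.sample(clusters[c], k=2)]  # stage 2: 2 subjects per cluster
print(chosen, sample)
```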

Mixed/Multi-Stage Random Sampling


Difference Between Cluster Sampling and Stratified Sampling


The main difference between cluster sampling and stratified sampling lies with the inclusion of the cluster or strata.


In stratified random sampling, all the strata of the population are sampled, while in cluster sampling the researcher only randomly selects a number of clusters from the collection of clusters in the entire population. Therefore, only some clusters are sampled; all the other clusters are left unrepresented.


Advantages and Disadvantages of Cluster Sampling

Advantages:

* This sampling technique is cheap, quick, and easy. Instead of sampling an entire country when using simple random sampling, the researcher can allocate his limited resources to the few randomly selected clusters or areas when using cluster samples.
* Related to the first advantage, the researcher can also increase his sample size with this technique. Considering that the researcher will only have to take the sample from a number of areas or clusters, he can select more subjects since they are more accessible.

Disadvantages:

* Of all the different types of probability sampling, this technique is the least representative of the population. Individuals within a cluster tend to have similar characteristics, and with a cluster sample there is a chance that the researcher will have an overrepresented or underrepresented cluster, which can skew the results of the study.
* This is also a probability sampling technique with a possibility of high sampling error. This is brought about by the limited number of clusters included in the sample, which leaves a significant proportion of the population unsampled.

Multistage clustering

This probability sampling technique involves a combination of two or more of the sampling techniques enumerated above. For most of the complex research done in the field or in the lab, it is not suitable to use just a single type of probability sampling.


Most of this research is done in different stages, with each stage applying a different random sampling technique.

Proportionate vs. Disproportionate Stratified Random Sampling

In proportionate stratified random sampling, the number of subjects drawn from each stratum is proportional to the stratum's share of the population, so the sample mirrors the population's composition. In disproportionate stratified random sampling, the sampling fractions differ across strata; for example, a small but important subgroup may be oversampled so that reliable conclusions can be drawn about it.

Non-Probability Sampling: Purposive, Snowball, Quota, Informants

Purposive Sampling - In this type of sampling, subjects are chosen to be part of the sample with a specific purpose in mind. With judgmental sampling, the researcher believes that some subjects are more fit for the research than other individuals. This is the reason why they are purposively chosen as subjects.


Snowball Sampling- Snowball sampling is usually done when there is a very small population size. In this type of sampling, the researcher asks the initial subject to identify another potential subject who also meets the criteria of the research. The downside of using a snowball sample is that it is hardly representative of the population.


Quota Sampling -


Quota sampling is a non-probability sampling technique wherein the researcher ensures equal or proportionate representation of subjects depending on which trait is considered as basis of the quota.


For example, if the basis of the quota is college year level and the researcher needs equal representation with a sample size of 100, he must select 25 first-year students, 25 second-year students, 25 third-year students, and 25 fourth-year students. The bases of the quota are usually age, gender, education, race, religion, and socioeconomic status.
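A small sketch of quota filling, using the hypothetical college-year quotas above (note that the selection itself is not random):

```python
# Sketch of the college-year quota example above: fill each quota cell with
# the first volunteers who fit it. The selection is non-random, which is why
# quota-sample results cannot be generalized with known confidence.

quotas = {"1st year": 25, "2nd year": 25, "3rd year": 25, "4th year": 25}
sample = {year: [] for year in quotas}

def try_add(person, year):
    """Accept a volunteer only while their quota cell still has room."""
    if len(sample[year]) < quotas[year]:
        sample[year].append(person)
        return True
    return False

try_add("student_001", "1st year")
print({year: len(people) for year, people in sample.items()})
```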

When to Use Non-Probability Sampling

* This type of sampling can be used when demonstrating that a particular trait exists in the population.
* It can also be used when the researcher aims to do a qualitative, pilot or exploratory study.
* It can be used when randomization is impossible like when the population is almost limitless.
* It can be used when the research does not aim to generate results that will be used to create generalizations pertaining to the entire population.
* It is also useful when the researcher has limited budget, time and workforce.
* This technique can also be used in an initial study which will be carried out again using a randomized, probability sampling.

Purposive Sampling - A non-probability sampling technique where the researcher selects units to be sampled based on his knowledge and professional judgment.

Purposive sampling is used in cases where an authority's expertise can select a more representative sample, bringing more accurate results than other sampling techniques would. The process involves nothing but purposely handpicking individuals from the population based on the authority's or the researcher's knowledge and judgment.


Example - In a study wherein a researcher wants to know what it takes to graduate summa cum laude in college, the only people who can give the researcher firsthand advice are individuals who graduated summa cum laude. With this very specific and very limited pool of individuals that can be considered subjects, the researcher must use judgmental sampling.


When to use - Judgmental sampling design is usually used when a limited number of individuals possess the trait of interest. It is the only viable sampling technique for obtaining information from a very specific group of people. It is also possible to use judgmental sampling if the researcher knows a reliable professional or authority whom he thinks is capable of assembling a representative sample.


Setbacks-


The two main weaknesses of authoritative sampling lie with the authority and with the sampling process, both of which pertain to the reliability of, and the bias that accompanies, the sampling technique.


Unfortunately, there is usually no way to evaluate the reliability of the expert or the authority. The best way to avoid sampling error brought by the expert is to choose the best and most experienced authority in the field of interest.


When it comes to the sampling process, it is usually biased, since no randomization was used in obtaining the sample. It is also worth noting that the members of the population did not have equal chances of being selected. The consequence of this is the misrepresentation of the entire population, which will then limit generalizations of the results of the study.

Snowball Sampling

Snowball sampling is a non-probability sampling technique that is used by researchers to identify potential subjects in studies where subjects are hard to locate.


Researchers use this sampling method if the sample for the study is very rare or is limited to a very small subgroup of the population.

Advantages and Disadvantages of Snowball Sampling

Advantages:

* The chain-referral process allows the researcher to reach populations that are difficult to sample when using other sampling methods.
* The process is cheap, simple, and cost-efficient.
* This sampling technique needs little planning and less workforce compared to other sampling techniques.

Disadvantages:

* The researcher has little control over the sampling method. The subjects that the researcher can obtain rely mainly on the previous subjects that were observed.
* Representativeness of the sample is not guaranteed. The researcher has no idea of the true distribution of the population or of the sample.
* Sampling bias is also a fear of researchers when using this sampling technique. Initial subjects tend to nominate people that they know well. Because of this, it is highly possible that the subjects share the same traits and characteristics; thus, the sample that the researcher obtains may be only a small subgroup of the entire population.

Example - If obtaining subjects for a study that wants to observe a rare disease, the researcher may opt to use snowball sampling, since it will be difficult to obtain subjects otherwise. It is also possible that patients with the same disease have a support group; observing one of the members as your initial subject will then lead you to more subjects for the study.

Quota Sampling-


Quota sampling is a non-probability sampling technique wherein the assembled sample has the same proportions of individuals as the entire population with respect to known characteristics, traits or focused phenomenon.


In addition to this, the researcher must make sure that the composition of the final sample to be used in the study meets the research's quota criteria.

Advantages-

* The main reason why researchers choose quota samples is that they allow the researcher to sample a subgroup that is of great interest to the study. If a study aims to investigate a trait or characteristic of a certain subgroup, this type of sampling is the ideal technique.
* Quota sampling also allows researchers to observe relationships between subgroups. In some studies, traits of a certain subgroup interact with traits of another subgroup. In such cases, it is also necessary for the researcher to use this type of sampling technique.