The main aim of this paper is to discuss what statistics, quantitative methods, and data analysis are, and why we need them. We will also discuss the importance of patterns and variations within data, and how they can be analyzed and presented in a way that affects real-life outcomes such as public policy. Although quantitative method is an established science, some of its practices are criticized on epistemological grounds, particularly from the positivist position. The concerns raised by this position will also be considered in this paper, as it offers important views on the subject. Lastly, before concluding, we will present an example of how data analysis can affect public policy.
“I don’t want to become a statistic!”
The most important thing in quantitative methods is not the data itself, but the way it is analyzed and portrayed. In designing the research methods, one must try to establish a causal connection between variables, which is not always easy, because one cannot be certain that a change in one variable, the independent variable, causes a change in the dependent variable. The first of five tools of quantitative methods, experimental research, tries to address this issue without inferring causality: “If we can determine that the two variables or phenomena vary together then we can say that we have established covariation” (Henn et al. 2006, p.119). For example, we can find covariation between men reaching fifty and the purchase of an expensive car; however, one cannot easily, if at all, establish that reaching fifty caused the purchase. Nevertheless, if adequately conducted, this type of quantitative research method is useful in fields where an experimental group and a control group are tested, which can produce a clear finding, “because it engenders considerable …
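The idea of covariation described above can be made concrete with a short sketch. The figures below are purely hypothetical, invented for illustration; a positive sample covariance only tells us the two variables tend to move together, not that one causes the other.

```python
# Hypothetical data: ages of five men and their car spending (in thousands).
# These numbers are invented purely to illustrate covariation, not real findings.
ages = [48, 50, 52, 55, 60]    # "independent" variable
spend = [20, 30, 35, 45, 50]   # "dependent" variable

def covariance(xs, ys):
    """Sample covariance: positive when the two variables tend to vary together."""
    n = len(xs)
    mean_x = sum(xs) / n
    mean_y = sum(ys) / n
    return sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, ys)) / (n - 1)

print(covariance(ages, spend))  # a positive value: the variables covary
```

A positive result here establishes covariation in the sense of the Henn et al. quote, and nothing more: the causal question remains open.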
We return again to statistics, as they form the basic building block of making sense of data; data analysis, on the other hand, is “concerned with sensitizing social researchers to the use, interpretation and evaluation of relevant data” (Rose and Sullivan 1998, p.4). In other words, statistics can be thought of as the output of large amounts of data, whereas data analysis is how one selects a pattern in the data and tries to analyze it to infer a finding, be it wrong or right, with regard to a case of relevance to them. The researcher will look for irregularities in the data, which are also known as variations. As mentioned in the description of the cross-sectional method, variables can be represented in many forms, such as race, age, and height. To set a simple example, identical twins could have different heights; their features vary despite the twins initially looking similar, and therefore each of their heights becomes a variable. It is critical for the researcher to find a pattern in order to detect regularities and irregularities, which is primarily done through data analysis. There are several measures that can aid us in explaining the variance and making sense of it, the most common of which are the mean, median, mode, and standard deviation of a set of variables. The most basic example of these is the mean. The mean seeks to identify the average of a set of values.