Case Study: Descriptive Statistics

3. List the probability value for each possibility in the binomial experiment that was calculated in MINITAB with the probability of a success being ½. (Complete sentence not necessary)
P(x=0) P(x=6)
P(x=1) P(x=7)
P(x=2) P(x=8)
P(x=3) P(x=9)
P(x=4) P(x=10)
P(x=5)
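The table above can be checked outside MINITAB. A minimal Python sketch (assuming n = 10 trials, since the table runs from x = 0 to x = 10) that computes the same binomial probabilities:

```python
from math import comb

# Binomial probabilities for n = 10, p = 1/2, mirroring what
# MINITAB's Calc > Probability Distributions > Binomial produces.
n, p = 10, 0.5

def binom_pmf(x, n, p):
    """P(X = x) = C(n, x) * p^x * (1 - p)^(n - x)."""
    return comb(n, x) * p**x * (1 - p)**(n - x)

for x in range(n + 1):
    print(f"P(x={x}) = {binom_pmf(x, n, p):.6f}")
```

Because p = ½, the table is symmetric: P(x=0) = P(x=10), P(x=1) = P(x=9), and so on, with P(x=5) the largest value.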
4. Give the probability for the following based on the MINITAB calculations with the probability of a success being ½. (Complete sentence not necessary)
P(x ≥ 1) P(x < 0) P(x > 1) P(x ≤ 4)
P(4…
5. Calculate the mean and standard deviation (by hand) for the MINITAB-created binomial distribution with the probability of a success being ½. Either show work or explain how your answer was calculated. Mean = np, Standard Deviation = √(np(1 − p))
Mean:
Standard deviation:
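The hand calculation for question 5 can be verified directly from the formulas, assuming n = 10 and p = ½:

```python
from math import sqrt

# Hand-formula check for question 5: n = 10, p = 1/2.
n, p = 10, 0.5
mean = n * p                 # np = 10 * 0.5
std = sqrt(n * p * (1 - p))  # sqrt(np(1-p)) = sqrt(2.5)
print(mean)            # 5.0
print(round(std, 4))   # 1.5811
```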
6. Calculate the mean and standard deviation (by hand) for the MINITAB-created binomial distribution with the probability of a success being ¼ and compare to the results from question 5. Mean = np, Standard Deviation = √(np(1 − p))
Mean:
Standard deviation:
Comparison:
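The same formulas apply for question 6, again assuming n = 10 but with p = ¼; the mean drops and the distribution is tighter than in the p = ½ case:

```python
from math import sqrt

# Hand-formula check for question 6: n = 10, p = 1/4.
n, p = 10, 0.25
mean = n * p                 # np = 10 * 0.25
std = sqrt(n * p * (1 - p))  # sqrt(10 * 0.25 * 0.75) = sqrt(1.875)
print(mean)            # 2.5
print(round(std, 4))   # 1.3693
```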
7. Calculate the mean and standard deviation (by hand) for the MINITAB-created binomial distribution with the probability of a success being ¾ and compare to the results from question 6. Mean = np, Standard Deviation = √(np(1 − p))
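For question 7, with n = 10 and p = ¾, note that np(1 − p) is the same as in the p = ¼ case, so only the mean changes:

```python
from math import sqrt

# Hand-formula check for question 7: n = 10, p = 3/4.
n, p = 10, 0.75
mean = n * p                 # np = 10 * 0.75
std = sqrt(n * p * (1 - p))  # sqrt(10 * 0.75 * 0.25) = sqrt(1.875),
                             # identical to the p = 1/4 standard deviation
print(mean)            # 7.5
print(round(std, 4))   # 1.3693
```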
8. Explain why the coin variable from the class survey represents a binomial distribution.
We also want to calculate the median for the 10 rolls of the die. Label the next column in the Worksheet with the word median. Repeat the steps above, but select the radio button that corresponds to Median and, in the Store results in: text area, place the median column.
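The row-statistics step can be sketched in Python; the ten rolls below are illustrative stand-ins, not survey data:

```python
import numpy as np

# One worksheet row: 10 rolls of a die (hypothetical values),
# with the mean and median computed as MINITAB's Row Statistics does.
rolls = np.array([3, 6, 1, 4, 4, 2, 5, 6, 1, 3])
row_mean = np.mean(rolls)      # sum of rolls / 10
row_median = np.median(rolls)  # middle of the sorted rolls
print(row_mean)    # 3.5
print(row_median)  # 3.5
```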
Calculating Descriptive Statistics
• Calculate descriptive statistics for the mean and median columns that were created above. Pull up Stat > Basic Statistics > Display Descriptive Statistics and set Variables: to mean and median. The output will appear in your Session Window. Print this information.
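A rough equivalent of Display Descriptive Statistics, using placeholder values for the mean and median columns rather than the class results:

```python
import pandas as pd

# Hypothetical mean/median columns standing in for the MINITAB worksheet.
df = pd.DataFrame({
    "mean": [3.5, 3.2, 4.1, 3.8],
    "median": [3.5, 3.0, 4.0, 4.0],
})

# describe() reports count, mean, std, min, quartiles, and max
# for each column, much like the Session Window output.
summary = df.describe()
print(summary)
```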
Calculating Confidence Intervals for one Variable
• Open the class survey results that were entered into the MINITAB worksheet.
• We are interested in calculating a 95% confidence interval for the hours of sleep a student gets. Pull up Stat > Basic Statistics > 1-Sample t and set Samples in columns: to Sleep. Click the OK button and the results will appear in your Session Window.
• We are also interested in the same analysis with a 99% confidence interval. Use the same steps except select the Options button and change the Confidence level: to 99.
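The two 1-Sample t intervals can be sketched in Python; the sleep values below are hypothetical stand-ins for the Sleep column:

```python
import numpy as np
from scipy import stats

# Hypothetical hours-of-sleep sample in place of the survey's Sleep column.
sleep = np.array([7, 6, 8, 5, 7, 6, 9, 7, 6, 8])
mean = sleep.mean()
sem = stats.sem(sleep)  # standard error of the mean, s / sqrt(n)

# t-based confidence intervals with n - 1 degrees of freedom.
ci95 = stats.t.interval(0.95, len(sleep) - 1, loc=mean, scale=sem)
ci99 = stats.t.interval(0.99, len(sleep) - 1, loc=mean, scale=sem)
print(ci95)
print(ci99)
```

Raising the confidence level from 95% to 99% widens the interval: to be more confident of capturing the true mean, the interval must cover more ground.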
MATH 221 FINAL
