Serial correlation is a relationship between elements within a time series. It can distort the variance of our estimators and cause us to incorrectly estimate the true mean, Ȳ. To properly study and analyze a covariance stationary time series, we need to know something about its correlation/covariance structure. Several methods exist for dealing with serial correlation; here, we deal exclusively with batch means, replication/deletion, and the Mean Squared Error Reduction (MSER) technique. The goal of these methods is to produce valid confidence intervals (CIs) in the presence of serial correlation. In our analysis, we use the lag k autocorrelation to find a point at which the observations are effectively uncorrelated.
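As a sketch of the lag-k autocorrelation computation referenced above (the function name and interface here are our own, not taken from the report's scripts):

```ruby
# Sample lag-k autocorrelation of a series x (illustrative sketch).
# Measures how strongly observations k steps apart are correlated.
def lag_k_autocorrelation(x, k)
  n = x.length
  mean = x.sum(0.0) / n
  # Denominator: sum of squared deviations from the mean
  denom = x.sum(0.0) { |xi| (xi - mean)**2 }
  # Numerator: covariance of the series with itself shifted by k
  num = (0...(n - k)).sum(0.0) { |i| (x[i] - mean) * (x[i + k] - mean) }
  num / denom
end
```

A value near zero at some lag k suggests observations that far apart may be treated as approximately uncorrelated.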
Our final analysis method was the MSER technique. MSER seeks an optimal truncation point that mitigates initialization bias without sacrificing precision in our results. Using the data generated by the replication/deletion method, we computed the MSER statistics with the Ruby script, mser.rb. The script created a data file with 10 different means (x̄) and the number of observations remaining after truncation (n), outlined in Table 2. We will refer to n as our sample size for each run.
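The MSER truncation rule can be sketched as follows. This is an illustrative implementation of the general technique, not the actual mser.rb: for each candidate truncation point d, it computes MSER(d) = (1 / (n − d)²) · Σ (xᵢ − x̄(d))² over the retained observations, and returns the d* that minimizes it along with the truncated mean.

```ruby
# Illustrative MSER truncation (not the report's mser.rb).
# Returns [d_star, truncated_mean] for a series x.
def mser_truncate(x)
  n = x.length
  best = nil
  # Conventionally, d is searched over at most the first half of the series
  (0...(n / 2)).each do |d|
    tail = x[d..]                               # observations kept after truncating d
    m = tail.sum(0.0) / tail.length             # mean of the truncated series
    stat = tail.sum(0.0) { |xi| (xi - m)**2 } / (tail.length**2)
    best = [stat, d, m] if best.nil? || stat < best[0]
  end
  [best[1], best[2]]
end
```

A series with a biased warm-up period (e.g. a few inflated initial observations) yields a d* that cuts exactly that transient off.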
The MSER technique selected a different truncation point (d*), minimizing standard error, for each of the 10 runs. The resulting point estimates are independent but based on different sample sizes n. To avoid introducing bias into our results, we weighted each estimate by n. A histogram of x̄, weighted by n, is shown in Figure 7.
Note that our standard deviation was extremely high. To minimize the variance of our mean, we calculated a new weighted average (w), using the following formula:
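The formula itself appears to have been lost in extraction. A standard variance-minimizing choice consistent with the surrounding text (an assumption on our part, since the original formula is missing) weights each run in proportion to its remaining sample size:

```latex
w_i = \frac{n_i}{\sum_{j=1}^{N} n_j}, \qquad
\bar{x}_w = \sum_{i=1}^{N} w_i \, \bar{x}_i
```

Under the assumption that each run's estimate has variance proportional to 1/nᵢ, these weights minimize the variance of the combined mean.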
N denotes the total number of simulation runs; in this case, N = 10. A new histogram was generated, with the mean weighted by w, seen in Figure 8. All of the summary statistics remained the same, with the exception of our standard deviation, which was reduced to equal the standard
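The weighted-average step can be sketched in Ruby. We assume here (not confirmed by the report, whose formula did not survive extraction) that the weights are proportional to each run's remaining sample size nᵢ, normalized to sum to 1:

```ruby
# Sketch of a variance-minimizing weighted average of per-run means.
# means: the 10 truncated means x-bar_i; sizes: the 10 sample sizes n_i.
# Assumes weights proportional to n_i (our assumption, not the report's).
def weighted_mean(means, sizes)
  total = sizes.sum(0.0)
  weights = sizes.map { |n| n / total }   # normalize so weights sum to 1
  means.zip(weights).sum(0.0) { |m, w| m * w }
end
```

Runs with more retained observations thus pull the combined estimate more strongly, which is what reduces the variance relative to an unweighted average.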