
In this study, fitness is evaluated with the k-fold cross-validation (CV) technique. In k-fold cross validation, the training data set is randomly split into K mutually exclusive subsets (folds) of approximately equal size. For a given set of parameters, the regression function is built using (K-1) subsets as the training set, and the quality of the parameter set is measured by the mean absolute percentage error (MAPE) on the remaining subset (the testing set). This procedure is repeated K times, so that each subset is used exactly once for testing. Averaging the MAPE over the K trials (MAPE_CV) gives an estimate of the generalization error for training on sets of size ((K-1)/K)×l, where l is the number of training samples. Finally, the best-performing parameter set is selected. Conventionally, the testing error of k-fold cross validation is used to estimate the generalization error (here k = 5) [22]. Therefore, the fitness function is defined
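The fitness evaluation described above can be sketched as follows. This is a minimal illustration, not the paper's implementation: `train_and_predict` is a hypothetical stand-in for fitting an SVR with one candidate parameter set and predicting on held-out data.

```python
import numpy as np

def mape(y_true, y_pred):
    """Mean absolute percentage error, in percent."""
    y_true = np.asarray(y_true, dtype=float)
    y_pred = np.asarray(y_pred, dtype=float)
    return 100.0 * np.mean(np.abs((y_true - y_pred) / y_true))

def kfold_cv_fitness(X, y, train_and_predict, K=5, seed=0):
    """Average MAPE over K folds (MAPE_CV); a smaller value means a fitter solution."""
    rng = np.random.default_rng(seed)
    idx = rng.permutation(len(y))          # random split into K folds
    folds = np.array_split(idx, K)
    errors = []
    for k in range(K):
        test = folds[k]                    # each fold is used once for testing
        train = np.concatenate([folds[j] for j in range(K) if j != k])
        y_pred = train_and_predict(X[train], y[train], X[test])
        errors.append(mape(y[test], y_pred))
    return float(np.mean(errors))          # MAPE_CV over the K trials
```

With K = 5 this matches the conventional choice cited in the text; a perfect predictor yields a fitness of zero.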


A solution with a smaller MAPE_CV on the testing set has a smaller fitness value, and therefore a better chance of surviving into successive generations.

Our proposed WOA-SVR model

This study proposes a new method, WOA-SVR, which dynamically optimizes the RBF (Gaussian) kernel parameters through the WOA evolutionary process and then uses the obtained parameters to build an optimized SVR model for forecasting. Fig. 1 illustrates the workflow of the WOA-SVR model. The details of the proposed WOA-SVR are as follows:

Preprocess:

1.1. Remove non-numerical values from the feature values to obtain correct results.

1.2. Data scaling. The main advantage of scaling is to prevent attributes in greater numeric ranges from dominating those in smaller numeric ranges. Generally, each feature can be linearly scaled to the range [-1, 1] or [0, 1].
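The linear scaling in step 1.2 can be sketched as below; this is an illustrative min-max formula, with column values and target ranges chosen only as examples.

```python
import numpy as np

def minmax_scale(X, lo=0.0, hi=1.0):
    """Linearly scale each feature (column) of X into [lo, hi]."""
    X = np.asarray(X, dtype=float)
    xmin = X.min(axis=0)
    xmax = X.max(axis=0)
    span = np.where(xmax > xmin, xmax - xmin, 1.0)  # guard against constant columns
    return lo + (hi - lo) * (X - xmin) / span
```

Calling `minmax_scale(X)` maps each feature to [0, 1], while `minmax_scale(X, -1.0, 1.0)` maps it to [-1, 1], so features with large raw magnitudes no longer dominate the kernel computation.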