Discipline: Technology and Engineering
Subcategory: Electrical Engineering
Messan Anato - University of the District of Columbia
Co-Author(s): Devan Newman, Dr. Nian Zhang, and Dr. Lara Thompson, University of the District of Columbia, Washington, DC
Forecasting future values of observed time series plays an extremely important role in nearly all fields of science and engineering, such as smart energy, environment, biomedical engineering, and homeland security, and it has attracted strong interest across research communities for decades. Recent advances in data collection technology, sensor networking, and data storage have brought us into a Big Data world, where we can easily collect massive, high-dimensional time series data and monitor the dynamic changes of complex systems. However, traditional forecasting tools cannot handle the size, speed, and complexity inherent in Big Data, and thus cannot keep pace with the data explosion in either accuracy or computation speed. A good example is the vast amount of available earthquake data, contrasted with the lack of a reliable model that can accurately predict earthquakes. It is therefore imperative to develop advanced Big Data approaches to predict the time series evolution of complex systems, and in particular to explore the application of novel deep learning algorithms to the time series prediction problem. Deep architectures allow us to construct complex models that have high Vapnik–Chervonenkis (VC) dimension, a measure of a model's capacity (complexity, expressive power, richness, or flexibility), and are able to describe complex time series. We start with traditional statistical approaches with lower VC dimension, such as the Autoregressive Integrated Moving Average (ARIMA) model. We then increase the VC dimension by using machine learning techniques such as the Multi-Layer Perceptron (MLP) and Support Vector (SV) Regression for time series forecasting. We go further still and apply a deep architecture, the Conditional Restricted Boltzmann Machine (CRBM). Finally, the results of ARIMA, MLP, and SV Regression are compared with those of the CRBM.
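To illustrate the lowest-capacity end of the model hierarchy described above, the following is a minimal sketch of a first-order autoregressive fit in plain Python. The toy series, the AR order, and the function names are hypothetical placeholders for illustration only, not the study's data or method (the study uses full ARIMA, not a bare AR(1)):

```python
def fit_ar1(series):
    """Fit x[t] = a * x[t-1] + b by closed-form ordinary least squares."""
    xs = series[:-1]  # predictors x[t-1]
    ys = series[1:]   # targets   x[t]
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    var = sum((x - mx) ** 2 for x in xs)
    a = cov / var
    b = my - a * mx
    return a, b

def forecast(series, a, b, steps):
    """Roll the fitted AR(1) model forward for `steps` future values."""
    out, last = [], series[-1]
    for _ in range(steps):
        last = a * last + b
        out.append(last)
    return out

# Hypothetical toy series; a real study would use observed measurements.
data = [2.0, 2.4, 2.9, 3.1, 3.6, 3.9, 4.4, 4.7]
a, b = fit_ar1(data)
print(forecast(data, a, b, 3))
```

Higher-capacity models (MLP, SV Regression, CRBM) replace this single linear coefficient with many learned parameters, which is what raises the VC dimension.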
The experimental results show that the root-mean-square error (RMSE) of the proposed Conditional Restricted Boltzmann Machine approach is 758.07, which is better than that of the ARIMA (805.25), SV Regression (833.58), and MLP (766.54) approaches. Our findings demonstrate that the deep neural network is able to accurately predict time series. Prediction accuracy can be further improved by tuning the parameters of the CRBM.
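The ranking above rests on the root-mean-square error. A minimal sketch of that metric, using hypothetical values rather than the study's predictions, is:

```python
import math

def rmse(actual, predicted):
    """Root-mean-square error: sqrt of the mean squared residual."""
    assert len(actual) == len(predicted)
    sq_err = sum((a - p) ** 2 for a, p in zip(actual, predicted))
    return math.sqrt(sq_err / len(actual))

# Hypothetical targets and predictions for illustration only.
y_true = [10.0, 12.0, 14.0]
y_pred = [11.0, 12.0, 13.0]
print(rmse(y_true, y_pred))  # sqrt((1 + 0 + 1) / 3) ≈ 0.8165
```

A lower RMSE means predictions sit closer to the observed values on average, which is why 758.07 for the CRBM ranks ahead of the other three models.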
Funder Acknowledgement(s): This study was supported by a grant from the University of the District of Columbia (NSF/HBCU-UP/HRD #1505509, HRD #1533479, and NSF/DUE #1654474), Washington, D.C. 20008.
Faculty Advisor: Nian Zhang, firstname.lastname@example.org
Role: I was responsible for performing the experiments and obtaining the root-mean-square error (RMSE) of the proposed Conditional Restricted Boltzmann Machine (CRBM), the Autoregressive Integrated Moving Average (ARIMA), the Multi-Layer Perceptron (MLP), and the Support Vector (SV) Regression models.