The competition was funded through the 2005/2006 SAS & International Institute of Forecasters research grant to support research on principles of forecasting, awarded to S. F. Crone and K. Nikolopoulos for the project "Automatic Modelling and Forecasting with Neural Networks – A forecasting competition evaluation".
Despite over 15 years of research and more than 2000 publications on artificial Neural Networks (NN) for forecasting across various disciplines (Crone and Graffeille, 2004), NN have not yet established themselves as a valid and reliable forecasting method in forecasting competitions. The results of the M3 competition (Makridakis and Hibon, 2000) indicated the poor performance of NN (Haykin, 1999) in forecasting a large number of empirical time series. Despite initial interest from various NN research groups (Hibon, 2005), only Balkin and Ord (2000) successfully submitted NN results to the competition. However, their modelling approach outperformed only a few of the more than twenty approaches that provided forecasts. Despite optimistic publications indicating superior performance of NN on single time series (Adya and Collopy, 1998; Zhang et al., 1998) or a small subset of series (Hill et al., 1996), their performance in batch forecasting of monthly data (Table 15, Makridakis and Hibon, 2000) fell far short of the presumed potential.
Round table discussions with experts across disciplines at various international conferences, such as the European Symposium on Artificial Neural Networks (ESANN 2004), the International Conference on Artificial Intelligence (IC-AI 2005), the International Joint Conference on Neural Networks (IJCNN 2005) and the 2004 International Symposium on Forecasting (ISF), indicated that this may in part be attributed to the heuristic and often ad-hoc modelling process used to determine the many degrees of freedom, calling into question the validity, reliability and robustness of applying NN to a large set of time series. NN modelling seems to consist more of an ad-hoc ‘art’ of hand-tuning individual models than of a scientific approach following a valid modelling methodology. Consequently, the necessity of manual expert intervention has prohibited large-scale automation of NN modelling and their evaluation in forecasting competitions of valid and reliable scope.
As a consequence, forecasting competitions conducted within the NN domain, e.g. the Santa Fe competition (Weigend, 1994), the EUNITE competition (Suykens and Vandewalle, 1998) or the IJCNN’04 CATS competition, have focussed on evaluation against single time series, ignoring evidence from the forecasting field on how to increase validity and reliability in evaluating forecasting methods (Fildes et al., 1998).
However, recent publications document competitive performance of NN on larger numbers of time series (Liao and Fildes, 2005; Zhang and Qi, 2005; Crone, 2005), indicating that increased computational power can be used to automate NN forecasting on a scale suitable for automatic forecasting. A forecasting competition using a representative number of time series within a set time frame therefore seems feasible.
In addition, despite research by Remus and O'Connor (2001), little knowledge has been disseminated on sound “principles” to ensure valid and reliable modelling of NN for forecasting, particularly considering the ever increasing number of NN paradigms, architectures and extensions to existing models. Different research groups and application domains favour certain modelling paradigms, preferring specific data pre-processing techniques (differencing, deseasonalising, outlier correction or not), data sampling, activation functions, rules to guide the number of hidden nodes, training algorithms and parameters, etc. However, the motivation for these decisions (whether derived from objective modelling recommendations, internal best practices, or a subjective, heuristic and iterative modelling process) is rarely documented in publications. In addition, original research often focuses on publishing improvements to existing knowledge or practice rather than on consolidating accepted heuristic methodologies. We therefore seek to encourage the dissemination of implicit knowledge through demonstrations of current “best practice” methodologies on a representative set of time series.
Consequently, we propose a forecasting competition evaluating a set of consistent NN methodologies across a representative set of time series. We pose two essential research questions, which may be resolved by inviting current experts in the NN academic community to participate in a forecasting competition:
© 2006 BI3S-lab - Hamburg, Germany - All rights reserved
The Knowledge Portal on Forecasting with Neural Networks @ www.neural-forecasting.com - last update: 18.10.2006