Dr. Frank Dieterle, Ph. D. Thesis

8.   Results – Growing Neural Network Framework

The genetic algorithms for variable selection, which were proposed and applied in section 2.8.5 and in chapter 7, successfully improved the calibration of the refrigerant data set by selecting the most predictive variables and by optimizing the number of hidden neurons in a single hidden layer using a simple gradient method. As already stated in section 2.8.2, a fully non-uniform optimization of the topology of the neural networks should be superior to a pure variable selection and to a simple optimization of the number of hidden neurons. Structure optimization algorithms decide on the necessity of each single network element, resulting in sparse yet effective non-uniform networks, and they can also be applied to networks with several hidden layers. The oldest and most popular structure optimization methods are the pruning algorithms, which were introduced in section 2.8.8 and applied in section 6.10. Yet, as already seen and discussed in both sections, the pruning algorithms suffer from several drawbacks that render their practical application doubtful. The more sophisticated approach of optimizing the network topology by genetic algorithms also faces several problems and limits, discussed in section 2.8.9, which render its application to analytical data sets nearly impossible.
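The pruning family mentioned above can be illustrated by its simplest member, magnitude-based pruning, which removes the weights with the smallest absolute values. The following is only a minimal sketch of that idea (the function name and the fixed pruning fraction are illustrative choices, not part of the thesis); more refined schemes estimate the sensitivity of the error to each weight instead of using the raw magnitude.

```python
import numpy as np

def prune_smallest(weights, fraction=0.2):
    """Zero out the given fraction of weights with the smallest magnitude.

    Minimal sketch of magnitude-based pruning; real schemes often use
    second-order sensitivity estimates rather than raw magnitude.
    """
    flat = np.abs(weights).ravel()
    k = int(fraction * flat.size)
    if k == 0:
        return weights.copy()
    # Threshold at the k-th smallest magnitude (ties may prune slightly more).
    threshold = np.partition(flat, k - 1)[k - 1]
    pruned = weights.copy()
    pruned[np.abs(pruned) <= threshold] = 0.0
    return pruned
```

A drawback visible even in this sketch is that the criterion ignores how the remaining weights would retrain, which is one reason the pruning results discussed in section 6.10 were unsatisfactory in practice.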

The growing neural network algorithm, which was originally proposed by Vinod et al. [125] and introduced in section 2.8.10, has already been applied successfully to the calibration of sensor data sets [28]. In this chapter, several modifications of the algorithm are introduced. The application of this algorithm to the refrigerant data set yields an improved calibration with prediction errors similar to those of the genetic algorithm framework. To improve the reproducibility of the algorithm, two frameworks for the growing neural networks, similar to the genetic algorithm framework, are introduced. Both frameworks show an extraordinary calibration and generalization ability as well as good reproducibility.
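The growth principle underlying the algorithm can be sketched as a greedy loop: start from a minimal network, propose a new hidden neuron, and keep it only if the validation error improves. The code below is a strongly simplified illustration of that loop, not the algorithm of Vinod et al.: candidate neurons get random input weights with a tanh activation, and the linear output layer is refit by least squares after every accepted growth step.

```python
import numpy as np

def grow_network(X_tr, y_tr, X_val, y_val, max_candidates=20, seed=0):
    """Greedily add hidden neurons while the validation error improves.

    Simplified sketch of the growth principle: each candidate hidden
    neuron gets random input weights (tanh activation); the linear
    output layer is refit by least squares. Candidates that do not
    lower the validation error are discarded.
    """
    rng = np.random.default_rng(seed)
    hidden = []            # accepted (input weights, bias) per hidden neuron
    best_err = np.inf
    for _ in range(max_candidates):
        w = rng.normal(size=X_tr.shape[1])
        b = rng.normal()
        cand = hidden + [(w, b)]
        H_tr = np.column_stack([np.tanh(X_tr @ wi + bi) for wi, bi in cand])
        H_val = np.column_stack([np.tanh(X_val @ wi + bi) for wi, bi in cand])
        beta, *_ = np.linalg.lstsq(H_tr, y_tr, rcond=None)
        err = np.mean((H_val @ beta - y_val) ** 2)
        if err < best_err:  # keep the neuron only if it helps on validation data
            hidden, best_err = cand, err
    return hidden, best_err
```

Because every element must earn its place on validation data, the resulting networks stay sparse; the chapter's modifications and frameworks address the remaining weakness of such greedy growth, its dependence on the random initialization, and hence its reproducibility.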

© Dr. Frank Dieterle, 14.08.2006