Ensemble Learning Using Multi-Objective Evolutionary Algorithms

Published: Journal of Mathematical Modelling and Algorithms 5, 417–445 (2006)

Abstract

The use of multi-objective evolutionary algorithms to construct neural network ensembles is a relatively new area of research. We recently proposed an ensemble learning algorithm called DIVACE (DIVerse and ACcurate Ensemble learning algorithm), which searches for an optimal trade-off between diversity and accuracy by treating them as two explicitly separate objectives while constructing an ensemble for a given pattern recognition task. A detailed discussion of DIVACE, together with further experimental studies, forms the essence of this paper. We also propose a new diversity measure, called Pairwise Failure Crediting (PFC), which provides one of the two evolutionary pressures exerted explicitly in DIVACE. Experiments with this diversity measure, together with comparisons against previously studied approaches, are presented. A detailed analysis of the results shows that DIVACE, as a concept, has promise.
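To make the two evolutionary pressures concrete, below is a minimal Python sketch of (a) a pairwise failure-based diversity score in the spirit of PFC and (b) a Pareto-dominance test over the accuracy/diversity pair. The crediting rule used here (a member earns credit, pair by pair, for failures its partner does not share) is an assumed reading for illustration only, not necessarily the exact PFC formula from the paper, and all function names are hypothetical.

```python
import numpy as np

def pfc_diversity(predictions: np.ndarray, targets: np.ndarray) -> np.ndarray:
    """Pairwise failure-based diversity score, one per ensemble member.

    `predictions` has shape (members, patterns); `targets` has shape
    (patterns,). ASSUMPTION: a member is credited, pair by pair, for
    failures its partner does not share, so uncorrelated failures read
    as high diversity. The exact crediting scheme in the paper may differ.
    """
    fails = predictions != targets                 # (members, patterns) failure matrix
    m = fails.shape[0]
    credit = np.zeros(m)
    for i in range(m):
        for j in range(m):
            if i == j:
                continue
            joint = np.sum(fails[i] | fails[j])     # patterns where either member fails
            unshared = np.sum(fails[i] & ~fails[j]) # patterns where i fails but j does not
            if joint > 0:
                credit[i] += unshared / joint
    return credit / (m - 1)                         # average credit over all pairs

def accuracy(predictions: np.ndarray, targets: np.ndarray) -> np.ndarray:
    """Per-member classification accuracy (the second objective)."""
    return np.mean(predictions == targets, axis=1)

def dominates(a: tuple, b: tuple) -> bool:
    """Pareto dominance for maximised objectives (accuracy, diversity):
    a dominates b if it is no worse in both and strictly better in one."""
    return all(x >= y for x, y in zip(a, b)) and any(x > y for x, y in zip(a, b))
```

A multi-objective evolutionary loop in the spirit of DIVACE would evaluate every candidate network on both objectives and retain the non-dominated set, letting selection maintain the accuracy/diversity trade-off explicitly rather than collapsing it into a single weighted score.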



Author information

Authors: A. Chandra and X. Yao.

Correspondence to Arjun Chandra.


About this article

Cite this article

Chandra, A., Yao, X. Ensemble Learning Using Multi-Objective Evolutionary Algorithms. J Math Model Algor 5, 417–445 (2006). https://doi.org/10.1007/s10852-005-9020-3
