About the GPEMjournal blog

This is the editor's blog for the journal Genetic Programming and Evolvable Machines. The official web site for the journal, maintained by the publisher (Springer) is here. The GPEMjournal blog is authored and maintained by Lee Spector.

Friday, September 14, 2018

GPEM 19(4) is now available

The fourth issue of Volume 19 of Genetic Programming and Evolvable Machines is now available for download.

It contains:

Grammatical evolution as a hyper-heuristic to evolve deterministic real-valued optimization algorithms
by Iztok Fajfar, Árpád Bűrmen & Janez Puhan

Self-adaptive multi-population genetic algorithms for dynamic resource allocation in shared hosting platforms
by Azam Shirali, Javidan Kazemi Kordestani & Mohammad Reza Meybodi

Comparison of semantic-based local search methods for multiobjective genetic programming
by Tiantian Dou & Peter Rockett

Alain Pétrowski and Sana Ben-Hamida: Evolutionary Algorithms
by Keith Downing

Kathryn E. Merrick: Computational models of motivation for game-playing agents
by Spyridon Samothrakis

Ryan J. Urbanowicz and Will N. Browne: Introduction to learning classifier systems
by Analía Amandi

GPEM 19(3) is now available

The third issue of Volume 19 of Genetic Programming and Evolvable Machines, a special issue on genetic programming, evolutionary computation and visualization, edited by Nadia Boukhelifa & Evelyne Lutton, is now available for download.

It contains:

Guest editorial: Special issue on genetic programming, evolutionary computation and visualization
by Nadia Boukhelifa & Evelyne Lutton

Visualising the global structure of search landscapes: genetic improvement as a case study
by Nadarajen Veerapen & Gabriela Ochoa

Unveiling evolutionary algorithm representation with DU maps
by Eric Medvet, Marco Virgolin, Mauro Castelli, Peter A. N. Bosman, Ivo Gonçalves & Tea Tušar

Data exploration in evolutionary reconstruction of PET images
by Cameron C. Gray, Shatha F. Al-Maliki & Franck P. Vidal

Visualisation with treemaps and sunbursts in many-objective optimisation
by David J. Walker

VALIS: an evolutionary classification algorithm
by Peter Karpov, Giovanni Squillero & Alberto Tonda

Wednesday, August 1, 2018

CFP: Integrating Numerical Optimization Methods with Genetic Programming

[Full CFP on the Journal's Springer site]

Guest Editors

About this Issue

This special issue focuses on integrating numerical optimization methods with Genetic Programming (GP) in order to improve the evolutionary search. In traditional GP the search space is the space of all possible syntactic expressions that can be generated from the set of functions and terminals, which depends upon the type of program representation used. The search operators modify individuals at the level of syntax, and, given that syntactic expressions tend to be fragile, their effect on behavior is usually non-local and difficult to predict. This has led researchers to explore other search operators or program representations.

One possibility is to use numerical optimization methods as a local search process. Indeed, the representation and adaptation (i.e., learning) of real-valued parameters is still an open issue in GP at large: most work has focused on what Koza termed ephemeral random constants, and some effort has been devoted to adapting them [1]. Although more advanced approaches such as adaptive node gains were proposed over two decades ago [2], and their use has produced some success [3][4], uptake has been relatively limited [5]. This area is particularly relevant in modern machine learning, where powerful computing platforms like GPUs are highly optimized for performing such tasks. Numerical optimization can also be used to tune hyper-parameters or to derive surrogate models on-line.
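To make the idea concrete, here is a minimal sketch (not from the CFP; all names and the toy model are illustrative) of numerical local search applied to the constants of a GP individual: the tree's syntactic structure is frozen, its numeric leaves are treated as a real-valued parameter vector, and a simple stochastic hill climber, standing in for any numerical optimizer such as gradient descent or CMA-ES, tunes that vector against training data.

```python
import random

# Hypothetical GP individual: the evolved tree encodes f(x) = c0 * x + c1.
# The structure is fixed; only the constants [c0, c1] are tuned numerically.
def evaluate(consts, x):
    c0, c1 = consts
    return c0 * x + c1

def mse(consts, data):
    """Mean squared error of the frozen tree over (x, y) training pairs."""
    return sum((evaluate(consts, x) - y) ** 2 for x, y in data) / len(data)

def local_search(consts, data, steps=2000, sigma=0.1, seed=0):
    """(1+1)-style hill climber over the constants only: perturb the
    parameter vector with Gaussian noise and keep any improvement."""
    rng = random.Random(seed)
    best, best_err = list(consts), mse(consts, data)
    for _ in range(steps):
        cand = [c + rng.gauss(0, sigma) for c in best]
        err = mse(cand, data)
        if err < best_err:
            best, best_err = cand, err
    return best, best_err

# Target relationship y = 2x + 3; start from whatever random constants
# the evolutionary search happened to produce.
data = [(x, 2 * x + 3) for x in range(-5, 6)]
tuned, err = local_search([0.5, -1.0], data)
```

In a full system this inner loop would run inside the fitness evaluation (a memetic or Lamarckian scheme), so the syntactic operators explore program structure while the numerical method handles the real-valued parameters.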

This special issue intends to explore new representations, algorithms and methodologies that can enhance GP systems by exploiting numerical optimization techniques to improve convergence, reduce computation cost and achieve state-of-the-art performance in real-world machine learning challenges [6], and in particular in Deep Learning, a field in which Genetic Programming is becoming increasingly successful [7].


Topics of interest include:
  • Novel representations that are amenable to numerical and local search methods
  • Search approaches for real-valued parameters or meta-parameters in GP individuals
  • New search operators that can exploit both syntactic and numerical search
  • Implementations that improve search efficiency and reduce training times
  • Techniques that are optimized for High Performance Computing platforms, such as GPUs and FPGAs

Important Dates

  • Submission deadline: January 20, 2019 
  • Notification of first review: May 2, 2019 
  • Resubmission: June 3, 2019
  • Final acceptance notification: August 2, 2019

Submissions and Review Procedures

Special Issues are handled in the normal way via the online Editorial Manager system found at https://genp.edmgr.com. Please choose the article type “Integrating Numerical Optimization Methods with Genetic Programming.” Special Issue articles should fulfil all the standard requirements of any GPEM article. Authors should note that the same criteria apply to articles in Special Issues as to regular articles. Special Issue articles must not consist of overviews of the authors' previously published work, e.g. peer-reviewed articles, book chapters, official reports, etc.

All papers will undergo the same rigorous GPEM review process. Please refer to the GPEM website for detailed instructions on paper submission: http://www.springer.com/10710


[1] L. M. Howard and D. J. D'Angelo, "The GA-P: a genetic algorithm and genetic programming hybrid," IEEE Expert, vol. 10, no. 3, pp. 11–15, June 1995. doi: http://dx.doi.org/10.1109/64.393137

[2] A. I. Esparcia-Alcázar and K. C. Sharman, "Genetic programming techniques that evolve recurrent neural network architectures for signal processing," in Neural Networks for Signal Processing VI: Proceedings of the 1996 IEEE Signal Processing Society Workshop, pp. 139–148, 1996. doi: http://dx.doi.org/10.1109/NNSP.1996.548344

[3] M. Keijzer, "Scaled symbolic regression," Genetic Programming and Evolvable Machines, vol. 5, no. 3, pp. 259–269, September 2004. doi: http://dx.doi.org/10.1023/B:GENP.0000030195.77571.f9

[4] E. Z-Flores, L. Trujillo, O. Schütze, and P. Legrand, "A local search approach to genetic programming for binary classification," in Proceedings of the 2015 Annual Conference on Genetic and Evolutionary Computation (GECCO '15), S. Silva (Ed.), ACM, New York, NY, USA, pp. 1151–1158, 2015. doi: http://dx.doi.org/10.1145/2739480.2754797

[5] L. Trujillo, E. Z-Flores, P. S. Juárez Smith, P. Legrand, S. Silva, M. Castelli, L. Vanneschi, O. Schütze, and L. Muñoz, "Local search is underused in genetic programming," in R. Riolo et al. (Eds.), Genetic Programming Theory and Practice XIV, Ann Arbor, USA, Springer, 2017.

[6] Numerical and Evolutionary Optimization Workshop: http://neo.cinvestav.mx/NEO2018/

[7] R. Miikkulainen, "Evolving multitask neural network structure," Metalearning Symposium at NIPS 2017. http://metalearning-symposium.ml/files/miikkulainen.pdf

Monday, July 16, 2018

CFP: GPEM 20th anniversary issue

“There’s no continuum. Current claims and hopes for progress in models for making computers intelligent are like the belief that someone climbing a tree is making progress toward reaching the moon.” – Stuart Dreyfus, Mind over machine
After several decades of evolutionary computation, and 20 years of the journal Genetic Programming and Evolvable Machines (GPEM), what’s the state of the field? Are we making meaningful progress towards being able to automatically evolve software, structures, and machines whose performance and behavior matter? Are these developments leading us in important directions? How does our work relate to the high profile successes in other areas of machine learning and artificial intelligence? What is our roadmap going forward as a field?

Are we making meaningful discoveries, or just climbing trees to reach the moon?

To celebrate 20 years of GPEM we aim to assemble an anniversary issue that is worthy of the fine work of the journal’s first two decades and that addresses some of these important questions. Our goal is both to take stock and to look ahead; to combine rigorous and diagnostic overviews with sharp new ideas that have the potential to challenge and reshape the field.

Like any publishing venture, we want people to read and refer to this issue for some time to come. How can that best happen? We believe through a combination of:

  • High quality review articles. A good review article can be an enormous service to the field, and receive numerous citations down the road. We want more than just an organized list of citations with limited commentary; this provides little more value than a good Internet search, and will age rapidly. We’re looking for surveys that go beyond the “hits”, make valuable connections, and provide insightful analysis, helping us better understand who we are and what we’re doing.
  • Challenge pieces. A piece (even a quite short one) that challenges the field in a meaningful way can be an important spur for action. This could be a “grand challenge” style article (which, to be honest, is probably a lot harder for a still young field like ours than it was for Hilbert), a more focussed challenge (e.g., “modularity really matters – here’s why and how we’d know we were making progress”), or a more methodological challenge (like recent work on benchmarking in GP, or challenges on how we best compare our work to other “hot” areas like deep neural networks). Challenges should explain why progress on this challenge matters, and provide meaningful ways to gauge progress or success; careful comparison of promising tools and techniques would also have value.
  • Groundbreaking new research. This is arguably the hardest category to judge; we all hope that our work is remarkable and will revolutionize the field, but most work is incremental, and the volume of publishing ensures that much of it will have limited impact. If you have work that you believe has the potential to profoundly affect the field, however, this would be a particularly apt venue, as anniversary issues have the potential for more mentions, search hits, etc., giving you a leg up in your quest for world domination. We’re looking here for research that could really change how we think about the field and our work; if you feel you have those kinds of ideas, please submit.

Our submission deadline is 17 Oct 2018, and we hope to have initial reviewing done by 14 Nov 2018. Resubmissions will be due 1 Dec 2018, with final notifications made by 9 Jan 2019. Questions, etc., should be sent to Nic McPhee at gpem20th@gmail.com

Looking forward to receiving some amazing work!

The official Call for Papers on the Springer site is here.

Tuesday, April 10, 2018

GPEM 19(1&2) is now available

The first issue of Volume 19 (a double issue, numbers 1 and 2) of Genetic Programming and Evolvable Machines is now available for download.

It contains:

Editorial introduction
by Lee Spector

Acknowledgment to reviewers
by L. Spector

Guest editorial: special issue on automated design and adaptation of heuristics for scheduling and combinatorial optimisation
by Su Nguyen, Yi Mei & Mengjie Zhang

Evolving dispatching rules for optimising many-objective criteria in the unrelated machines environment
by Marko Đurasević & Domagoj Jakobović

Comparison of ensemble learning methods for creating ensembles of dispatching rules for the unrelated machines environment
by Marko Đurasević & Domagoj Jakobović

Optimizing agents with genetic programming: an evaluation of hyper-heuristics in dynamic real-time logistics
by Rinde R. S. van Lon, Juergen Branke & Tom Holvoet

A hyperheuristic approach based on low-level heuristics for the travelling thief problem
by Mohamed El Yafrani, Marcella Martins, Markus Wagner, Belaïd Ahiod, Myriam Delgado & Ricardo Lüders

Evolutionary hyper-heuristics for tackling bi-objective 2D bin packing problems
by Juan Carlos Gomez & Hugo Terashima-Marín

Cooperative evolutionary heterogeneous simulated annealing algorithm for google machine reassignment problem
by Ayad Turky, Nasser R. Sabar & Andy Song

Comparing three online evolvable hardware implementations of a classification system
by Oscar Garnica, Kyrre Glette & Jim Torresen

Evolution of shared grammars for describing simulated spatial scenes with grammatical evolution
by Jack Mario Mingo & Ricardo Aler

Implementing the template method pattern in genetic programming for improved time series prediction
by David Moskowitz

Hod Lipson and Melba Kurman: Driverless: intelligent cars and the road ahead
by Christine Zarges

Ian Goodfellow, Yoshua Bengio, and Aaron Courville: Deep learning
by Jeff Heaton

Christian Blum and Günther R. Raidl: Hybrid metaheuristics—powerful tools for optimization
by Ofer M. Shir