About the GPEMjournal blog
Saturday, December 31, 2011
Tuesday, November 29, 2011
Recent years have seen a sharp increase in the application of evolutionary computation techniques within the domain of games. Situated at the forefront of this research tidal wave, Moshe Sipper and his group have produced a plethora of award-winning results, in numerous games of diverse natures, evidencing the success and efficiency of evolutionary algorithms in general—and genetic programming in particular—at producing top-notch, human-competitive game strategies. From classic chess and checkers, through simulated car racing and virtual warfare, to mind-bending puzzles, this book serves both as a tour de force of the research landscape and as a guide to the application of evolutionary computation within the domain of games.
An outstanding, timely book in the rapidly growing area of computational intelligence in games. A must read for both the neophyte and the seasoned researcher, with all the hallmarks of a landmark book.
John Koza, author of Genetic Programming tetralogy
In Evolved to Win Moshe Sipper provides a treasure trove of detailed examples and advice on using evolutionary computation, in conjunction with human expertise, to solve hard puzzles and to win a wide variety of challenging games. Sipper and his colleagues know this field better than anyone else, having produced some of the field's strongest and most exciting results, and this book provides a comprehensive tour of their results along with ample guidance for newcomers to the field.
Lee Spector, Professor of Computer Science, Hampshire College, and Editor-in-Chief of the journal Genetic Programming and Evolvable Machines
Monday, November 21, 2011
Maybe there are good reasons for this, but it was my understanding that where work is available in a number of places we should cite journal articles before conference papers, and only cite technical reports if the work is not otherwise available.
Monday, October 3, 2011
Response to the review of "Variation-Aware Analog Structural Synthesis: A Computational Intelligence Approach"
In the December issue of Genetic Programming and Evolvable Machines (volume 12:4), John Rieffel reviewed the book "Variation-Aware Analog Structural Synthesis: A Computational Intelligence Approach", of which I was a co-author. We would like to thank John for a thoughtful review, covering the reliable and trustworthy approaches to industrially-oriented symbolic regression, robust optimization, and analog structural synthesis. We would like to clarify one point: while the review reports that the book ignores "simulators that cheat", the book in fact dedicates substantial space to this issue, and to how prone open-ended synthesis approaches are to it (see pp. 157-167, including the section "SPICE can lie"). The broader issue -- trustworthy synthesis -- is a challenge that the last half of the book addresses. Trustworthy synthesis outputs circuits that a designer trusts enough to commit to silicon. The book proposes to use hierarchical building blocks developed over the decades by expert designers, enabling synthesis to output circuits that are trustworthy by construction. As the book discusses, the computational intelligence techniques presented generalize beyond analog CAD to domains such as robotics, financial engineering, mechanical design, and more.
-- Trent McConaghy, October 3, 2011
Sunday, September 18, 2011
Impact factor: 1.167
Rank in category: Artificial Intelligence: 63 out of 108
Rank in category: Theory and Methods: 41 out of 97
Journal Citation Reports also provides an Immediacy Index, for which our numbers are:
Immediacy index: 0.143
Rank in category: Artificial Intelligence: 64 out of 108
Rank in category: Theory and Methods: 52 out of 97
Friday, September 16, 2011
Saturday, September 3, 2011
Wednesday, August 31, 2011
E.g. if the distribution is reasonably well behaved, then if the answer to be checked lies outside the range spanned by the 25th and 975th of the 1000 sorted examples, we can confidently reject the null hypothesis and say our answer is not drawn from the distribution used to generate the 1000 examples. We do not need Z-scores, t-tests, etc.
This non-parametric test should be ok with any distribution. We are effectively burning CPU cycles rather than spending brain cycles on devising and validating a statistical technique specifically for our new distribution.
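The procedure above can be sketched in a few lines; `null_sampler` and `monte_carlo_reject` are illustrative names (not from the post), and `null_sampler` is assumed to draw one value from the distribution under the null hypothesis:

```python
import random

def monte_carlo_reject(observed, null_sampler, n=1000, lower=25, upper=975):
    """Two-tailed non-parametric test at roughly the 5% level:
    draw n samples under the null, sort them, and reject if the
    observed value falls outside the range spanned by the 25th
    and 975th sorted samples."""
    samples = sorted(null_sampler() for _ in range(n))
    lo = samples[lower - 1]   # 25th smallest value
    hi = samples[upper - 1]   # 975th smallest value
    return observed < lo or observed > hi

# Example null: values drawn from an exponential distribution.
null = lambda: random.expovariate(1.0)
print(monte_carlo_reject(10.0, null))  # far in the upper tail -> reject
print(monte_carlo_reject(1.0, null))   # typical value -> do not reject
```

Note that no assumption about the shape of the distribution is used anywhere: only the sorted samples themselves, which is why the test works where Z-scores and t-tests do not.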
Wednesday, August 17, 2011
I just returned from the IXth Genetic Programming Theory and Practice Workshop held by the Center for the Study of Complex Systems at the University of Michigan. This is an invitation-only workshop that brings together theorists and practitioners interested in the development and application of computer systems that can solve complex problems by developing their own programs (i.e. automatic programming). This group focuses on the use of genetic programming (GP) to discover useful computer programs using the principles of evolution by natural selection. The proceedings from this workshop are published each year in a book that can be found on Amazon. The proceedings from this year will be published in late 2011 or early 2012.
The real value of this workshop is the large amount of time dedicated to open-ended discussion about how to solve complex problems in medicine, industry, finance, etc. My own motivation for working with GP is to teach the computer how to solve a complex human genetics problem as I would. I do not believe that naive computer programs or analysis strategies, such as those used in the agnostic genome-wide association study (GWAS) paradigm, will be successful in addressing the complexity of the genotype-phenotype relationship. We, as human analysis engines, don't ignore the pathobiology of disease when we look at data. Why should we instruct the computer to do so? Given infinite time, each of us would tinker and try new and different things with the data until we found a good answer that made biological sense. We would use our knowledge of biochemistry, genomics, molecular biology, pathology and physiology to both frame the analysis and interpret the results. Our series of papers published as part of GPTP since 2006 has focused on adaptive computer programs that harness this kind of biological and biomedical knowledge to explore the space of computer programs that can build models of genetic architecture.
One of the more interesting and extended discussions at GPTP this year was about novelty-seeking. Ken Stanley gave a great talk about rewarding computer programs that explore new and different solutions to a problem (read more). His Picbreeder program is a nice example of novelty search in the sense that you can discover and develop interesting pictures without a clear initial objective in mind (e.g. evolve a picture of a car). An analogy in human genetics would be to reward computer programs that generate genetic models of disease by exploring new biochemical pathways. I am working on approaches to try this within our own genetic analysis system. I like Ken's quote: "To achieve your highest goals, you must be willing to abandon them."
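As a rough illustration of the idea (a minimal sketch under my own assumptions, not Ken Stanley's actual algorithm or code): novelty search replaces objective fitness with a novelty score, commonly the mean distance from an individual's behaviour to its k nearest behaviours seen so far:

```python
import math

def novelty(behavior, archive, k=3):
    """Novelty score: mean distance to the k nearest behaviors in the
    archive of behaviors encountered so far. Higher means more novel;
    an empty archive makes everything maximally novel."""
    if not archive:
        return float("inf")
    dists = sorted(math.dist(behavior, b) for b in archive)
    nearest = dists[:k]
    return sum(nearest) / len(nearest)

# A behavior far from everything seen so far scores higher than one
# sitting in the middle of the archive.
archive = [(0.0, 0.0), (1.0, 0.0), (0.0, 1.0)]
print(novelty((5.0, 5.0), archive) > novelty((0.5, 0.5), archive))  # True
```

Selecting on this score, rather than on distance to a fixed objective, is what lets a system like Picbreeder wander into interesting pictures no one set out to find.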
It is very clear that GP has been used to solve problems that humans or other computer programs haven't been able to solve. For example, Moshe Sipper has developed computer game players that rival human players (read more). Some of the participants (e.g. Michael Korns) even invest and make money using GP. This is a powerful way to do automatic programming and should be part of the broader toolbox of any complex problem-solver. I would be happy to send you a pre-print of our current GPTP paper.
Tuesday, August 16, 2011
Monday, August 8, 2011
Thursday, August 4, 2011
Stephanie Forrest gave the keynote on evolving fixes to software, work for which she won the Humie.
Two other international speakers were Federica Sarro, who talked on estimating time to produce software using GP and Wasif Afzal, who reviewed GP for prediction (Slides, Video).
David White talked about new work on optimising server farms and JVMs in cloud computing systems (Slides, Video). I talked about evolving a CUDA kernel for gzip running on a GeForce 295 GTX GPU (paper).
Saturday, July 23, 2011
Friday, April 29, 2011
Thursday, February 24, 2011
GE Global Research, USA
steven D0T gustafson AT research D0T ge D0T com
unamay AT csail D0T mit D0T edu
• GP approaches to uncover nonlinear relationships between variables in complex systems
• Scalable GP systems that can handle one or more orders of magnitude more data than typical systems, to enable more real-world systems identification, e.g. financial anomaly detection.
• GP systems that provide an improved understanding of the solutions, from variable interaction to improved confidence bounding, e.g. providing statistics similar to those of modern packages like Minitab, Matlab, and R.
• Approaches that move GP closer to systems like CART as a way to explore variables, relationships, and data, where users can quickly inspect solutions and modify the system to improve performance and capability.
We encourage all prospective authors to contact the guest editors, at the address below, as early as possible, to indicate your intention to submit a paper to this special issue.
Submission Deadline: September 1, 2011
Acceptance Notification: November 15, 2011
Final Manuscript Deadline: January 15, 2012
Kotanchek, M. E., Vladislavleva, E. Y., and Smits, G. F. (2009). Symbolic Regression via GP as a Discovery Engine: Insights on Outliers and Prototypes. In Riolo, R., O'Reilly, U.-M., and McConaghy, T. (eds.), Genetic Programming Theory and Practice VII, pp. 55-72, Springer.
Schmidt, M., and Lipson, H. (2009). Distilling Free-Form Natural Laws from Experimental Data. Science 324(5923), pp. 81-85.