It is gratifying to receive a review of our book by an expert in the field. The review is insightful: it highlights the strengths as well as the weaknesses of the book. Here we would like to address LeBaron's major concern: "... My biggest concern is that the authors should have tried some more standard financial forecasting tests. This would include setting these classifiers up as a trading rule, and seeing how well they do."
Let us emphasize that our main goal is to show that the new methods improve over genetic programming. For example, given a set of decision rules generated by genetic programming, the Repository Method will generate a rule set with better predictive performance. Given that genetic programming is an important technique in machine learning, our improvement on it is significant. We made no attempt to establish our methods as "the best methods for forecasting". That is why we have not compared them with trading rules generated by other techniques, or tested them against standard benchmarks.
Moreover, one major difficulty in comparing our methods against others is that our approaches were designed to generate a set of results (rules) satisfying different levels of risk preference. The results of our methods are therefore plotted as ROC curves. Other predictive techniques would typically generate just one prediction rule, which corresponds to just one point in ROC space; hence a fair comparison would be difficult.
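To make this point concrete, the following is a minimal sketch (with invented rule names, predictions, and labels, not data from the book) of why a method that produces a set of rules at different risk levels traces out several points in ROC space, whereas a single-rule method yields only one point:

```python
def roc_point(predictions, labels):
    """Return (false-positive rate, true-positive rate) for one decision rule.

    predictions and labels are 0/1 sequences of equal length.
    """
    tp = sum(1 for p, y in zip(predictions, labels) if p and y)
    fp = sum(1 for p, y in zip(predictions, labels) if p and not y)
    pos = sum(labels)
    neg = len(labels) - pos
    return fp / neg, tp / pos

# Invented ground truth for illustration only.
labels = [1, 1, 0, 0, 1, 0]

# A hypothetical rule set spanning risk preferences,
# from conservative (few trades) to aggressive (many trades).
rule_set = {
    "conservative": [1, 0, 0, 0, 0, 0],
    "moderate":     [1, 1, 0, 0, 1, 0],
    "aggressive":   [1, 1, 1, 0, 1, 1],
}

# Each rule contributes one (FPR, TPR) point; together they form a curve.
curve = sorted(roc_point(p, labels) for p in rule_set.values())
print(curve)
```

A single-rule competitor would contribute only one such (FPR, TPR) point, so there is no curve to compare against, which is why a like-for-like evaluation is not straightforward.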
The onus is on us, the authors, to make our objectives clear. LeBaron's review suggests that we have not explained the above points clearly. We are therefore grateful to the reviewer and to the Journal for giving us a chance to clarify them.