The results obtained by Pollack and Blair substantially underperform my 1992 TD Learning results, as shown by directly benchmarking the 1992 TD nets against Pubeval. A plausible hypothesis for this underperformance is that, unlike TD learning, the hillclimbing algorithm fails to capture nonlinear structure inherent in the problem and, despite the presence of hidden units, obtains only a linear approximation to the optimal policy for backgammon. Two lines of evidence supporting this hypothesis are discussed: the first comes from the structure of the Pubeval benchmark program, and the second from experiments replicating the Pollack and Blair results.
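As an illustration of the kind of check the hypothesis invites (not the paper's actual experiment), the sketch below builds a toy evaluation network with one tanh hidden layer, using randomly chosen placeholder weights rather than any trained backgammon weights, and measures how much of its output variance an ordinary least-squares linear fit explains. A value near 1 would indicate that, despite its hidden units, the network is behaving as an essentially linear function of its inputs.

```python
import numpy as np

# Hypothetical illustration: test whether a hidden-unit evaluation network is
# effectively linear by fitting a linear model to its outputs and measuring
# the explained variance. The network and its weights are stand-ins, not the
# networks studied in the paper.

rng = np.random.default_rng(0)

def toy_network(x, W1, b1, w2, b2):
    """Feedforward net with one tanh hidden layer (placeholder evaluator)."""
    h = np.tanh(x @ W1 + b1)
    return h @ w2 + b2

n_inputs, n_hidden, n_samples = 20, 10, 5000
# Small weights keep the tanh units in their near-linear range.
W1 = rng.normal(scale=0.05, size=(n_inputs, n_hidden))
b1 = rng.normal(scale=0.05, size=n_hidden)
w2 = rng.normal(size=n_hidden)
b2 = 0.0

X = rng.normal(size=(n_samples, n_inputs))   # stand-in for encoded board positions
y = toy_network(X, W1, b1, w2, b2)           # network evaluations

# Ordinary least-squares linear fit (with bias term) to the network's outputs.
A = np.hstack([X, np.ones((n_samples, 1))])
coef, *_ = np.linalg.lstsq(A, y, rcond=None)
residual = y - A @ coef
r2 = 1.0 - residual.var() / y.var()
print(f"Variance of network output explained by a linear fit: {r2:.4f}")
```

With small hidden-layer weights the printed value is close to 1, showing how a network can possess hidden units yet realize only a linear approximation; the same diagnostic applied to a trained evaluator would distinguish a genuinely nonlinear policy from a linear one.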