13 October 2000 Review of efforts to evolve strategies to play checkers as well as human experts
Kumar Chellapilla, David B. Fogel
Abstract
We have been experimenting with evolutionary approaches to create artificial neural networks that can play checkers at a level that is competitive with human experts. In particular, multilayer perceptrons were used as evaluation functions to compare the worth of alternative boards. The weights of these neural networks were evolved in a coevolutionary manner, with networks competing only against other extant networks in the population. No external expert system was used for comparison or evaluation. Feedback to the networks was limited to an overall point score based on the outcome of 10 games at each generation. No attempt was made to give credit to moves in isolation or to prescribe useful features beyond the possible inclusion of the piece differential. Initial results indicated that the best-evolved neural network earned a rating of 1750, placing it as a Class B player. This level of performance is competitive with many humans. More recent results have generated networks with ratings in the 1900s, in Class A, one level below expert as accepted by the American Checkers Foundation.
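The coevolutionary loop described in the abstract can be sketched roughly as follows. This is an illustrative toy, not the paper's actual setup: the network sizes, mutation scale, and population size here are arbitrary assumptions, and the `play` function below merely compares static evaluations of a random position rather than searching the game tree and playing out full games of checkers, as the paper's networks did. What it does preserve is the key idea: fitness comes only from a summed point score against other members of the population, with no external expert and no per-move credit assignment.

```python
import numpy as np

rng = np.random.default_rng(0)
BOARD = 32    # playable squares on a checkers board
HIDDEN = 10   # hypothetical hidden-layer size, chosen for brevity
POP = 10      # population size (assumption)

def init_net():
    # A single-hidden-layer perceptron mapping a board vector to a scalar value.
    return {"w1": rng.normal(0, 0.5, (BOARD, HIDDEN)),
            "w2": rng.normal(0, 0.5, (HIDDEN, 1))}

def evaluate(net, board):
    # Score a board position; tanh keeps the value in [-1, 1].
    return float(np.tanh(np.tanh(board @ net["w1"]) @ net["w2"]))

def play(net_a, net_b, board):
    # Stand-in for a full game: the net assigning the higher value to the
    # position "wins" (+1) or "loses" (-1). A real implementation would
    # alternate moves and search ahead in the game tree.
    return 1 if evaluate(net_a, board) > evaluate(net_b, board) else -1

def mutate(net, sigma=0.05):
    # Gaussian perturbation of every weight.
    return {k: v + rng.normal(0, sigma, v.shape) for k, v in net.items()}

def generation(pop, games=10):
    # Each network plays `games` games against randomly drawn opponents from
    # the same population; feedback is only the summed point score.
    scores = [0] * len(pop)
    for i, net in enumerate(pop):
        for _ in range(games):
            j = int(rng.integers(len(pop)))
            if j == i:
                continue
            board = rng.normal(size=BOARD)
            scores[i] += play(net, pop[j], board)
    # Keep the better half as parents; refill with mutated offspring.
    order = np.argsort(scores)[::-1]
    parents = [pop[k] for k in order[: len(pop) // 2]]
    return parents + [mutate(p) for p in parents]

pop = [init_net() for _ in range(POP)]
for _ in range(5):
    pop = generation(pop)
```

Note the deliberate absence of any external evaluator: selection pressure comes entirely from relative performance within the evolving population, which is what makes the approach coevolutionary.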
© (2000) COPYRIGHT Society of Photo-Optical Instrumentation Engineers (SPIE). Downloading of the abstract is permitted for personal use only.
Kumar Chellapilla and David B. Fogel "Review of efforts to evolve strategies to play checkers as well as human experts", Proc. SPIE 4120, Applications and Science of Neural Networks, Fuzzy Systems, and Evolutionary Computation III, (13 October 2000); https://doi.org/10.1117/12.403634
KEYWORDS
Neural networks, Evolutionary algorithms, Internet, Legal, Software, Lanthanum, Artificial neural networks