Re: [Bug-gnubg] The little engine that could... Neural Network


From: Philippe Michel
Subject: Re: [Bug-gnubg] The little engine that could... Neural Network
Date: Tue, 5 Jan 2016 23:58:11 +0100 (CET)
User-agent: Alpine 2.20 (BSF 67 2015-01-07)

Øystein Schønning-Johansen wrote:

What about other techniques? [...] Simplified evaluation function combined with deeper search?

I don't think this is very promising. With 21 distinct rolls at each ply and very few forced moves, the branching factor is large, so deeper search gets very expensive very quickly.

Moreover, gnubg uses very radical forward pruning: only the move with the best static evaluation is expanded at each ply. There is no minimax/alpha-beta or any of their probabilistic variants at all, so the static evaluation has to be pretty good.
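
To make that concrete, the ply search is essentially the following (a rough sketch only, not gnubg's actual code; Board, generate_moves() and static_eval() are hypothetical placeholders, and rolls with no legal move are ignored for brevity):

#define MAX_MOVES 64

typedef struct { int points[26]; } Board;      /* hypothetical position type */

/* hypothetical helpers, assumed to exist elsewhere */
extern int    generate_moves(const Board *b, int die1, int die2,
                             Board out[], int max_out);
extern double static_eval(const Board *b);     /* equity for the side on roll */

/* The 21 distinct rolls: 6 doubles (weight 1) and 15 non-doubles (weight 2) */
static const int ROLLS[21][3] = {
    {1,1,1},{2,2,1},{3,3,1},{4,4,1},{5,5,1},{6,6,1},
    {2,1,2},{3,1,2},{4,1,2},{5,1,2},{6,1,2},
    {3,2,2},{4,2,2},{5,2,2},{6,2,2},
    {4,3,2},{5,3,2},{6,3,2},
    {5,4,2},{6,4,2},{6,5,2}
};

/* Equity of the position for the side on roll, searched 'ply' plies deep.
 * At every interior node only the single move with the best static
 * evaluation is expanded -- radical forward pruning, no minimax over the
 * full move list. */
double eval_ply(const Board *b, int ply)
{
    if (ply == 0)
        return static_eval(b);

    double sum = 0.0;
    for (int r = 0; r < 21; r++) {
        Board moves[MAX_MOVES];
        int n = generate_moves(b, ROLLS[r][0], ROLLS[r][1], moves, MAX_MOVES);
        if (n == 0)
            continue;               /* dance; not handled in this sketch */

        /* the resulting positions have the opponent on roll, so the mover
         * prefers the one with the LOWEST static evaluation */
        int best = 0;
        double best_eq = static_eval(&moves[0]);
        for (int i = 1; i < n; i++) {
            double eq = static_eval(&moves[i]);
            if (eq < best_eq) { best_eq = eq; best = i; }
        }

        /* only that move is searched deeper; negate to get back to the
         * mover's point of view */
        sum += ROLLS[r][2] * -eval_ply(&moves[best], ply - 1);
    }
    return sum / 36.0;
}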

In fact, maybe we could try to explore the game tree a bit more. For instance, if the best move at an intermediate ply is a hit, also expand the best non-hitting move (if it doesn't look much worse), and vice versa.
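
Something along these lines (again only a sketch; the same hypothetical Board and static_eval() as above, and is_hit() and the 0.060 margin are made up for illustration):

typedef struct { int points[26]; } Board;       /* hypothetical, as above   */
extern double static_eval(const Board *b);      /* equity for side on roll  */
extern int    is_hit(const Board *before, const Board *after);  /* assumed  */

#define WIDEN_MARGIN 0.060   /* arbitrary: skip the alternative if it looks
                                this much worse than the best move          */

/* Return 1 or 2 candidate indices to expand for one roll: the best move
 * overall plus, if it is not much worse, the best move of the other class
 * (hit vs. non-hit). */
int select_candidates(const Board *before, const Board moves[], int n,
                      int out[2])
{
    int best = -1, alt = -1;
    double sc_best = -1e9, sc_alt = -1e9;

    for (int i = 0; i < n; i++) {
        /* higher score = better for the side that just moved */
        double sc = -static_eval(&moves[i]);
        if (sc > sc_best) { sc_best = sc; best = i; }
    }
    if (best < 0)
        return 0;
    out[0] = best;

    int best_hits = is_hit(before, &moves[best]);
    for (int i = 0; i < n; i++) {
        if (is_hit(before, &moves[i]) == best_hits)
            continue;                        /* same class as the best move */
        double sc = -static_eval(&moves[i]);
        if (sc > sc_alt) { sc_alt = sc; alt = i; }
    }

    if (alt >= 0 && sc_best - sc_alt < WIDEN_MARGIN) {
        out[1] = alt;
        return 2;                            /* expand both candidates      */
    }
    return 1;                                /* expand only the best move   */
}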


Regarding the evaluation function, we could review the inputs. Besides the raw inputs, they are mostly (only?) 40-year-old ideas from Berliner. Joseph Heled apparently tried some things but found it hard to improve on the existing ones. XG apparently has 256 inputs (vs. 250 for gnubg), so it must use more and/or different inputs. In an old article in Inside Backgammon, Brian Sheppard wrote "We figure a typical net has 400 inputs", but this looks like a random guess. I think he is the author of a very strong Scrabble program and was knowledgeable about game AI in general, but maybe not specifically about neural networks and backgammon.
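
For reference, the kind of encoding we are talking about looks roughly like this (illustrative only; the feature set and the 250/256 counts above are not reproduced here, and all names are hypothetical):

#include <string.h>

#define N_INPUTS 250            /* placeholder; gnubg's real layout differs */

typedef struct { int points[26]; } Board;       /* hypothetical, as above */

/* Encode one side's checkers into the network input vector: a few
 * saturating counters per point, plus derived features appended at the end. */
void encode_inputs(const Board *b, float in[N_INPUTS])
{
    memset(in, 0, N_INPUTS * sizeof in[0]);
    int k = 0;

    /* raw inputs: 4 per point */
    for (int pt = 1; pt <= 24; pt++) {
        int c = b->points[pt];
        in[k++] = (float)(c >= 1);                   /* at least a blot      */
        in[k++] = (float)(c >= 2);                   /* point is made        */
        in[k++] = (float)(c >= 3);                   /* spare on the point   */
        in[k++] = c > 3 ? (c - 3) / 2.0f : 0.0f;     /* extra spares, scaled */
    }

    /* one derived, Berliner-style input: scaled pip count */
    int pips = 0;
    for (int pt = 1; pt <= 24; pt++)
        pips += pt * b->points[pt];
    in[k++] = pips / 167.0f;                         /* 167 = starting pips  */

    /* ... further derived inputs (shots, containment, etc.) would follow;
       omitted in this sketch */
}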

A side issue with the inputs is that a few of them are extremely expensive to compute. Maybe they could be replaced by simpler ones (possibly replacing one complex input with a few simpler ones and letting the network emulate the former).
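
As an illustration of that last idea (hypothetical code, not a proposal for the actual inputs): instead of an exact shot count that enumerates dice outcomes, one could feed the net a couple of O(board) counters and let it combine them itself:

typedef struct { int points[26]; } Board;   /* hypothetical: >0 = our
                                               checkers, <0 = opponent's   */

/* the expensive kind of input being replaced: enumerates dice outcomes    */
extern float exact_hit_probability(const Board *b);     /* costly, assumed */

/* cheap substitutes: two simple counters over the board */
void cheap_exposure_inputs(const Board *b, float out[2])
{
    int blots = 0, min_dist = 25;

    for (int pt = 1; pt <= 24; pt++) {
        if (b->points[pt] == 1) {                        /* one of our blots */
            blots++;
            /* distance to the nearest enemy checker that could hit it,
               as a crude proxy for exposure */
            for (int d = 1; pt + d <= 24; d++) {
                if (b->points[pt + d] < 0) {
                    if (d < min_dist)
                        min_dist = d;
                    break;
                }
            }
        }
    }
    out[0] = blots / 4.0f;                               /* scaled blot count */
    out[1] = min_dist <= 12 ? (12 - min_dist) / 12.0f : 0.0f; /* closest shot */
}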