Addendum: FW: [Bug-gnubg] Training gnu backgammon


From: Joachim Matussek
Subject: Addendum: FW: [Bug-gnubg] Training gnu backgammon
Date: Sun, 11 Mar 2007 08:13:39 +0100

I have to add some sub-items to 5).

Think about... (a short code sketch after this list illustrates several of these points)

- the NUMBER OF HIDDEN LAYERS per neural net (1 or 2). Seriously consider 2 
hidden layers despite the common opinion that neural nets with 1 hidden layer 
are able to mimic any evaluation function. I like 2 hidden layers for 
particular problems because these neural nets are better suited for complex 
evaluation functions (testing required).
- the type of ACTIVATION FUNCTION. There are many more than just the simple 
sigmoid bounded between 0 and 1. TANH is better for sure because it is 
symmetric between -1 and 1 (this leads to faster convergence). Of course, some 
transformation of the outputs is required (mapping winning chances of 0%/100% 
to -1/+1).
- the NORMALIZATION OF INPUTS. Scaling the inputs to the input layer as well 
as to the hidden layer(s) so that they are roughly symmetric around 0 improves 
the generalization of the neural net.
- the SIZE OF THE INITIAL WEIGHTS. Small initial weights usually lead to 
better generalization.
- the LEARNING RATE. Small learning rates lead to better generalization and 
more accurate results. Large learning rates don't speed up learning. After you 
have reached nice results with your learning rate, DECREASE it and do some 
cycles with the smaller learning rate. The weights will settle down and the 
accuracy of the outputs will improve nicely.
- the LEARNING ALGORITHM. Probably use simple BACKPROPAGATION. Don't consider 
batch learning. Batch learning is highly inefficient. Don't consider any "high 
speed" learning algorithms like Rprop, Quickprop... They are suited for pattern 
recognition but not for evaluation functions.
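
For illustration, here is a minimal numpy sketch (not GNUBG code) that puts 
several of these points together: two tanh hidden layers, inputs roughly 
centred on zero, small initial weights, plain online backpropagation on one 
position at a time, and a learning rate that is cut later in training. The 
layer sizes, learning rates and toy training data are assumptions chosen only 
for the example.

import numpy as np

rng = np.random.default_rng(0)

N_INPUTS, N_HIDDEN1, N_HIDDEN2 = 250, 40, 20      # assumed sizes, not GNUBG's

def init_layer(n_in, n_out, scale=0.1):
    # small random initial weights usually generalize better
    return rng.uniform(-scale, scale, (n_in, n_out)), np.zeros(n_out)

W1, b1 = init_layer(N_INPUTS, N_HIDDEN1)
W2, b2 = init_layer(N_HIDDEN1, N_HIDDEN2)
W3, b3 = init_layer(N_HIDDEN2, 1)

def forward(x):
    # tanh everywhere; the output is symmetric between -1 and 1
    h1 = np.tanh(x @ W1 + b1)
    h2 = np.tanh(h1 @ W2 + b2)
    y = np.tanh(h2 @ W3 + b3)
    return h1, h2, y

def win_prob(y):
    # map the symmetric output back to a winning chance in [0, 1]
    return (y + 1.0) / 2.0

def train_one(x, target, lr):
    # one plain online backpropagation step on a single position;
    # target is the desired output in [-1, 1], i.e. 2*p_win - 1
    global W1, b1, W2, b2, W3, b3
    h1, h2, y = forward(x)
    d3 = (y - target) * (1.0 - y ** 2)            # tanh'(u) = 1 - tanh(u)^2
    d2 = (d3 @ W3.T) * (1.0 - h2 ** 2)
    d1 = (d2 @ W2.T) * (1.0 - h1 ** 2)
    W3 -= lr * np.outer(h2, d3); b3 -= lr * d3
    W2 -= lr * np.outer(h1, d2); b2 -= lr * d2
    W1 -= lr * np.outer(x, d1);  b1 -= lr * d1

# toy data standing in for encoded positions with evaluation targets;
# the inputs are already roughly centred on zero
positions = rng.uniform(-1.0, 1.0, (1000, N_INPUTS))
targets = np.tanh(positions[:, 0])                # placeholder evaluation

lr = 0.1
for cycle in range(20):
    if cycle == 10:
        lr *= 0.1                                 # let the weights settle down
    for x, t in zip(positions, targets):
        train_one(x, t, lr)

print("sample win prob:", win_prob(forward(positions[0])[2]))

In practice the targets would of course come from rollouts or deep-ply 
evaluations of encoded positions, not from the placeholder used here.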


I have probably forgotten some ideas. I might post them later if I remember 
them.

My main thought about neural backgammon nets is that if you really search for 
improvement you should optimize all possible parameters, not just start GNUBG's 
neural net training. It is rather easy to achieve a common level of play (the 
similar playing strength of GNUBG, Snowie and BGBlitz is an indication of that). 
Further improvement requires hard expert work with a great concept, and 
gazillions of CPU cycles nonetheless.

Enjoy,

Joachim Matussek



> -----Original Message-----
> From: Joachim Matussek <address@hidden>
> Sent: 11.03.07 02:24:49
> To: address@hidden
> Subject: FW: [Bug-gnubg] Training gnu backgammon

> Hello,
> 
> I have read several times that there are new efforts to train new neural nets 
> for GNUBG. I believe there is a high risk of wasting gazillions of CPU cycles 
> if you don't have a very good concept.
> 
> In my opinion you need at least one person who is a strong BG player and one 
> person who is an expert in training neural nets (especially backgammon neural 
> nets). They have to go through several steps if they want to succeed in 
> improving GNUBG. I will write down some of the most important items.
> 
> 0) Read all of Hans Berliner on BKG.
> 1) Analyze the strengths and weaknesses of the existing GNUBG neural nets and 
> the partitioning of the backgammon position space within GNUBG (e.g. 
> weaknesses: containment play/ almost all of crashed net/ odd-even-ply bias...)
> 2) Think about an improved partitioning of the backgammon position space. 
> Think of a quick algorithm to decide which neural net is suited for a 
> particular position.
> 3) Think about an improved coding of the backgammon board (raw and additional 
> inputs) depending on the position type.
> 4) Analyze the former training process of GNUBG (detailed documentation 
> required). Decide what the best training process will be (TD training doesn't 
> give very accurate neural nets but is able to learn from scratch; supervised 
> training gives accurate nets only if the training data are very good).
> 5) Do some experiments on the size of the neural nets (accuracy vs. speed). 
> Don't forget problems like overfitting and generalization.
> 6) Start TD training for parts of the game where GNUBG is too weak to use the 
> existing rollout or ply data.
> 7) Acquire training data for supervised training by rollouts and ply 
> evaluations.
> 8) Start supervised training -> 9.
> 9) Test the resulting neural nets -> 8.
> 
> Have fun,
> 
> Joachim Matussek
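
Step 3 of the quoted message concerns the coding of the board. For reference, 
below is a sketch of a Tesauro-style raw encoding (roughly the TD-Gammon 
scheme). It is only an illustration and an assumption on my part, not GNUBG's 
exact input layout; GNUBG also computes additional hand-crafted inputs on top 
of the raw ones, and choosing those per position type is exactly what step 3 
asks to reconsider.

import numpy as np

def encode_point(n):
    # four raw inputs per player per point: "at least 1/2/3 checkers"
    # plus a scaled count of any checkers beyond three
    return [
        1.0 if n >= 1 else 0.0,
        1.0 if n >= 2 else 0.0,
        1.0 if n >= 3 else 0.0,
        (n - 3) / 2.0 if n > 3 else 0.0,
    ]

def encode_board(points_me, points_opp, bar_me, bar_opp, off_me, off_opp, on_roll):
    # points_* are length-24 lists of checker counts from each player's view
    x = []
    for n in points_me:
        x += encode_point(n)
    for n in points_opp:
        x += encode_point(n)
    x += [bar_me / 2.0, bar_opp / 2.0]       # checkers on the bar, scaled
    x += [off_me / 15.0, off_opp / 15.0]     # checkers borne off, scaled
    x += [1.0 if on_roll else 0.0]           # player on roll
    return np.array(x)

# starting position from the roller's point of view
start = [0] * 24
start[23] = 2; start[12] = 5; start[7] = 3; start[5] = 5
x = encode_board(start, start, 0, 0, 0, 0, True)
print(x.shape)                               # (197,) raw inputs in this sketch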
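
Steps 4 and 6 mention TD training. The update itself is small; here is a 
sketch of the TD(lambda) rule, shown with a linear value function purely for 
brevity (with a neural net, the feature vector in the trace update is replaced 
by the gradient obtained from backpropagation). The feature size, the 
constants and the placeholder "game" are assumptions for illustration.

import numpy as np

rng = np.random.default_rng(1)

N_INPUTS = 197                               # e.g. the size of a raw encoding
w = rng.uniform(-0.1, 0.1, N_INPUTS)         # small initial weights again

def value(x):
    return float(w @ x)

def td_lambda_episode(states, outcome, alpha=0.01, lam=0.7):
    # states: encoded positions of one self-play game, in order;
    # outcome: final reward from the learner's view, e.g. +1 win / -1 loss
    global w
    trace = np.zeros_like(w)
    for t in range(len(states)):
        x = states[t]
        # the target is the next position's value, or the outcome at the end
        target = outcome if t == len(states) - 1 else value(states[t + 1])
        delta = target - value(x)
        trace = lam * trace + x              # eligibility trace; grad of V is x
        w += alpha * delta * trace

# placeholder "game": random encoded positions ending in a win
game = [rng.uniform(-1.0, 1.0, N_INPUTS) for _ in range(60)]
td_lambda_episode(game, outcome=+1.0)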
 
