Re: [Bug-gnubg] Backpropagation in neural net.


From: Øystein Johansen
Subject: Re: [Bug-gnubg] Backpropagation in neural net.
Date: Thu, 22 Mar 2007 16:22:58 +0100
User-agent: Thunderbird 1.5.0.10 (Windows/20070221)

Øystein Johansen wrote:
> Gary or Joseph,
>
> I'm studying the backpropagation implementation in neuralnet.c
> (NeuralNetTrain). I notice that the weights are not updated according
> to the derivative of the sigmoid function. (In fact, the sigmoid
> function is not called at all in this function.) I have tried to find
> an explanation of this in the literature, but I have not found any.

... and then I just checked the library, and found several books claiming that sig'(x) = z * (1 - z), where z = sig(x) is the output. And I realized it was just a matter of math: with z = 1/(1 + exp(-x)), differentiating gives dz/dx = exp(-x) / (1 + exp(-x))^2 = z * (1 - z), so the derivative comes for free from the output saved during the forward pass. Putting it down on paper made it clear.
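
To make the point concrete, here is a sketch of what the output-layer update then looks like. This is just my own illustration under the usual squared-error assumption, not the actual neuralnet.c routine; the arOutput/arDesired names simply echo the eval.c call quoted below:

  /* Sketch only -- not the actual neuralnet.c code.  Output-layer
     deltas for sigmoid units under squared error E = 0.5 * (z - t)^2.
     The forward pass already stored z = sig(x), so backpropagation can
     form the derivative as sig'(x) = z * (1 - z) from the saved output
     alone -- no call to the sigmoid function is needed here. */
  static void OutputDeltas( const float *arOutput,  /* z: saved outputs */
                            const float *arDesired, /* t: targets */
                            float *arDelta, int cOutput ) {
    int i;
    for( i = 0; i < cOutput; i++ ) {
      float z = arOutput[ i ];
      /* dE/dx = (z - t) * sig'(x) = (z - t) * z * (1 - z) */
      arDelta[ i ] = ( z - arDesired[ i ] ) * z * ( 1.0f - z );
    }
  }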

> Can you please explain how this works and why it works, and/or
> suggest a reference to a book where this algorithm is explained?

I've found several books myself. Never mind.

> And what's the difference between NeuralNetTrain and NeuralNetTrainS?
> Is the difference just that NeuralNetTrainS doesn't update the output
> weights? When is this used?

This one is still open, Joseph. I see you have the following in TrainPosition() in eval.c:

  if( ! tList ) {
    NeuralNetTrain(nn, arInput, arOutput, arDesired, a);
  } else {
    NeuralNetTrainS(nn, arInput, arOutput, arDesired, a, tList);
  }


What is tList?

-Øystein




