[Neurostat-develop] first ideas...

From: Joseph Rynkiewicz
Subject: [Neurostat-develop] first ideas...
Date: Fri, 07 Dec 2001 00:13:17 -0500
User-agent: Mozilla/5.0 (X11; U; Linux i686; en-US; rv:0.9.5) Gecko/20011012

Well, I'll fire the first shot on this mailing list.

The first (maybe not the last) neural network object to implement is the famous multilayer perceptron (MLP).

The idea is to build an R library because:

1) R is the best free statistical software.
2) We don't have to reinvent the wheel for the pre- and post-processing of data.

Currently, there are two MLP implementations on CRAN (the Comprehensive R Archive Network). The first, by Brian Ripley, is limited and seems buggy. The second, by Adrian Trapletti (Hornik was his PhD advisor), has no bugs that I can find, but it has serious limitations:

- The number of layers is fixed: it consists of an MLP with one hidden layer plus direct connections from the inputs to the outputs (shortcut connections).

- There is no way to prune the MLP.

I think we can build a more ambitious library, especially by relaxing the constraint on the number of layers and by allowing the MLP to be pruned.

This goal has two consequences:

(1) We have to consider carefully the implementation of the MLP's architecture, especially the possibility of shortcut connections jumping over layers. Although we are using C, it can be a good idea to adopt an object-oriented style and make heavy use of "typedef struct ...".
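To make the idea concrete, here is a minimal sketch of what that "typedef struct" style might look like. All names (`Connection`, `MLP`, `mlp_alloc`) are illustrative assumptions, not a committed API; storing a source-layer index per connection matrix is one way to make shortcut connections over layers natural.

```c
#include <stdlib.h>

/* One connection matrix between two (not necessarily adjacent) layers;
 * keeping the source layer index makes shortcut connections natural. */
typedef struct {
    int from_layer;   /* index of the source layer (allows jumps) */
    int to_layer;     /* index of the destination layer */
    double *weights;  /* dense for now; sparse later, once pruning exists */
} Connection;

typedef struct {
    int n_layers;            /* input + hidden layer(s) + output */
    int *layer_sizes;        /* number of units per layer */
    int n_connections;       /* may exceed n_layers - 1 with shortcuts */
    Connection *connections; /* one entry per connection matrix */
} MLP;

/* Allocate an MLP skeleton; connection matrices are added later. */
MLP *mlp_alloc(int n_layers, const int *sizes)
{
    MLP *net = malloc(sizeof *net);
    net->n_layers = n_layers;
    net->layer_sizes = malloc(n_layers * sizeof *net->layer_sizes);
    for (int i = 0; i < n_layers; i++)
        net->layer_sizes[i] = sizes[i];
    net->n_connections = 0;
    net->connections = NULL;
    return net;
}
```

The point of the extra `from_layer`/`to_layer` indirection is that adding or removing a shortcut connection never forces a change to the layer structures themselves.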

(2) Since our MLP has to be pruned, it's more elegant to implement the connections between layers as matrices with holes (sparse matrices).

So, I propose using sparse matrices for the connections. Moreover, I think it's a good idea to use a "sparse vector" for the bias connections, since their role in the back-propagation algorithm is very different.
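As a sketch of what such a representation could look like (the struct layout and names are my own assumptions, shown in triplet/coordinate form for simplicity): pruning a weight is then just dropping its entry, and the matrix-vector product skips pruned entries for free.

```c
/* Triplet (coordinate) storage: only surviving connections are kept,
 * so pruning a weight means removing its (row, col, val) entry. */
typedef struct {
    int n_rows, n_cols;
    int nnz;        /* number of non-zero (unpruned) weights */
    int *row, *col; /* position of each surviving weight */
    double *val;    /* the weight values themselves */
} SparseMatrix;

/* Biases kept apart as a sparse vector, since back-propagation
 * treats them differently from ordinary connections. */
typedef struct {
    int n;       /* logical length (units in the layer) */
    int nnz;     /* surviving biases */
    int *idx;    /* unit indices */
    double *val; /* bias values */
} SparseVector;

/* y += A * x over surviving entries only: the forward pass never
 * touches a pruned connection. */
void spmv_acc(const SparseMatrix *A, const double *x, double *y)
{
    for (int k = 0; k < A->nnz; k++)
        y[A->row[k]] += A->val[k] * x[A->col[k]];
}
```

For example, a 2x2 connection matrix with only the diagonal weights 2 and 3 surviving, applied to the input (1, 2), accumulates (2, 6) into the output, touching exactly two entries.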

I can easily release a sparse-matrix library, extracted from my MLP software, written in C++ and GPL-licensed (see

The code is not optimized, but we can wait until we have a working MLP before worrying about optimization.

