[again] TMultiLayerPerceptron: maximum number of neurons?

Hello,

I know this question was discussed here, but still…

I would like to use TMultiLayerPerceptron for image recognition. In this case it is usual for the number of neurons in the input layer to equal the number of pixels in the image; in my case this number is around 1000.
The problem is that my script crashes if the number of neurons is greater than approximately 60.

In order to investigate the problem I have created a simple example (see attachment).
There are three functions: the first one creates a data file (create_data()), the second one trains the network (train()), and the third one is just the entry point (testnn()). There is also one global variable N which defines the number of neurons in the input layer and in the (only) hidden layer.

The output layer has one neuron whose target is the average of the values in the input layer. So the problem is well-defined and the minimisation should converge very quickly (of course, “quickly” strongly depends on the number of neurons).
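
For reference, a minimal sketch of such a macro might look like the following (this is not the exact attachment: the data file name, event count, random seed, and training options are illustrative):

[code]
#include "TFile.h"
#include "TTree.h"
#include "TRandom3.h"
#include "TString.h"
#include "TMultiLayerPerceptron.h"

const Int_t N = 60;  // neurons in the input and in the hidden layer

// create_data(): writes a tree with N random inputs and their average
void create_data()
{
   TFile f("testnn.root", "RECREATE");
   TTree t("t", "training data");
   Double_t x[N], avg;
   for (Int_t i = 0; i < N; ++i) {
      TString b = Form("x%d", i);
      t.Branch(b, &x[i], b + "/D");
   }
   t.Branch("avg", &avg, "avg/D");
   TRandom3 rnd(1);
   for (Int_t ev = 0; ev < 1000; ++ev) {
      avg = 0;
      for (Int_t i = 0; i < N; ++i) { x[i] = rnd.Uniform(); avg += x[i]; }
      avg /= N;
      t.Fill();
   }
   t.Write();
}

// train(): builds an "x0,...,x{N-1}:N:avg" network and trains it
void train()
{
   TFile f("testnn.root");
   TTree *t = (TTree*)f.Get("t");
   TString layout;
   for (Int_t i = 0; i < N; ++i) layout += Form("x%d,", i);
   layout.Chop();                       // drop the trailing comma
   layout += Form(":%d:avg", N);        // hidden layer of N neurons, one output
   TMultiLayerPerceptron mlp(layout, t, "Entry$%2", "(Entry$+1)%2");
   mlp.Train(50, "text,update=10");
}

// testnn(): entry point
void testnn()
{
   create_data();
   train();
}
[/code]

Since the macro is named testnn.C, running root -l testnn.C executes testnn() directly.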
Everything works fine if N is less than 60, but when I use larger values I get the following message:

[quote]Error: Symbol G__exception is not defined in current scope testnn.C:50:
Error: type G__exception not defined FILE: testnn.C LINE:50[/quote]

What is the origin of this problem? Is there some mistake in my code, or could it be a memory problem? Is there a way to use 1000 or more neurons?

My ROOT version is 5.15/05.
testnn.C.gz (618 Bytes)

OK, I have found the problem. There is no explicit limitation in the code, but matrices are created for the BFGS algorithm, and TMatrixD has a limit on its size.

When using 150 input neurons, the code tries to create a 22000x22000 matrix, which is too big: BFGS keeps a weights x weights matrix, and with 150 input and 150 hidden neurons the network has roughly 150x150 + 150 ≈ 22700 weights. To solve this, you have to update to the CVS head and specify a different learning method.
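
For example, the learning method can be changed before calling Train() (a sketch; the object name mlp is illustrative):

[code]
// Avoid the default BFGS method, which allocates a
// (weights x weights) TMatrixD for its Hessian approximation.
// Stochastic minimisation updates the weights event by event
// and needs no such matrix.
mlp.SetLearningMethod(TMultiLayerPerceptron::kStochastic);
mlp.Train(50, "text,update=10");
[/code]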