Question about the event weights in TMultilayerPerceptron


I’m using the TMultilayerPerceptron class. Could someone explain to me how to use the event weights and what their role is? For example, do they change during the training or testing process?

Thanks a lot


The role of the weight is, as for “Draw” and other TTree-related methods, to change the relative importance of events. It is not modified during the learning process.

If you are handling Monte-Carlo generated events that need to be reweighted, this is the place where to put the weight (if you don’t know about this, you don’t need it).

Another application: to get the best result when using the neural network to discriminate between signal and background, you generally need roughly equal numbers of signal and background events. If that is not the case, you can assign a smaller weight to the type of events you have in excess.

To give you the mathematical details, the error used for the training is computed as

  E = Sum_{events} [ w_{event} * Error_{event} ]

where the sum runs over all events in the sample, w_{event} is the event weight, and Error_{event} is the error for that event, computed from the difference between the target and the network output.