Help with training model on TMVA::DNN

Greetings,
I am trying to translate the following Python (Keras) code to TMVA::DNN.

Python code:
from keras.models import Sequential
from keras.layers import LSTM, Dense
model = Sequential()
model.add(LSTM(250, input_shape=(7, 1)))
model.add(Dense(1))
model.compile(optimizer='adam', loss='mse')

Using TMVA::DNN
"Layout=RNN|250|7|1,RESHAPE|12182|7|1,DENSE|1|TANH,LINEAR"

This is the closest translation I could come up with from what I found searching around. But unfortunately, when I run it in ROOT I get:
terminate called after throwing an instance of 'std::bad_alloc'
what(): std::bad_alloc

Please help!

Hi,

I think you have an inconsistency in your layer specification: an RNN with 250 units/neurons and 7 time steps does not produce a 12182 by 7 output, so the RESHAPE|12182|7|1 layer does not match the output of the RNN layer.
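
For reference, here is a minimal sketch (assuming the standard Keras API) that rebuilds the model from your post and prints the shapes it actually produces:

from keras.models import Sequential
from keras.layers import LSTM, Dense

# Same layers as in the original snippet, only used to inspect output shapes.
model = Sequential()
model.add(LSTM(250, input_shape=(7, 1)))  # 7 time steps, 1 feature per step
model.add(Dense(1))

# By default the LSTM returns only its last time step, so the summary shows
# an LSTM output of shape (None, 250) feeding a Dense output of (None, 1).
model.summary()

Whatever dimensions you give the RESHAPE layer have to be consistent with what the preceding RNN layer in your Layout string actually outputs (determined by its state size and time steps), not 12182|7|1.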

Cheers,
Kim