Hi,

I used http://junpenglao.xyz/Blogs/posts/2017-10-23-OOS_missing.html to implement my model but am still having some issues! I think they are to do with how I define the shapes of some of the random variables. Briefly, my model has 12 input random variables and one output random variable that I estimate through a neural network, giving 13 observed variables, each with measured noise. My training set has ~1000 (nobs) sets of measurements of these 13 observed variables. I would like to fit the training set, then examine the performance on a further test set, and finally use the Bayesian neural network to calculate posterior predictive distributions for new observations in which the 13th observable (i.e. the `output`) is missing.

From your example, it seemed that I don't need to give the model the number of measurements, but in that case I don't get the right distribution of weights. When I do give the nobs information, I get posteriors that look right but then cannot apply the model to new datasets with a different number of measurements. Here's my code:
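To make the shapes concrete before the model code, this is the deterministic forward pass I have in mind (sigmoid hidden layer, linear output layer), sketched in plain NumPy with made-up shapes; the point is that one output value comes out per row of inputs:

```python
import numpy as np

def forward(x, wts_in_hid, wts_hid_out):
    """Forward pass: sigmoid hidden layer, linear output layer."""
    act_hid = 1.0 / (1.0 + np.exp(-x @ wts_in_hid))  # shape (nobs, neurons)
    return act_hid @ wts_hid_out                     # shape (nobs,)

# Illustrative shapes only: 12 inputs, 5 hidden neurons, ~1000 measurements
rng = np.random.default_rng(0)
ninputs, neurons, nobs = 12, 5, 1000
x = rng.standard_normal((nobs, ninputs))
w1 = rng.standard_normal((ninputs, neurons))
w2 = rng.standard_normal(neurons)
print(forward(x, w1, w2).shape)  # (1000,)
```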

```python
import numpy as np
import pymc3 as pm
import theano
import theano.tensor as TT

# Number of input variables (fixed)
ninputs = inputsTrainScale.shape[1]
# Number of neurons on each hidden layer (fixed)
neurons_per_hiddenlayer = 5

# Set observed random variables to be shared (initially set to training set)
inputsShared = theano.shared(inputsTrainScale)
errInputsShared = theano.shared(errInputsTrainScale)
targetShared = theano.shared(targetTrain)
errTargetShared = theano.shared(errTargetTrain)

# Initialize random weights on neurons
init_hid = np.random.randn(ninputs, neurons_per_hiddenlayer)
init_out = np.random.randn(neurons_per_hiddenlayer)

with pm.Model() as neural_network:
    nobs = len(targetShared.eval())

    ## PARAMETERS OF NEURAL NETWORK (UNOBSERVED RANDOM VARIABLES)
    # Weights from input to hidden layer
    wts_in_hid = pm.Normal('wts_in_hid', mu=0, sd=1,
                           shape=(ninputs, neurons_per_hiddenlayer),
                           testval=init_hid)
    # Weights from hidden layer to output
    wts_hid_out = pm.Normal('wts_hid_out', mu=0, sd=1,
                            shape=(neurons_per_hiddenlayer,),
                            testval=init_out)

    ## LATENT TRUE VALUES (UNOBSERVED RANDOM VARIABLES)
    # True spectral parameters (mean 0 assuming they have been scaled,
    # sd 1 meaning similar spread to the measured values)
    true_x = pm.Normal('true_x', mu=0, sd=1, shape=(ninputs,))
    # True mass from the neural network (sigmoid activation for the hidden
    # layer, linear activation for the output layer)
    act_hid = TT.nnet.sigmoid(TT.dot(true_x, wts_in_hid))
    act_out = TT.dot(act_hid, wts_hid_out)
    true_mass = pm.Deterministic('true_mass', act_out)

    ## LIKELIHOODS (USING OBSERVED VALUES)
    likelihood_x = pm.Normal('likelihood_x', mu=true_x, sd=errInputsShared,
                             observed=inputsShared, shape=(ninputs,))
    likelihood_mass = pm.Normal('likelihood_mass', mu=true_mass, sd=errTargetShared,
                                observed=targetShared, shape=(nobs,))
```
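For the prediction step, what I am ultimately after is something like the following: given posterior draws of the two weight arrays, the same forward pass should work for any number of new rows, since nobs never enters the weights. This is a hand-rolled NumPy sketch with hypothetical names (`trace_w1`, `trace_w2` stand in for posterior samples, not my actual trace):

```python
import numpy as np

def predict_mass(x_new, trace_w1, trace_w2):
    """Posterior-predictive network output for new inputs:
    apply the forward pass once per posterior draw of the weights."""
    preds = []
    for w1, w2 in zip(trace_w1, trace_w2):
        act_hid = 1.0 / (1.0 + np.exp(-x_new @ w1))  # (n_new, neurons)
        preds.append(act_hid @ w2)                   # (n_new,)
    return np.stack(preds)                           # (ndraws, n_new)

# Fake "posterior draws" just to illustrate the shapes
rng = np.random.default_rng(1)
ndraws, n_new, ninputs, neurons = 50, 8, 12, 5
trace_w1 = rng.standard_normal((ndraws, ninputs, neurons))
trace_w2 = rng.standard_normal((ndraws, neurons))
x_new = rng.standard_normal((n_new, ninputs))
print(predict_mass(x_new, trace_w1, trace_w2).shape)  # (50, 8)
```

So the question is how to set up the model shapes above so that swapping the shared variables to a test set of a different size gives me exactly this behaviour.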