
AI Techniques for Game Programming P306 Error?

Started by Raeldor
0 comments, last by Raeldor 12 years, 9 months ago
Hi All,

I've been trying to adapt the code to support more than one hidden layer, but I think on P306, after the line '// and now we calculate the error', the code should also assign the calculated error value to the neuron, shouldn't it? My network is kind of functioning, but it's behaving a little oddly. My adapted code is below, if anyone would care to have a look and see if they can spot anything odd.
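
To check I understand the maths, this is what I think each hidden neuron's error signal should be, and why it needs storing (just my own sketch, written as plain C so it drops straight into my Objective-C project; all the names here are mine, not the book's):

// error signal for one hidden neuron: the weighted sum of the error
// signals stored on the layer above, times the derivative of the sigmoid
static double hiddenErrorSignal(const double *errorsAbove,    // stored error of each neuron in the layer above
                                const double *weightsToAbove, // weightsToAbove[o] = weight from this neuron to neuron o above
                                int countAbove,               // number of neurons in the layer above
                                double activation)            // this neuron's sigmoid output
{
    double sum = 0.0;
    for (int o = 0; o < countAbove; o++)
        sum += errorsAbove[o] * weightsToAbove[o];

    // activation * (1 - activation) is the sigmoid derivative
    return sum * activation * (1.0 - activation);
}

Unless each neuron stores the value this returns, the layer below has nothing to sum over, which is why I think the assignment is needed.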

Thanks
Rael




-(bool)trainNetworkOnceWithInputSet2:(double*)inInputSet ofSize:(int)inSetSize andOutputSet:(double*)inOutputSet {
    // cumulative error for the training set
    errorSum = 0.0f;

    // run each input pattern through the network, get the outputs, then calculate the error and adjust the weights
    for (int s = 0; s < inSetSize; s++) {
        // first run the input pattern through the network and get the outputs
        double *outputs = [self updateWithInputs:&inInputSet[s * inputCount]];

        // save the output layer for readability
        NeuronLayer *outputLayer = [layers objectAtIndex:hiddenLayerCount];

        // for each output neuron, calculate the error and adjust the weights
        for (int o = 0; o < outputCount; o++) {
            // first calculate the error value
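            // (target - output) multiplied by the derivative of the sigmoid at this output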
            double err = (inOutputSet[s * outputCount + o] - outputs[o]) * outputs[o] * (1.0f - outputs[o]);

            // update the error total (when this drops below a threshold, training has succeeded)
            errorSum += (inOutputSet[s * outputCount + o] - outputs[o]) * (inOutputSet[s * outputCount + o] - outputs[o]);

            // keep a record of the error value on the neuron
            Neuron *thisOutputNeuron = [outputLayer.neurons objectAtIndex:o];
            thisOutputNeuron.errorValue = err;

            // for each weight of this output neuron except the bias
            NeuronLayer *hiddenLayer = [layers objectAtIndex:hiddenLayerCount - 1];
            for (int w = 0; w < neuronsPerHiddenLayer; w++) {
                // calculate the new weight based on the backprop rules
                Neuron *thisHiddenNeuron = [hiddenLayer.neurons objectAtIndex:w];
                thisOutputNeuron.weights[w] += err * learningRate * thisHiddenNeuron.activation;
            }

            // also adjust the bias weight
            thisOutputNeuron.weights[neuronsPerHiddenLayer] += err * learningRate * bias;
        }

        // loop backwards through the hidden layers
        for (int h = hiddenLayerCount - 1; h >= 0; h--) {
            // get this hidden layer and the layer above it
            NeuronLayer *hiddenLayer = [layers objectAtIndex:h];
            NeuronLayer *layerAbove = [layers objectAtIndex:h + 1];
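            // NB: by this point the layer above's weights have already been adjusted
            // on this pass, so the error sum below uses the updated weights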

            // for each neuron in the hidden layer, calculate the error and adjust the weights
            for (int n = 0; n < neuronsPerHiddenLayer; n++) {
                double err = 0;

                // to calculate the error for this neuron we loop through the neurons
                // in the layer above and sum their errors * connecting weights
                int myOutputCount = (h + 1 == hiddenLayerCount ? outputCount : neuronsPerHiddenLayer);
                for (int o = 0; o < myOutputCount; o++) {
                    Neuron *thisOutputNeuron = [layerAbove.neurons objectAtIndex:o];
                    err += thisOutputNeuron.errorValue * thisOutputNeuron.weights[n];
                }

                // now we can calculate the error
                Neuron *thisHiddenNeuron = [hiddenLayer.neurons objectAtIndex:n];
                err *= thisHiddenNeuron.activation * (1.0f - thisHiddenNeuron.activation);

                // save the error value to this hidden neuron so the next layer down can use it
                thisHiddenNeuron.errorValue = err;

                // for each weight of this (hidden) neuron, calculate the new weight based on the error signal and the learning rate
                int myInputCount = (h == 0 ? inputCount : neuronsPerHiddenLayer);
                for (int w = 0; w < myInputCount; w++) {
                    if (h == 0) {
                        thisHiddenNeuron.weights[w] += err * learningRate * inInputSet[s * inputCount + w];
                    } else {
                        NeuronLayer *layerBelow = [layers objectAtIndex:h - 1];
                        Neuron *inputNeuron = [layerBelow.neurons objectAtIndex:w];
                        thisHiddenNeuron.weights[w] += err * learningRate * inputNeuron.activation;
                    }
                }

                // also adjust the bias weight
                thisHiddenNeuron.weights[myInputCount] += err * learningRate * bias;
            }
        }

        // free the outputs array returned by updateWithInputs
        free(outputs);
    }

    // make the error sum independent of the set size and output count
    errorSum /= (float)inSetSize * (float)outputCount;

    // all went OK
    return YES;
}
Oh, and also the error sum from the network seems to go down AND up during training. That can't be right, right? Surely if it's adjusting the weights against the error each time, the error sum should always go down?
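
For reference, I'm just calling the method above in a loop and watching errorSum each epoch, roughly like this (a simplified sketch; inputs, setSize, targets, maxEpochs and errorThreshold are placeholder names of mine, not from the book):

// simplified sketch of my outer training loop
for (int epoch = 0; epoch < maxEpochs; epoch++) {
    // one full pass over the training set, adjusting weights after every pattern
    [self trainNetworkOnceWithInputSet2:inputs ofSize:setSize andOutputSet:targets];
    NSLog(@"epoch %d: error sum %f", epoch, errorSum);

    // stop once the averaged squared error is small enough
    if (errorSum < errorThreshold)
        break;
}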
