
Some neural network "questions"

Started by Spencer, April 09, 2003 03:19 AM
5 comments, last by Spencer 21 years, 5 months ago
Hi! All of you out there who have ever implemented even the simplest neural network should know that they are a pain in the A** to debug. Furthermore, my backpropagation algorithm often seems to be more or less screwed up. Two simple questions come out of this:

1. Has anybody developed a good debugging method for neural networks?
2. Can anybody please point me to a robust but simple implementation of the backprop algorithm (preferably in C++) so I can read the code?

Thank you all
--Spencer"All in accordance with the prophecy..."
quote: Original post by Spencer
All of you out there who have ever implemented even the simplest neural network should know that they are a pain in the A** to debug. Furthermore, my backpropagation algorithm often seems to be more or less screwed up.


You might consider using a training algorithm other than backpropagation of errors. Backprop has known weaknesses which can be offset, but only with still more complicated code (momentum terms, etc). The forward pass (neural network recall) is relatively easy to program and debug, so you might consider training your neural network with something like simulated annealing, which is conceptually much simpler than backprop and offers some performance advantages.
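To make that concrete, here is a rough, self-contained sketch of annealing a weight vector. The errorOf() function below is just a dummy stand-in for "the error of your net over the training set", and the constants are illustrative rather than tuned:

#include <cmath>
#include <cstdio>
#include <cstdlib>
#include <vector>

// Placeholder error function: pretend the "network" is perfect at (0.3, -0.7).
// In a real net this would run the forward pass over the training set and
// return the mean squared error.
double errorOf(const std::vector<double>& w)
{
    return (w[0] - 0.3) * (w[0] - 0.3) + (w[1] + 0.7) * (w[1] + 0.7);
}

int main()
{
    std::vector<double> weights = { 0.0, 0.0 };
    double temperature = 1.0;            // start "hot": accept many bad moves
    const double cooling = 0.999;        // geometric cooling schedule
    double currentError = errorOf(weights);

    while (temperature > 0.0001)
    {
        std::vector<double> candidate = weights;

        // Perturb one randomly chosen weight, more aggressively while hot.
        size_t i = std::rand() % candidate.size();
        candidate[i] += ((std::rand() / (double)RAND_MAX) - 0.5) * temperature;

        double delta = errorOf(candidate) - currentError;

        // Always keep improvements; sometimes keep a worse move while hot.
        if (delta < 0.0 ||
            (std::rand() / (double)RAND_MAX) < std::exp(-delta / temperature))
        {
            weights = candidate;
            currentError += delta;
        }
        temperature *= cooling;
    }
    std::printf("final weights: %f %f, error %f\n",
                weights[0], weights[1], currentError);
    return 0;
}

The only network-specific code you need is the forward pass inside errorOf(), which is exactly the part that is easy to get right.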


Or check out fup's website, AI Junkie. He uses genetic algorithms both for training the weights and for the topology. His book (AI Techniques for Game Programmers) is a great tutorial for the GA-NN hybrid approach.

-Kirk
Debugging behaviours, or code?

For code, get some sample outputs and weight values (given an input) and compare them to your own results. Works great...
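For example, something like the sketch below prints every neuron's net input and output for one test input, giving you numbers you can diff against a reference implementation or a hand calculation. The 2-2-1 layout, weights and sigmoid here are arbitrary example choices:

#include <cmath>
#include <cstdio>
#include <vector>

double sigmoid(double x) { return 1.0 / (1.0 + std::exp(-x)); }

int main()
{
    // weights[layer][neuron][i]; the last entry for each neuron is its bias.
    std::vector<std::vector<std::vector<double>>> weights = {
        { { 0.5, -0.4, 0.1 }, { 0.9, 0.2, -0.3 } },   // hidden layer, 2 neurons
        { { -0.7, 0.6, 0.05 } }                       // output layer, 1 neuron
    };

    std::vector<double> activations = { 1.0, 0.0 };   // the test input

    for (size_t layer = 0; layer < weights.size(); ++layer)
    {
        std::vector<double> next;
        for (size_t n = 0; n < weights[layer].size(); ++n)
        {
            double sum = weights[layer][n].back();    // bias term
            for (size_t i = 0; i < activations.size(); ++i)
                sum += weights[layer][n][i] * activations[i];
            next.push_back(sigmoid(sum));
            std::printf("layer %zu neuron %zu: net = %f, out = %f\n",
                        layer, n, sum, next.back());
        }
        activations = next;
    }
    return 0;
}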


The answer to behaviours is... not really. You can use similar techniques for debugging standard AI, but don't expect to be able to step inside a NN to understand what's going on! (at least not without spending hours on end)


Artificial Intelligence Depot - Maybe it's not all about graphics...


If this is your first attempt and you're just using a standard feedforward net with backprop, I think you should use the XOR problem. It is so trivial a problem that you can easily work out what the expected outputs of your network should be with pencil and paper, which means it's easy to check everything step by step.
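For instance, plain backprop on XOR with a 2-2-1 net, sigmoid activations and no momentum fits in a page of C++; a sketch along these lines (the learning rate and layout are only illustrative) lets you trace every number by hand:

#include <cmath>
#include <cstdio>
#include <cstdlib>

double sigmoid(double x) { return 1.0 / (1.0 + std::exp(-x)); }
double randWeight()      { return (std::rand() / (double)RAND_MAX) - 0.5; }

int main()
{
    const double inputs[4][2] = { {0,0}, {0,1}, {1,0}, {1,1} };
    const double targets[4]   = {  0,     1,     1,     0   };
    const double eta = 0.5;                       // learning rate

    // hidden[j][i]: weight from input i to hidden neuron j; index 2 is the bias.
    double hidden[2][3], output[3];
    for (int j = 0; j < 2; ++j)
        for (int i = 0; i < 3; ++i) hidden[j][i] = randWeight();
    for (int i = 0; i < 3; ++i) output[i] = randWeight();

    for (int epoch = 0; epoch < 10000; ++epoch)
    {
        double sumError = 0.0;
        for (int p = 0; p < 4; ++p)
        {
            // Forward pass.
            double h[2];
            for (int j = 0; j < 2; ++j)
                h[j] = sigmoid(hidden[j][0] * inputs[p][0] +
                               hidden[j][1] * inputs[p][1] + hidden[j][2]);
            double o = sigmoid(output[0] * h[0] + output[1] * h[1] + output[2]);

            // Backward pass: deltas from the output back to the hidden layer.
            double err    = targets[p] - o;
            double deltaO = err * o * (1.0 - o);
            double deltaH[2];
            for (int j = 0; j < 2; ++j)
                deltaH[j] = deltaO * output[j] * h[j] * (1.0 - h[j]);

            // Weight updates (deltas were computed with the old weights).
            for (int j = 0; j < 2; ++j) output[j] += eta * deltaO * h[j];
            output[2] += eta * deltaO;
            for (int j = 0; j < 2; ++j)
            {
                hidden[j][0] += eta * deltaH[j] * inputs[p][0];
                hidden[j][1] += eta * deltaH[j] * inputs[p][1];
                hidden[j][2] += eta * deltaH[j];
            }
            sumError += err * err;
        }
        if (epoch % 1000 == 0)
            std::printf("epoch %d: error %f\n", epoch, sumError);
    }
    return 0;
}

If the summed error doesn't head towards zero after a few thousand epochs, that in itself is a useful hint that something in the backward pass (the deltas, or which weight each delta is applied to) is where the bug lives.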

edit: just noticed Alex got there before me. But at least I suggested a specific problem ;0)




ai-junkie.com

[edited by - fup on April 9, 2003 1:13:19 PM]
I recommend you download a package such as SNNS. It provides a visual representation of a neural network and allows you to see how the network changes at each time step. This feature can be incredibly useful if you want to compare the results with your own neural network.


Advertisement
An aspect of NNs which I almost never see people talk about in articles is their geometric interpretation. The activation of a neuron is given as the dot product of the input vector and the weight vector. We all know the dot product of two vectors is the cosine of the angle between them, scaled by the magnitudes of the vectors. So it is possible to think of each neuron as a hyperplane equation Ax + By + Cz + ... = theta, where theta is the neuron's threshold (or -theta is fed in as a bias input).
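As a tiny illustration of that view (the weights and threshold below are arbitrary example values), a single neuron is just a test of which side of the hyperplane w.x = theta an input falls on:

#include <cstdio>
#include <vector>

int main()
{
    std::vector<double> w = { 1.0, 1.0 };     // weight vector: the plane's normal
    const double theta = 1.5;                 // threshold (or -theta as a bias)
    const double inputs[4][2] = { {0,0}, {0,1}, {1,0}, {1,1} };

    for (const auto& x : inputs)
    {
        double dot = w[0] * x[0] + w[1] * x[1];   // activation = w . x
        std::printf("(%g,%g): w.x = %g -> %s the plane, output %d\n",
                    x[0], x[1], dot,
                    dot >= theta ? "on/above" : "below",
                    dot >= theta ? 1 : 0);
    }
    return 0;
}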

