Yeah, I just forgot to write the lines where:
foreach neural in the nextlayer
    sum += neural.value * currentneural.weights[neural];

myerror = sum * myoutput * (1 - myoutput);
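For reference, in standard backpropagation the hidden-layer delta sums the next layer's *error terms* (not their activations) weighted by the connecting weights, so it's worth checking what `neural.value` actually holds. A minimal Python sketch, with all names illustrative rather than taken from the original code:

```python
def hidden_delta(output, next_deltas, weights_to_next):
    """Backprop delta for one hidden neuron with a sigmoid activation.

    next_deltas[j] is the delta (error term) of next-layer neuron j,
    and weights_to_next[j] is the weight from this neuron to neuron j.
    All names here are hypothetical, chosen for illustration.
    """
    # weighted sum of the NEXT layer's error terms, not their outputs
    s = sum(d * w for d, w in zip(next_deltas, weights_to_next))
    # sigmoid derivative written in terms of the neuron's own output
    return s * output * (1.0 - output)
```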
It seems like when I'm training on the usual 4 examples, each pair of examples with different outputs just contradicts the other (they push the weights in opposite directions) :S
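For what it's worth, if those are the four XOR examples, the opposite-direction pulls are expected for a single-layer net; with a hidden layer the same four examples become learnable together. A toy 2-2-1 sigmoid net trained with online backprop, sketched below with illustrative names and constants (none of this is from the original post):

```python
import math
import random

random.seed(0)

def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

# 2 inputs -> 2 hidden -> 1 output, with biases; random small init.
w1 = [[random.uniform(-1, 1) for _ in range(2)] for _ in range(2)]
b1 = [random.uniform(-1, 1) for _ in range(2)]
w2 = [random.uniform(-1, 1) for _ in range(2)]
b2 = random.uniform(-1, 1)

# the four XOR examples
data = [([0, 0], 0), ([0, 1], 1), ([1, 0], 1), ([1, 1], 0)]

def forward(x):
    h = [sigmoid(sum(w1[j][i] * x[i] for i in range(2)) + b1[j])
         for j in range(2)]
    o = sigmoid(sum(w2[j] * h[j] for j in range(2)) + b2)
    return h, o

def total_error():
    return sum((t - forward(x)[1]) ** 2 for x, t in data)

before = total_error()
lr = 0.5  # illustrative learning rate
for _ in range(5000):
    for x, t in data:
        h, o = forward(x)
        # output delta, then hidden deltas via the weighted error sum
        delta_o = (o - t) * o * (1 - o)
        delta_h = [delta_o * w2[j] * h[j] * (1 - h[j]) for j in range(2)]
        for j in range(2):
            w2[j] -= lr * delta_o * h[j]
        b2 -= lr * delta_o
        for j in range(2):
            for i in range(2):
                w1[j][i] -= lr * delta_h[j] * x[i]
            b1[j] -= lr * delta_h[j]
after = total_error()
```

Training all four examples together like this lets the updates that look contradictory average out into a solution, rather than cancelling.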
I'm not sure about your code either, but have you tried reducing your learning rate to something much smaller? 0.1, 0.05, etc. If your learning rate is too high it may not converge.
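To illustrate the point about step size: gradient descent diverges on even the simplest function once the learning rate is too large. A toy sketch (the function and rates are chosen purely for illustration):

```python
def descend(lr, steps=20, x=1.0):
    """Gradient descent on f(x) = x^2, whose gradient is 2x."""
    for _ in range(steps):
        x -= lr * 2 * x  # each update multiplies x by (1 - 2*lr)
    return x

# With lr = 0.05, each step scales x by 0.9, so it shrinks toward 0.
# With lr = 1.5, each step scales x by -2, so |x| blows up instead.
print(abs(descend(0.05)))
print(abs(descend(1.5)))
```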