Single layer neural network. What am I doing wrong?

  • #1
GProgramer
I'm trying to implement the example at http://www.cs.bham.ac.uk/~jxb/INC/l3.pdf (page 15).
I'm doing iteration training, but the results always converge to a steady error rate that is too large to be acceptable: the outputs center around 0 when they should be close to -1 and +1.

I don't know if there's something wrong with the code, or if I have misunderstood the training concept?
Code:
	close all; clc;
	M = 3; N = 1;

	X = [-1 1.0 0.1; -1 2.0 0.2; -1 0.1 0.3; -1 2.0 0.3; -1 0.2 0.4; -1 3.0 0.4; -1 0.1 0.5; -1 1.5 0.5; -1 0.5 0.6; -1 1.6 0.7];
	X = X';
	d = [-1; -1; 1; -1; 1; -1; 1; -1; 1; 1];

	Wp = rand([M,N]);
	Wp = Wp'/sum(Wp(:));  % theta is 1 so sum of Wp and W needs to be < 1
	W = rand([M,N]);
	W = W'/sum(W(:));
	V1 = zeros(1,10);  % preallocating for speed
	Y1 = zeros(1,10);
	e = zeros(1,10);

	while(1)
	    i = randi(length(X),1);

	    %---------------Feed forward---------------%
	    V1(i) = W*X(:,i);
	    Y1(i) = tanh(V1(i)/2);
	    e(i) = d(i)-Y1(i);

	    %------------Backward propagation---------%
	    delta1 = e(i)*0.5*(1+Y1(i))*(1-Y1(i));

	    Wn(1,1) = W(1,1) + 0.1*(W(1,1)-Wp(1,1)) + 0.1*delta1*Y1(i);
	    Wn(1,2) = W(1,2) + 0.1*(W(1,2)-Wp(1,2)) + 0.1*delta1*Y1(i);
	    Wn(1,3) = W(1,3) + 0.1*(W(1,3)-Wp(1,3)) + 0.1*delta1*Y1(i);

	    Wp = W;
	    W = Wn;

	    figure(1);
	    stem(Y1);
	    axis([1 10 -1 1]);
	    drawnow;
	end
 
  • #2
D H
First off, your backprop algorithm is just wrong. Do you have a reference for the algorithm you are using?

Even if you correct your update algorithm, a single layer perceptron is going to have a very, very hard time with this dataset. Make a scatter plot of this dataset. For example, make a graph with mass on the x axis, speed on the y axis. Mark each fighter with an F, bomber with a B. There are three clusters. Near the y-axis there's a cluster of four light, fast fighters. Near the x-axis there's a cluster of four slow, heavy bombers. The third cluster is going to be problematic for a single layer perceptron. The fighter with mass=1.6, speed=0.7 is very similar to the bomber with mass=1.5, speed=0.5.

Adding a two node hidden layer makes this problem much more amenable to a backprop neural network.
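
A minimal sketch of the suggested two-node hidden layer, written in Python/NumPy rather than the thread's MATLAB (the learning rate, epoch count, seed, and initialization below are illustrative choices, not from the original post):

```python
import numpy as np

# Hedged sketch: a 2-2-1 tanh network trained by plain backprop on the
# fighter/bomber data from the thread, (mass, speed) -> +1 fighter / -1 bomber.
rng = np.random.default_rng(0)

X = np.array([[1.0, 0.1], [2.0, 0.2], [0.1, 0.3], [2.0, 0.3], [0.2, 0.4],
              [3.0, 0.4], [0.1, 0.5], [1.5, 0.5], [0.5, 0.6], [1.6, 0.7]])
d = np.array([-1, -1, 1, -1, 1, -1, 1, -1, 1, 1], dtype=float)

W1 = rng.normal(scale=0.5, size=(2, 2))   # input -> hidden weights
b1 = np.zeros(2)
W2 = rng.normal(scale=0.5, size=2)        # hidden -> output weights
b2 = 0.0
eta = 0.05                                # illustrative learning rate

def forward(x):
    h = np.tanh(W1 @ x + b1)              # hidden layer activations
    y = np.tanh(W2 @ h + b2)              # network output
    return h, y

def mse():
    return np.mean([(d[i] - forward(X[i])[1]) ** 2 for i in range(len(X))])

err0 = mse()
for epoch in range(3000):
    for i in range(len(X)):
        h, y = forward(X[i])
        delta2 = (d[i] - y) * (1 - y ** 2)         # output delta, tanh derivative
        delta1 = delta2 * W2 * (1 - h ** 2)        # hidden deltas via old W2
        W2 += eta * delta2 * h
        b2 += eta * delta2
        W1 += eta * np.outer(delta1, X[i])
        b1 += eta * delta1
err1 = mse()
```

The hidden deltas are computed before `W2` is updated, which is the usual ordering for backprop; the two hidden units give the network two separating lines, enough to carve out the problematic third cluster.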
 
  • #3
D H said:
First off, your backprop algorithm is just wrong. Do you have a reference for the algorithm you are using?
Thank you for your detailed reply. I realized my mistake was using Y1 instead of X when readjusting the weights.

Currently it's more or less working (a couple of the training values still have around a 30% error rate).
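
For reference, a minimal Python/NumPy sketch of the corrected update, using the input X(:,i) in the weight gradient instead of the output Y1(i) (the epoch count and seed are arbitrary; the 0.1 momentum and learning-rate coefficients follow the MATLAB code above):

```python
import numpy as np

# Hedged sketch of the corrected single-neuron update: the weight gradient
# is delta * x (the input), not delta * y (the neuron's own output).
X = np.array([[-1, 1.0, 0.1], [-1, 2.0, 0.2], [-1, 0.1, 0.3], [-1, 2.0, 0.3],
              [-1, 0.2, 0.4], [-1, 3.0, 0.4], [-1, 0.1, 0.5], [-1, 1.5, 0.5],
              [-1, 0.5, 0.6], [-1, 1.6, 0.7]])
d = np.array([-1, -1, 1, -1, 1, -1, 1, -1, 1, 1], dtype=float)

def mse(W):
    # mean squared error of the tanh(v/2) outputs against the targets
    return np.mean((d - np.tanh(X @ W / 2)) ** 2)

rng = np.random.default_rng(1)
W = rng.random(3) * 0.1
Wp = W.copy()                    # previous weights, for the momentum term

err_before = mse(W)
for epoch in range(200):
    for i in range(len(X)):
        y = np.tanh(W @ X[i] / 2)
        delta = (d[i] - y) * 0.5 * (1 + y) * (1 - y)   # derivative of tanh(v/2)
        # corrected update: gradient uses the input X[i], not the output y
        Wn = W + 0.1 * (W - Wp) + 0.1 * delta * X[i]
        Wp, W = W, Wn
err_after = mse(W)
```

The error drops but does not vanish: as noted above, a single layer cannot fully separate the third cluster, so some residual error on those points is expected.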

This particular set isn't what I'm aiming for, which is why I didn't bother making a detailed graph for it.

I realize that a single layer perceptron isn't enough; I was only building one to test whether the algorithm works. I am currently building a network with two hidden layers and a dynamic number of neurons in each, and I will test it on a sampled cosine function, drawing the output function and the desired function on the same graph for comparison.

May I ask exactly what is wrong with the algorithm? If you are talking about the hidden layer deltas, I've yet to add them.

Thank you again for the great reply!
 

Related to Single layer neural network. What am I doing wrong?

1. What is a single layer neural network?

A single layer neural network is a type of artificial neural network that consists of only one layer of neurons; it is also known as a perceptron. The network's inputs are fed to the neurons, which perform a weighted-sum calculation and generate an output.

2. How does a single layer neural network work?

A single layer neural network works by taking in inputs, multiplying them by their weights, and summing the results. The sum is then passed through an activation function to generate an output. The weights are adjusted through a process called backpropagation (for a single layer, the delta rule) to improve the accuracy of the network's predictions.
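
A minimal sketch of that calculation, including one weight-update step (the numbers, the tanh activation, and the learning rate are all illustrative):

```python
import numpy as np

# Hedged illustration of the forward pass and one delta-rule step
# described above. All numeric values are made up for the example.
x = np.array([0.5, -1.2, 3.0])   # inputs
w = np.array([0.4, 0.1, -0.2])   # weights
b = 0.1                          # bias
d = 1.0                          # desired output
eta = 0.05                       # learning rate

v = w @ x + b                    # weighted sum of the inputs
y = np.tanh(v)                   # activation function gives the output

# weight adjustment: gradient is error * f'(v) * input
delta = (d - y) * (1 - y ** 2)   # (1 - y^2) is the tanh derivative
w = w + eta * delta * x
b = b + eta * delta
y_new = np.tanh(w @ x + b)       # output moves toward the target
```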

3. What are the limitations of a single layer neural network?

A single layer neural network can only model linear relationships between inputs and outputs. This means that it cannot learn complex patterns and relationships in the data. Additionally, it can only solve simple classification problems and may struggle with more complex tasks.
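
A small demonstration of this limitation: no linear threshold unit sign(w1*x1 + w2*x2 + b) can reproduce XOR, the classic non-linearly-separable problem. The brute-force scan below (an illustrative check over a weight grid, not something you would do in practice) finds no setting that works:

```python
import itertools
import numpy as np

# Hedged demo: XOR with targets in {-1, +1} is not linearly separable,
# so no (w1, w2, b) on the grid classifies all four points correctly.
xor = [((0, 0), -1), ((0, 1), 1), ((1, 0), 1), ((1, 1), -1)]
grid = np.linspace(-2, 2, 21)

solutions = 0
for w1, w2, b in itertools.product(grid, grid, grid):
    if all(np.sign(w1 * x1 + w2 * x2 + b) == t for (x1, x2), t in xor):
        solutions += 1
```

Since XOR is not linearly separable, `solutions` is 0 for any grid; a hidden layer is what lets a network draw more than one separating line.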

4. What are some common mistakes when implementing a single layer neural network?

Some common mistakes when implementing a single layer neural network include using incorrect activation functions, not normalizing the input data, and not adjusting the learning rate properly. It is also important to choose the right number of neurons and to properly initialize the weights.
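
For example, a common form of the normalization mentioned above is rescaling each input feature to zero mean and unit variance so that no feature dominates the weighted sum (a sketch; the data values are arbitrary):

```python
import numpy as np

# Hedged sketch of per-feature standardization: subtract each column's
# mean and divide by its standard deviation.
X = np.array([[1.0, 0.1],
              [2.0, 0.2],
              [3.0, 0.4],
              [0.1, 0.5]])
Xn = (X - X.mean(axis=0)) / X.std(axis=0)   # each column: mean 0, std 1
```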

5. How can I improve the performance of my single layer neural network?

To improve the performance of a single layer neural network, you can add more layers, use a more complex activation function, or increase the number of neurons in the layer. It is also important to choose the right learning rate and to properly tune the network's hyperparameters. Additionally, using a larger and more diverse training dataset can also improve the network's performance.
