Neurostuff: Neural Structure vs A-NNs

In summary, the neurons in an owl's brain fire only about once per millisecond, far too slowly on their own to compute the position of a mouse with the precision required, yet the owl catches the mouse every night because those neurons divide into a fast population and a slow population, both derived from the same original neurons.
  • #1
neurocomp2003
[Q0] An old neuropsych prof of mine told me that any biological neuron layer A that feeds into another layer B will most likely receive feedback. Now, is this feedback:
[I] Direct, meaning that some axons from B return to A, or
[II] Indirect, B to C ... D to A (cyclic)?

I know II occurs, but does I?

[Q1] Also, are there isolated networks in the brain that only collect input from specific regions and output through specific ones? For example:

I - [ABCDE...] - O, where I and O can be numerous layers/cells,
and ABCDE... are truly hidden layers that cannot be accessed through other parts of the brain; access occurs only through I/O. Oh, and ABCDE need not be strictly feedforward (it can have cyclic, intra-layer, and fully connected structure).
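The two cases in Q0 can be made concrete on a toy directed graph (illustration mine, not from any post in the thread): case I is a literal B -> A edge, while case II is a path that returns to A only through intermediate layers.

```python
# Toy sketch: layers as nodes, projections as directed edges.
def has_direct_feedback(edges, a, b):
    """Case I: layer B projects straight back to layer A."""
    return (b, a) in edges

def has_indirect_feedback(edges, a, b):
    """Case II: a path B -> C -> ... -> A that avoids the direct B->A edge."""
    stack, seen = [b], set()
    while stack:
        node = stack.pop()
        for src, dst in edges:
            if src == node and (src, dst) != (b, a):
                if dst == a:
                    return True
                if dst not in seen:
                    seen.add(dst)
                    stack.append(dst)
    return False

edges = {("A", "B"), ("B", "C"), ("C", "D"), ("D", "A")}
print(has_direct_feedback(edges, "A", "B"))    # False: no B->A edge
print(has_indirect_feedback(edges, "A", "B"))  # True: B->C->D->A
```

The question in the post is whether biology uses the first kind at all; the graph only pins down what the two kinds mean.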
 
  • #2
A quick Google search (I didn't do any exhaustive reading) produced this site:

http://www.gc.ssr.upm.es/inves/neural/ann1/concepts/structnn.htm

It looks to me like it's saying the feedback is direct.
 
  • #3
lol, what you posted there I already know... that's ANNs. I'm looking for brain stuff, biological neuronal structure.
I probably should edit the first post.
 
  • #4
neurocomp2003 said:
lol, what you posted there I already know... that's ANNs. I'm looking for brain stuff, biological neuronal structure.
I probably should edit the first post.
That's both funny and sad. I skimmed the whole page and didn't catch that it was about artificial neural networks. I think I googled "neuron layers feedback" and wasn't expecting that to take me to anything artificial.
 
  • #5
lol, at the bottom of the page in italics ==> Artificial neural networks. hehe... thanks for the effort though.
 
  • #6
neurocomp2003 said:
lol, at the bottom of the page in italics ==> Artificial neural networks. hehe... thanks for the effort though.
It's written so tiny, though, and tucked away down at the bottom like some devious fine print in a legal document.
 
  • #7
OK, I googled a little more and got this lead:

"Key to this process is the concept of feedback, without which a net of neurons would be unable to learn. The relative strength of each synapse may be increased or decreased to better reflect the desired response. There appear to be specific modulatory neurons that facilitate this process. There are a variety of neural learning techniques that have been hyped, and the exact mechanisms involved is an area of particularly intense contemporary research. Regardless of the means, however, the performance of each neuron is constantly being monitored, and the synaptic strengths are continually readjusted to improve the overall performance of each layer of the network. Neurons can also grow new dendritic and axon branches and create new synapses with other neurons. You may be doing this right now as you read this month's Futurecast."

I think this is saying that the "feedback" takes the form of an adjustment to the receptivity of a neuron, as opposed to an axon leading back to the inputting neuron. There seem to be dedicated "modulatory" neurons responsible for these feedback-like adjustments, but the whole thing seems not to have been sorted out yet.
From:
http://www.kurzweilai.net/meme/frame.html?main=/articles/art0254.html?m%3D10
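One hedged way to read "synaptic strengths are continually readjusted" is as a modulated Hebbian update. The sketch below is my abstraction, not the article's mechanism: a three-factor rule in which a modulatory signal gates whether coincident pre/post activity strengthens or weakens the synapse.

```python
def update_weight(w, pre, post, modulator, lr=0.1):
    """Three-factor sketch: change = pre-activity * post-activity * modulation.

    A positive modulator reinforces a synapse whose pre- and post-neurons
    fired together; a negative modulator depresses it.
    """
    return w + lr * pre * post * modulator

w = 0.5
w = update_weight(w, pre=1.0, post=1.0, modulator=+1.0)   # reinforced
w = update_weight(w, pre=1.0, post=1.0, modulator=-1.0)   # depressed again
print(round(w, 6))  # back near the starting 0.5
```

The "modulatory neurons" of the quote play the role of the third factor; without it, coincident firing alone decides nothing.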
 
  • #8
Zoobyshoe,

These modulatory cells are interneurons and glial cells in the brain.
Neural networks are first created redundantly and then refined by competition. Those that respond better to the task are selected by glia through better feeding and rewards.
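The selection-by-competition idea can be caricatured in a few lines (unit names and numbers are mine, not somasimple's): redundant units attempt the same task, the best responder gets a "growth factor" boost each round, and the rest decay, so the population converges on the fittest subnetwork.

```python
def select_by_competition(responses, boost=1.2, decay=0.8):
    # The unit with the strongest response is "fed" (boosted); all
    # competitors are starved (decayed), mimicking glial selection.
    best = max(responses, key=responses.get)
    return {unit: score * (boost if unit == best else decay)
            for unit, score in responses.items()}

units = {"n1": 0.4, "n2": 0.9, "n3": 0.5}
for _ in range(3):
    units = select_by_competition(units)
print(max(units, key=units.get))  # n2 keeps winning and pulls further ahead
```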
 
  • #9
somasimple said:
Zoobyshoe,

These modulatory cells are interneurons and glial cells in the brain.
Neural networks are first created redundantly and then refined by competition. Those that respond better to the task are selected by glia through better feeding and rewards.
Very interesting. What are the glia reacting to? (i.e. How do they determine one neuron is responding "better" than another? What do glia like in a neuron?)
 
  • #10
temporal summation probably, some chemical device... where are the neuroscientists on these forums?
 
  • #11
neurocomp2003 said:
temporal summation probably, some chemical device... where are the neuroscientists on these forums?
What's "temporal summation"?
 
  • #12
Zoobyshoe,

Unfortunately, I had little time to reply earlier.
Glial cells promote the "good" neurons with some growth factor.
This is called competition between concurrent neuron cells within networks.
It dramatically enhances the power of neural computation.
 
  • #13
temporal summation: the ability of some neurons to record the number of signals coming from another neuron or region... it may only be theoretical, but that is the concept.
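A hedged sketch of the concept (all parameters invented for illustration): model the membrane as a leaky accumulator. Two spikes arriving close together sum to a higher peak than the same two spikes spread far apart, because the first response has not yet decayed when the second arrives.

```python
def peak_potential(spike_times, tau=5.0, amplitude=1.0, dt=0.1, t_end=50.0):
    """Peak of a leaky membrane potential driven by unit EPSPs."""
    v, peak, t = 0.0, 0.0, 0.0
    remaining = sorted(spike_times)
    while t <= t_end:
        v *= (1.0 - dt / tau)            # passive leak each time step
        while remaining and remaining[0] <= t:
            v += amplitude               # an EPSP arrives
            remaining.pop(0)
        peak = max(peak, v)
        t += dt
    return peak

print(peak_potential([10.0, 11.0]))   # close in time: summation, peak > 1.5
print(peak_potential([10.0, 40.0]))   # far apart: no summation, peak near 1
```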
 
  • #14
Hi all,

About competition as a success story for evolution.

There was a riddle about an owl targeting a mouse from a branch of a tree. Scientists were amazed: the owl needs an accuracy of about 1 µs to compute the position of the mouse, yet its neurons fire only about once per ms. It should have been scientifically impossible for an owl to catch the mouse! But owls catch mice every night!

They observed that the neurons doing the job were divided into two populations:
  • a faster, bigger one, active;
  • a slower one, inactive and inhibited by the first.
But both came from the same native neurons.

When an owl learns to target a mouse, if there is a speed imbalance between a right-side (R) neuron and the corresponding left-side (L) one, a glial cell boosts the winning R candidate (the one that won the "race"). The best are selected, and accuracy is created by selecting and maintaining these good guys.

(From La Recherche)
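The microsecond-from-millisecond trick is usually explained with delay lines feeding coincidence detectors (the Jeffress model of barn-owl sound localization). The La Recherche piece may differ in detail, so treat this as a hedged sketch with invented numbers: each detector adds a different delay to one ear's signal, and the detector whose delay exactly cancels the interaural time difference matches best.

```python
def best_delay(left_spike, right_spike, candidate_delays):
    # Pick the delay line whose added delay best cancels the arrival-time
    # difference between the two ears; that line's detector fires hardest.
    return min(candidate_delays,
               key=lambda d: abs((left_spike + d) - right_spike))

delays = [i * 10e-6 for i in range(11)]     # delay lines: 0 .. 100 µs
itd = 40e-6                                 # sound arrives 40 µs later on the right
print(best_delay(left_spike=0.0, right_spike=itd, candidate_delays=delays))
# picks the ~40 µs line, even though no single neuron times anything that fast
```

Precision comes from *which unit in the population* wins, not from any one neuron's firing rate, which fits the thread's point about paired fast/slow populations being tuned against each other.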
 
  • #15
lol, and that has to do with the topic at hand how?

But nonetheless interesting... I always wondered if neurons acted on different time systems and, summed together, masked those t-systems.
Wonder if Carson Chow or Bard Ermentrout could model that.
Guess that answers my question.
 
  • #16
neurocomp2003 said:
lol, and that has to do with the topic at hand how?
He was answering my question about why glial cells pick certain neurons to encourage.
 
  • #17
Hi,

there is a lot of summation possible within our living neurons that does not exist in artificial ones.

Temporal summation is a sum that occurs when two inputs fire quite close together in time and are "added" to produce a compound response.

Spatial summation occurs when concurrent inputs create a compound one.

But all these analog responses may be changed by some neurotransmitters. The only rule applicable within neurons is adding and counter-adding (inhibition). There is no true subtraction.
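The "counter-addition" point can be captured in a toy weighted sum (my formalization, not somasimple's): the spikes themselves are all positive events; only the synaptic weight makes an input inhibitory, and it is that weight which a math model then writes as subtraction.

```python
def membrane_response(spikes, weights):
    # spikes: 0/1 events, all non-negative like real action potentials
    # weights: synaptic gains; a negative weight is an inhibitory synapse
    return sum(s * w for s, w in zip(spikes, weights))

excite_only = membrane_response([1, 1, 0], [0.75, 0.5, -0.5])
with_inhib  = membrane_response([1, 1, 1], [0.75, 0.5, -0.5])
print(excite_only)  # 1.25
print(with_inhib)   # 0.75, the inhibitory input lowers the compound response
```

Spatial summation is the `sum` over simultaneous inputs; make the spikes arrive at different times with decay in between and the same idea gives temporal summation.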
 
  • #18
"there is a lot of summation possible within our living neurons that does not exist in artificial ones"... umm, that's not true.
And I thought inhibition is considered to be "subtraction" in mathematical modeling terms.
 
  • #19
Neurocomp2003,

How, with signals that all have the same sign (all positive), can you produce a subtraction? :wink:
That is why I said counter-addition.

It may, of course, be modeled by a subtraction in a math model.
 
  • #20
And that's what I meant, Somasimple: "It may, of course, be modeled by a subtraction in a math model." I prefer to use math terms. =]
 
  • #21
That's your problem, Neurocomp2003. o:)
But... if artificial neural networks are intended to copy natural behaviours, it may be interesting to understand the natural mathematics coming from neurons.

Mathematical simplification misses Nature's simplicity.
 

Related to Neurostuff: Neural Structure vs A-NNs

1. What is the difference between neural structure and artificial neural networks (A-NNs)?

Neural structure refers to the physical arrangement of neurons in the brain, while artificial neural networks (A-NNs) are computer algorithms designed to simulate the behavior of biological neural networks.

2. How do artificial neural networks (A-NNs) learn and make decisions?

A-NNs learn through a process called training, where they are fed large amounts of data and adjust their connections between nodes to optimize their performance. They make decisions based on this learned information, similar to how the human brain processes information.

3. Can artificial neural networks (A-NNs) mimic the complexity of the human brain?

While A-NNs can perform a wide range of tasks and have shown impressive abilities, they are still far from replicating the complexity of the human brain. The human brain is estimated to have over 80 billion neurons, each of which is far more complex than the simple summing units used in A-NNs.

4. What are some potential applications of artificial neural networks (A-NNs)?

A-NNs have a wide range of potential applications, including image and speech recognition, natural language processing, and predictive modeling. They are also used in fields such as healthcare, finance, and transportation to make data-driven decisions and improve efficiency.

5. How do researchers study neural structure and artificial neural networks (A-NNs)?

Researchers study neural structure through techniques like brain imaging, dissection, and electrophysiology. A-NNs can be studied through experimentation and analysis of their performance on different tasks, as well as through simulations and modeling.
