This is an idea about how intelligence could emerge from simple biological patterns. If confirmed, it could have a direct impact on how we understand the human brain and how we design new Artificial Intelligence systems.
There is now a widespread belief that human intelligence is based on very simple structures (neurons) that can implement various thinking processes depending on their location and connections. This belief is supported by research showing that the brain can learn to perceive with regions that were not initially related to that type of sensing (like "seeing with your tongue"). This is also the approach taken by various research efforts that try to reproduce human-like brain processes (like recognizing objects) with general-purpose neural networks.
I also believe that there must be a very simple pattern that emerges into intelligence. The first hint is that DNA stores a very limited amount of information, and human intelligence does not come with significantly more DNA information compared with less intelligent beings. The fact that intelligence usually emerges in humans even in cases of brain damage makes me believe that the brain does not contain many high-level complex structures at birth and that these structures develop over time. The fact that, after brain surgery, other parts of the brain can learn to take over responsibility from the damaged part ("re-learning to speak") also supports this idea of a simple and plastic brain.
Even more amazing to me is how animals with a much simpler nervous system than humans can perform practical tasks of recognition and even learning. Even if we go below the proverbial "chicken brain" to insects and other animals without a real central brain (think garden snail), we still discover intelligent behaviors that we struggle to reproduce with our current artificial intelligence.
All of this makes me believe that there is a simple biological pattern that can easily emerge into what we call "learning" and even "intelligence", and that we can simulate such patterns with Artificial Intelligence based on computers or other artificial constructs.
Emerging intelligence
I searched for a simple structure that could naturally emerge into learning about the environment and that could scale to higher and higher levels of intelligence along with a larger "brain" size. The whole processing must be distributed; there is no central "coordinator" coordinating the human brain. There is no "linear algebra" in the brain to "back-propagate" the error and no distinct supervisor to inform the brain about the error.
If we scale down to very simple organisms, there is a simple pattern of control that can implement a "follow the light" behavior. It is amazingly simple and powerful (I don't remember where I read about it): you have two light sensors ("eyes") in the front and two propulsion systems ("motors") in the back; the light sensors are wired to the motors so that the left light sensor activates the right motor and the right light sensor activates the left motor. This simple pattern keeps the organism oriented toward a moving light, compensating when the light moves. If the light is on the right, the right "eye" receives more light and activates the left motor more, turning the organism to the right, toward the light.
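To make this concrete, here is a minimal Python sketch of that crossed wiring (my own illustration, not taken from the original source). The sensor model, gains, and the differential-drive update are all assumptions chosen just for the demo:

```python
import math

def light_intensity(eye_x, eye_y, light):
    """Perceived brightness falls off with the squared distance to the light."""
    dx, dy = light[0] - eye_x, light[1] - eye_y
    return 1.0 / (1.0 + dx * dx + dy * dy)

def step(x, y, heading, light, dt=0.1, base=0.05, gain=4.0,
         eye_radius=0.3, eye_angle=0.5, wheel_base=0.2):
    # Two "eyes" mounted front-left and front-right of the body.
    lx = x + eye_radius * math.cos(heading + eye_angle)
    ly = y + eye_radius * math.sin(heading + eye_angle)
    rx = x + eye_radius * math.cos(heading - eye_angle)
    ry = y + eye_radius * math.sin(heading - eye_angle)
    s_left = light_intensity(lx, ly, light)
    s_right = light_intensity(rx, ry, light)

    # Crossed wiring: the LEFT eye drives the RIGHT motor and vice versa.
    motor_left = base + gain * s_right
    motor_right = base + gain * s_left

    # Differential drive: a faster right motor turns the body to the left.
    heading += (motor_right - motor_left) / wheel_base * dt
    speed = (motor_left + motor_right) / 2.0
    return x + speed * math.cos(heading) * dt, y + speed * math.sin(heading) * dt, heading

# Start at the origin facing +x, with the light up and to the left.
x, y, heading = 0.0, 0.0, 0.0
light = (3.0, 4.0)
for i in range(1, 301):
    x, y, heading = step(x, y, heading, light)
    if i % 100 == 0:
        print(f"step {i}: distance to light = {math.hypot(light[0] - x, light[1] - y):.2f}")
# The distance shrinks as the crossed wiring steers the vehicle toward the light
# (once it gets very close it may overshoot and circle around the source).
```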
I believe that our brain, with its two symmetrical hemispheres, is somehow derived from such a simple pattern, and even now it might fulfill a similar role. We can see that the optic nerves do cross over to the opposite side of the brain, much like in the example above: the left visual field is processed by the right hemisphere, and the right visual field by the left hemisphere.
Another use of the symmetrical brain might be 3D vision. As a metaphor, I see the brain like a "dark chamber" that "reflects" the outside environment, "focusing" on some outside features and filtering out others, depending on the state of its "lens". The images from the two eyes might each "reflect" on the opposite hemisphere and then meet in the corpus callosum between the two hemispheres. The positions where the images overlap denote the depth of the scene.
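As a back-of-the-envelope illustration of that last point (my own, not part of the original argument): in a simple pinhole-camera model of the two eyes, the horizontal offset (disparity) between where the same feature lands in the two images is inversely proportional to its distance, so the amount of overlap really does encode depth. The focal length and eye separation below are assumed, roughly human-like values.

```python
# Pinhole-camera stereo model: depth = focal_length * eye_separation / disparity
focal_length = 0.017    # metres -- roughly the optics of a human eye (assumed value)
eye_separation = 0.065  # metres -- typical distance between the two pupils

def depth_from_disparity(disparity):
    """A larger shift between the two retinal images means the feature is closer."""
    return focal_length * eye_separation / disparity

for disparity in (0.0005, 0.0001, 0.00002):   # retinal image offsets, in metres
    print(f"disparity {disparity * 1000:.2f} mm -> depth {depth_from_disparity(disparity):.1f} m")
```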
The idea
I will start with the classical biological network concept, with neurons activating in a chain by electro-chemical stimulation over a threshold. Now I will add something new to this model: neurons can receive feedback about the well-being of the organism immediately after they fire. Based on this feedback, neurons that fired "correctly" (toward a better state) are nourished and strengthened, while neurons that fired "wrongly" (toward a worse state) are "poisoned", weakened, and eventually disappear.
How could this be implemented in biology? To be effective, the feedback must be very sudden. For example, it could come from the endocrine system or something similar that can flood the brain with endorphins, cortisol, or the like. It could also work through the glucose level, although that seems less likely. The important thing is that only the neurons that fired receive this feedback. This might happen naturally, as these neurons become "open" as a result of firing, by changing their electro-chemical polarization.
Such firing neurons make a "bet" with their "life" on the outcome of their firing. They need to "open up" to "state their belief" in order to grow, but they can also be badly repressed if they are "wrong".
Either they fired for the better and receive nourishment, or they fired "for the worse" and receive a dose of "poison". Of course, this is just a metaphorical way of presenting it for easy understanding. Although the exact biological mechanisms for receiving such feedback are not known, it seems logical to me that some embedded feedback of this kind must exist in the brain. Various human addictions are probably also based on this process. It also seems interesting that neurons use four chemical ions, while two ions would be enough for the simple activate/inhibit behavior we use in a classical neural network.
So this is my idea. In theory, we could implement such a neural network with a bunch of unorganized neurons that fire with a certain level of randomization based on the input, nourishing the neurons that led to a good outcome and weakening the neurons that led to a bad one.
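Here is a toy sketch of what such a network could look like in code. It is only my own illustration of the principle under heavy assumptions: a pool of neurons, each tuned to one stimulus and one action, fires stochastically; the action is decided by a simple majority vote of the neurons that fired; and only those firing neurons are then "nourished" or "poisoned" according to a single global feedback signal. The network size, rates, and the toy task itself are made up for the demo.

```python
import random

random.seed(1)

# Each "neuron" responds to one stimulus and, when it fires, votes for one action.
N = 200
neurons = [{"stim": random.choice([0, 1]),
            "action": random.choice([0, 1]),
            "strength": 1.0} for _ in range(N)]

def act(stimulus):
    """Matching neurons fire stochastically; stronger neurons fire more often."""
    fired = [n for n in neurons
             if n["stim"] == stimulus
             and random.random() < n["strength"] / (1.0 + n["strength"])]
    votes_for_1 = sum(n["action"] for n in fired)
    action = 1 if votes_for_1 * 2 > len(fired) else 0
    return action, fired

def feedback(fired, good):
    """Only the neurons that fired receive the global 'wellbeing' signal:
    nourished after a good outcome, poisoned after a bad one."""
    for n in fired:
        n["strength"] *= 1.05 if good else 0.95

# Toy task: the organism is "better off" when its action simply echoes the stimulus.
for _ in range(5000):
    stimulus = random.choice([0, 1])
    action, fired = act(stimulus)
    feedback(fired, good=(action == stimulus))

tests = [random.choice([0, 1]) for _ in range(500)]
accuracy = sum(act(s)[0] == s for s in tests) / len(tests)
print(f"accuracy after training: {accuracy:.2f}")   # should end up well above chance (0.5)
```

Note that no neuron is ever told which action was "correct"; the wrongly-voting neurons simply fire on trials that tend to end badly, so over time they wither while the others grow.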
Delay network
Well, this still seems too similar to a regular neural network. There is one more ingredient that is missing from classical neural networks. The activation of neurons must be based on timing rather than mere summation: a neuron should activate only if it receives enough stimulation within a short interval, via different paths. Such a pattern would unleash the possibility of building multiple hierarchical abstractions and correlations, and could naturally implement 3D vision, eye-hand coordination, sound direction detection, and other behaviors that are widely used by any living being that needs to adapt to a real environment. This is also consistent with the Hebbian rule: "neurons that fire together, wire together".
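To illustrate the timing ingredient (again only a sketch, with assumed numbers), here is a coincidence-detecting "neuron" in the spirit of the classic delay-line model of sound direction detection: the same sound reaches the two ears at slightly different times, and in a bank of detectors that each add a different extra delay to one ear's path, only the detector whose delay makes the spikes arrive nearly together fires.

```python
WINDOW = 0.05     # ms: how close in time inputs must arrive to count as "firing together"
THRESHOLD = 2     # how many near-simultaneous inputs are needed to fire

def fires(arrival_times, window=WINDOW, threshold=THRESHOLD):
    """Activate only if at least `threshold` inputs arrive within `window` of each other."""
    times = sorted(arrival_times)
    for i, start in enumerate(times):
        if sum(1 for t in times[i:] if t - start <= window) >= threshold:
            return True
    return False

# The same sound reaches the left ear first and the right ear 0.3 ms later.
left_ear_spike, right_ear_spike = 0.5, 0.8   # arrival times in ms

# Each detector adds a different extra conduction delay to the left ear's path;
# only the detector whose delay cancels the interaural time difference fires.
for extra_delay in (0.0, 0.1, 0.2, 0.3, 0.4):
    inputs = [left_ear_spike + extra_delay, right_ear_spike]
    print(f"detector with {extra_delay:.1f} ms delay line fires: {fires(inputs)}")
```

Which detector fires thus encodes the direction of the sound, without any summation of weights at all.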
I don't have a proof of concept yet; I rarely manage to allocate the time to invest in this idea. Maybe, someday, I or someone else will make it work.
Dear reader, please leave a message if you exist! ;) Also, please share this article if you find it interesting. Thank you.