I will call "competency living" the sum of behavioral competencies that allows an organism (in particular a human) to adapt to environment through learning.
"Competency living" will not cover simple bacterial strategies like "eat and multiply". However it is not limited to human level intelligence like abstract thinking. In my definition "competency living" will also cover complex animal behaviors like learning to jump over a rock, learning to climb, recognize good food, recognize dangers, flight maneuvers for birds, etc. Only in particular it also covers abstract human thinking.
I would speculate that abstract human intelligence is built on top of, and emulated by, "competency living". I would also estimate that the informational complexity of "competency living" is far greater than the additional complexity needed to implement abstract thinking. A consequence would be that abstract thinking is only incrementally more complex than "competency living", and that the biggest challenge in building an artificial intelligence is to emulate "competency living".
What is the difference between abstract logical thinking and the competencies necessary for living ("competency living")? The fact that we have had better success emulating the former on computers makes me guess that it is actually somewhat simpler to model abstract (mathematical/logical) thinking than to model the learning processes necessary for living. Why is this?
Actually, applying mathematics and logic usually reduces to applying simple rules to derive new propositions from axioms and theorems. There is also the far more complex creative thinking involved in imagining and proving new theorems, or in choosing the axiom set; however, that competency is so specialized and rare that we can consider it not to be a general feature of human intelligence. General problem solving is something we can consider a general human feature, but most of it is very far from formal analysis and much closer to applying heuristics based on experience. Likewise, determining which theorems are "useful" is a not-so-exact activity of matching processes in the environment against abstract patterns derived from the axioms, so it is more related to "competency living".
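To make this mechanical rule application concrete, here is a minimal sketch in Python of forward chaining with modus ponens; the propositions and rules are hypothetical, chosen only for illustration:

```python
# A minimal sketch of "applying simple rules to derive new propositions":
# forward chaining with modus ponens. The axioms below are made up.
axioms = {"it_rains", "it_rains -> ground_wet", "ground_wet -> shoes_wet"}

def forward_chain(facts):
    """Repeatedly apply modus ponens: from P and "P -> Q", derive Q."""
    derived = set(facts)
    changed = True
    while changed:
        changed = False
        for fact in list(derived):
            if " -> " in fact:
                premise, conclusion = fact.split(" -> ", 1)
                if premise in derived and conclusion not in derived:
                    derived.add(conclusion)
                    changed = True
    return derived

print(forward_chain(axioms))
# derives 'ground_wet' and 'shoes_wet' from the axioms
```

Note how closed this process is: every derived proposition is a finite consequence of the finite set we started from.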
Abstract thinking versus "competency living"
I would say that a first difference between abstract (logical/mathematical) thinking and "competency living" is the domain on which each operates.
Abstract thinking operates on a mostly finite set of propositions and rules. Even when we reason about infinity, the theorems and propositions are still finite combinations of a finite number of axioms and rules.
"Competency living", on the other hand, must create a model of a potentially infinite complexity world in a finite information space that a finite brain can provide.
While abstract thinking can, in theory, hold in the brain all the information about the formal system it uses, "competency living" can never hold all the information about the environment it operates in. Therefore, "competency living" must take a radically different approach to building a model of the environment in order to adapt by learning.
"Competency living" learning
First, even the information received from the senses is not 100% trustworthy. While a human may struggle for a while to remember a certain mathematical formula, he always has the means to look it up and verify it. "Competency living", on the other hand, is condemned to operate under uncertainty, if only because the received information is incomplete and most of it cannot be held in memory. An additional problem is that there are no irrefutable truths that can serve as an axiomatic base.
Mathematical truths are not truths about the environment; they are truths about things that respect a certain set of axioms. Whether the axioms can be applied to a given situation is a purely intuitive decision based on heuristics. The only conjecture we can take as a leap of faith is that the environment follows rules simple enough to be derived from observations, so we must "believe in a compressible Universe".
The only way to learn from experience is to challenge the falsifiability of various inferences. For example, you should believe that all swans are white as long as you have not found a black swan. You can actively try to falsify your assumption, but if you have no luck, you are doomed to keep believing that all swans are white.
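A toy sketch of this learning style, with made-up observations: a universal hypothesis stands only until a single counterexample falsifies it.

```python
# Learning by falsification: "all swans are white" survives only as
# long as no observed swan contradicts it. Observations are made up.
def still_holds(hypothesis, observations):
    """A universal claim stands until one observation falsifies it."""
    return all(hypothesis(obs) for obs in observations)

def all_swans_white(swan):
    return swan["color"] == "white"

seen = [{"color": "white"}, {"color": "white"}]
print(still_holds(all_swans_white, seen))   # True: no falsifier yet

seen.append({"color": "black"})             # the unlucky (or lucky) find
print(still_holds(all_swans_white, seen))   # False: hypothesis falsified
```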
We humans also have a scientific community that can provide counter-examples, but this does not fully solve the problem; it only extends the experimenting possibilities a bit. While nobody has ever seen a blue swan, it is not impossible (only unlikely) to discover one somewhere in a corner of the Earth. The situation is even worse for things that we cannot directly evaluate through the senses, like the subatomic structure of matter.
My hypothesis is that the human brain, along with the brains of other animals with complex behavior (like most mammals), is a machine for falsifying inferences combined with a machine for evaluating the likelihood of different predictions ("if you put your hand in the fire, you will likely get burned"). Both need to be based on a system of recalling patterns by fuzzy matching, because each fire is actually a little different. See also the familiarity heuristic.
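One way to picture the likelihood-evaluating half is similarity-weighted recall over past episodes, in the style of nearest-neighbor prediction. The features and episodes below are invented for illustration:

```python
# Likelihood by fuzzy recall: no two fires are identical, so past
# episodes are recalled by similarity, not by exact lookup.
import math

# Hypothetical episodes: (temperature_C, hand_distance_m) -> got_burned?
episodes = [((600, 0.05), True), ((500, 0.10), True),
            ((400, 0.50), False), ((300, 1.00), False)]

def similarity(a, b):
    """Fuzzy match: closer situations count more."""
    return 1.0 / (1.0 + math.dist(a, b))

def burn_likelihood(situation):
    weights = [similarity(situation, feats) for feats, _ in episodes]
    burned = sum(w for w, (_, hurt) in zip(weights, episodes) if hurt)
    return burned / sum(weights)

print(round(burn_likelihood((550, 0.07)), 2))  # ~0.79: close to past burns
```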
The important part of this compression scheme is to store only the relevant differences, the ones that explain departures from previously observed regularities. For example: all apples are tasty, except the ones with white filaments of mold.
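A minimal sketch of such a scheme, with invented attribute names: store the regularity as a default and keep only the exceptions that contradict it, rather than every individual case.

```python
# Compression as "default rule plus exceptions": the regularity
# ("apples are tasty") is one bit; only surprising differences are kept.
class DefaultWithExceptions:
    def __init__(self, default, exceptions):
        self.default = default              # the observed regularity
        self.exceptions = list(exceptions)  # only the relevant differences

    def predict(self, item):
        if any(exc(item) for exc in self.exceptions):
            return not self.default
        return self.default

tasty = DefaultWithExceptions(
    default=True,
    exceptions=[lambda apple: apple.get("white_filaments", False)])

print(tasty.predict({"color": "red"}))           # True: covered by the rule
print(tasty.predict({"white_filaments": True}))  # False: the moldy exception
```

The storage cost grows with the exceptions, not with the number of apples seen, which is what makes the scheme a (lossy) compression of experience.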
One interesting task would be to model our intuition about abstract logic (and, or, not, implication) in terms of the natural falsifiability phenomena associated with "competency living".
I guess this will be the most difficult part of creating an artificial intelligence: building a falsifiability machine based on a lossy compression algorithm that compresses experience of an infinitely complex environment.
Dear reader, please leave a message if you exist! ;) Also, please share this article if you find it interesting. Thank you.
"Competency living" will not cover simple bacterial strategies like "eat and multiply". However it is not limited to human level intelligence like abstract thinking. In my definition "competency living" will also cover complex animal behaviors like learning to jump over a rock, learning to climb, recognize good food, recognize dangers, flight maneuvers for birds, etc. Only in particular it also covers abstract human thinking.
I would speculate that abstract human intelligence is based and emulated on "competency living". Also, I would appreciate that the informational complexity of "competency living" is far greater than the additional needed for implementing abstract thinking. A consequence of this would be that abstract thinking is only gradually more complex than "competency living" and for making an artificial intelligence the biggest challenge is to emulate "competency living".
What is different between abstract logical thinking and the competencies that are necessary for living ("competency living")? The fact that we had better success to emulate the former on computers makes me to guess that it's actually a bit simpler to model abstract (mathematical/logical) thinking than to model the learning processes necessary for living. Why is this?
Actually, applying mathematics and logic is usually reduced to applying simple rules to determine new propositions from axioms and theorems. There is also a very complex creative thinking for imagining and proving new theorems, or choosing the axioms set, however this is a competency so specialized and rare that we can consider it not to be a general feature of human intelligence. General problem solving is something that we can consider as being a general feature of humans, however most of it is very far from a formal analysis and more close to applying heuristics based on experience. Also determining the "useful" theorems is a not-so-exact activity to find a match between processes from environment and various abstract patterns derived from axioms - so it is more related to "competency living".
Abstract thinking versus "competency living"
I would say that a first difference between abstract (like logical/mathematical) thinking and "competency living" is the domain on which they operate.
Abstract thinking operates on a mostly finite set of propositions and rules. Even when we assess infinity, the theorems and propositions are still a finite combination of a number of axioms and rules.
"Competency living", on the other hand, must create a model of a potentially infinite complexity world in a finite information space that a finite brain can provide.
While the abstract thinking can theoretically have all the information about the formal system that it is using in the brain, "competency living" can never have the whole information about the environment he operates in. Therefore, "competency living" must take a radically different approach to create a model of the environment in order to adapt by learning.
"Competency living" learning
First, even the information received from the sensors are not 100% trust-able. While a human can struggle for a while remembering a certain mathematical formula, he always have the means to finds it and verify it. On the other hand, "competency living" is condemned to operate under uncertainty, at least because the received information is initially incomplete and most of it cannot be held in memory. One additional problem is that there are no irrefutable truths that can be used as an axiomatic base.
Mathematical truths are not truths about environment, they are truths about things that respects a certain set of axioms. The fact that the axioms can be applied or not to a certain situation is a pure intuitive decision based on heuristics. The only conjecture that we can take as a leap of faith is that the environment follows simple enough rules that can be derived from observations, so we must "believe in a compressible Universe".
The only way for learning by experience is by using challenging the falsifiability of various inferences. For example you should believe that all swans are white as long as you don't find a black swan. You can actively engage in attempting to falsify your assumption, however if you don't have luck you are doomed to believe that all swans are white.
We, humans, also have a scientific community that can provide counter-examples, however this does not fully solve the problem, it just extends a bit the experimenting possibilities. While nobody ever sow a blue swan, it's not impossible (even unlikely) to discover one somewhere in a corner of the Earth. The situation is even worse for the things that we cannot directly evaluate through senses, like the subatomic structure of matter.
My hypothesis is that the human brain, along with the brain of other complex behavior animals like most mammals, is a machine of falsifiability inferences along with a machine of evaluating the likelihood of different predictions ("if you put the hand on fire, you will likely get burned"). Both need to be based on a system of recalling patterns based on a fuzzy matching, because fire is actually different each time. See also the familiarity_heuristic.
The important part of this compressing scheme is to compress only the relevant differences that explains differences from the previous observed regularities. For example all apples are tasty, except the ones that have white filaments from mold.
One interesting task would be a way to model our intuition regarding abstract logic (and, or, not, implications) in terms of natural falsifiability phenomenons associated with competency living.
I guess this will be the most difficult part for creating an artificial intelligence: how to create a falsifiability machine based on a lossy compressing algorithm to compress the experience in an infinitely complex environment.
Dear reader, please leave a message if you exist! ;) Also, please share this article if you find it interesting. Thank you.
Comments