17 Jan 2009
Human-level intelligence
Terrence Sejnowski on the Edge:
“By 2015 computer power will begin to approach the neural computation that occurs in brains. This does not mean we will be able to understand it, only that we can begin to approach the complexity of a brain on its own terms. Coupled with advances in large-scale recordings from neurons we should by then be in a position to crack many of the brain’s mysteries, such as how we learn and where memories reside. However, I would not expect a computer model of human level intelligence to emerge from these studies without other breakthroughs that cannot be predicted. Computers have become the new microscopes, allowing us to see behind the curtains. Without computers none of this would be possible, at least not in my lifetime.”
We have seen decades of AI and ALife research with little success. What is missing? Which of the following A-* fields do you consider important for modeling human-level intelligence:
- artificial emotions?
- artificial curiosity?
- artificial intuition?
- artificial insights/humor?
- artificial empathy?
- artificial will?
Salaam Alekum
None of those. Natural language is the key. Google translated “wsT jw AlmErkp” (وسط جو المعركة) as “central air battle”. The correct translation is “the climatic environmental battle”, or more freely, “the battle against climate and environment”. If you don’t know the difference between pouring concrete and fighting Israel, you are NOT intelligent.
The necessity of NL translation is therefore proven. Sufficiency? Look at the website.
Ian Parker
January 18th, 2009 at 1:51 pm
Sure, natural language is important: not only processing it, but also understanding it. Basically, if you have created a system that understands a 3D world, you have created a system that can understand natural language, since language is a description of the physical, real world. True understanding means grasping metaphors and analogies as well as inconsistencies and ambiguities.
jfromm
January 18th, 2009 at 7:59 pm
I believe the question presupposes the answer. In other words, “intelligence” or “human-level intelligence” puts the philosophical cart before the horse. We can identify many A-* or self-* properties that different agents exhibit to some degree or another. And only when we can tick off all the properties we identify as human are we prepared to call it intelligence. Yet from the perspective of the agent’s survival (or thriving, or virtual stability), such a lofty threshold need not be met, and by construction it cannot be met unless the agent is human. This is not to suggest that there is anything special about humans; rather the opposite: we are unprepared to give up the notion of human specialness, and we will construct whatever missing property is necessary to keep a putative agent out of the club of intelligence once it has cleared all the previous hurdles.
Rafe Furst
January 25th, 2009 at 1:46 am