30 Sep 2009
AI and the Singularity
The technological singularity is a hypothetical future event in which humans succeed in creating an AI more intelligent than any human could ever be. Imagine a future where computers match or exceed our own intelligence, where problem solving is no longer limited by human thinking. I think it is possible to create a strong AI with common sense comparable to that of humans. The mind does not resist explanation, because it is formed by the world around us. We know what brains are made of, and we are able to do impressive things if we work together. If we can fly to the moon and back, it should be possible in principle to create a network of computers that mimics the thought patterns of people, although I doubt it will be easy to achieve. For as long as we have been aware of our ability to make machines that think, creating human-like intelligence has seemed just a small step away, yet on closer inspection it has always been completely out of reach.
So yes, we will be able to create an AI, but it will be hard. Despite all the difficulties, eventually we will be able to construct an AI which develops a form of self-consciousness similar to ours – though it will certainly be as confused about it as we are. It is doubtful that it will be much more intelligent or “radically super-intelligent” than any human (or group of humans) could ever be. There will be no “explosion of intelligence” as long as the AI speaks the same language, experiences the same worlds, and explores the same universe at the same resolution. Humans take 18-20 years of learning to grow up, and the pool of human knowledge available and accessible on the internet is quite large. Our thoughts may be contained in plain text, in simple “one-dimensional strings of code”, but they are not one-dimensional.
Intelligence depends on experience, learning, and training. Intelligent software or intelligent machines, like humans, will need to be trained in particular domains of knowledge and expertise. This takes time and deliberate attention to the kind of knowledge you want the machine to have. If being intelligent just means being flexible, adaptive, and attentive to change, is there any stronger form of intelligence than a “blank slate” with absolute formability and plasticity, as found in an agent which is constructed and formed by its environment?
Every one of us speaks at least one language, our native language. Some may speak more clearly and faster than others, but basically all people are able to formulate any idea they want to express. It is hard to invent radically new metaphors or words for daily life, because people have been doing this for hundreds of years. Therefore, even if an AI finally understands language, it will be difficult for it to come up with completely new metaphors, analogies, or words, for the same reason. If machines understand the world using the same methods humans do, then they will face similar difficulties and problems. But an AI of the future may have access to different languages, to different virtual worlds and computational universes that are completely distinct from ours. It may handle a greater amount of information at a greater speed, and store and combine a much larger number of patterns. Then it may indeed produce ideas which exceed the limits of our imagination.
If a technological singularity happens, it will probably be more a result of a major transition in evolution, or of the kind Koch and Tononi describe, and less a point of exponential increase in intelligence. According to Koch and Tononi, you need to be a single integrated entity with a large repertoire of highly differentiated states to be conscious: ideally a universe of states mapped and compressed onto a single point, which resembles the notion of a mathematical singularity. Before an agent is able to recognize itself, it must be able to roughly understand a world. During this adaptation process, it has to learn to represent all the basic structures and dynamic processes, until a complete internal universe of neural assemblies has been built up. In one world our self is a single agent, in the other world our self is a complete universe. Self-consciousness is the confusion which arises if we try to bring the one point of view into agreement with the other, the chaos which results if different worlds collide. In this sense, each of us is a small singularity, a small exceptional 1/x point where different universes meet and worlds collide.
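To recall what the 1/x metaphor refers to: in mathematics, a singularity is an exceptional point where a function ceases to be well-behaved. The function f(x) = 1/x is defined and smooth everywhere except at x = 0, where its values grow without bound:

$$f(x) = \frac{1}{x}, \qquad \lim_{x \to 0^{+}} \frac{1}{x} = +\infty, \qquad \lim_{x \to 0^{-}} \frac{1}{x} = -\infty$$

Everywhere around the point things behave regularly; at the point itself, the two sides diverge – which is the sense in which the self, in the picture above, is a single exceptional point where different descriptions meet.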
—
The neuron picture is a Flickr photo by Scott Ingram Photography.