11 Mar 2012
The purpose of emotions
In Disney and Pixar films, non-humans are sentient beings. In Disney films these non-humans are typically animals; in Pixar films they are robots, toys, cars, or monsters. These beings, like the toys in Toy Story, possess human levels of intelligence although they are not human, and sometimes they behave more humanly than we do. How is that possible?
They are social beings: toys care for other toys, robots for other robots, and monsters for other monsters. They are guided by emotions and conscious decisions. They are conscious of themselves, i.e. they are not merely toys, robots or monsters; they are aware that they are (just) toys, robots or monsters. And they are willing to sacrifice themselves for their peers and their “purpose”. The toys in Toy Story have a main purpose or primary objective for which they were made: to be played with by kids. Walter Isaacson writes in his biography of Steve Jobs:
The idea that John Lasseter pitched was called ‘Toy Story’. It sprang from a belief, which he and Jobs shared, that products have an essence to them, a purpose for which they were made. If the object were to have feelings, these would be based on its desire to fulfill its essence. The purpose of glass, for example, is to hold water; if it had feelings, it would be happy when full and sad when empty. The essence of a computer screen is to interface with a human. The essence of a unicycle is to be ridden in a circus. As for toys, their purpose is to be played with by kids, and thus their existential fear is of being discarded or upstaged by newer toys. So a buddy movie pairing an old favorite toy with a shiny new one would have an essential drama to it, especially when the action revolved around the toys’ being separated from their kid. The original treatment began, ‘Everyone has had the traumatic childhood experience of losing a toy. Our story takes the toy’s point of view as he loses and tries to regain the single thing most important to him: to be played with by children. This is the reason for the existence of all toys. It is the emotional foundation of their existence.’
The idea is that if an object were to have feelings, they would be based on its desire to fulfill its main purpose or primary objective. The purpose of the robots in WALL-E is to clean up and evaluate the environment: WALL-E was built to collect garbage, while the task of EVE is to evaluate vegetation. The purpose of the cars in Cars is to drive on roads and to win races. The purpose of the toys in Toy Story is simply to be played with. The purpose of the monsters in Monsters, Inc. is to arouse emotions in kids (i.e. to frighten them or make them laugh).
This idea can be applied to agent-oriented software engineering as well: if an agent were to have feelings, then these feelings should be based on its desire to fulfill its main purpose or primary objective. If we equip the agent with the right emotions, we can be sure it does everything to fulfill its main purpose, while at the same time leaving enough room for decisions which cannot be specified in advance. This trade-off is a way to reconcile emergence and engineering: on the one hand, the agent can do whatever it likes as long as it does not violate the primary objective, which means that unpredictable things can occur and emerge during the interaction of agent and environment. On the other hand, the agent has to follow the primary objective, which guarantees its purpose and function. It is a trade-off between purpose and autonomy, or force and freedom. And it is a solution to the major problem in agent-oriented software engineering: “I have a Multi-Agent System, but what is its purpose and function?” If an agent decides itself what it needs to do, how can we make sure that it does something useful or something we want it to do? Genes solved this problem long ago: their natural blueprint specifies a built-in control system for every sentient being.
The key is to equip the agent with the right emotions, the same control mechanism that genes use to control their bodies. If the agent does things that feel good and avoids things that feel bad, like we do, then the agent should of course feel bad if the primary objective or prime directive is missed, and feel good if it is fulfilled. That’s all in principle. The concrete implementation depends on the directive and the architecture. Animals have the directive to (a) survive long enough to (b) reproduce. Point (a) means getting enough food and water: all animals crave the building blocks of life, sugar, fat and water. In this sense, a robot which depends on energy to survive would long for energy. A robot with the directive to explore the world would be very curious: it would crave new information, insights and ideas, and it would try to explore new regions, new worlds, and new horizons.
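A minimal sketch of this mechanism in Python (the class, its methods and the example states are illustrative assumptions, not an existing framework): the agent’s “feeling” is the change in how well its primary objective is fulfilled, and over time it prefers actions that have felt good.

```python
import random

class EmotionalAgent:
    """An agent whose 'feelings' are tied to its primary objective."""

    def __init__(self, objective, actions):
        self.objective = objective   # maps a world state to fulfillment in [0, 1]
        self.actions = actions
        self.valence = {a: 0.0 for a in actions}  # learned feeling per action
        self.last_fulfillment = 0.0

    def choose_action(self, explore=0.1):
        # Mostly follow the feelings, but keep some freedom (emergence):
        # occasionally try an action regardless of how it felt before.
        if random.random() < explore:
            return random.choice(self.actions)
        return max(self.actions, key=lambda a: self.valence[a])

    def feel(self, state, action):
        # The feeling is the change in fulfillment of the primary objective:
        # positive if the directive is better fulfilled now, negative otherwise.
        fulfillment = self.objective(state)
        feeling = fulfillment - self.last_fulfillment
        self.last_fulfillment = fulfillment
        # Remember how the action felt (simple exponential average).
        self.valence[action] += 0.5 * (feeling - self.valence[action])
        return feeling

# A robot which depends on energy to survive "longs for" energy:
robot = EmotionalAgent(objective=lambda s: s["energy"],
                       actions=["recharge", "wander"])
state = {"energy": 0.2}
for _ in range(5):
    action = robot.choose_action()
    delta = 0.3 if action == "recharge" else -0.1
    state["energy"] = min(1.0, max(0.0, state["energy"] + delta))
    robot.feel(state, action)
print(robot.valence)  # typically, "recharge" ends up feeling better than "wander"
```

The `explore` parameter is the “freedom” half of the trade-off: most of the time the agent follows its feelings, but occasionally it tries something unforeseen.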
How the agent or robot implements the directive depends on the cognitive architecture. Using the subsumption architecture from Rodney Brooks, the primary objective of a robot or agent may be to explore the world, i.e. to look for unknown places. Below this layer there would be the directive to “wander around”. At the bottom there would be the directive “avoid collision with objects”. While the agent moves around, it constantly checks the lowest directive (“Don’t collide with objects”). If it is fulfilled, it checks the directive on top of it and seeks to fulfill it (“Wander around”). If this is also fulfilled, it tries to fulfill the top directive (“Explore the world”, i.e. “Look for unknown places”).
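A much-simplified sketch of such a layered controller (Brooks’s original architecture is an asynchronous network of state machines; this sequential loop only illustrates the priority idea, with the three directives named above):

```python
class Behavior:
    """One layer of the controller: a trigger and an action."""
    def __init__(self, name, needed, act):
        self.name = name
        self.needed = needed  # does this directive still demand attention?
        self.act = act

class LayeredController:
    """Layers ordered bottom-up; the lowest unsatisfied directive wins."""
    def __init__(self, layers):
        self.layers = layers

    def step(self, state):
        # Check the lowest directive first; only if it is fulfilled
        # does control pass to the directive on top of it.
        for layer in self.layers:
            if layer.needed(state):
                return f"{layer.name}: {layer.act(state)}"
        return "idle"

controller = LayeredController([
    Behavior("avoid collision", lambda s: s["obstacle_near"],
             lambda s: "turn away"),
    Behavior("wander around", lambda s: not s["moving"],
             lambda s: "start moving in a random direction"),
    Behavior("explore the world", lambda s: not s["in_unknown_place"],
             lambda s: "head for an unknown place"),
])

print(controller.step({"obstacle_near": True, "moving": True,
                       "in_unknown_place": False}))
# -> "avoid collision: turn away" (the lowest directive preempts all others)
```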
If we consider the mind as a society of agents, then emotions are not primarily distinct agents, but rather the same agents activated and organized in different ways. As Minsky argues in his book “The Emotion Machine”, emotions, intuitions, and feelings are not distinct things, but different ways of thinking. Emotions act like a jury or an advisory system which says what is good or bad. For threats, for example, it is important to be ready for action in order to react fast enough. This can be realized by a threat-level dependent advisory system, i.e. a system which exhibits certain levels of alertness or readiness for action, as in a sports team or a military system. In a sports team the coach plays the role of regulating emotions when he wakes the team up in case of a threat and calms it down when it is too agitated. A military system usually has many kinds of alerts (from a slightly increased level of attention to the famous “red alert”) which prepare the system for action.
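A toy version of such a threat-level dependent advisory system might look like this (the levels and thresholds are invented for illustration):

```python
from enum import IntEnum

class Alert(IntEnum):
    CALM = 0       # routine operation
    ATTENTIVE = 1  # slightly increased level of attention
    ALARMED = 2    # ready for immediate action
    RED_ALERT = 3  # all resources devoted to the threat

def advise(threat_level):
    """Map a perceived threat level in [0, 1] to a readiness state.

    The advisory system does not act itself; like an emotion, it
    biases the whole system toward a certain way of acting.
    """
    if threat_level < 0.2:
        return Alert.CALM
    if threat_level < 0.5:
        return Alert.ATTENTIVE
    if threat_level < 0.8:
        return Alert.ALARMED
    return Alert.RED_ALERT

for level in (0.1, 0.4, 0.9):
    print(level, advise(level).name)
```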
- A single agent can follow an objective if it is guided by feelings and emotions which are based on its desire to fulfill its objective.
- A group of agents can follow an objective if all of them are guided by a central principle, for example by being part of an organization, group or team. In such an organization every agent plays a certain role, which is in turn shaped by the overall objective of the organization. The group as a whole can be subject to “emotions” if it has a “built-in” jury or advisory system which encourages certain actions while suppressing others (see the sketch after this list).
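A minimal sketch of such a group-level jury (the agents, actions and scoring rule are invented for illustration): it scores each proposed action against the group objective, encouraging some and suppressing others.

```python
def jury(proposals, objective_score):
    """Score each agent's proposed action against the group objective.

    Proposals with positive scores are encouraged, the rest suppressed;
    the jury plays the role of a group-level "emotion".
    """
    encouraged, suppressed = [], []
    for agent, action in proposals.items():
        bucket = encouraged if objective_score(action) > 0 else suppressed
        bucket.append((agent, action))
    return encouraged, suppressed

# Hypothetical example: a cleanup team whose objective is to collect garbage.
proposals = {"wall-e": "collect garbage", "m-o": "clean dirt",
             "go-4": "chase intruder"}
score = lambda action: 1 if ("clean" in action or "collect" in action) else -1
encouraged, suppressed = jury(proposals, score)
print("encouraged:", encouraged)
print("suppressed:", suppressed)
```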
Thus if the feelings of agents are directed towards the fulfillment of their primary directives, we can naturally reconcile engineering and emergence, force and freedom, or purpose and autonomy, just as living beings have done for millions of years. Emotions are the built-in drive to fulfill the underlying primary directive which specifies the purpose of the agent. Their presence ensures compliance with the directives specified in the blueprint of the agent; they guide the agent in the direction specified by that blueprint. And they are good at it, because they have done it successfully in natural, biological systems for millions of years.
This is the real, overall purpose of emotions: to guarantee the fulfillment of the primary directive (for biological organisms this means survive and reproduce). The specific purpose of this ancient control system is manifold. If we consider the belief-desire-intention model or the perceive-reason-action cycle in detail, the purpose of emotions is..
- ..to influence our desires and decisions. We don’t have to think about something in order to make the right decision. Emotions see to it that we do the right thing. They determine our motivations, desires and reasons.
- ..to influence our beliefs and color our life. They influence how we perceive things. We feel good when we experience positive emotions and bad when we experience negative ones. We tend to remember things that triggered strong emotions much better than those that did not.
- ..to influence our intentions and control our life. They influence how we act. We tend to do things that feel good and avoid things that don’t. The purpose of good feelings is to tell us what is good for us, what we should pursue, and what we should do. The purpose of bad feelings is to tell us what is bad for us, what we should avoid, and what we should not do. (A toy sketch of this threefold influence follows below.)
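In terms of the belief-desire-intention model, the three influences could be sketched as follows (a toy perceive-reason-act loop, not a full BDI implementation; the feeling signal is assumed to be given):

```python
def bdi_step(percept, feeling, beliefs, desires):
    """One toy perceive-reason-act cycle modulated by a feeling in [-1, 1]."""
    # Beliefs: emotionally charged percepts are remembered more strongly.
    beliefs[percept] = beliefs.get(percept, 0.0) + abs(feeling)
    # Desires: good feelings raise the desirability of what we perceive,
    # bad feelings lower it.
    desires[percept] = desires.get(percept, 0.0) + feeling
    # Intention: commit to whatever currently feels most desirable.
    return max(desires, key=desires.get)

beliefs, desires = {}, {}
for percept, feeling in [("food", 0.8), ("predator", -0.9), ("food", 0.5)]:
    intention = bdi_step(percept, feeling, beliefs, desires)
    print(f"perceived {percept}, intention: pursue {intention}")
print("memory strength:", beliefs)
```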
From an evolutionary perspective, emotions are an adaptation of goal-oriented sentient animals trying to survive in a fast-changing, challenging and complex environment. Bad feelings and negative emotions in particular are an adaptation to harsh environments with frequently bad conditions. One can consider consciousness as an adaptation to these conditions, too, one that is even better. And one that is not restricted to humans, as we begin to realize. Whatever form non-human or super-human intelligence takes, it is clear that in the near future humans will no longer be the only sentient, conscious beings in the universe.
Summary
To sum it up: all living beings are sentient beings. This means they have feelings and emotions. These feelings are directed towards the fulfillment of the primary directive, i.e. to survive and reproduce, which is the purpose specified by their genes. The purpose of emotions is to guarantee the fulfillment of the primary directive. Although living beings are just survival vehicles engineered by their genes in order to replicate themselves, humans are more than that: they strive and long for more. If we create non-human sentient beings, and we want to solve the fundamental problem of agent-oriented engineering, i.e. to bridge the gap between autonomy and purpose, then we should endow them with control systems which act like emotions, too. If we equip them with feelings and emotions, then these feelings should be directed towards the fulfillment of their primary directive. This solves the problem of agent-oriented engineering, but creates another one: we have created a sentient being which has its own rights and feelings.
References
* Marvin Minsky, The Society of Mind, Simon & Schuster, 1988
* Marvin Minsky, The Emotion Machine, Simon & Schuster, 2006
* Rodney Brooks, “A Robust Layered Control System for a Mobile Robot”, IEEE Journal of Robotics and Automation, 1986
* Walter Isaacson, Steve Jobs, Simon & Schuster, 2011
(The picture of EVE, the Extraterrestrial Vegetation Evaluator robot, is a low-resolution screenshot from the highly recommendable Pixar film WALL-E.)