|Artificial Intelligence, Theory of Mind, and Systems Theory|
This essay discusses artificial intelligence systems and asks what exactly constitutes a mind.
Traditional computer programs are deterministic: if you enter the same input, you always get the same output. Their operation is analogous to deductive reasoning. See www.davegentile.com/philosophy/knowledge
Artificial intelligence programs work on a different principle. They are modeled on the evolutionary process. Trial solutions are generated, and a selection of the best solutions is retained. Portions of the good solutions are swapped to produce new solutions, the best of those are selected in turn, and the process continues. This process is analogous to inductive reasoning (again, see the link above).
The system learns by trial and error; in effect, it learns by observing what works. A deductive program may be able to arrive at precisely the best answer to a problem. An AI program, like a human, may not find the perfect solution, but it can come up with a very good one. Often, when the resulting algorithms are analyzed by human programmers, they cannot make heads or tails of them and cannot say why they work. Complex external conditions have been mapped into a complex inner language. AI programs have been used to do things like redesign complex circuit boards, achieving greater efficiency than human engineers had. (see example) They can be applied in virtually any situation involving pattern recognition or optimization.
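The generate, select, and recombine loop described above can be sketched in a few lines of Python. This is a toy illustration, not any particular AI system: the objective function (counting 1-bits), the population size, and the mutation rate are all arbitrary assumptions chosen for the example.

```python
import random

random.seed(0)  # fixed seed so the run is reproducible

def fitness(bits):
    # Toy objective: count the 1-bits. A real problem would score
    # trial solutions against complex external criteria.
    return sum(bits)

def evolve(pop_size=20, length=16, generations=40):
    # Generate random trial solutions.
    pop = [[random.randint(0, 1) for _ in range(length)]
           for _ in range(pop_size)]
    for _ in range(generations):
        # Selection: retain the better half of the solutions.
        pop.sort(key=fitness, reverse=True)
        survivors = pop[:pop_size // 2]
        # Crossover: swap portions of good solutions to make new ones.
        children = []
        while len(survivors) + len(children) < pop_size:
            a, b = random.sample(survivors, 2)
            cut = random.randrange(1, length)
            child = a[:cut] + b[cut:]
            # Mutation: occasionally flip one bit at random.
            if random.random() < 0.1:
                i = random.randrange(length)
                child[i] ^= 1
            children.append(child)
        pop = survivors + children
    return max(pop, key=fitness)

best = evolve()
```

Because the best solutions always survive into the next generation, fitness never decreases; the population steadily climbs toward good solutions without anyone having deduced them in advance.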
This same process drives evolution. Complex external selection criteria are mapped onto the DNA, which stores information, in coded form, about the external reality. Again, the process involves reproduction, competition, and selection.
This would also seem to be the process that shapes our brains. Repeated stimulation of a given neural path causes it to physically alter so that the path becomes more easily available. Learning has occurred: information about the outside world has been mapped onto the complex structure of the brain, and pathways that are effective in dealing with the world are reinforced.
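As a rough sketch, this reinforcement of a much-used pathway can be modeled as a Hebbian-style weight update ("neurons that fire together wire together"). The numbers here are hypothetical; this is a cartoon of the idea, not a model of real neurons.

```python
def reinforce(weight, pre_active, post_active, rate=0.1):
    # Strengthen the connection only when both neurons fire together,
    # saturating toward a maximum strength of 1.0.
    if pre_active and post_active:
        weight += rate * (1.0 - weight)
    return weight

w = 0.2                      # a weak, little-used pathway
for _ in range(10):          # repeated stimulation of the same path
    w = reinforce(w, True, True)
# w has grown toward 1.0: the path is now "easily available"
```

Each repetition closes a fixed fraction of the remaining gap to full strength, so frequently used paths quickly become the brain's default routes.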
To function, these systems need some sort of random generator. In computers these are almost always pseudo-random numbers; we don't need to go into detail about how they work. In principle they could be based on quantum events and made truly random, but in either case the basic function is the same. In the human brain, neurons have recently been discovered that fire at sporadic intervals with no prior stimulation, whenever we are awake. These seem to be "awareness" neurons, and they could clearly represent a randomizing element within our brain. Many neurons fire when triggered by external stimuli or by other neurons, producing a predictable response. But the self-firing neurons introduce an element of internal uncertainty as their signals meet and mix with the other signals in the brain at any given time.
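A minimal example of a pseudo-random generator is the linear congruential method: a completely deterministic formula whose output nevertheless looks random. The constants below are one commonly used set; any generator of this kind would serve the same role in the systems described above.

```python
def lcg(seed, a=1664525, c=1013904223, m=2**32):
    """Linear congruential generator yielding values in [0, 1)."""
    state = seed
    while True:
        state = (a * state + c) % m
        yield state / m

gen = lcg(seed=42)
sample = [next(gen) for _ in range(5)]
# Same seed, same sequence: fully deterministic,
# yet the output is statistically random-looking.
```

This is why the text says the basic function is the same either way: whether the variation comes from a formula like this or from a truly random quantum source, the trial-and-error process it drives behaves identically.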
This brings us to the Turing test. The Turing test was proposed in answer to the question, "How would we know if a computer was thinking like a human?" The Turing test answered that if another human could not tell whether they were dealing with a human or a computer, then the computer must, in fact, be thinking like a human. This philosophy is known as "functionalism". It says that if all the observable external characteristics, or produced results, are the same, then we must infer that the internal experience is the same. After all, we can never really get inside another person's head to see how they think, or if they think at all. Instead, we infer that they think by observing that they behave in ways similar to the way we behave, and that we ourselves think. Most philosophers fall into this functionalist category, although some still maintain a philosophy of dualism, which says there is a complete separation between mind and body.
One notable exception is John Searle. You can see his “Chinese room” argument in some detail here. http://www.utm.edu/research/iep/c/chineser.htm
The Teaching Company also offers a good course on the subject, with John Searle as the lecturer.
A search will also find other information pro and con, including philosophy Ph.D. theses. (The philosophy of mind is a hot topic within philosophy these days.) For our purposes here we can just say the issue is not completely settled.
My personal view is that functionalism, with some minor qualifications, is correct. If the external observables indicate that something appears to think, that is, if it learns and adapts, then we must infer that it really does think. Perhaps it does not think in anything like the human way, but it thinks nonetheless. Under this definition we are surrounded by a world of thought. A computer can be made from anything, and any medium can serve as data storage. If functionalism is taken as correct, then anything that can function inductively should be seen as thinking. Evolution, insect colonies, societies, cities, AI systems, and brains would then all be thinking systems. The individual bits may be completely incapable of directing the whole, but the whole functions as an inductive thinking machine, or a mind.
Our minds can be described as arising from the collective action of neurons, purely in terms of the laws of physics and biology. And at the same time, we can be described as thinking beings with free will. Evolution can be described in terms of physical laws and random chance, but it can also be described as the operation of a mind (albeit one that thinks on much longer time scales than we do). The adaptive process of evolution is exactly analogous to inductive thought. This type of thinking goes by a few names; you can find it under "systems theory" and "emergence", among others. Two good books on the topic are:
Global Brain: The Evolution of Mass Mind from the Big Bang to the 21st Century
by Howard Bloom
Emergence: The Connected Lives of Ants, Brains, Cities, and Software
by Steven Johnson
Complex systems have emergent properties. Simple physical laws may be enough to exactly describe the workings of the individual pieces in a system. But as the number of pieces increases, the number of interrelations among the pieces grows combinatorially: pairwise links grow quadratically, and the number of possible joint configurations grows exponentially. Eventually the system cannot be modeled from fundamental principles because of the complexity. The whole system may then have properties that cannot be predicted from simple fundamental laws. These are known as emergent properties. Human thought, in this framework, is an emergent property of our complex brains.
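This growth can be made concrete. For n pieces, the number of pairwise links is n(n-1)/2, while for n two-state pieces the number of possible joint configurations is 2^n; a short calculation shows how quickly the latter outruns any hope of modeling the whole from its parts.

```python
from math import comb

for n in (10, 50, 100):
    pairs = comb(n, 2)   # pairwise interactions: n*(n-1)/2
    states = 2 ** n      # joint configurations of n two-state pieces
    print(f"n={n}: {pairs} pairs, {states} joint states")
```

At n=100 there are only 4950 pairwise links, but more joint configurations than atoms in the observable universe, which is why the system's behavior as a whole must be described in its own higher-level language.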
Some philosophers try to push this one additional step. I would say that emergent systems are unpredictable because we lack information; that is, they are subjectively uncertain. But some would claim that these complex systems are objectively uncertain. I respond to one such view in a letter here.
In any case, we need a different language to describe the properties of these systems at each level. To describe human action, we can talk about our thoughts, desires, and choices. Sociologists and psychologists can break human behavior down and say that all of it results from our biological genes and our learned cultural memes. In Freudian terms, the ego does not exist: what we call the ego is just the sum of the superego and the id. We can also describe human behavior as simple laws of physics and random events at work within the brain.
All three of these views are equally valid. They study either the whole system or break it down to lower levels. Each uses a different language to describe events at its level of interest, and the lower-level descriptions completely miss the emergent properties, but all are simultaneously true.
|Back to philosophy main page|