ARTIFICIAL INTELLIGENCE - HARBINGER
OF A NEW ERA

Naeem Ahmad

 

A variety of mental activities involve ‘intelligence.’ Doing arithmetic, operating a machine or understanding language all require various degrees of intelligence. In the recent past, many computing machines have been invented that can perform all these tasks. A question naturally arises: “If a machine can do arithmetic, calculate accurately and do other jobs that require skill and dexterity, can we ascribe ‘intelligence’ to it?” Michael Scriven explains the nature of the problem:

“An example of this arises in connection with the word ‘intelligence’. One can well imagine a man whose work lies largely with one of the great electronic computers coming to apply this word to it. He often makes mistakes: it is faultless. His memory for figures is limited: it has an enormous storage capacity. He is intelligent, yet the machine is better at the job. At first as slang, then seriously, these machines will be called intelligent. A means for comparing the intelligence of different machines will perhaps be devised: connected with their speed and accuracy of working, rather than mere capacity; perhaps also with their versatility. As computers come to be used less for performing particular calculations than for solving complete problems, the notion of consulting a computer, rather than using one, will grow. In various other ways usage will reflect the increasing tendency to regard a computer as a specialist par excellence. Then one day a man may ask, ‘Can machines ever be really intelligent?’”[1]

No doubt, in the beginning computers were little better than such mechanical devices as wind-up toys, puppets and music boxes. But over the past few decades computer technology has made such remarkable progress that the claim that the digital computer will someday match, or rather surpass, the intellectual abilities of the human mind is proving true. Many people liken computer data to human knowledge, the process of feeding the computer to the process of human learning, and the computer’s operation of the programme to the stream of human consciousness.

The latest computer systems can diagnose diseases, plan the synthesis of complex chemicals, solve differential equations in symbolic form, analyse electronic circuits, understand a limited amount of human speech and natural language text, and write small computer programmes to meet formal specifications. We might say that such systems possess ‘intelligence’. The question naturally arises: “Does the machine think?” or “Does it merely simulate human thinking?” This question is not a new one. The seventeenth-century philosopher René Descartes was also confronted with this problem. He believed in the duality of Mind and Matter, or of Thought and Extension. In the realm of Extension, laws were fixed once and for all; everything was predetermined and ‘tied up’ in the universal chain of cause and effect. In the realm of Thought, on the contrary, there was freedom and creativity, not mechanism and determinism. For Descartes the two substances were diametrically opposed to each other, yet he believed that they interact in the most mysterious and subtle manner. In every human action the incorporeal mind and the corporeal body interact with and influence each other. The question arose: “Where do the two meet?” Descartes referred to the pineal gland as the point of contact between mind and body, yet he was not satisfied with this solution and, in a letter to Queen Christina, confessed his inability to solve the problem.

The same Cartesian problem is revived, of course with greater intensity, by the advent of Artificial Intelligence. It can be restated against the background of computer technology as follows: “Does the machine have consciousness?” or “Is it capable of consciousness?” When we use the term ‘consciousness’ we imply all those attributes which are associated with life, such as thinking, willing, learning, remembering, loving, etc.

Our immediate answer to this question is that a robot, despite the maximum degree of perfection, cannot be conscious, nor can it be capable of consciousness.

A little reflection will reveal that the problem is not as simple as it appears to be. ‘Conscious’ is a term which is applied to man and other highly evolved species, but one feels hesitant to apply it to some lower forms of life such as plants, the amoeba or the earthworm. Even in the case of a human being, the term cannot be used in an absolute sense. The child becomes conscious at some particular stage during his development from the unconscious germ-plasm. Again, I have only one way to establish that other people have minds, and that is on the analogy of my own self. I observe the outer behaviour of a man, compare it with my own, and conclude that he too has a mind like mine. The robot that emulates the behaviour of humans, despite all similarities of observable behaviour, cannot be regarded as having mind or life. Further, it is quite evident that observable outer behaviour does not necessarily imply the presence of mind. A person can be absolutely paralysed so far as his outward behaviour is concerned, yet may not have lost consciousness. On the other hand, a person could be turned into a robot by thoroughly anaesthetizing him and fixing tiny radio-controlled devices to the ends of his afferent nerves. The outward behaviour of this man will be similar to that of any other human being, but it will not imply his consciousness; it will have become mechanical behaviour controlled from a distance. If the outward behaviour of a living human being can be controlled mechanically, not by his consciousness but by some external agency, can we not regard the mechanical behaviour of a robot as ‘intelligent’? Where do the mechanical and the material end, and where do the creative, the free and the living begin? It is quite clear that no hard and fast line of cleavage can be drawn. The Cartesian problem becomes even more perplexing.

Even if we ascribe intelligence, in some sense, to machines, we will not treat them on a par with living beings. The machine can emulate human behaviour par excellence, yet it will differ, at least in one important respect, from humans: machines cannot procreate or duplicate themselves. It is quite interesting to note that, according to some thinkers, even this difference does not matter at all:

“When man looks at the electronic computer and sees one supposedly unique human quality after another taken away from him by the machine, he may fall back upon a major distinction between animal and machine and want to say, ‘Well, at least I can reproduce my own kind. I can father a human child.’ But now machines can, in a sense, reproduce their own kind. That is, they can create new “organisms” like themselves out of parts that can be obtained by them from their environment and utilized by other machines operating under instructions supplied by the “parent” device. But the animal uses food and a highly complex series of chemical transformations, while the machine uses mechanical parts, such as wires, batteries, photoelectric cells, and so on. Yet it is possible for a machine so programmed and with access to necessary material to construct another. Moreover, simple machines can be used to design more complex ones; the Remington-Rand Corporation of New York used Univac I and II in the design of Univac III, for example.”[2]

Some philosophers subscribe to the view that it is possible to manufacture a computer that is conscious or capable of consciousness. Douglas R. Hofstadter of Indiana University believes that a time will come when computer hardware and human software will combine and make it possible for the machine to think, create and feel. Thus the computer may become capable of reflecting upon its own operations, i.e., it may become self-conscious. A. M. Turing, in his article “Computing Machinery and Intelligence”,[3] examined and rejected a number of objections that could be put forward in support of the contrary view that machines cannot think. Turing says, “I believe that at the end of the century the use of the words and general educated opinion will have altered so much that one will be able to speak of machines thinking without expecting to be contradicted.”[4]

Does the machine have consciousness or not? Is its thinking creative, or does it merely mimic human behaviour? These questions could be discussed endlessly. But one thing is incontrovertible: the computer has brought about a revolution which has changed the whole intellectual scene. It presents modern man with far-reaching economic, philosophic and social problems. According to a recent report by the National Research Council (America), A.I. “would affect the circumstances of human life profoundly. It would surely create a new economics, a new sociology and a new history.”[5]

Thus a study of artificial intelligence has become necessary not only for other disciplines but also for philosophy. Aaron Sloman says:

“Within a few years, if there remain any philosophers who are not familiar with some of the main developments in artificial intelligence, it will be fair to accuse them of professional incompetence, and that to teach courses in philosophy of mind, epistemology, aesthetics, philosophy of science, philosophy of language, ethics, metaphysics, and other main areas of philosophy, without discussing the relevant aspects of artificial intelligence will be as irresponsible as giving a degree course in physics which includes no quantum theory.”[6]

Finally, it seems appropriate to point out a few limitations of the computer:

1. The most difficult task for the thinking machine is to simulate commonsense.

“Probably the most telling criticism of current work in artificial intelligence is that it has not yet been successful in modelling what is called commonsense.”

“One difficulty in simulating commonsense is that a programme must link perception, reasoning and action simultaneously, because ultimately the intelligent use of a concept depends on all three domains.”[7]

2. Reitman (1965) has pointed out that the human mind, while solving a problem, is not as rigid as the computer. The human problem solver is quite distractible, both by external stimuli and by ideas unrelated to the problem he is working on. In other words, the computer programme works on one thing at a time, while the human works simultaneously on several things, either productively or unproductively, within a given period.

3. The computer typically has perfect access to previous information, while humans lose information over time. An enormous capacity for storing memories is useful for the computer, but would be a source of great torture in human life. Certain irrelevant events we ought to forget, or else life would become intolerable. The computer cannot unlearn and forget in that way. We can say that it does not have an Unconscious in the Freudian sense.

4. Computer technology, instead of alleviating human suffering, may add to man's misery and alienation. It is quite possible that thinking machines will assume an independent role and make decisions which bring humanity to the brink of total destruction. Machines that can learn and decide are not obliged to be subservient to humanity; they may turn out to be hostile to it.


NOTES & REFERENCES

[1] Michael Scriven, “The Mechanical Concept of Mind”, Mind, Vol. LXII, No. 246 (1953).

[2] Corinne Jacker, Man, Memory and Machines, Dell, New York, 1966, pp. 69-70.

[3] See Minds and Machines, edited by Alan Ross Anderson, Englewood Cliffs, N.J., 1964.

[4] Ibid., p. 14.

[5] Stanley N. Wellborn, “Machines that Think”, Economic Impact (No. 48), A Quarterly Review of World Economics, USA.

[6] Aaron Sloman, The Computer Revolution in Philosophy, The Harvester Press, Sussex, 1978, p. 5.

[7] David L. Waltz, “Artificial Intelligence”, Scientific American, Oct. 1982.