We understand by a cybernetical machine an apparatus which performs a set of
operations according to a definite set of rules. Normally we "programme" a
machine: that is, we give it a set of instructions about what it is to do in
each eventuality; and we feed in the initial "information" on which the machine
is to perform its calculations. When we consider the possibility that the
mind might be a cybernetical mechanism we have such a model in view; we suppose
that the brain is composed of complicated neural circuits, and that the
information fed in by the senses is "processed" and acted upon or stored for
future use. If it is such a mechanism, then given the way in which it is
programmed -- the way in which it is "wired up" -- and the information which has
been fed into it, the response -- the "output" -- is determined, and could,
granted sufficient time, be calculated. Our idea of a machine is just this, that
its behaviour is completely determined by the way it is made and the incoming
"stimuli": there is no possibility of its acting on its own: given a certain
form of construction and a certain input of information, then it must act in a
certain specific way. We, however, shall be concerned not with what a machine
must do, but with what it can do. That is, instead of
considering the whole set of rules which together determine exactly what a
machine will do in given circumstances, we shall consider only an outline of
those rules, which will delimit the possible responses of the machine, but not
completely. The complete rules will determine the operations completely at every
stage; at every stage there will be a definite instruction, e.g., "If the number
is prime and greater than two add one and divide by two: if it is not prime,
divide by its smallest factor": we, however, will consider the possibility of
there being alternative instructions, e.g., "In a fraction you may divide top
and bottom by any number which is a factor of both numerator and
denominator". In thus relaxing the specification of our model, so that it
is no longer completely determinist, though still entirely mechanistic, we shall
be able to take into account a feature often proposed for mechanical models of
the mind, namely that they should contain a randomizing device. One could build
a machine where the choice between a number of alternatives was settled by, say,
the number of radium atoms to have disintegrated in a given container in the
past half-minute. It is prima facie plausible that our brains should be
liable to random effects: a cosmic ray might well be enough to trigger off a
neural impulse. But clearly in a machine a randomizing device could not be
introduced to choose any alternative whatsoever: it can only be permitted to
choose between a number of allowable alternatives. It is all right to add any
number chosen at random to both sides of an equation, but not to add one number
to one side and another to the other. It is all right to choose to prove one
theorem of Euclid rather than another, or to use one method rather than another,
but not to "prove" something which is not true, or to use a "method of proof"
which is not valid. Any randomizing devices must allow choices only between
those operations which will not lead to inconsistency: which is exactly what the
relaxed specification of our model specifies. Indeed, one might put it this way:
instead of considering what a completely determined machine must do, we
shall consider what a machine might be able to do if it had a randomizing device
that acted whenever there were two or more operations possible, none of which
could lead to inconsistency.
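A minimal sketch in Python may make the relaxed specification concrete (the fraction rule is the one quoted above; the code and its names are an illustration supplied here, not anything in the original argument). The randomizing device settles which step is taken, but only among allowable alternatives, none of which can lead to inconsistency:

import random

def allowable_moves(numerator, denominator):
    # The allowable alternatives: every factor greater than 1 which
    # divides both numerator and denominator.
    return [k for k in range(2, min(numerator, denominator) + 1)
            if numerator % k == 0 and denominator % k == 0]

def step(numerator, denominator):
    # One nondeterministic step: the "randomizing device" chooses,
    # but only between allowable alternatives, so no step can
    # change the value of the fraction.
    k = random.choice(allowable_moves(numerator, denominator))
    return numerator // k, denominator // k

n, d = 24, 36
while allowable_moves(n, d):
    n, d = step(n, d)
print(n, d)   # always 2 3, whichever factors the device picked

Different runs take different routes, but every route preserves the value of the fraction: the rules delimit the machine's possible responses without completely determining them.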
If such a machine were built to produce theorems about arithmetic (in many ways the simplest part of mathematics), it would have only a finite number of
components, and so there would be only a finite number of types of operation it
could do, and only a finite number of initial assumptions it could operate
on. Indeed, we can go further, and say that there would only be a
definite number of types of operation, and of initial assumptions, that
could be built into it. Machines are definite: anything which was indefinite or
infinite we should not count as a machine. Note that we say number of
types of operation, not number of operations. Given sufficient time, and
provided that it did not wear out, a machine could go on repeating an operation
indefinitely: it is merely that there can be only a definite number of different
sorts of operation it can perform.
If there are only a definite number of types of operation and initial
assumptions built into the system, we can represent them all by suitable symbols
written down on paper. We can parallel the operations by rules ("rules of
inference" or "axiom schemata") allowing us to go from one or more formulae (or
even from no formula at all) to another formula, and we can parallel the initial
assumptions (if any) by a set of initial formulae ("primitive propositions",
"postulates" or "axioms"). Once we have represented these on paper, we can
represent every single operation: all we need do is to give formulae
representing the situation before and after the operation, and note which rule
is being invoked. We can thus represent on paper any possible sequence of
operations the machine might perform. However long the machine went on
operating, we could, given enough time, paper and patience, write down an
analogue of the machine's operations. This analogue would in fact be a formal
proof: every operation of the machine is represented by the application of one
of the rules: and the conditions which determine for the machine whether an
operation can be performed in a certain situation, become, in our
representation, conditions which settle whether a rule can be applied to a
certain formula, i.e., formal conditions of applicability. Thus, construing our
rules as rules of inference, we shall have a proof-sequence of formulae,
each one being written down in virtue of some formal rule of inference having
been applied to some previous formula or formulae (except, of course, for the
initial formulae, which are given because they represent initial assumptions
built into the system). The conclusions it is possible for the machine to
produce as being true will therefore correspond to the theorems that can be
proved in the corresponding formal system. We now construct a Gödelian formula
in this formal system. This formula, provided the system is consistent, cannot be proved-in-the-system.
Therefore the machine cannot produce the corresponding formula as being true.
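In modern notation (an addition here, not part of the original text) the construction can be sketched in one line, writing $\mathrm{Prov}_F$ for the provability predicate of the formal system $F$ corresponding to the machine, and corner quotes for Gödel numbering:

$$G_F \;\leftrightarrow\; \lnot\,\mathrm{Prov}_F(\ulcorner G_F \urcorner)$$

If $F$ is consistent, $F$ cannot prove $G_F$; and $G_F$ asserts precisely its own unprovability.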
But we can see that the Gödelian formula is true: any rational being could
follow Gödel's argument, and convince himself that the Gödelian formula,
although unprovable-in-the-system, was nonetheless -- in fact, for that very
reason -- true. Now any mechanical model of the mind must include a mechanism
which can enunciate truths of arithmetic, because this is something which minds
can do: in fact, it is easy to produce mechanical models which will in many
respects produce truths of arithmetic far better than human beings can.
But in this one respect they cannot do so well: in that for every machine there
is a truth which it cannot produce as being true, but which a mind can. This
shows that a machine cannot be a complete and adequate model of the mind. It
cannot do everything that a mind can do, since however much it can do,
there is always something which it cannot do, and a mind can. This is not to say
that we cannot build a machine to simulate any desired piece of mind-like
behaviour: it is only that we cannot build a machine to simulate every
piece of mind-like behaviour. We can (or shall be able to one day) build
machines capable of reproducing bits of mind-like behaviour, and indeed of
outdoing the performances of human minds: but however good the machine is, and
however much better it can do in nearly all respects than a human mind
can, it always has this one weakness, this one thing which it cannot do, whereas
a mind can. The Gödelian formula is the Achilles' heel of the cybernetical
machine. And therefore we cannot hope ever to produce a machine that will be
able to do all that a mind can do: we can never, not even in principle, have a
mechanical model of the mind.
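The correspondence between machine and formal system appealed to here can be given a concrete, if toy, form. The sketch below borrows Hofstadter's MIU system purely as an illustration (the system and its Python rendering are assumptions of this note, not part of the original argument): a machine enumerating theorems is just a program closing its initial formulae under its rules of inference.

from collections import deque

AXIOMS = ["MI"]   # the initial formulae built into the machine

def rules(s):
    # The four rules of inference of the MIU system.
    if s.endswith("I"):
        yield s + "U"                       # rule 1: xI  -> xIU
    if s.startswith("M"):
        yield s + s[1:]                     # rule 2: Mx  -> Mxx
    for i in range(len(s) - 2):
        if s[i:i + 3] == "III":
            yield s[:i] + "U" + s[i + 3:]   # rule 3: III -> U
    for i in range(len(s) - 1):
        if s[i:i + 2] == "UU":
            yield s[:i] + s[i + 2:]         # rule 4: UU  -> (deleted)

def theorems(limit):
    # Enumerate theorems breadth-first: every formula yielded is
    # reached from the axioms by a finite proof-sequence.
    seen, queue = set(AXIOMS), deque(AXIOMS)
    while queue:
        s = queue.popleft()
        yield s
        for t in rules(s):
            if t not in seen and len(seen) < limit:
                seen.add(t)
                queue.append(t)

for theorem in theorems(limit=30):
    print(theorem)

However long it runs, the machine's output is exactly the theorems of the system; a formula outside that set ("MU", as it happens, in this system) it will never produce, however much time, paper and patience we allow it.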
This conclusion will be highly suspect to some people. They will object first
that we cannot have it both that a machine can simulate any piece
of mind-like behaviour, and that it cannot simulate every piece.
To some this will seem a contradiction: to them it is enough to point out that there is
no contradiction between the fact that for any natural number there can be
produced a greater number, and the fact that a number cannot be produced
greater than every number. We can use the same analogy also against those who,
finding a formula their first machine cannot produce as being true, concede that
that machine is indeed inadequate, but thereupon seek to construct a second,
more adequate, machine, in which the formula can be produced as being true. This
they can indeed do: but then the second machine will have a Gödelian formula all
of its own, constructed by applying Gödel's procedure to the formal system which
represents its (the second machine's) own, enlarged, scheme of operations. And
this formula the second machine will not be able to produce as being true, while
a mind will be able to see that it is true. And if now a third machine is
constructed, able to do what the second machine was unable to do, exactly the
same will happen: there will be yet a third formula, the Gödelian formula for
the formal system corresponding to the third machine's scheme of operations,
which the third machine is unable to produce as being true, while a mind will
still be able to see that it is true. And so it will go on. However complicated
a machine we construct, it will, if it is a machine, correspond to a formal
system, which in turn will be liable to the Gödel procedure for finding a
formula unprovable-in-that-system. This formula the machine will be unable to
produce as being true, although a mind can see that it is true. And so the
machine will still not be an adequate model of the mind. We are trying to
produce a model of the mind which is mechanical -- which is essentially
"dead" -- but the mind, being in fact "alive", can always go one better than any
formal, ossified, dead system can. Thanks to Gödel's theorem, the mind always
has the last word.
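The quantifier shift at work in the number analogy can be put in one line of standard notation (the symbols are an addition for clarity):

$$\forall n\,\exists m\,(m > n) \quad\text{is true, while}\quad \exists m\,\forall n\,(m > n) \quad\text{is false.}$$

Likewise, for every machine there is a truth it cannot produce, though there is no single truth on which every machine fails.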
A second objection will now be made. The procedure whereby the Gödelian
formula is constructed is a standard procedure -- only so could we be sure that a
Gödelian formula can be constructed for every formal system. But if it is a
standard procedure, then a machine should be able to be programmed to carry it
out too. We could construct a machine with the usual operations, and in addition
an operation of going through the Gödel procedure, and then producing the
conclusion of that procedure as being true; and then repeating the procedure,
and so on, as often as required. This would correspond to having a system with
an additional rule of inference which allowed one to add, as a theorem, the
Gödelian formula of the rest of the formal system, and then the Gödelian formula
of this new, strengthened formal system, and so on. It would be tantamount to
adding to the original formal system an infinite sequence of axioms, each the
Gödelian formula of the system hitherto obtained. Yet even so, the matter is not
settled: for the machine with a Gödelizing operator, as we might call it,
is a different machine from the machines without such an operator; and,
although the machine with the operator would be able to do those things in which
the machines without the operator were outclassed by a mind, yet we might expect
a mind, faced with a machine that possessed a Gödelizing operator, to take this
into account, and out-Gödel the new machine, Gödelizing operator and all. This
has, in fact, proved to be the case. Even if we adjoin to a formal system the
infinite set of axioms consisting of the successive Gödelian formulae, the
resulting system is still incomplete, and contains a formula which cannot be
proved-in-the-system, although a rational being can, standing outside the
system, see that it is true. We had expected this, for even if an infinite set of axioms were added, they would have to be specified by some finite rule or specification, and this further rule or specification could then be taken into account by a mind considering the enlarged formal system. In a sense, just because the mind has the last word, it can always pick a hole in any formal system presented to it as a model of its own workings. The mechanical model must be, in some sense, finite and
definite: and then the mind can always go one better.
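Schematically (the notation is an addition; the incompleteness of the limit system is Gödel's theorem applied once more), the tower of Gödelized systems looks like this:

$$F_0 = F, \qquad F_{n+1} = F_n \cup \{G_{F_n}\}, \qquad F_\omega = \bigcup_{n < \omega} F_n$$

Because the sequence of added axioms is given by a finite rule, $F_\omega$ is still a properly specified formal system, so Gödel's procedure applies to it in turn and yields a formula $G_{F_\omega}$ unprovable in $F_\omega$.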
This is the answer to one objection put forward by Turing. He argues that the limitations on the powers of a
machine do not amount to anything much. Although each individual machine is
incapable of getting the right answer to some questions, after all each
individual human being is fallible also: and in any case "our superiority can
only be felt on such an occasion in relation to the one machine over which we
have scored our petty triumph. There would be no question of triumphing
simultaneously over all machines." But this is not the point. We are not
discussing whether machines or minds are superior, but whether they are the
same. In some respects machines are undoubtedly superior to human minds; and the
question on which they are stumped is, admittedly, a rather niggling, even trivial, question. But it is enough, enough to show that the machine is not
the same as a mind. True, the machine can do many things that a human mind
cannot do: but if there is of necessity something that the machine cannot do,
though the mind can, then, however trivial the matter is, we cannot equate the
two, and cannot hope ever to have a mechanical model that will adequately
represent the mind. Nor does it signify that it is only an individual machine we
have triumphed over: for the triumph is not over only an individual
machine, but over any individual that anybody cares to specify -- in Latin
quivis or quilibet, not quidam -- and a mechanical
model of a mind must be an individual machine. Although it is true that any
particular "triumph" of a mind over a machine could be "trumped" by another
machine able to produce the answer the first machine could not produce, so that
"there is no question of triumphing simultaneously over all machines", yet this
is irrelevant. What is at issue is not the unequal contest between one mind and
all machines, but whether there could be any, single, machine that could do all
a mind can do. For the mechanist thesis to hold water, it must be possible, in
principle, to produce a model, a single model, which can do everything the mind
can do. It is like a game. The mechanist has
first turn. He produces a -- any, but only a definite
one -- mechanical model of the mind. I point to something that it cannot do,
but the mind can. The mechanist is free to modify his example, but each time he
does so, I am entitled to look for defects in the revised model. If the
mechanist can devise a model that I cannot find fault with, his thesis is
established: if he cannot, then it is not proven: and since -- as it turns out -- he
necessarily cannot, it is refuted. To succeed, he must be able to produce some
definite mechanical model of the mind -- any one he likes, but one he can specify,
and will stick to. But since he cannot, in principle cannot, produce any
mechanical model that is adequate, even though the point of failure is a minor
one, he is bound to fail, and mechanism must be false.
J. R. Lucas
"Minds, Machines and Gödel"
Philosophy 36 (1961)