But the mathematics is a simplification and idealization of real situations. The mathematics does not apply if the participant's life depends on having $100 and he has no pressing need for an additional $400. It does not apply if the participant is so frightened or bored by being on the quiz show that he would be prepared to pay $67 to be out of it. It applies only within narrowly defined circumstances. Perhaps the tendency of subjects to choose differently from the mathematicians is due in part, though not altogether, to a failure to grasp the mathematical presuppositions or to a failure to keep out considerations that mathematicians studiously block off.
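The point that expected-value calculations presuppose money maps linearly onto value can be made concrete with a toy computation. The dollar amounts and the utility function below are illustrative assumptions, not the actual quiz-show stakes under discussion:

```python
# Toy comparison of expected value vs. expected utility.
# All payoffs, probabilities, and the utility function are
# illustrative assumptions, not the quiz-show stakes in the text.

def expected_value(lottery):
    """lottery: list of (probability, payoff) pairs."""
    return sum(p * x for p, x in lottery)

def expected_utility(lottery, u):
    return sum(p * u(x) for p, x in lottery)

sure_thing = [(1.0, 100)]            # keep $100 for certain
gamble = [(0.5, 500), (0.5, 0)]      # coin flip for $500 or nothing

# By expected monetary value, the gamble wins: $250 > $100.
assert expected_value(gamble) > expected_value(sure_thing)

# But suppose the agent's life depends on having $100: utility jumps
# at that threshold, and extra money adds little beyond it.
def survival_utility(x):
    return 0 if x < 100 else 1 + 0.001 * (x - 100)

# Under that utility function, the sure $100 is the better choice,
# and the standard expected-value mathematics no longer applies.
assert (expected_utility(sure_thing, survival_utility)
        > expected_utility(gamble, survival_utility))
```

The utility function here is a deliberately extreme step function; any sufficiently concave function makes the same qualitative point.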

There is, however, much more to be said about the interpretation of Kahneman and Tversky's data. Suppose that instead of testing naive subjects, they tested mathematicians who had studied decision theory. Would not the results be different? Suppose, too, that before taking part in the experiments, subjects were required to read a couple of Kahneman and Tversky's papers. Would the results not be different? Of course they would be, but why? Presumably, because the subjects would then have satisfied themselves that the mathematics of decision theorists reflects intuition better than untutored impulse. But that is precisely to claim that the mathematics is a competence theory, that it does reflect carefully sifted intuition. (For present purposes, I assume that Kahneman and Tversky are applying the appropriate mathematics, though the matter is disputed by Cohen (1981).) Kahneman and Tversky (1982) go some distance toward recognizing all this:

> It is important to emphasize ... that the [psychological] value function is merely a convenient summary of a common pattern of choices and not a universal law.

Kahneman and Tversky have also gone some distance toward explaining erroneous decisions. Take, for example, the common gambler's fallacy, as manifested by betting on tosses of a coin. A particular case of the fallacy is that it is an advantage to bet heads after a run of tails. Kahneman and Tversky (1973) suggested that the fallacy can be explained if we suppose that the gambler knows that long runs of tails are unlikely but fails to take account of the fact that a coin has no memory. The naive gambler, then, is acting on a belief, in this case a true one, that makes his action intelligible. The irrationality lies in the failure to take account of other relevant facts. Notice, though, that the belief that rationalizes the naive gambler's decision is itself a mathematical one -- the probability of a particular outcome is the ratio of the number of event types favorable for an outcome to the number of all relevant event types. Though the gambler may not have quantified things so precisely, the mathematical law does make the intuition precise.
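The claim that a coin has no memory can be checked empirically. The sketch below estimates the probability of heads immediately after a run of three tails; the run length and sample size are arbitrary choices for illustration:

```python
import random

def heads_after_tails_run(n_flips=1_000_000, run_len=3, seed=42):
    """Estimate P(heads | the preceding run_len flips were all tails)."""
    rng = random.Random(seed)
    flips = [rng.choice("HT") for _ in range(n_flips)]
    # Collect every flip that follows a run of run_len tails.
    after_run = [
        flips[i]
        for i in range(run_len, n_flips)
        if all(f == "T" for f in flips[i - run_len:i])
    ]
    return after_run.count("H") / len(after_run)

# The estimate stays near 0.5: a run of tails does not make heads
# more likely, contrary to the gambler's fallacy.
p = heads_after_tails_run()
assert abs(p - 0.5) < 0.01
```

Nothing in the simulation conditions a flip on its predecessors, which is exactly the fact the naive gambler fails to take into account.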

Nevertheless, Tversky and Kahneman (1983) do seem to reject the idea that a mathematical ideal can be a psychologically useful competence theory:

> Indeed, the evidence does not seem to support a "truth plus error" model, which assumes a coherent system of beliefs that is perturbed by various sources of distortion and error. Hence we do not share Dennis Lindley's optimistic opinion that "inside every incoherent person there is a coherent one trying to get out," and we suspect that incoherence is more than skin deep. (p. 313)

This brings us back to the specter discussed in the last section, so I do not repeat here the remarks made there. Instead I would like to comment on what seems to be one of the main grounds for the judgment just cited. It is that "in cognition, as in perception, the same mechanisms produce both valid and invalid judgments" (Tversky and Kahneman 1983, p. 313).

Apart from any scruples we may have about the use of the word "mechanisms" in this connection, there is something unsatisfactory about the last statement. Tversky and Kahneman are drawing a parallel with visual illusions and they observe, correctly, that visual illusions are the product of a perfectly running visual system. But the analogy is in many ways misleading. Just imagine for a moment that what vision delivers in the first instance is a set of uninterpreted, well-formed formulas in a language, a position I am inclined to adopt because of certain findings in visual perception (see Niall (unpublished)). If that position is correct, the parallel Tversky and Kahneman seek to establish cannot be constructed. The reason is that the output of an inference in everyday reasoning is an interpreted sentence. There is nothing wrong with the bent appearance of a straight stick partly submerged in water; it is the interpreted sentence, "The stick is bent," that is unsatisfactory. There is something deeply wrong with the gambler's conclusion that, because runs of tails are rare, the probability of a head increases after such a run.

How does this difference make a difference? Well, it would be odd if the same set of implicators (in my language), properly applied (as Tversky and Kahneman allow), yielded both valid and invalid inferences. Something is needed to explain the variation. To begin, note that the same set of basic implicators must be available to Kahneman and Tversky on the one hand and to their subjects on the other. How, then, could Kahneman and Tversky use such untrustworthy devices to attain such certain results as the mathematics against which they interpret their subjects' responses? Any answer I might offer is going to be far more uncertain than the mathematics in question. But the existence of that mathematics and of Kahneman and Tversky's access to it undermines their rejection of the mathematics as the appropriate competence theory.

In the light of that general stance I can offer one conjecture. In the first place, there are many implicators, and people seem to be able to add to the set that nature has endowed them with. That was the lesson of an earlier discussion in this chapter. It could be that an individual or group of individuals could add a faulty implicator, as the gambler's fallacy suggests. It could be that the implicators involved in decision under uncertainty are remote from the basic ones and require a long train of intermediate inferences in justification of their validity. My conjecture is that the basic set with which we start out comprises only valid implicators and that their operation is infallible in clear cases. It is surely this that enables mathematicians to overcome the impulses they share with the naive gambler -- start out from clear and compelling intuitions and build up the system called decision theory. Mathematicians must find sure footing somewhere; I suggest that it is in the basic implicators. This is not to say that the output of the basic implicators is imposed on the mind, that judgment is forced by them. Rather, the idea is that their output is inevitably presented to the mind whenever the conditions for their operation are satisfied. Judgment seems to be another matter.

This all leads to the conclusion that we have seen no good grounds, theoretical or empirical, to reject the thesis that (ideal) logic supplies a competence theory for the psychology of human reasoning.

John Macnamara

*A Border Dispute: The Place of Logic in Psychology*

## 1 comment:

Jim: I share your criticism of the "differing value of payout" point. As you observed, the size of the payouts matters crucially in determining the best decision, both in the subject's valuation of them and in their size relative to each other. Since a given currency value will be valued differently by different subjects, it's tough to generalize from this. Many real and thought experiments suffer from this problem; Newcomb's Box is one. What we need is to estimate the direct utility to the subjects, and once that's normalized, we can get further with this experiment.

A similar problem that decision- and game-theory-type experiments also suffer from is the one-round effect. Though it doesn't affect this particular experiment, one of the biggest problems is that games are distorted badly if the participants know when the last round is. Consequently, if participants know there's exactly *one* round, they're going to behave very differently from how they would in a game with indefinitely many rounds. This actually explains why people seem more likely to cheat in certain infrequently performed or one-off economic transactions: they know it's only one round, so there's less reason not to defect. These are really just problems with experimental design in the real world, though, and they don't get directly at your deeper point of whether we can know what sort of model decision-makers are using.
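The point about known final rounds is the standard backward-induction argument: if defection strictly dominates in the last round, cooperation unravels all the way back. A minimal sketch, using the usual illustrative prisoner's-dilemma payoffs (nothing from the post itself):

```python
# One-round prisoner's dilemma, row player's payoffs for
# (my_move, opponent_move). Standard illustrative values.
PAYOFF = {
    ("C", "C"): 3, ("C", "D"): 0,
    ("D", "C"): 5, ("D", "D"): 1,
}

def dominates(move_a, move_b):
    """True if move_a does at least as well as move_b against every
    opponent move, and strictly better against at least one."""
    diffs = [PAYOFF[(move_a, o)] - PAYOFF[(move_b, o)] for o in ("C", "D")]
    return all(d >= 0 for d in diffs) and any(d > 0 for d in diffs)

# In a single known round, "defect" strictly dominates "cooperate"
# (5 > 3 against a cooperator, 1 > 0 against a defector) ...
assert dominates("D", "C")
assert not dominates("C", "D")

# ... and if both players know when the final round is, backward
# induction applies the same dominance argument to every earlier
# round, so cooperation unravels throughout the game.
```

With an indefinite horizon the dominance argument has no last round to start from, which is why repeated games with unknown length can sustain cooperation.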
