Saturday, August 28, 2010

Quote of the Day

A rule of simplicity -- for example, Ockham's Razor -- is normative. In its traditional formulation, Ockham's Razor can be expressed as follows: one should avoid the needless multiplication of entities. What counts as needless -- that is, one's standard of simplicity -- does not matter here. What does matter is that there be a rational ground for deciding among hypotheses. This decidability presupposes boundary conditions among which is one or another standard of simplicity. Without some standard of simplicity, one cannot succeed in falsifying a hypothesis, because one cannot reasonably rule out ad hoc complications introduced to accommodate facts that would otherwise falsify the hypothesis.

A rule of simplicity expresses a relationship between a purpose sought in attempting to explain and a means necessary for achieving that purpose. A rule of simplicity is thus a norm expressible in the form of a conditional: if one wishes to achieve any purpose by an attempt at explanation, then one may not arbitrarily introduce complications.

Inasmuch as it is a rule, a rule of simplicity is not a statement of fact, although it is related to the facts which make up the process of scientific inquiry itself. A rule of simplicity involves a claim about a relationship between two aspects of that process -- namely, a purpose of the process and one of the conditions necessary to achieve it.

[...]

A determinist can admit a difference between conditional statements and conditional norms which he can explain on his own ground. Conditional norms might be explained, for instance, as expressions of emotion or as causally determined exhortations. The general rule of efficiency we have described might be explained, for instance, as a key component in the survival mechanism of the organism.

For the sake of argument we will grant any such deterministic account of normativity in terms of antecedent determining conditions. But in conceding this point, we mean to hold determinists to all the implications of determinism, including those implications which determinists ignore when involved in arguments with someone who disagrees with them. In particular, we wish to call into question the consistency between any determinist's account of normativity and his assertion of his position, involving -- as it necessarily does -- a rule of simplicity as an essential ingredient.

Any determinist hypothesis must be able to account for the existence in the world of conflicting attempts to account for the data of human experience -- there are positions that contradict determinism. A determinist might try to account for this fact by saying that both positions are determined effects of different sets of antecedent conditions.

Nevertheless, every determinist makes the claim that his account of the data is superior to his opponent's, and therefore ought to be accepted in preference to the alternative position. The question is, what meaning can a determinist attach to the word "ought" in this context? Certainly no determinist can mean what anyone who would disagree with him would mean by saying that we ought not accept determinism. Someone rejecting determinism can distinguish between the force of a norm and the force of determining conditions. But, any determinist must say that among the sets of determining conditions there is one set of determining conditions that determines him to say "ought" and determines whatever effects follow from his utterance of "ought." And he must give the same account of his opponent's utterance of "ought." This result will not seem odd to a determinist; it follows logically from any form of the determinist hypothesis.

On his own account of "ought," then, a determinist is perfectly able to say we ought to accept his position and ought not hold the contradictory position. But on those same grounds he must also grant that someone who has articulated a contradictory position is equally able to say that we ought to accept his position and ought not accept a determinist hypothesis.

No determinist can avail himself of a distinction between positions in fact maintained and positions justifiably maintained in any sense of that distinction which a determinist account would preclude. Where normativity is explained in terms of antecedent determining conditions, the exclusion of any position can be achieved only by excluding the very articulation of that position. But inasmuch as determinism is a more economical account of a set of facts that initially present themselves as including the naively realistic interpretation of the experience of choice -- which any determinist hypothesis explains as an illusion -- the contradictory position is necessarily articulated whenever any determinist position is articulated.

It follows that a determinist hypothesis cannot exclude its contradictory in the only sense of "exclude" that is available to a determinist. Any determinist hypothesis implies the impossibility of excluding its counterpositions, but necessarily presents its own counterposition in its very articulation. But a determinist, in arguing with his opponent, precisely does want to exclude the contradictory position. Otherwise there would be no point in the determinist's entering the argument, because the utterance of a sentence without the intention of excluding the contradictory is not a statement.

Joseph M. Boyle, Jr., Germain Grisez, and Olaf Tollefsen
"Determinism, Freedom, and Self-Referential Arguments"
The Review of Metaphysics 26 (1972-3)

Monday, August 23, 2010

Evolution and Information Theory

I've heard some critics of evolution claim that all we have evidence for is micro-evolution, and that this only involves decreases in genetic information rather than increases. So, for example, there was an original bear "kind" which -- due to mutations, geographical separation, and the different environmental pressures different populations of this kind faced -- devolved into the various species of bears in the world today, such as grizzly, Kodiak, polar, etc. In other words, when various groups of bears were isolated and so had a smaller genetic stock to work with, certain traits were expressed that had not been expressed by the larger group -- such as the lack of pigment in the polar bear's fur, or the loss of certain genes that leaves their paws under-developed but better suited for swimming.

I find it ironic that this claim is championed by young-earth advocates, since it is a known paradigm of Darwinian evolution called allopatric speciation. What these critics of evolution claim is that this speciation only operates within the family or order level and that, since it only allows for the loss of information, not gain, it demonstrates that an intelligent agent directly created the original kinds with all of the genetic differences of the particular genera and species already in place but not expressed.

Now I've read very little about information theory, but what I have read contradicts this (although I think these critics of evolution are partially excused, because some defenders of evolution describe it in these same categories). According to Information and the Origin of Life by Bernd-Olaf Küppers, it's a misunderstanding of information theory to claim that these scenarios involve a loss of information. Information is, by definition, expressed. If it is not expressed, it is not information; it is potential information (or technically, "syntactic" or "Shannon" information). For example, a string of 100 characters of gibberish may have more potential information than a string of 30 characters that makes up a coherent sentence, since the first string contains more characters to which one could ascribe meaning. But in point of fact, the string of 100 characters of gibberish does not convey any information insofar as it is gibberish, whereas the string of 30 characters that makes up a coherent sentence does convey actual information. Thus, if the meaningless string of 100 characters turned into the meaningful string of 30 characters by losing 70 characters, this would involve an increase in information.

Let me illustrate this. If you have a string of characters

nbtldwepob( kvpkhla&u jsgv *xfndistvl,emc nbijnsmv $hsfgevlvs.ecjn

and it experiences a mutation so that only every third character is expressed, we are left with the following string:

two plus five is seven

Now according to the critics of evolution -- at least those who argue as I've indicated -- the first string contains more information than the second. But this is false. The first string has more characters, certainly, but it doesn't have any meaning, and hence conveys no information. The second string, on the other hand, does have meaning and does convey information. If the first string evolved into the second as I've illustrated, this would be an increase in information, since it goes from a series of characters that's meaningless to one that's meaningful.
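
For anyone who wants to verify this, here is a minimal sketch in Python of the "mutation" just described, using the strings from this post (the function name is just mine, for illustration):

def express_every_third(sequence):
    # Only every third character is "expressed", starting with the third.
    return sequence[2::3]

gibberish = "nbtldwepob( kvpkhla&u jsgv *xfndistvl,emc nbijnsmv $hsfgevlvs.ecjn"
print(express_every_third(gibberish))  # prints: two plus five is seven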

Potential information is essentially just the building blocks before they are actually arranged into any kind of meaningful order. Any combination is equally likely or unlikely as any other, regardless of whether they have any meaning. This isn't "nothing", since it is an actual series of the building blocks in question, but the sequence is irrelevant. Potential information can "carry" information, but is not true information itself.
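
To put a rough number on "potential information", here is a sketch that measures the Shannon information of the two strings above. It assumes a simplified model in which each character is an independent draw from the string's own frequency distribution, so the numbers gauge syntactic capacity only and say nothing about meaning:

import math
from collections import Counter

def shannon_bits(s):
    # Total Shannon information in bits: length times per-character entropy,
    # using the string's empirical character frequencies.
    counts = Counter(s)
    n = len(s)
    entropy = -sum((c / n) * math.log2(c / n) for c in counts.values())
    return n * entropy

gibberish = "nbtldwepob( kvpkhla&u jsgv *xfndistvl,emc nbijnsmv $hsfgevlvs.ecjn"
sentence = "two plus five is seven"
print(shannon_bits(gibberish))  # the larger figure, though it conveys nothing
print(shannon_bits(sentence))   # the smaller figure, yet only this one means anything

The gibberish string scores higher simply because it is longer and its characters are spread more evenly -- which is exactly the sense in which it "carries" more potential information while conveying none.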

Beyond this is semantic information, in which there is a code where meaning is attached to certain sequences. To use one of our previous examples, a 30-character string making up a coherent sentence has more semantic information than 100 characters of gibberish, since the former means something and the latter does not. The next level is pragmatic information, where the information evokes action. Obviously, these three dimensions of information are all inter-related: the pragmatic level presupposes the semantic level, which in turn presupposes the potential. Moreover, semantic information cannot exist by itself without evoking a response, and thus always leads to the pragmatic level. The point in all of this is that these critics equivocate between potential information on the one hand and semantic or pragmatic information on the other.

What this illustrates is that actual (i.e. semantic or pragmatic) information always requires a context. If our meaningless sequence of 100 characters lost 70 characters and became a meaningful, coherent sentence, it would constitute an increase in information. If a genetic mutation prevented some kind of protein synthesis, but the overall effect was a positive one, it would constitute an increase in information. If a mutation prevented some minor aspect of an animal's normal morphological development, but the change made the animal more able to survive in its particular environment, it would constitute an increase in information. The context determines whether a change constitutes an increase or a decrease in information, and in the above contexts, the acquired meaning or the improved adaptability of the cell or the individual animal means that the changes in question were increases in information.

(cross-posted at Quodlibeta)

Saturday, August 21, 2010

Church Music



Tuesday, August 17, 2010

Perspective

I think he's too ungracious towards the political left, but Victor Davis Hanson argues that, for all our criticism of American society, we're forgetting how good we have it.

This is the most tolerant society in the world, the most multiracial and richest in religious diversity — and the most critical of its exceptional tolerance and the most lax in pointing out the intolerance of the least diverse and liberal.

It is market capitalism, unfettered meritocracy, and individual initiative within a free society that create the wealth for Al Gore to live in Montecito (indeed to create a Montecito in the first place), or for Michelle to jet to Marbella, or for John Kerry to buy a $7 million yacht. We know that, but our failure to occasionally express such a truth, coupled with a constant race/class/gender critique of American society, results in an insidious demoralization among the educated and bewilderment among the half- and uneducated.

In short, the great enigma of our postmodern age is how American society grew so wealthy and free to create so many residents that became so angry at the conditions that have made them so privileged — and how so many millions abroad fled the intolerance and poverty of their home country, and yet on arrival almost magically romanticize the very conditions in the abstract that they would never live under again in the concrete.

Of course there are (and always will be) plenty of things in society that are messed up, and we have a duty to alleviate them as much as we can. But by focusing exclusively on what's wrong we pass over what's right, and that inevitably leads us to having a distorted picture. If you only see the bad in society, you need to take a step back, rub your eyes, and take another look.

(see a previous dose of perspective here)

Saturday, August 14, 2010

Better Never To Have Been (Written)

This is a book review of a philosophy book I haven't read. I just want to get that out of the way up front: I have not read this book, and I doubt I will ever read it, despite the fact that its thesis resonates very deeply with me. The book is Better Never To Have Been: The Harm of Coming into Existence by David Benatar. The book is summarized as follows:

Most people believe that they were either benefited or at least not harmed by being brought into existence. Thus, if they ever do reflect on whether they should bring others into existence---rather than having children without even thinking about whether they should---they presume that they do them no harm. Better Never to Have Been challenges these assumptions. David Benatar argues that coming into existence is always a serious harm. Although the good things in one's life make one's life go better than it otherwise would have gone, one could not have been deprived by their absence if one had not existed. Those who never exist cannot be deprived. However, by coming into existence one does suffer quite serious harms that could not have befallen one had one not come into existence. Drawing on the relevant psychological literature, the author shows that there are a number of well-documented features of human psychology that explain why people systematically overestimate the quality of their lives and why they are thus resistant to the suggestion that they were seriously harmed by being brought into existence. The author then argues for the 'anti-natal' view---that it is always wrong to have children---and he shows that combining the anti-natal view with common pro-choice views about foetal moral status yields a "pro-death" view about abortion (at the earlier stages of gestation). Anti-natalism also implies that it would be better if humanity became extinct. Although counter-intuitive for many, that implication is defended, not least by showing that it solves many conundrums of moral theory about population.

The first thing that occurs to most people after reading this, I think, is if existence is that terrible, why bother writing a book about it? That would just seem to add to the problem. The second thing is that writing a book like this ranks right up there with making pornography as "things you can't do until your parents are dead".

However, I'm a very pessimistic person (I have to be heavily medicated to be as pleasant as I am), and this kind of perspective fits in well with how I instinctively view things. I used to think that the best thing I could do was to minimize the effect I had on the world -- not in an environmental sense, but in a social sense. I should just try not to infect others with my presence by avoiding personal contact and relationships as much as is reasonable. But of course, this wouldn't work because I always had some people who cared for me and to shut myself away from them would cause them distress. So it seems, either way, I'd cause harm.

The way God got me out of this was my wife. She is the rarest of all people: someone who is a) intrinsically happy, even cheerful, and b) not annoying. I had tricked myself into thinking that even though my pessimism was my instinctive and uncritical way of looking at the world it was nevertheless the more intellectual and responsible way of thinking of things. Optimism, I thought, was a naïve refusal to recognize the bad things about life. What I've discovered is that this is not the case. Pessimism is a refusal to see the good in things that is actually there, while optimism recognizes it. The fact that most good things have been polluted by the bad doesn't justify ignoring the good that is still there. I could say more about this, but I'll stop with that.

Now, from the description of Better Never To Have Been given above, Benatar would counter that we subconsciously trick ourselves into thinking existence is better than it is. Even though this is not the way I think, I could see myself believing this about others very easily. The way God gets me out of this is my kids. Both of my children are very happy, inherently happy. Being children, they of course have plenty of things that make them cry. But I was expecting them to be colicky, I guess because I thought that was just what all young kids were. Yet they give every indication of thoroughly enjoying existence. They behave, and have behaved since the days they were born, as if they perceive their existence as good; not just good, but as overflowing with wonder and glory. My point being that it's difficult to think of existence as bad when those who are thrust into it seem to think the opposite.

Now I suspect Benatar's claim is that even if you live a life characterized by great joy, as long as you experience one bad thing, all the goodness is negated; it would be better not to have any bad or good things happen to you rather than one bad thing and millions of good things. But I still think my kids' obvious delight in existence refutes this. And even though I have a strong inclination to agree with Benatar before I think about it, once I do think about it, I don't see any reason to think that negative experiences so outweigh the positive ones. Perhaps I'd have to read Better Never To Have Been before I can say that, since Benatar probably goes into some detail about it. But I fear that, for me at least, to read this book would be to inflame an unhealthy aspect of my personality, an aspect that is better left to atrophy.

Tuesday, August 10, 2010

Why I Love the Internet, part 5

Via Dangerous Idea I learn that there is a new website devoted to historical apologetics, "the branch of apologetics dealing with the authenticity and credibility of the scriptures and particularly of the New Testament": The Library of Historical Apologetics, run by philosopher Timothy J. McGrew.

Sunday, August 8, 2010

My reading

The reason I signed up with GoodReads, and the reason I have a widget in the sidebar advertising what I'm reading, is to guilt-trip me into reading the books I need to read in order to write my dissertation. I have reached a slight problem: I have several notebooks of articles -- some of which are chapters or shorter excerpts from books, but most of which are from philosophy journals -- and I need to start immersing myself in them. But these are not listable on GoodReads. I was thinking of having something in the sidebar listing what articles I was reading, but that might get cluttered. The best idea I have right now is that I'll just write a post each week summarizing what I've read. This might smack of bragging, but that was a problem with the GoodReads widget too. It's still just an idea, but if I start listing articles in the near future, you'll know what's happening.

Thursday, August 5, 2010

The Oddity and Audacity of Openness Theodicy

Openness theology is a sort of halfway house between traditional Christian theology and process theology. Much of the motivation for it rests in its theodicy, the attempt to reconcile the occurrence of evil with the existence of an omnibenevolent God.

According to process theology, God is dependent on the world; as such, he is unable to directly cause any events, but can only "woo" free agents (free in the libertarian sense) to submit to his will. This absolves God of evil fairly easily: God doesn't stop evil because he can't. Such a view, however, can't be reconciled with Christianity, or even theism -- it's panentheistic rather than theistic. God cannot perform miracles, such as the creation of the universe or raising Jesus from the dead. It exchanges God's omnipotence for impotence.

Traditional Christian theology has claimed that God, being omnibenevolent, is not responsible for evil. Human beings, being free agents, are responsible for most of the evil that they experience. God, however, allows such evil, but then uses it to bring about good. Jesus' crucifixion is the paradigm for this: the one innocent human being that has ever lived was brutally tortured and executed. Yet, by his death, the human race is reconciled to God. In fact, this seems to suggest that the greater the evil, the greater the good that God can bring out of it.

Openness theologians and philosophers object to this scenario, since it would mean that God foreknows horrific evils and doesn't stop them. God knew the Holocaust would happen, recognized it as evil, and then let it happen anyway. By allowing evil to take place, God is culpable for it, and this is incompatible with his omnibenevolence. They consider this to be simply unacceptable. God must not know that evil will take place before it happens, and therefore he must not know anything before it happens. The future is "open". It is not already laid down for us in the divine mind. We are free to choose the evil or the good. Of course, traditional Christian theology says we are free as well, but openness theologians do not think this view of freedom is acceptable, partially, again, because it makes God bear much of the responsibility for evil.

But this raises enormous problems for openness theologians, not least of which is whether their theodicy really accomplishes what they think it does. First, although God may not know infallibly what will happen, does that mean that he has no idea whatsoever what the future holds? That would seem absurd: human beings can often know what's going to happen before it actually does, and while this knowledge is certainly fallible, it still allows us to sometimes see evil approaching before it reaches us. Thus, openness theology does not deny that God may know with great probability what we will freely choose to do; he just doesn't know it with absolute certainty. But this raises the question: how often is God right? Wouldn't it be possible for God to know everything with such a high degree of probability that he's never wrong? If so, we're faced with the same problem of evil that traditional Christianity has wrestled with; if not, why not? If God's foreknowledge is not infallible, on what basis does the openness advocate determine the degree to which God can know our future free decisions?

So the openness theologian's claims would seem to suffer from the same critiques he leveled at traditional Christian theology: if God knows that a particular evil will probably transpire, why wouldn't he stop it? The only way out of this for the openness theologian that I can see is if every instance of evil goes against what God expected would probably occur. But surely this is preposterous; after all, Nietzsche predicted that the twentieth century would be the bloodiest that humanity had ever seen, a prediction that was fulfilled. Would the openness theologian maintain that an atheist philosopher had more insight than God? (If so, the atheist philosopher would appear to be right. Perhaps Nietzsche meant to say God is dumb instead of dead.) The point here is that human beings have some capacity to successfully prognosticate when bad things will happen, so it would seem absurd to deny God the same faculty. The difference is that God supposedly has greater motive and ability to intercede.

Even if God were surprised by every instance of evil, this would still leave openness theology with a less adequate theodicy than traditional Christian theology. After all, according to the latter, God allows specific instances of evil only to prevent greater evils or to produce good. In the openness view, God is surprised by specific instances of evil, has no purpose in allowing them to continue, but allows them to anyway. Openness theology claims it is completely implausible that God could have had morally sufficient reasons for allowing the Holocaust; it's more reasonable to think that God didn't know the Holocaust was going to happen. But if this is the case, why didn't God stop it once it started? Why didn't God intervene when it first began, instead of letting six million Jews be killed over several years? The traditional Christian theologian can claim that God had morally sufficient reasons for allowing the Holocaust. The process theologian can claim that God was incapable of stopping it. But the openness theologian must maintain that God had the capacity to stop the Holocaust and no morally sufficient reason not to, but didn't stop it anyway. In other words, their attempt to build a better theodicy has produced the very worst theodicy possible, short of maltheism.