Showing posts with label Critical Thinking.

Thursday, January 6, 2011

Quote: The scientific method

The following is a rather neat explication of the scientific method. While it leaves a great deal out (institutions, the social nature of science, etc.), it's damn good nonetheless. The writer is John D. Barrow and the quote is taken from his essay "Simple Reality: From Simplicity to Complexity - And Back Again", published in Seeing Further: The Story of Science & The Royal Society:
Laws reflect the existence of patterns in Nature. We might even define science as the search for those patterns. We observe and document the world in all possible ways; but while this data-gathering is necessary for science, it is not sufficient. We are not content simply to acquire a record of everything that is, or has ever happened, like cosmic stamp collectors. Instead, we look for patterns in the facts, and some of those patterns we have come to call the laws of Nature, while others have achieved only the status of by-laws. Having found, or guessed (for there are no rules at all about how you might find them) possible patterns, we use them to predict what should happen if the pattern is also followed at all times and in places where we have yet to look. Then we check if we are right (there are strict rules about how you do this!). In this way, we can update our candidate patterns and improve the likelihood that it explains what we see. Sometimes a likelihood gets so low that we say the proposal is 'falsified', or so high that it is 'confirmed' or 'verified', although strictly speaking this is always provisional, none is ever possible with complete certainty. This is called the 'scientific method'.

Sunday, June 20, 2010

Anecdotes as evidence

An anecdote is a tale, story or an account of events that is sometimes humorous or meant to convey a moral, but is often taken as evidence. In this latter sense there are two broad categories. Firstly, there is testimony, i.e., inferring that such-and-such happened or is true because someone said so. For simplicity's sake, we can further divide testimony into a bunch of different subtypes, most notably eyewitness testimony, expert testimony and hearsay. The second broad category of anecdotal evidence is personal experiences (or, if you will, ‘personal testimony’) that take the form 'I saw so-and-so, therefore it is reasonable for me to believe such-and-such'. Note the key similarity and key difference between these two categories: in both cases someone's experience (or alleged experience) is taken as evidence for some claim, but, for testimony and not for one's own experiences, one has to believe a particular experience occurred on someone else's say-so. Now, it seems perfectly reasonable to believe some things on testimony or personal experience - indeed life would be impossible without it. I am currently having the personal experience of sitting on my couch with my notebook on my lap while typing this post. Hyperbolic doubt aside, I have no good reasons to doubt that this is what is really going on, and you have little reason to doubt my testimony. On the other hand, however, it is common for people to believe wild or improbable things – that aliens regularly visit earth, that highly diluted substances can cure any illness, etc. – solely on the strength of anecdotes. So what exactly is the evidentiary status of anecdotes?

It is vital, first, to distinguish between two types of proposition that anecdotes can be claimed to be evidence for: causal and observational propositions. A causal claim is of the form “P caused X” (or “P, Q and R caused X”) and an observational proposition is “P happened” (or “P occurred, then Q happened and after that R”). This distinction is important for a simple reason: an anecdote can in isolation (almost) never establish the truth or falsehood of a causal claim; it can only be evidence for observational propositions. Why this is the case will be clearer if we think through some concrete examples.

Consider the claim “Mary cheated on John with Bob”. This kind of proposition is pretty straightforward. I can convince myself it is true simply by determining whether Mary and John are in an exclusive relationship, and, if I see Mary make out with Bob, I can reasonably conclude the proposition is true. Now, obviously, there are a bunch of ways I could get it wrong: maybe the woman involved wasn’t Mary, maybe it was simply a friendly hello kiss, or maybe Mary had broken up with John (so it’s making out, but not cheating). It’s clear, though, that if I’m just a little careful, there are a wide variety of circumstances in which I could be very confident Mary did in fact cheat based on my personal experiences. There are a couple of extra complications when John has to decide whether to believe my testimony – maybe I’m lying, for example – but, again, these are not difficult to understand even if they’re difficult to deal with in practice. In other words, we here have a clear case of an anecdote – both in the sense of personal experience and testimony – that can be a good reason to accept the truth of some proposition. Notice two things, though: the claim is not a causal one, and the plausibility (or prior probability) of it being true is pretty high since we know people do in fact cheat on each other regularly. I’ll explain what the latter means in a bit, but let’s move on to anecdotes as evidence for causal claims.

So consider the causal proposition “I took medicine X, I got better, therefore I got better because of medicine X”. This claim is much more complicated than the one about Mary, and you can’t determine whether it’s true simply by looking or making a few observations. Why? Because causality is counterfactual: to say A caused B is to say B would not have occurred if A had not occurred. (There is a long-standing and complicated philosophical debate about causality. Take it from me: steer clear and stick to the counterfactual view). So, in our example, when you claim “medicine X caused my recovery” you’re committed to the counterfactual proposition that you wouldn’t have gotten better had you not taken medicine X. And you simply cannot, even in principle, know this. For one thing, you have an immune system (gasp!) which may have fought off your infection irrespective of whether you took the medicine. Alternatively, you could have ingested some other substance - maybe you took medicine Y as well, maybe you ate something therapeutic - that took care of the infection. In other words, there are a whole bunch of things - i.e. confounds - that could have caused your recovery, and a single observation – the source of an anecdote – simply cannot distinguish between them. In general, the only reliable way to establish the truth or falsity of causal claims is to do controlled experiments. Let’s look at this a bit more closely.

Assume we are trying to understand the causal relationship between four dichotomous and independent variables (A-D) and a particular dichotomous outcome (O). Assume also that these four variables exhaust the universe of all variables even conceivably related to the outcome. Our aim is to make either inclusion inferences (i.e. conclude the relevant variable has a causal relationship to the outcome) or exclusion inferences (i.e. that the variable does not have a causal relationship to the outcome). Given these assumptions, inclusion inferences are valid only when, from:

A1   B1   C1   D1   =>   O1
A2   B1   C1   D1   =>   O2

it is concluded that A1 caused O1. The inference is valid because all the variables except one (A) were controlled – held constant – and, given that the outcome changed, it follows that A is causally related to O. Exclusion inferences are valid only when, from:

A1   B1   C1   D1   =>   O1
A2   B1   C1   D1   =>   O1

it is concluded A1 is not causally related to O1. Again, the validity of the argument is assured because all the variables save one (A) were controlled. Given that A varies while O does not, it follows that A is not causally related to O. Notice that in both cases we are comparing one outcome with a counterfactual. The process of engineering such comparisons is called experimentation and it is at the very heart of the scientific method. (These, by the way, are versions of Mill's Methods).
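
The inference rules above are mechanical enough to write down in a few lines of code. Here is a minimal Python sketch of my own (an illustration, not anything from a textbook or statistics package) that takes two observations over the variables A-D and reports which inference about A, if any, they license:

    def inference_about(variable, obs1, obs2):
        # Each observation maps the variable names ('A'-'D') to their levels,
        # plus 'O' for the outcome. Returns which inference, if any, about
        # `variable` this pair of observations licenses.
        others = [v for v in obs1 if v not in (variable, 'O')]
        if any(obs1[v] != obs2[v] for v in others):
            return "no valid inference: the other variables were not held constant"
        if obs1[variable] == obs2[variable]:
            return "no valid inference: the variable of interest did not vary"
        if obs1['O'] != obs2['O']:
            return "inclusion: the variable is causally related to the outcome"
        return "exclusion: the variable is not causally related to the outcome"

    # The two schematic comparisons from the text:
    inclusion_pair = ({'A': 1, 'B': 1, 'C': 1, 'D': 1, 'O': 1},
                      {'A': 2, 'B': 1, 'C': 1, 'D': 1, 'O': 2})
    exclusion_pair = ({'A': 1, 'B': 1, 'C': 1, 'D': 1, 'O': 1},
                      {'A': 2, 'B': 1, 'C': 1, 'D': 1, 'O': 1})
    print(inference_about('A', *inclusion_pair))   # inclusion
    print(inference_about('A', *exclusion_pair))   # exclusion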

If we want to determine whether medicine X can cure some disease, we cannot rely on anecdotes because they don’t allow for counterfactual comparison. Assume, for example, that A1 is taking medicine X, and A2 is not taking it; that B1 is having an immune system, and B2 not; that C1 is taking medicine Y, and C2 not taking it; that D1 is being overweight, D2 is not being overweight; and, finally, that O1 is getting better and O2 is not getting better. When we have a single observation - we know that Thaba over there took medicine X, has a healthy immune system, that he isn't taking medicine Y, that he's rather overweight and that he got better after a few days - all we have is:

A1   B1   C1   D2   =>   O1

We have no proper counterfactual: we have not controlled for variables B, C, or D so we can't make any logical inferences about variable A. (Making this inference is the post hoc fallacy). At best, we can say that taking the medicine is possibly related to getting better, but then the same goes for B, C and D. Note also that adding more anecdotes does not resolve our problem: in the real world there are many more than just four variables so things are much more complicated, experiments involving humans are always possibly confounded by the placebo effect, and, importantly, the variables may interact in complex ways. In the oft-repeated phrase, the plural of anecdote is anecdotes, not data. Thousands of anecdotes are no more convincing than a single anecdote. As a result, then, anecdotes cannot in general (that is, barring extreme exceptions) establish the truth or falsity of causal propositions.
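
To make the point vivid, here is a small simulation with made-up numbers (my own toy example: the medicine is assumed to do nothing at all, and the immune system is assumed to clear the infection 70% of the time). It churns out tens of thousands of glowing 'I took X and got better' stories, yet the controlled comparison between takers and non-takers shows no effect whatsoever:

    import random

    random.seed(0)
    N = 100_000
    BASE_RECOVERY = 0.7  # assumed chance the immune system clears the infection on its own

    def recovers(takes_medicine):
        # In this toy world the medicine does nothing: recovery is the same either way.
        return random.random() < BASE_RECOVERY

    takers = [recovers(True) for _ in range(N)]
    controls = [recovers(False) for _ in range(N)]

    print("Success stories among takers:", sum(takers))  # roughly 70,000 anecdotes
    print("Recovery rate, takers:  ", sum(takers) / N)
    print("Recovery rate, controls:", sum(controls) / N)

The anecdotes pile up regardless; only the comparison with the control group tells you the medicine is doing nothing.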

As I showed above, anecdotes can reasonably be taken as convincing evidence for observational claims. But that does not mean we should believe every anecdote (concerning observational propositions). Bob Carroll of Skepdic (an excellent resource worth referring to often, by the way) nicely enumerates the possible problems:
Anecdotes are unreliable for various reasons. Stories are prone to contamination by beliefs, later experiences, feedback, selective attention to details, and so on. Most stories get distorted in the telling and the retelling. Events get exaggerated. Time sequences get confused. Details get muddled. Memories are imperfect and selective; they are often filled in after the fact. People misinterpret their experiences. Experiences are conditioned by biases, memories, and beliefs, so people's perceptions might not be accurate. Most people aren't expecting to be deceived, so they may not be aware of deceptions that others might engage in. Some people make up stories. Some stories are delusions. Sometimes events are inappropriately deemed psychic simply because they seem improbable when they might not be that improbable after all. In short, anecdotes are inherently problematic and are usually impossible to test for accuracy.
In other words, while anecdotes can be good evidence for believing observational propositions - "x happened" - for the reasons listed above, we certainly can't accept all anecdotes uncritically. So what to do? Life is impossible if we dismiss all anecdotes, but we'll be led astray if we accept all anecdotes. The solution is skepticism: that is, being open-minded but then filtering beliefs through a bullshit detector. Doing this is simple in principle, but incredibly difficult in practice, so some examples are in order. (Recommended books: Thinking About Thinking by Anthony Flew, The Demon-Haunted World by Carl Sagan [my review], Truth by Simon Blackburn [my review], and Mistakes Were Made (But Not by Me) by Tavris & Aronson [my review]).

Megan Fox: not in Jeff's league. 
One of the most important and useful bullshit-detecting skills is weighing up the evidence against the plausibility of the claim (here is an example of me doing this). Determine, firstly, how plausible the claim is given everything else we know. For example, given everything I know about my friend Jeff, people in general and the state of technology, the proposition that he has flown in an airplane at least once is highly plausible. (He is middle-class, airplane tickets are cheap and abundant, I've seen him in other cities, etc.). By contrast, given everything we know, it is extremely implausible that he once had a threesome with Megan Fox and Jessica Alba. (For one thing, he is short and balding. For another, he's never been to the US). The threesome claim is an extraordinary one: given what we know about attractive celebrities, balding South African men, sexual psychology, and so on, Jeff having a threesome with Fox and Alba is just not the kind of thing we expect to happen. Having determined the plausibility of a claim, the next step is to assess the strength of the evidence. In our example, all the evidence we have is Jeff's testimony. As Bob Carroll explained above, anecdotes are often unreliable because people suffer from innumerable cognitive biases and, more obviously, they sometimes lie. Since in the Alba-Fox example Jeff has a strong motivation to lie - having it believed is highly status-enhancing - his testimony is further undermined. What we have, then, is an extraordinary claim, the only evidence for which is very weak. It is reasonable, then, to withhold assent until better evidence is provided.
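
For what it's worth, this weighing-up can be made precise with Bayes' theorem. The numbers below are invented purely for illustration: even if we assume Jeff's testimony is fairly reliable (he tells such a tale falsely only 5% of the time), a sufficiently low prior leaves the posterior probability of the threesome negligible, whereas the mundane airplane claim comes through the same calculation just fine:

    def posterior(prior, p_report_if_true, p_report_if_false):
        # P(claim | testimony), via Bayes' theorem.
        evidence = p_report_if_true * prior + p_report_if_false * (1 - prior)
        return p_report_if_true * prior / evidence

    # Mundane claim: Jeff has flown at least once. High prior, ordinary testimony.
    print(posterior(prior=0.9, p_report_if_true=0.95, p_report_if_false=0.05))   # about 0.994

    # Extraordinary claim: the threesome. Tiny prior, exactly the same testimony.
    print(posterior(prior=1e-7, p_report_if_true=0.95, p_report_if_false=0.05))  # about 0.000002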

A somewhat more enlightening example is alien abduction. People from all over the world claim to have been abducted by extraterrestrials and then molested, lectured on the necessity of world peace, and so on. So, step one: how plausible are these claims? Given what we know about physics and human psychology, not very. First of all, we currently have no evidence that life - let alone intelligent life - exists anywhere else in the universe. (My gut tells me alien life is abundant, but as Carl Sagan pointed out, we shouldn't think with our guts). Secondly, if intelligent life does exist, the aliens will in all likelihood be tens of light-years or more distant, and, since we have no reason to think faster-than-light travel is practicable, there is no known way for aliens to get to earth in a reasonable period of time. Step two: what about the evidence? Again we have anecdotes: tales from people who claim to have been abducted. Significantly for this example, there is a highly plausible alternative explanation that undermines the evidentiary status of the accounts, namely, hypnogogia and hypnopompia. Briefly, these are vivid hallucinations, accompanied by sleep paralysis, that occur as you're falling asleep or waking up. Typically, a person wakes up terrified and unable to move, senses an (often malevolent) 'presence' in the room, and may also experience a variety of visual, auditory and proprioceptive hallucinations. Tellingly, this well-studied phenomenon closely mirrors accounts of alien abduction, which often feature extreme fear, a 'presence' in the room, and being unable to move. Since these experiences are accompanied by visual hallucinations and alien visitation is a common trope in popular culture, the other reported experiences are easily accounted for. Importantly, also, hypnogogia and hypnopompia are common (much more common than claims of alien abduction). Extraordinary claims require extraordinary evidence. Since claims of abduction are extraordinary, anecdotes of being abducted are far from extraordinary evidence, so, until much more evidence is provided, it is reasonable to withhold assent.

So... what exactly is the evidentiary status of anecdotes? In summary: (1) anecdotes on their own can never establish the truth or falsity of causal propositions. (2) While anecdotes can be evidence for observational propositions, the plausibility of the claim must be taken into account. To be believed, highly implausible claims require much, much more than mere anecdotes.

Tuesday, February 23, 2010

The Cost of Truth is Eternal Vigilance

A recurring theme on this blog is that it is unwise to rely on 'everyday' or uncritical thinking because our minds are liable to innumerable biases, failures of memory, and so on. An important part of being a good thinker, then, is to submit ideas - and especially our own - to critical scrutiny. I am not, obviously, immune to these biases; in fact, I am as liable to them as anyone else. I do work hard to scrutinize my beliefs carefully, though, and I regularly give up previously held beliefs as a result. To demonstrate not only the dangers of uncritical thinking, but also that I (try to) practice what I preach, here are two recent instances of having to change my mind. Both are pretty unimportant beliefs, but they illustrate the issues nonetheless.

I moved from Johannesburg to Durban in early 2007 and my fiancée did the same in early 2009. Possibly as a result of her comments about how much it has been raining in Durban, I came to believe that 2009 had been an especially wet year: I thought it must be the wettest since I'd moved here. I knew, of course, that the only way to establish this for sure was to look at actual statistics because our memories are flawed and we use the availability heuristic to make inferences about trends. But... I didn't bother to check for a while. When I finally did, it became quite clear that my intuitive sense about Durban's weather was spectacularly wrong. The wonder that is Wolfram Alpha let me create the following two graphs: the first shows the total estimated yearly precipitation (rain, for Durban's purposes) for the last 5 years, and the second shows (I think weekly) rainfall amounts over the same period.

As should be abundantly clear, 2009 is not the wettest year since I moved to Durban; it is in fact the driest. Now, it could be the case that 2009 had less total rainfall, but more rainy days, so I could have been misled for that reason. The second graph, though, is only mildly suggestive on that front and I can find no other data (that's free). So it seems fair to conclude that I was led astray by thinking intuitively when I should have known not to trust my intuitions about trends in complex, variable systems. (For detailed evidence that people are spectacularly bad at thinking statistically, see Kahneman, Slovic & Tversky, 1982).
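
The check itself is trivial once you have the numbers. Here is a sketch of the sort of thing involved, assuming a hypothetical CSV of daily rainfall with 'date' and 'rain_mm' columns (the file name and column names are invented for illustration; my actual figures came from Wolfram Alpha):

    import pandas as pd

    # Hypothetical file: one row per day, columns 'date' and 'rain_mm'.
    daily = pd.read_csv("durban_rainfall.csv", parse_dates=["date"])
    by_year = daily.groupby(daily["date"].dt.year)["rain_mm"]

    print(by_year.sum())                             # total precipitation per year
    print(by_year.apply(lambda r: (r > 0).sum()))    # number of rainy days per year

Two lines of output settle what my intuition got spectacularly wrong: which year was actually the wettest, and whether a drier year merely had more rainy days.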

The second example concerns bias and rather nicely illustrates the importance of blinding. If you had asked me a while ago what the best search engine was, I would have said: "Google - and by a wide margin". Until I found BlindSearch, that is. Branding biases our judgments and Google's brand is so powerful that being objective while knowing which search engine's results you're looking at is extremely difficult. BlindSearch remedies this problem: it lets you search Bing, Yahoo and Google simultaneously, presents the results in three columns, and blinds you to which search engine produced which results. You look through the results, vote for the one you prefer, and only then are the brand names revealed. I've now used BlindSearch dozens of times and a clear pattern has emerged: Google isn't nearly as superior as I once thought it was. While I still tend to prefer Google's results a plurality of the time, Bing and Yahoo do get my vote more often than I would have thought. For the sake of concreteness, here are ten searches I did with my vote listed next to each. I tried to pick topics that were either obscure or controversial to 'test' the search engines, since search terms with obvious results aren't exactly indicative of quality. Also, I verified some of these results by checking whether my vote stayed the same later (it did in all cases).
So that's 2 for Bing, 5 for Google and 3 for Yahoo. Without blinding, my guess would have been that I would have preferred Google 9 times out of 10. Turns out I was wrong. And, contrary to what I'd like to believe, branding works on me too. Bottom line: our biases affect our decisions and our judgments, so when those decisions or judgments are important (which is not the case with search engines), appropriate blinding is vital.
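
The essential trick - shuffle away the labels until after you've committed to a judgment - is easy to sketch in code. This is just an illustration of the idea, not how BlindSearch is actually implemented, and the engines and results below are placeholders:

    import random

    def blind_vote(results_by_engine, choose):
        # Present the result lists in a random order, without labels; reveal which
        # engine produced the chosen column only after the judgment has been made.
        engines = list(results_by_engine)
        random.shuffle(engines)                   # the judge never sees this ordering
        columns = [results_by_engine[e] for e in engines]
        picked = choose(columns)                  # judged on content alone
        return engines[picked]                    # the label is revealed afterwards

    # Placeholder engines and results; the 'judge' here just picks the first column.
    results = {
        "Google": ["g-result 1", "g-result 2"],
        "Bing":   ["b-result 1", "b-result 2"],
        "Yahoo":  ["y-result 1", "y-result 2"],
    }
    print("Vote went to:", blind_vote(results, choose=lambda cols: 0))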

These are just two, small, inconsequential examples, of course. They illustrate an important point though: if you want to be right, you have to be skeptical, self-critical, willing to reconsider and admit error, cautious, and scrupulously careful with facts and arguments. Or, to corrupt a glorious quote misattributed to Thomas Jefferson: the cost of truth is eternal vigilance.

Sunday, February 21, 2010

Quotes: Clifford on belief

I posted some quotes from WK Clifford's "The Ethics of Belief" a while back. Here are some more. (I'm not saying I endorse all of these - he's far too strong in places. Still, I like the sentiment and the prose is fun).
"No simplicity of mind, no obscurity of station, can escape the universal duty of questioning all that we believe."

"If I let myself believe anything on insufficient evidence, there may be no great harm done by the mere belief; it may be true after all, or I may never have occasion to exhibit it in outward acts. But I cannot help doing this great wrong towards Man, that I make myself credulous. The danger to society is not merely that it should believe wrong things, though that is great enough; but that it should become credulous, and lose the habit of testing things and inquiring into them; for then it must sink back into savagery."

"The credulous man is father to the liar and the cheat; he lives in the bosom of this his family, and it is no marvel if he should become even as they are. So closely are our duties knit together, that whoso shall keep the whole law, and yet offend in one point, he is guilty of all."

"'But,' says one, 'I am a busy man; I have no time for the long course of study which would be necessary to make me in any degree a competent judge of certain questions, or even able to understand the nature of the arguments.' Then he should have no time to believe."

"It is wrong in all cases to believe on insufficient evidence; and where it is presumption to doubt and to investigate, there it is worse than presumption to believe."

Tuesday, February 16, 2010

Quote: Clifford on the Ethics of Belief

I'm in the process of editing my piece on deferring to experts for publication somewhere (maybe Skeptical Inquirer), so I've been doing a bit more reading in the area. I just remembered William Clifford's famous 1877 essay, "The Ethics of Belief" (a nice pdf version is here). It's well worth a read, if only for its purple prose and hardnosed conclusion: "It is wrong always, everywhere, and for anyone, to believe anything upon insufficient evidence." Another nice quote: 
Belief, that sacred faculty which prompts the decisions of our will, and knits into harmonious working all the compacted energies of our being, is ours not for ourselves but for humanity. It is rightly used on truths which have been established by long experience and waiting toil, and which have stood in the fierce light of free and fearless questioning. Then it helps to bind men together, and to strengthen and direct their common action.

It is desecrated when given to unproved and unquestioned statements, for the solace and private pleasure of the believer; to add a tinsel splendour to the plain straight road of our life and display a bright mirage beyond it; or even to drown the common sorrows of our kind by a self-deception which allows them not only to cast down, but also to degrade us. Whoso would deserve well of his fellows in this matter will guard the purity of his beliefs with a very fanaticism of jealous care, lest at any time it should rest on an unworthy object, and catch a stain which can never be wiped away.

Wednesday, February 10, 2010

Quotes: Galileo and Darwin

While writing my piece on intellectual deference, I came across a lot of awesome related quotes. Two of my favorites didn't make it into the final version of that post. These are they:
"The less people know and understand about [matters requiring thought], the more positively they attempt to argue concerning them, while… to know and understand a multitude of things renders men cautious in passing judgment upon anything new." - Galileo
"Ignorance more frequently begets confidence than does knowledge: it is those who know little, and not those who know much, who so positively assert that this or that problem will never be solved by science." - Charles Darwin

Tuesday, February 9, 2010

Calling Africa's science nerds

In light of all of our problems - poverty, witch hunts, anti-vaccinationism, quackery, religious obscurantism of various kinds, and so on - it has long seemed obvious to me that Africa badly needs skepticism, science, logic and reason. The great Sir Francis Bacon wrote in the Novum Organum that:
Human knowledge and human power meet in one; for where the cause is not known the effect cannot be produced. Nature to be commanded must be obeyed; and that which in contemplation is as the cause is in operation as the rule.
Knowledge, in the words of the popular corruption, is power. Achieving our ends depends (at least in part) on our understanding of how the world works. But, as Bacon also pointed out, (1) the world is exceedingly complicated ("the subtlety of nature is greater many times over than the subtlety of the senses and understanding") and (2) the human mind is prone to error ("for the mind of man is far from the nature of a clear and equal glass, wherein the beams of things should reflect according to their true incidence, nay, it is rather like an enchanted glass, full of superstition and imposture"). Making sensible decisions in a complex world, then, depends (in part) on us applying science to our problems. 

Science, however, is not merely a matter of 'applications', not only relevant to policy makers, and certainly not only a way of fostering economic development. Or, to again borrow from (and somewhat adapt) Bacon, scientists are not merely concerned with "the relief of man's estate", they are also "merchants of light". Scientific and skeptical thinking - the commitment to submit all ideas (especially our own) to severe critical scrutiny, keep an open mind, aim at unified knowledge, resist obscurantism, and rely on reason and experimentation (among other things) - is the only reliable way of answering the deep questions of our origins, place in the universe, and ultimate fate. To understand the universe and ourselves, in short, we need to apply the 'technology of truth': science.

Africa, then, needs skeptical, reasoned, and scientific voices, not only to foster development and growth, but to serve as merchants of light: to hold out a candle in the dark in a demon-haunted world. It is for this reason that I have long been trying to organize, promote and otherwise advance the skeptical/scientific blogging community in South Africa, and latterly Africa as a whole. So if you are an African skeptical or scientific blogger (or know of such bloggers) please contact me on ionian.enchantment@gmail.com. Participate in our carnival, post and get listed on our blogroll, and join our email discussion group. And, of course, if you have a blog, keep up the good work! If you don't, start one!

I'll give the final word to E. O. Wilson (who I quoted in the very first post on my blog, and who gave me the idea for its name):
Such, I believe, is the source of the Ionian Enchantment. Preferring a search for objective reality over revelation is another way of satisfying religious hunger. It is an endeavor almost as old as civilization and intertwined with traditional religion, but it follows a very different course – a stoic’s creed, an acquired taste, a guidebook to adventure plotted across rough terrain. It aims to save the spirit, not by surrender but by liberation of the human mind. Its central tenet, as Einstein knew, is the unification of knowledge. When we have unified enough certain knowledge, we will understand who we are and why we are here. (Consilience: The Unity of Knowledge, p. 7)

Wednesday, February 3, 2010

In Praise of Deference

It is common to hear that you should make up your own mind and not let other people make it up for you. While I wholeheartedly agree with the sentiment and believe it’s silly to be obsequious to arbitrary authority, I nonetheless think intellectual deference is both unavoidable and a virtue. This notion may seem repugnant, an affront to liberal values, and perhaps even unnecessary, so I’ll defend it in some detail. (I want to stress, however, that while I’ll be defending my considered opinion, my argument for it will be sketchy – a full, precise, treatment would require far more time and space). My argument rests on (among others) the following two premises: (1) the world is staggeringly complicated and regularly in conflict with our intuitions, and (2) intellectual responsibility requires you to know what you’re talking about before you form an opinion. (The picture, by the way, is a high-resolution map of science, from this PLoS One paper).

It’s a cliché that the more you learn the more you realize how little you know, but it’s true: things are nearly always more complicated than they seem, and thus peeling away layers of ignorance usually reveals yet more complexities, uncertainties and undreamt-of intricacy. Or as Douglas Adams is supposed to have said: "the universe is a lot more complicated than you might think even if you start from a position of thinking that it’s pretty damn complicated to begin with". If you’re at all in doubt that the world really is as complex as I make out, I invite you to examine the work of those whose job it is to understand it. Have a look at the contents of any respected peer-reviewed journal, Science, say, or Nature Neuroscience or Physical Review Letters. Or, if you prefer, examine an advanced-level textbook in any explanatorily successful field such as molecular biology, astrophysics, or neuroscience. (It has to be a field that is at least partially explanatorily successful because incorrect theories can be arbitrarily simple given they need not reflect reality). Explanations of empirical phenomena almost invariably require us to employ sophisticated techniques and equipment, set aside ingrained or natural ways of thinking, invoke vast bodies of previously established knowledge, and express or establish our findings with abstruse mathematics or statistics. Given that our models and theories at least partially mirror reality, the intricacy and opacity of the former reflects the complexity and opacity of the latter. (Though there are probably exceptions, explanations tend to get more complicated – in the sense of arcane or difficult to understand – as they become more successful. There thus seems to be little reason to attribute the complexity of our theories to the incompleteness of our knowledge. Also, while successful explanations may be simple in the sense of being elegant or parsimonious, they’re extremely unlikely to be simple in the sense of easy to understand or straightforward).

My second premise is that ‘making up one’s own mind’ responsibly requires, among other things, understanding what the hell is going on before settling on an opinion. It might be news to some people, but really ‘understanding what is going on’ requires much more than ‘I have a strong inner conviction’ or ‘I once read a book on this’ or ‘someone told me so’. It requires, in general and speaking roughly, proficiency in logic and critical thinking, competence in general and domain-specific scholarship and research, knowledge of the relevant facts and how sure we are of them, mastery of the relevant techniques, a familiarity with possible alternative explanations, knowledge of at least a large proportion of the relevant literature, and more. And if (1) is true – if the universe really is extremely complicated – fulfilling these requirements is awfully demanding. Carl Sagan provided an excellent example:
Imagine you seriously want to understand what quantum mechanics is about. There is a mathematical underpinning that you must first acquire, mastery of each mathematical subdiscipline leading you to the threshold of the next. In turn you must learn arithmetic, Euclidian geometry, high school algebra, differential and integral calculus, ordinary and partial differential equations, vector calculus, certain special functions of mathematical physics, matrix algebra, and group theory. For most physics students, this might occupy them from, say, third grade to early graduate school – roughly 15 years. Such a course of study does not actually involve learning any quantum mechanics, but merely establishing the mathematical framework required to approach it deeply. (The Demon-Haunted World, p. 249)
In other words, before you can hope to begin to understand quantum mechanics, you need to master a vast body of often-difficult mathematics even before learning masses of mind-bending physics. Sagan goes as far as to say there are no successful popularizations of quantum mechanics because it is so in conflict with our intuitions that dealing with it mathematically is our only option. If you think you understand quantum mechanics without having mastered the mathematics, in short, you’re confused about what the word ‘understand’ means.

But no one person could possibly understand or discover everything about a topic, I hear you protest. Exactly! That’s exactly my point. The universe is chock full of all sorts of preposterously complicated phenomena, and no one person could ever hope to understand all or even a significant proportion of it in full. No one person could single handedly maintain a modern lifestyle (grow, harvest and process all his own food, construct his own home, build his own computer from scratch, generate his own electricity, etc.) but is forced to rely on the division of labor. Similarly, no one person could hope to understand everything about even tiny bits of the universe – the causes of climate change on a particular planet, say – but is forced to rely on the intellectual division of labor. And such a cognitive division of labor means intellectual deference. Even an expert doesn't (and can’t) know everything about her field or sometimes even her own area of specialization. It is not unheard of for scientific papers to be published where none of the authors understand all of the methods and findings. Experts defer to other experts. Experts have to defer to other experts. Deference, then, is a non-optional virtue, the only alternatives to which are agnosticism and being intellectually irresponsible. Or, to be less fancy about it, there are inevitably a large number of topics about which you can either (1) form or express unjustified opinions (i.e. be irresponsible), (2) say "I don't know" or (3) defer to the experts.

But, it may be objected, experts are sometimes wrong! Experts can be bought! Experts disagree! Indeed they are, indeed they can be and indeed they do. Experts, of course, are human beings and are therefore subject to all the familiar human failings: they are as fallible, quarrelsome, susceptible to cognitive biases and illusions, prone to social climbing, self-interested, biased, driven by ideology and whatnot as the rest of us. (Well, maybe this isn’t quite true: people self-select into science and must jump through various hoops like defending a thesis, so perhaps individuals best adapted to the ideals of science are more likely to become scientists. What is clear, though, is that scientists are not immune to these human failings). Expertise – the mastery of the techniques and in-depth knowledge of the scholarship on some subject – is not itself a huge improvement over “making up your own mind”. (By the way, I don't suggest expertise requires formal education or credentials; non-PhDs who have mastered a subject certainly still count as experts). Given the complexity of the universe and the limitations of the human mind, expertise is (for many subjects) a necessary but not sufficient condition for having justified opinions. (For one thing, it is possible to be a kind of an expert in utter bollocks: there is, for example, a huge alchemical literature, complete with rival schools, arcane jargon, different techniques and so on. And don’t get me started on postmodernism). So individual expertise in some cases doesn’t seem particularly reliable (though, ceteris paribus, it’s certainly better than nothing) and deferring to an individual expert thus isn’t necessarily such a good idea. Help is at hand, however.

The scientific method, far from denying human failings like the ones I enumerated above, exists exactly because of them: it is because the human mind is so prone to error and bias that we need this vast, expensive and seemingly inefficient set of institutions, norms and practices we call “science”. Science, roughly and to first approximation, is a collaborative enterprise aimed at a unified description and explanation of natural phenomena where the ultimate arbiter of truth is empirical experimentation, the reliability and quality of which are evaluated by a community of scholars through peer-review and replication. A scientist, then, is a person who attempts to describe and explain the natural world by testing empirical hypotheses in collaboration with a group of other researchers, reviewing their work and producing work that is in turn reviewed. Convincing your peers – who will criticize your ideas harshly and subject them to industrial-strength skepticism – by publishing in peer-reviewed journals, presenting papers at conferences, and, more informally, debating in seminars and pubs, is at the very heart of science. (The mark of a crank is not being embedded in such a system of cooperation, dismissing criticism as some conspiracy or another, and claiming the mantle of Galileo). The point of this collaborative enterprise is to minimize bias: an individual wants her ideas to be true, is limited by peculiar psychological traits and a particular background, knows only some fraction of the relevant facts, and suffers from a whole ménage of other cognitive biases and illusions. A community dedicated to collaboration (and competition) – whose members aim at rigorous explanation and consensus, and who agree on the primacy of empirical demonstration – can overcome many (though obviously not all) of these biases because, in a sense, one individual’s biases cancel out another individual’s biases. Manipulating the world in such a way as to hold certain variables constant while varying others – i.e. doing controlled experiments – is the most powerful technique ever invented to discover nature’s secrets (Daniel Dennett aptly called it the “technology of truth”), and having an entire social system (complete with attending values) and a supporting set of institutions (universities, granting agencies, journals, professional organizations etc.) multiplies these powers by minimizing human failings in interpreting and conducting the experiments. Obviously, science is not perfect, but because of how it is organized – and crucially, because there is a way of falsifying hypotheses – it is a self-correcting process: explanations are tested, discarded and repeatedly refined, which then slowly ratchets our theories closer to the truth over time. (If you have doubts about the success of science, please stop reading, apply head to desk and repeat until you come to your senses).

So why is this whole story about science relevant in a post about deference? Simple: because when the relevant experts agree that some theory or explanation is correct, you can be reasonably confident that the theory is in fact correct. In other words, given the nature of the scientific method – given that claims are peer-reviewed, subject to intense scrutiny, tested, re-tested, refined, and so on – when there is a consensus among the relevant experts, it is reasonable to believe they are right. Of course and again, experts are people, so you can’t be certain the theory or proposition is true just because there is consensus, you can only be rather confident. At a minimum, individuals who disagree with the consensus have the burden of proof – they must show it is false, the majority does not need to refute the alternatives. (Though they often do). Since showing some consensus theory in science is false (or incomplete) is going to be extremely difficult, those who wish to disagree damn well better be experts themselves. (Consensus theories are of course sometimes overturned: witness plate tectonics). In general laypeople are not qualified to have independent opinions about complex topics – they lack the means to come to justified beliefs – and it is especially unreasonable for a non-expert to take a stance contrary to consensus. The upshot is that, firstly, it is reasonable to defer to the consensus opinion of the relevant experts – so I can justifiably say ‘E=mc2,' ‘DNA carries heritable information’, ‘there is a supermassive black hole at the center of the Milky Way’, etc. And, secondly, a layperson who disagrees with expert consensus – denying evolution by natural selection, anthropogenic global warming, that the Earth is about 4.5 billion years old, etc. – is unreasonable in the extreme. Experts get to have opinions on scientifically controversial questions, experts get to disagree with consensus; laypeople get to defer to consensus or reserve judgment. Doing otherwise, I think, shows what Bertrand Russell called (in another context) an “impertinent insolence toward the universe”. Scientists are often accused of arrogance and maybe I’ll be accused of this vice as well for telling laypeople who disagree to shut up. But I think the opposite is true. It is extraordinarily arrogant to have (independent) opinions on complex questions without being willing to pay your dues first – that is, without studying the question for years, reading the scholarly literature, mastering the relevant techniques and mathematics, and so on. Thinking you are entitled to an opinion without paying your dues is the very epitome of intellectual arrogance. And it is especially arrogant – mind-bogglingly so – for a non-expert to have opinions that contradict the consensus of the tens of thousands of intelligent, diligent and dedicated people who have spent decades studying, debating, doing research on and thinking deeply about their respective disciplines. The bottom line: be an expert, defer, or suspend judgment. (To be clear: I’m making an epistemic and not a political claim. People have a right of free speech and conscience, so they can form and express any opinion they like. But that doesn’t mean they have an intellectual warrant to do so).

To be sure, there are a whole bunch of complications here (how very appropriate, no?). For one, scientists are not the only people worth deferring to: if there is a consensus among plumbers, for example, that the best way to fix problem P is to do X, Y and Z, I’d be inclined to say that’s quite reliable. Nevertheless, while there are other groups (‘communities of practice’, etc.) that can reasonably be deferred to, for my purposes it is simply worth noting that scientists are one such group. Secondly, there are certainly degrees of expertise and deciding when someone has crossed the threshold to expert status is fraught with difficulty (though beware the false continuum). More problematical is the question of what topics are ‘sufficiently complicated’ that laypeople shouldn’t have independent opinions. Saying, for example, that only psychologists are qualified to determine whether Bob has a crush on Tamba is preposterous. The (partial) solution here, I think, is to invoke Richard Dawkins’ notion of the “Middle World” and to distinguish between explicit and implicit knowledge. Let's start with the former. The human mind, Dawkins convincingly argues, evolved to deal with and understand the everyday world we inhabit: a world of medium-sized objects operating at low velocities, including animals and other people. “Folk biology”, “folk psychology”, and “folk physics”, for example, are regularly wrong in detail (sometimes spectacularly so), but they are often reliable when in our ‘natural environment’. The fact that we can, say, play football (which requires sophisticated ballistics), navigate a cluttered room (which requires sophisticated optics and physics), and cooperate and compete with other humans (which requires a complex theory of mind) and so on, shows we are far from cognitively incompetent. The human mind is (largely) good at solving the problems we encountered often in our evolutionary past: it is good, in other words, at Middle-World problems. But our ancient ancestors never traveled near the speed of light, never lived in large-scale complex societies, never interacted directly with the quantum world, never needed to understand the nature of stars and so on. On certain topics, then, laypeople are reliably (though almost always incompletely) competent. There is a fundamental difference, in other words, between the statements “Bob is upset at Mary for cheating on him with John” and “E equals mc2”: the mind evolved to deal with the former, but not the latter.

Important, also, is the difference between implicit knowledge (or behavioral competence) and explicit knowledge (i.e. justified true belief, with some modifications). While being a football quarterback, say, requires a brain capable of solving complex physics problems, this does not mean football players explicitly understand the relevant physics. When I move my arm to pick up my cup of coffee, my brain does damn complicated trigonometry, but I don't know that trigonometry explicitly - my brain's calculations are not consciously accessible to me. What this means is that behavioral competence or implicit knowledge in some domain (seeing, interacting with people and non-human animals, walking about) does not imply explicit knowledge of the underlying science. (A slam dunk argument for this, by the way, is our inability to build robots even remotely as competent as we are).

There are several more complications but I'll only mention one more. It is often extremely difficult to determine whether there is a scientific consensus on some topic and, if so, what it actually is, especially so when ideologically committed pseudoscientists muddy the waters. For example, the overwhelming majority of the relevant experts agree that evolution by natural selection – the fact of evolution and the theory of natural selection (etc.) – is established beyond all reasonable doubt. Creationists, however, have tried to argue there is no such consensus and have even compiled lists of scientists who 'disagree with evolution' (cf. Project Steve). Laypeople who do not understand the scientific method might see two sides 'debating' and have real difficulty figuring out who to believe. They might not realize, for example, that scientific consensus is not about lists of people who agree or disagree, that only the relevant experts are important (engineers who 'disagree with evolution' qua engineers tell us nothing), that no paper critical of evolution has appeared in a mainstream peer-reviewed journal (save by fraud) for several decades, and so on. Even a layperson convinced it is important to defer to scientific consensus, then, will sometimes have real trouble determining whether there is a consensus and what the consensus actually is. There are two ways of dealing with this problem, I think. The first is to employ the much underused phrase "I don't know". Agnosticism isn't particularly popular, but I think openly admitting what you do and do not know is one of the most important intellectual virtues. So when you can't figure out what the consensus is (or whether there is one), it doesn't suddenly become reasonable to form opinions in the absence of knowledge; agnosticism is then the reasonable course. The second answer to the above problem is having certain metacognitive skills - an understanding of the scientific method and the academic process, familiarity with cognitive biases, skepticism and an ability to assign onus appropriately, finely-honed critical reasoning skills, a basic understanding of statistics, and so on - that are useful for evaluating any claim. While these are not sufficient to understand the details of any area of science, they allow for a 'popular-level' grasp of the field, which in turn enables one to identify what the findings are (and some of the reasons why they're established), and determine, with some (hard) work, whether there is a consensus on some question.

"The fundamental cause of the trouble," wrote Russell, "is that in the modern world the stupid are cocksure while the intelligent are full of doubt". While dividing the world into 'the stupid' and 'the intelligent' is probably going too far, I think Russell is on to something: it is those who are ignorant of science who are certain they're right - even when they're not. The Dunning-Kruger effect suggests why: the intellectually unskilled lack the intellectual skills needed to recognize that they are unskilled. They are, in other words, unskilled and unaware of it. Dunning and Kruger also showed, however, that people could be trained to become somewhat more competent, which then allows them to recognize the depth of their incompetence. What I have shown in this post, I hope, is that, in a sense, we are all cognitively incompetent relative to the stupendous complexity of the universe. It is science (or, more broadly, the project of secular reason) that holds out a candle in the dark: we have uncovered nature's secrets only because we invented this 'technology of truth' and those who wish to advance our knowledge or understand a particular phenomenon deeply must approach it humbly and pay their dues in long and intensive study. Those of us who have not paid our dues in a particular field can only defer to those who have or remain agnostic. There is no reasonable alternative. 

The last word goes to the great Bertrand Russell:
The demand for certainty is one which is natural to man, but is nevertheless an intellectual vice. So long as men are not trained to withhold judgment in the absence of evidence, they will be led astray by cocksure prophets, and it is likely that their leaders will be either ignorant fanatics or dishonest charlatans. To endure uncertainty is difficult, but so are most of the other virtues.

Tuesday, February 2, 2010

Telegraph Science Journalism Fail: Or, ARRRRGHHHH!!!111!

I was alerted to an absolutely daft article in the Telegraph via Derren Brown's Blog (who, disappointingly, didn't seem to notice it's daft). Basically, the article completely misrepresents a paper, "Bonobos Exhibit Delayed Development of Social Behavior and Cognition Relative to Chimpanzees", in press at Current Biology. The paper showed, roughly and among other things, that both bonobos and chimps are cooperative when they’re young, but then chimps become progressively less cooperative and more competitive with age, whereas bonobos don’t. The authors hypothesize that this may be due to pedomorphosis, that is, evolutionary changes to the developmental pattern such that juvenile characteristics persist into adulthood.

The 'science correspondent' at the Telegraph, one Richard Alleyne, however, would have you believe the researchers involved "now believe that being aggressive, intolerant and short-tempered could be a sign of a more advanced nature." How the hell Alleyne got from the paper to THAT conclusion is utterly beyond me; the researchers never even hinted that there is a connection between 'civilization' and their findings. Alleyne goes on to commit a bunch of science howlers: among other things, saying chimps are "more evolved" and that chimps and bonobos are monkeys (ARGH). Anyway, I was going to blog about this in more detail, but luckily Alison Campbell at BioBlog has a most excellent take-down of the article, so go there for more (and more competent) analysis.

By the way, this is not the first time Alleyne has gotten it spectacularly wrong. Ben Goldacre has exposed his breathtaking misinterpretation of climate science (which he refused to correct) and his shameful distortion of a graduate student's MSc thesis which he claimed concluded women who get raped, essentially, were asking for it (at least this was half-heartedly and partially corrected). 

In conclusion: 

Thursday, November 19, 2009

Cyclone Roberta - FAKE

A public service announcement: the rumors and emails (example after the jump) doing the rounds that KwaZulu-Natal is about to be hit by a "tempestuous cyclone" are fake, false, a hoax, bollocks, and completely made up. (There is a warning of heavy rainfall - "in excess of 50mm in 24 hours" - but there is no cyclone). Some observations: South Africa's east coast is very rarely hit by cyclones and email hoaxes are plentiful. Put these facts together, apply a bit of common sense, and you get doubt. And doubt should motivate some fact checking (Google is your friend)... If you did so, you'd find this East Coast Radio article saying it's fake, this cyclone tracking service showing no cyclones heading South Africa's way, and this blog entry by the SA Weather and Disaster Information Service saying it's a hoax.

Doubt will set you free.

Monday, October 19, 2009

Gene Callahan vs Evolutionary Psychology

So I recently had an uncharacteristic (and unpleasant) online altercation with one Gene Callahan about evolutionary psychology and, amazingly, whether Daniel Dennett should be taken seriously. I'm not blogging about this because it is inherently interesting (it's not), but because it nicely illustrates several common misconceptions about applying evolution to psychology and it reminds us that intellectual arrogance is a Bad Thing.

(I’d like to note before proceeding that it’s not as if I’m an uncritical fan of evolutionary psychology. There are, I think, numerous problems in the field, and the standards of evidence are far too often far too low. Some papers in the field are downright embarrassing (this one is the worst I’ve come across) and on my blog I have, among other things, excoriated Satoshi Kanazawa and critiqued Shermer’s application of evolutionary psychology to markets.)

Anyway, the saga in question started when a friend shared a blog post of Callahan’s on Google Reader in which he endorses John Dupré’s Human Nature and the Limits of Science, an uninformed screed against evolutionary thinking in psychology. (See this critique). I won’t have that much to say about the content of Callahan’s post – I will focus on his replies to my comments – but one remark about it is in order. Callahan:
I’ve just been re-reading John Dupre’s wonderful take-down of evolutionary psychology, Human Nature and the Limits of Science. Now, Dupre never disputes the obvious truism that, say, human ethics or religion evolved. But he notes that this is remarkably uninformative, since everything humans do so (sic) evolved, including their ability to write papers on evolutionary psychology!
This is somewhat cryptic and unclear, but straightforwardly interpreted, it is obviously wrong. To see why, consider the following. (I) Phenotypic structures (more precisely, biological processes) are either adaptations or the by-products of adaptations. (II) What distinguishes evolutionary psychology (at least of the Santa Barbara School) from sociobiology is the claim (see Tooby & Cosmides, 1987 [pdf]) that manifest behavior doesn’t evolve; modular information processing systems embedded in brains do. (III) Behavior is the result of a complex interaction between the environment and these information-processing systems, including direct environmental influences (e.g. drugs, brain injury) on the physical substrate of these information-processors. Observed behavior, then, is the product of the environment interacting with information processing mechanisms in the brain, and the brain is constituted of adaptations – structures that exist just because they increased fitness relative to alternatives in evolutionary history, including by producing or facilitating certain behaviors – or the by-products of such adaptations. It is therefore false that ‘everything humans do evolved’ since behaviors themselves don’t evolve, some behaviors result from by-products of evolution (not to mention pathology), and rapidly changing environments (the appearance and development of civilization, say) can interact with evolved psychological traits to produce novel behaviors (including writing papers on evolutionary psychology). The proposition that evolutionary psychology – broadly construed – is uninformative stems from these misunderstandings, and is indistinguishable from the crazy idea that evolutionary thinking generally is uninformative. Moreover, this claim is belied by the fact that we have discovered psychological abilities and traits (e.g., e.g.) that we didn't know about until we thought about human psychology from an evolutionary perspective.

On to the actual altercation… Callahan’s post rather annoyed me, so I left an aggressive – probably too aggressive – comment to the effect that (a) he is unqualified to have an opinion and (b) he should read Daniel Dennett’s critique of the book. On reflection, I regret making point (a) as baldly as I did: I failed to err on the side of charity and to assume good faith. (Not to mention that I took Wikipedia’s word that he’s an economist, when he self-identifies as a philosopher, though I can’t help pointing out that he has a PhD in neither, so appending “in-training” is appropriate. Note: I don’t have a PhD either, so I happily concede I’m a wannabe cognitive scientist, not the real deal... yet). Understandably, Callahan didn’t take too kindly to my comment, so he replied aggressively himself, and then headed over to my blog and threw insults around on two of my posts: here and here. (Some tangential pedagogy: as I explained at length in my Fun with Fallacies post a while back, there is a difference between the ad hominem logical fallacy and mere insult. Callahan [I think; the comment was anonymous] calling me a “rude little punk”, for example, is not an instance of the ad hominem logical fallacy; even saying ‘you’re wrong and a rude little punk’ wouldn’t be fallacious. Only if he had said (or implied) ‘you’re wrong because you’re a rude little punk’ would he have committed the fallacy. There must be some inference drawn from some purported negative quality for the fallacy to occur; merely alleging someone has a negative quality is not itself fallacious, though of course it may be false or libellous).

Anyway, Callahan’s reaction to (b) was remarkable and illustrative: he dismissed Dennett’s critique of Dupré without reading it because he thinks Dennett’s work is a “rubbish heap”. Here’s what he said:
“Oh, and I’m not going to bother reading his [Dennett's] criticisms of Dupre. If I read several things by someone and they are universally rubbish, I really can’t be bothered to keep going through the rubbish heap. Anyone dull enough to have come up with the ‘brights’ idea really can be dismissed out of hand, don’t you think?”
Wow. The first sentence is the most interesting, but note that the second is factually inaccurate (Dennett endorsed the Brights idea – as did Dawkins – but neither came up with it) and invalid to boot. Worse, the suppressed premise (pdf) that would make the argument valid – ‘anyone who has one really daft idea can be dismissed out of hand (on all topics)’ – is clearly false. Granting for argument’s sake that the Brights idea was daft, it’s simply not true that if someone has one spectacularly bad idea, everything else they say will be wrong. Newton had silly ideas about alchemy and the Bible, but that doesn’t mean we can dismiss the Principia. Linus Pauling obstinately stuck to the incredibly implausible notion that ultra-high doses of Vitamin C can cure cancer, but that doesn't mean his work in chemistry was worthless. Physicists with idiotic philosophical or religious views are a dime a dozen, but that doesn’t mean their work as physicists is necessarily bad. Is it really that surprising that a philosopher and an ethologist, respectively, could be persuaded to endorse a bad marketing idea? And if they did, would it mean that their professional work was all worthless?

Callahan’s first point in the above paragraph, though, is far more interesting and so worth looking into in a bit more detail. At first I thought he couldn’t possibly believe it – that perhaps he was just pissed off and said something silly in the heat of the moment – but he failed to back down in subsequent comments, so he really does seem to believe it. In summary, his argument is: ‘I read x% of Dennett’s work, what I read was universally rubbish, therefore everything by Dennett is rubbish’. (Callahan calls Dennett's work 'a rubbish heap', so he's not just making the more reasonable claim that 'he couldn't be bothered to read more of it'). This argument too is invalid – though of course I hardly expect people to make consistently logically valid arguments in blog comments. The point is that it contains at least one false suppressed premise, namely: ‘if I’ve read some proportion of a scholar’s work, I can judge all of it.’ This is both arrogant and false – the latter because, for it to be true, everyone would have to produce either consistent rubbish or consistent non-rubbish: it implausibly rules out a mixed bag. Newton, again, produced both utter nonsense and sublime science; Jared Diamond wrote both Guns, Germs, and Steel (one of the best books of the 90s, in my opinion) and Why is Sex Fun? (which was very bad indeed); and so on.

As a rule of thumb, I’d say that unless (1) you have read a good proportion of some scholar’s output, (2) you are qualified to judge all of it, and (3) everything you have read is entirely devoid of merit and without any redeeming qualities whatsoever, making a black-and-white inference about an entire corpus of work is just not reasonable. (People who make a priori unlikely claims in conflict with scientific consensus, who show no interest in justifying those claims, and who lack relevant expertise can in most cases be dismissed out of hand. Sylvia Browne’s books, for example, are just not worth paying attention to. I take it as obvious that Dennett does not come close to fulfilling these criteria). Given how much Dennett has produced I’m willing to bet Callahan has not satisfied (1), and I have serious doubts about (2) since, as far as I know, not even Callahan himself claims to be a qualified cognitive scientist or philosopher of mind. More importantly, the prior probability of (3) is preposterously low, and Callahan thus has a huge burden of proof to discharge. For him to do so he would not only have to demonstrate (preferably in a mainstream peer-reviewed journal) that, say, Consciousness Explained (CE) and Darwin’s Dangerous Idea (DDI) are rubbish, but also explain why so many smart people – whether they agree with Dennett or not – were fooled into concluding the opposite. In other words, he must rigorously justify his initial contention not only that Dennett is wrong, but that he is so wrong his work is entirely worthless. And, if Dennett’s work is indeed utter rubbish, Callahan must explain why Dennett has been so influential: why, for example, CE has been cited 4700+ times and DDI 3000+ times. (Callahan objected to this point by saying it merely shows Dennett is famous, and mere fame presumably doesn’t track genuine merit. I responded that there’s a distinction between fame and influence: Dennett is both, Paris Hilton is only the former, Frege (say) is only the latter, and both Callahan and I are neither. Scholars just don’t see the need to read, let alone cite or respond to, utter rubbish, so either Callahan is wrong or thousands of highly trained and really intelligent people are deluded. Of course, Callahan could be right, but I wouldn't recommend betting on it).

The moral of the preceding analysis, I think, is that intellectual arrogance is a very Bad Thing. I admit that I’m not exactly diffident, and that I have regularly fallen afoul of the principles I outline below. But I’m not nearly arrogant enough to dismiss whole disciplines or declare all of an influential and prolific academic’s work utter rubbish. The common cause of such extreme beliefs, it seems to me, is overweening intellectual self-confidence, which is in turn arguably a product of an insufficient familiarity with one’s own fallibility. Cognitive biases and illusions are universal and ineradicable, the world is incredibly complicated, and you can know only a fraction of what is currently knowable. The mark of someone who appreciates this is scepticism, suspicion of bald assertions and hasty generalizations, doubt, caution, a willingness to reconsider and admit error, and scrupulous care with facts and arguments. Callahan, it seems to me, fails to live up to these principles and the result is beliefs that, frankly, are downright idiotic. Or, as I put it rather more colorfully in my comments on his post, if these really are his beliefs, he should STFU, GTFO and take his FAIL with him. Srsly.

Of course, I could be wrong. Maybe I've been blinded by emotion, maybe I've been unfair, maybe I've misunderstood. If so, show me I'm wrong and I'll reconsider. Really.

Saturday, October 17, 2009

Public Service Announcement: You have an immune system

As some of you might know: you have an immune system. In fact, you have an adaptive, extraordinarily intricate and complex immune system that evolved over hundreds of millions of years because there are innumerable tiny predators (bacteria, viruses, etc.) that, in effect, want to eat you. And, as anyone with immunodeficiency (whether innate or acquired) can attest, the immune system is almost always effective; without it, you'd be in serious trouble. Even people with functional immune systems do get sick, of course, and this happens for several reasons, including that the system needs time to adapt (by evolving responses to novel infections) or that it simply can't deal with the infection.

Why bring this up? Doesn't everybody know this? Well, I'd hope so, but many people effectively deny that they have an immune system when they claim something along the lines of 'I took medicine X, I got better, therefore I got better because I took medicine X'. My point is just this: you simply can't know whether you got better because of your immune system or because of X. Your immune system is really good at its job – not perfect, of course, but damn good (see immunodeficiency again). And since it's adaptive – in a quite literal sense it evolves ways to deal with new infections – when you get sick and then get better, it might be because you took medicine or because your immune system found an effective response (or both, or neither). But in an individual case you simply can't know. Concluding you got better just because you took medicine – i.e. saying that without taking it you wouldn't have gotten better – is an instance of the post hoc ergo propter hoc logical fallacy. That is, you're saying that just because Z happened after X, it must be the case that X caused Z to happen. But of course this doesn't follow: Z (getting better) might have nothing to do with X (taking medicine), because X could have been merely incidental; the real cause of Z might have been P (your immune system) or Q (the placebo effect) or something else entirely. In general, the only – and I do mean only – way to decide rationally whether some treatment is effective is to do science: that is, to run a properly designed, large-scale, double-blind, randomized controlled clinical trial.
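To see just how uninformative the 'I took X and got better' anecdote is, here is a minimal simulation – entirely my own illustration, with invented recovery probabilities and a 'remedy' that is assumed to do nothing at all:

import random

random.seed(1)  # fixed seed so the illustration is reproducible

# Invented numbers, purely for illustration: the immune system clears most
# infections within the observation window, and the remedy is completely inert.
N = 100_000                    # simulated patients
P_SPONTANEOUS_RECOVERY = 0.8   # chance the immune system does the job on its own
REMEDY_EFFECT = 0.0            # the remedy adds nothing

recovered = {True: 0, False: 0}            # recovery counts, keyed by 'took the remedy?'
for i in range(N):
    takes_remedy = (i % 2 == 0)            # randomized 50/50 assignment
    p = P_SPONTANEOUS_RECOVERY + (REMEDY_EFFECT if takes_remedy else 0.0)
    if random.random() < p:
        recovered[takes_remedy] += 1

print("Recovery rate with the remedy:   ", recovered[True] / (N / 2))
print("Recovery rate without the remedy:", recovered[False] / (N / 2))

Both rates come out at roughly 0.8. Every person in the remedy group who recovered can truthfully say 'I took it and I got better', yet the remedy did nothing; only the comparison between randomized groups – which is exactly what a controlled trial provides and an anecdote cannot – reveals that.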

Saying you got better just because sometime earlier you had taken medicine, then, is in effect to deny you have an immune system. Which is dumb. Take home message: (1) Thou shalt not rely on anecdotal evidence and (2) Thou shalt rely on evidence-based medicine (or, better yet, a variant known as science-based medicine).

Wednesday, September 23, 2009

Fun with fallacies: Poisoning the well

An unfortunate byproduct of philosophical training, other than the obvious one of annoying everyone at the dinner table, is that I cry inwardly every time I see terms such as “fallacy” or “invalid” misused. On the theory that I shouldn’t complain about it if I’m not doing something about it, I figured I’d start an irregular series on critical thinking and logical fallacies. So, welcome to the inaugural edition of Fun with Fallacies…

First, some background. There are two different dimensions along which to evaluate arguments: one, the truth of the premises and, two, the validity of the argument structure. Premises (the content of arguments – e.g. “Scotland is in the Northern Hemisphere”, “All monkeys are purple”) are either true or false. Arguments (the logical structure linking premises – e.g. “If A then B, A, therefore B”, “A and B, therefore C”) are either valid or invalid. And these two dimensions, importantly, are separate. In logic, saying a premise is invalid makes no sense: it is much like saying someone has scored a touchdown in soccer. Similarly, arguments cannot be true or false; they are only ever valid or invalid. As the perceptive reader no doubt noticed, my first example of a premise was true and the second false, and my first example of an argument was valid (if you like your Latin, this particular structure is known as modus ponens) while the second was invalid. Note that you can have an invalid argument with true premises and a true conclusion (“Elephants are mammals, Elvis Presley is dead, therefore homeopathy is bollocks”), that you can have a valid argument with false premises and a false conclusion (“All women are pregnant, Michael is a woman, therefore Michael is pregnant”), and so on. These dimensions are entirely independent of each other. When an argument is (1) valid AND (2) has all true premises, we say it is sound (and therefore one you should accept); otherwise, it is unsound.

But what exactly is validity? It’s quite simple really. A valid argument is one where the truth of the premises guarantees the truth of the conclusion. In other words, if the premises are true, it follows, by the laws of logic, that the conclusion must be true. (But not vice versa). If this is the case, we say the conclusion ‘follows’ from the premises, or that the premises ‘transmit’ their truth to the conclusion. So if “A” and “if A then B” are both true, then you are forced to conclude that “B” is true (this is modus ponens again). Or, in words: if Paris is the capital of France (“A”), and Paris being the capital of France entails that the French seat of government is in Paris (“if A then B”), then it follows that the French seat of government is in Paris (“B”).
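For simple propositional arguments, validity can even be checked mechanically: list every possible assignment of true/false to the basic statements and look for a counterexample, i.e. an assignment that makes all the premises true and the conclusion false. Here is a minimal sketch of that idea in Python (entirely my own illustration, not part of any logic library), confirming that modus ponens is valid while the superficially similar ‘affirming the consequent’ (if A then B, B, therefore A) is not:

from itertools import product

def implies(a, b):
    # Material conditional: 'if a then b' is false only when a is true and b is false.
    return (not a) or b

def valid(premises, conclusion, n_vars=2):
    # Valid = no truth assignment makes every premise true and the conclusion false.
    for values in product([True, False], repeat=n_vars):
        if all(p(*values) for p in premises) and not conclusion(*values):
            return False  # counterexample found
    return True

# Modus ponens: 'if A then B', 'A', therefore 'B'  -> valid
print(valid([lambda a, b: implies(a, b), lambda a, b: a], lambda a, b: b))   # True

# Affirming the consequent: 'if A then B', 'B', therefore 'A'  -> invalid
print(valid([lambda a, b: implies(a, b), lambda a, b: b], lambda a, b: a))   # False

Notice that the checker never asks whether A or B is actually true in the world; it inspects only the structure of the argument, which is precisely the validity/truth distinction drawn above.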

Okay, so what’s a fallacy? It’s just an argument that is not valid – that is, an argument where the conclusion does not follow from the premises. Notice, however, that the fact that some argument is a fallacy does not mean the premises are false, nor does it mean the conclusion is false. Indeed, saying an argument is fallacious (i.e. invalid) entails nothing whatsoever about the truth of the premises or the conclusion. (You can, after all, defend a true conclusion with an invalid argument). Conversely, just because an argument is valid does not mean the conclusion is true, nor does it mean the premises are true: it’s just that if the premises were true you would have to accept the conclusion. (So if it really were the case that all monkeys are purple and that I am a monkey, I would be forced to accept that I’m purple). The upshot is that a concern with validity and detecting fallacies is only one part of evaluating positions but, of course, an important one.

That’s about enough background, I think, so on to our first actual example… Regular readers will recall that I recently took on a local (i.e. South African) homeopath, one Johan Prinsloo. In a section of his website that he’s since edited, but which is still available on Google Cache as I first saw it, Prinsloo made the following argument (emphasis in original):
The one thing that always catches my attention is the fact that generally the skeptics of Homeopathy also tend to be anti-religion or at least skeptical of religion.
What’s going on here? Well, it’s a beautiful example of poisoning the well, which is a sub-type of the ad hominem fallacy (‘arguing to the man’). Ad hominem is pretty widely misunderstood; some people seem to think that any insult or negative assertion about an opponent makes an argument fallacious. This is not correct. In fact, ad hominem has the form: “Sarah believes that P, Sarah has negative quality X, therefore P is false”. Clearly, this argument is invalid: there is no premise linking having negative quality X to the truth or falsity of P. The important bit, though, is that a conclusion is being drawn about a claim from the purported negative quality; if no such inference is drawn, no fallacy is being committed. I might say, for example: “Homeopathy is bollocks”, “homeopaths tend to be dumb”, “the law of infinitesimals is false” and so on. As long as I’m not drawing an inference from “homeopaths tend to be dumb”, all I’ve done is thrown around an insult (which may or may not be true); I have not committed a fallacy. (Remember, truth and falsity are independent of validity and invalidity!). It’s possible, in fact, to make the argument about Sarah valid (so it’s no longer a fallacy), even though it’s still about a negative quality. All I have to do is insert the missing premise: “Sarah believes that P, Sarah has negative quality X, everything people with negative quality X believe is false, therefore P is false”. Note that the conclusion now does follow from the premises and it’s thus no longer a fallacy – but at the cost of making the ridiculous missing (or ‘suppressed’) premise explicit.
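The brute-force validity check sketched earlier makes this concrete. Below is a propositional simplification – again entirely my own illustration; the suppressed premise is really a quantified claim, but for a single claim P the instance ‘if Sarah has X then P is false’ does the same work. Without that premise the conclusion doesn’t follow; with it, the argument becomes valid, at the price of an absurd premise:

from itertools import product

def implies(a, b):
    # Material conditional, as in the earlier sketch.
    return (not a) or b

def valid(premises, conclusion, n_vars):
    # Same checker as before, repeated here so this snippet runs on its own.
    for values in product([True, False], repeat=n_vars):
        if all(p(*values) for p in premises) and not conclusion(*values):
            return False
    return True

# Variables: b = 'Sarah believes that P', x = 'Sarah has negative quality X', p = 'P is true'.

# Bare ad hominem: b, x, therefore not-P  -> invalid
print(valid([lambda b, x, p: b, lambda b, x, p: x],
            lambda b, x, p: not p, n_vars=3))        # False

# With the suppressed premise 'if Sarah has X then P is false'  -> valid, but the premise is absurd
print(valid([lambda b, x, p: b, lambda b, x, p: x,
             lambda b, x, p: implies(x, not p)],
            lambda b, x, p: not p, n_vars=3))        # True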

In Prinsloo’s case it’s clear that he’s attempting to preempt criticism of homeopathy by (in his mind) tarnishing the reputation of the skeptics: he is, in other words, poisoning the well. He is implying that critics of homeopathy have a negative quality (being religious skeptics), and that therefore their views on homeopathy can be dismissed. This argument is obviously fallacious as it stands: there is no premise linking being a religious skeptic to having false beliefs about homeopathy, and thus the conclusion does not follow from the stated premises. To make the argument valid, Prinsloo would have to say something like "everything a religious skeptic believes is false" or "everything religious skeptics say about homeopathy is false", and once you see that, it becomes obvious why the premise was kept implicit: it's ridiculous on the face of it. As far as I am aware, there is no evidence even of a correlation between religious skepticism and having false beliefs (indeed the opposite might be true), let alone evidence that religious skeptics are invariably wrong.