- 01 and the universe
- Acinonyx Scepticus
- Amanuensis
- ASSAf Blog
- Botswana Skeptic
- Bomoko and other nonsense words
- Communicating Science, the African Way
- Defollyant's AntiBlog
- Effortless Incitement
- Ewan’s Corner
- Expensive Beliefs **new**
- Geekery
- Grumpy Old Man
- Health Frog **new**
- Hello Universe, This is Nessie
- Ionian Enchantment
- Limbic Nutrition
- Lenny Says
- McBrolloks
- Meh Blog **new**
- Nathan Bond's TART Remarks
- Orion Spur
- Other Things Amanzi
- Pickled Bushman
- Psychohistorian
- Reason Check
- Retroid Raving
- Roy Jobson **new**
- Scorched
- Shadows Hide
- Skeptic South Africa **new**
- Stop Danie Krügel
- Sumbandlila Mission Blog **new**
- Synapses
- Tauriq Moosa
- The Joys of Atheism **new**
- The Science Of Sport
- The Skeptic Black Sheep
- The Skeptic Detective
- Word of the Blog
Sunday, February 28, 2010
African science/skepticism blogrolling for February
The updated African science and skepticism blog roll for February... If you know of blogs not listed here, please let me know. Also: add it to your blog! Do a post like this one! (Email me, and I'll send you the HTML).
Wednesday, February 24, 2010
Encephalon #80: The Twitter Edition
Welcome to the 80th edition of Encephalon (@encephalon_), the world's best mindy/brainy/behaviory blag carnival! Since I've finally joined the whole Twitter party properly (@michaelmeadon), I figured making this the Twitter Edition would be fun. It also features an entirely gratuitous picture of a hot bird (haha), right. So, here are some 'Tweets', all < 140 characters... (The @xxx's refer to the relevant person's Twitter account, if there is one, and the link at the end of each 'Tweet' goes to the blog entry).
- @edyong209 asks: can a sniff of oxytocin improve the social skills of autistic people? Apparently yes: http://bit.ly/dqVSIV
- @neurocritic on alien hand syndrome and agency error http://bit.ly/d8D3kY & on neuropsychoanalysis: http://bit.ly/bU3LyD
- @vaughanbell wonders whether you really can be frightened to death: http://bit.ly/d0ehYL
- @podblack on facilitated communication and Belgian Rom Houben. Newsflash: it didn't really work: http://bit.ly/aTHvWy
- @sharpbrains head honcho @AlvaroF interviews Michael Merzenich on brain training, etc. http://bit.ly/5kfG7w
- @sharpbrains says working memory is a better predictor of academic success than IQ: http://bit.ly/5Q3ZJ5
- @somatosphere has a summary of an interdisciplinary UCLA conference on the Cultural and Biological Contexts of Psychiatric Disorder: http://bit.ly/d1J1tT
- @researchdigest says swanky cars enhance men's appeal to women, but not women's to men: http://bit.ly/dr9krV
- @BrainBlogger on a BMJ article that concluded children with a high IQ have a reduced risk of mortality as adults: http://bit.ly/9T4Hhy
- @BrainBlogger examines whether speaking in tongues is a verifiable language phenomenon: http://bit.ly/bOlKX2
- @jonahlehrer reveals sportspeople don't maximize gains like game theory predicts: http://bit.ly/bw769F. Von Neumann pissed.
Tuesday, February 23, 2010
The Cost of Truth is Eternal Vigilance
A recurring theme on this blog is that it is unwise to rely on 'everyday' or uncritical thinking because our minds are liable to innumerable biases, failures of memory, and so on. An important part of being a good thinker, then, is to submit ideas - and especially our own - to critical scrutiny. I am not, obviously, immune to these biases; in fact, I am as liable to them as anyone else. I do work hard to scrutinize my beliefs carefully, though, and I regularly give up previously held beliefs as a result. To demonstrate not only the dangers of uncritical thinking, but also that I (try to) practice what I preach, here are two recent instances of having to change my mind. Both are pretty unimportant beliefs, but they illustrate the issues nonetheless.
I moved from Johannesburg to Durban in early 2007 and my fiancée did the same in early 2009. Possibly as a result of her comments about how much it has been raining in Durban, I came to believe that 2009 had been an especially wet year: I thought it must be the wettest since I'd moved here. I knew, of course, that the only way to establish this for sure was to look at actual statistics because our memories are flawed and we use the availability heuristic to make inferences about trends. But... I didn't bother to check for a while. When I finally did, it became quite clear that my intuitive sense about Durban's weather was spectacularly wrong. The wonder that is Wolfram Alpha let me create the following two graphs: the first shows the total estimated yearly precipitation (rain, for Durban's purposes) for the last 5 years, and the second shows (I think weekly) rainfall amounts over the same period.
As should be abundantly clear, 2009 is not the wettest year since I moved to Durban, it is in fact the driest. Now, it could be the case that 2009 had less total rainfall, but more rainy days, so I could have been misled for that reason. The second graph, though, is only mildly suggestive on that front and I can find no other data (that's free). So it seems fair to conclude that I was led astray by thinking intuitively when I should have known not to trust my intuitions about trends in complex, variable systems. (For detailed evidence that people are spectacularly bad at thinking statistically, see Kahneman, Slovic & Tversky, 1982).
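For the programmatically inclined: the check itself is trivial once you have the data. Here's a minimal sketch in Python, with made-up weekly figures standing in for the Wolfram Alpha records (so purely illustrative), of the kind of aggregation that settles this sort of question:

```python
# Hypothetical weekly rainfall readings (mm) - NOT Durban's real data; the
# point is only the shape of the check I should have done straight away.
weekly_rainfall_mm = {
    2005: [20, 35, 0, 12] * 13,   # 52 stand-in weekly readings per year
    2006: [25, 30, 5, 15] * 13,
    2007: [18, 40, 2, 10] * 13,
    2008: [22, 28, 8, 14] * 13,
    2009: [10, 15, 3, 6] * 13,    # drier, as the real records turned out to show
}

# Aggregate into yearly totals and let the data, not memory, pick the extremes.
yearly_totals = {year: sum(weeks) for year, weeks in weekly_rainfall_mm.items()}
for year in sorted(yearly_totals):
    print(year, f"{yearly_totals[year]} mm")
print("Wettest:", max(yearly_totals, key=yearly_totals.get))
print("Driest:", min(yearly_totals, key=yearly_totals.get))
```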
The second example concerns bias and rather nicely illustrates the importance of blinding. If you had asked me a while ago what the best search engine was, I would have said: "Google - and by a wide margin". Until I found BlindSearch, that is. Branding biases our judgments and Google's brand is so powerful that being objective while knowing which search engine's results you're looking at is extremely difficult. BlindSearch remedies this problem: it lets you search Bing, Yahoo and Google simultaneously, presents the results in three columns, and blinds you to which search engine produced which results. You look through the results, vote for the one you prefer, and only then are the brand names revealed. I've now used BlindSearch dozens of times and a clear pattern has emerged: Google isn't nearly as superior as I once thought it was. While I still tend to prefer Google's results a plurality of the time, Bing and Yahoo do get my vote more often than I would have thought. For the sake of concreteness, here are ten searches I did, with my vote listed next to each (a toy sketch of the blinding protocol follows the list). I tried to pick topics that were either obscure or controversial to 'test' the search engines, since search terms with obvious results aren't exactly indicative of quality. Also, I verified some of these results by checking whether my vote stayed the same later (it did in all cases).
- Science - Bing
- Islamic terrorism - Difficult call, but Yahoo
- Evolution - Yahoo
- iPad - Google (though it was close)
- Michael Meadon - Google
- Sex - Yahoo (though Bing doesn't work for some reason)
- Homeopathy - Google (by a large margin)
- Jacob Zuma - Bing (close)
- Richard Dawkins - Google (Yahoo sucked)
- Sticky the stick insect - Google, by far (also wins on Olaf the Hairy)
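For the programmers among us, here's a toy sketch (in Python, entirely my own illustration - I have no idea how BlindSearch is actually implemented) of the blinding protocol itself: shuffle the columns, collect the vote on the anonymous columns, and only then reveal the brands.

```python
import random

def blind_vote(results_by_engine, choose):
    # results_by_engine: dict mapping engine name -> list of results.
    # choose: a function that picks a column index from the anonymous columns.
    columns = list(results_by_engine.items())
    random.shuffle(columns)                        # hide which engine is which
    anonymous = [results for _engine, results in columns]
    pick = choose(anonymous)                       # the vote happens blind...
    return columns[pick][0]                        # ...the brand is revealed only now

results = {
    "Google": ["result A", "result B"],
    "Bing":   ["result C"],
    "Yahoo":  ["result D", "result E", "result F"],
}
# A stand-in 'user' who simply prefers the longest column of results:
winner = blind_vote(results, lambda cols: max(range(len(cols)), key=lambda i: len(cols[i])))
print("Your blind vote went to:", winner)
```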
These are just two, small, inconsequential examples, of course. They illustrate an important point though: if you want to be right, you have to be skeptical, self-critical, willing to reconsider and admit error, cautious, and scrupulously careful with facts and arguments. Or, to corrupt a glorious quote misattributed to Thomas Jefferson: the cost of truth is eternal vigilance.
Sunday, February 21, 2010
Climate denial and arrogance
I recently became a bit more active on Twitter (find me at @michaelmeadon) and today I had an argument with one Ivo Vegter (@ivovegter), a South African journalist and, more to the point, climate denialist. Vegter is a staunch critic of climate science and he's penned several columns espousing his views: on Climategate, on Copenhagen, and one answering his critics (among others). I don't have either the time or the inclination to document all the misconceptions, false statements and fallacies Vegter makes in his columns, but I do want to hold forth on intellectual arrogance and share an analogy I put to him regarding expertise. Vegter, you see, is a journalist: he not only has no training in climate science, he has no scientific training at all. He is, apparently, "reasonably familiar with scientific subjects, and ha[s] read...a great deal on the subject of climate change." (Though this does not include, at least from what I can see, reading the scientific literature like respected science journalists do - cf. Carl Zimmer and Ed Yong). Oddly, he claims "no particular expertise, nor qualifications to challenge professional scientists on matters of high science" but then does exactly that when taking sides between denialists (often non-climate scientists) and proper climate scientists.
As regular readers will know (I keep reminding you...) I recently wrote a long piece praising intellectual deference. The key premises of my argument there were (1) that the universe is hypercomplicated and (2) that intellectual rigor demands robust epistemic justification before forming an opinion. My conclusion was that, given (1), (2) is extremely difficult to fulfill. Intellectual responsibility, then, demands either agnosticism or deference to expert consensus when (2) is not fulfilled. I think it should be obvious that climate science is in fact preposterously complicated. To make up your own mind, all by yourself, about whether human activities cause global climate change, you would need to understand, at a minimum and as far as this particular non-expert can tell, the relevant physical and chemical processes that happen in the atmosphere and how they change, the history of and variations in climatic conditions, the extent and history of anthropogenic greenhouse gasses, and the mathematics that underlie climate models. To form justified opinions about these, in turn, you’d have to know and understand, including all the technical details, a vast body of literature about things like dendroclimatology, meridional overturning circulation, the net radiative forcing of anthropogenic aerosols, and the quasi-biennial oscillation.
Vegter, as far as I know at least, understands none of the above, let alone at an expert level. Yet he espouses an opinion contrary to scientific consensus. (While there is plenty we don't know, the vast majority of the relevant experts agree climate change is anthropogenic). Without properly understanding the methods of science, he claims, absurdly, that "Climategate" shows there is a warmist conspiracy. He adjudicates arguments between scientists and denialists - usually in favor of the latter. All of this, I think, is irresponsible and arrogant in the extreme. As I pointed out in my deference piece:
It is extraordinarily arrogant to have (independent) opinions on complex questions without being willing to pay your dues first – that is, without studying the question for years, reading the scholarly literature, mastering the relevant techniques and mathematics, and so on. Thinking you are entitled to an opinion without paying your dues is the very epitome of intellectual arrogance. And it is especially arrogant – mind-bogglingly so – for a non-expert to have opinions that contradict the consensus of the tens of thousands of intelligent, diligent and dedicated people who have spent decades studying, debating, doing research on and thinking deeply about their respective disciplines. The bottom line: be an expert, defer, or suspend judgment.
There is, I've noticed, an odd inconsistency in lay arrogance about science. I doubt very much Vegter would grab the knife away from a neurosurgeon and start cutting away himself. I doubt Vegter would tell an engineer she's Doing It Wrong and redraw her plans for a bridge. And yet Vegter seems perfectly willing to yell 'bunkum!', 'conspiracy!' and 'fraud!' when it comes to climate science, despite the fact that the topic is arguably much more complicated and the stakes several orders of magnitude higher. Why not watch MegaStructures on National Geographic and start an engineering firm? Why not browse through Gray's Anatomy, read a bit of Oliver Sacks and then branch out to neurosurgery? He's done neither physics nor physiology, but seems willing to advise climate scientists on the strength of his ignorance of the former, yet not neurosurgeons on the strength of his ignorance of the latter. Why?
Quotes: Clifford on belief
I posted some quotes from WK Clifford's "The Ethics of Belief" a while back. Here are some more. (I'm not saying I endorse all of these - he's far too strong in places. Still, I like the sentiment, and the prose is fun).
"No simplicity of mind, no obscurity of station, can escape the universal duty of questioning all that we believe."
"If I let myself believe anything on insufficient evidence, there may be no great harm done by the mere belief; it may be true after all, or I may never have occasion to exhibit it in outward acts. But I cannot help doing this great wrong towards Man, that I make myself credulous. The danger to society is not merely that it should believe wrong things, though that is great enough; but that it should become credulous, and lose the habit of testing things and inquiring into them; for then it must sink back into savagery."
"The credulous man is father to the liar and the cheat; he lives in the bosom of this his family, and it is no marvel if he should become even as they are. So closely are our duties knit together, that whoso shall keep the whole law, and yet offend in one point, he is guilty of all."
"'But,' says one, 'I am a busy man; I have no time for the long course of study which would be necessary to make me in any degree a competent judge of certain questions, or even able to understand the nature of the arguments.' Then he should have no time to believe."
"It is wrong in all cases to believe on insufficient evidence; and where it is presumption to doubt and to investigate, there it is worse than presumption to believe."
Friday, February 19, 2010
Skeptics' Circle #130
I've not pointed to Skeptics' Circles in a while (naughty, I know). So here's the latest: the 130th edition of the Skeptics' Circle is up over at The Lay Scientist. Posts to check out: Martin on the 10:23 mass homeopathy overdose, Andrea's Buzzing About on the false distinction between 'natural' and 'artificial' chemicals, and Dubito Ergo Sum on Mike Adams' (of the quackery site Natural News) stupid views about skeptics. My contribution to this edition was "In Praise of Deference".
Andreas Moritz is (also) a quack
So I wrote yesterday about one Christopher Maloney, a naturopathy quack. It turns out he was not responsible for getting Michael Hawkins' blog shut down; the villain responsible is Andreas Moritz, a cancer quack. PZ has posted the letter Moritz sent to Hawkins, in which he threatens to sue for defamation. Yeah, right. Smart move - now the whole internets will be on your case. Streisand effect FTW.
See: Orac's takedown of Moritz, published long before this whole snafu.
Update: Orac has taken on Andreas Moritz again.
Thursday, February 18, 2010
Christopher Maloney is a quack
Now this is rather shocking. Wordpress has shut down the blog of a University of Maine student, Michael Hawkins, for having the temerity to criticize a naturopathic quack. Luckily, PZ Myers has unleashed the power of the Streisand effect. So let's just be clear. Naturopathy is bullshit. And Christopher Maloney is a quack. And... Shame on Wordpress.
Update: the quack in question has written to PZ Myers to complain, and says it wasn't him who contacted Wordpress to shut down Hawkins' blog. Also: see Steven Novella's take on the incident.
Update 2: Andreas Moritz, a cancer quack, is actually responsible for taking Hawkins' blog down. See this.
Tuesday, February 16, 2010
Quote: Clifford on the Ethics of Belief
I'm in the process of editing my piece on deferring to experts for publication somewhere (maybe Skeptical Inquirer), so I've been doing a bit more reading in the area. I just remembered William Clifford's famous 1877 essay, "The Ethics of Belief" (a nice pdf version is here). It's well worth a read, if only for its purple prose and hardnosed conclusion: "It is wrong always, everywhere, and for anyone, to believe anything upon insufficient evidence." Another nice quote:
Belief, that sacred faculty which prompts the decisions of our will, and knits into harmonious working all the compacted energies of our being, is ours not for ourselves but for humanity. It is rightly used on truths which have been established by long experience and waiting toil, and which have stood in the fierce light of free and fearless questioning. Then it helps to bind men together, and to strengthen and direct their common action.
It is desecrated when given to unproved and unquestioned statements, for the solace and private pleasure of the believer; to add a tinsel splendour to the plain straight road of our life and display a bright mirage beyond it; or even to drown the common sorrows of our kind by a self-deception which allows them not only to cast down, but also to degrade us. Whoso would deserve well of his fellows in this matter will guard the purity of his beliefs with a very fanaticism of jealous care, lest at any time it should rest on an unworthy object, and catch a stain which can never be wiped away.
Saturday, February 13, 2010
English Libel Law Reform
I have written repeatedly about the urgent need for England to reform its libel laws (here is why). The Libel Reform Campaign has been doing a great job in pushing this issue onto the agenda, and one good way to help, urged on supporters recently by the long-suffering Simon Singh, is to sign the reform petition. Please do so if you haven't already...
Thursday, February 11, 2010
Nature on South African science
Today, February 11th, 2010, is the 20th anniversary of Nelson Mandela's release from prison. Nature has taken this opportunity to publish two articles about the state of science in South Africa: a news feature entitled "South African science: black, white and grey" and an editorial "South Africa's opportunity".
The news feature by Michael Cherry is the most interesting: it focuses on the funding and educational challenges in South African science. Writes Cherry:
Lack of strong science leadership, a dearth of funds and a series of well-intentioned but poorly executed schemes have left most of those hopes unrealized. In 1994, the Mandela government established a ministry of science, technology, arts and culture; later, under Mandela's successor, Thabo Mbeki, the Department of Science and Technology (DST) split off as a separate ministry. But its spending priorities have been questioned, its efforts to boost student numbers have failed to live up to expectations, and beyond that, many scars of the racially segregated education system remain.
Particular problems identified include a lack of substantial funding for basic research and too much focus on applied and prestige projects (like SALT and the SKA). Additionally, South Africa has yet to reach its 1% of GDP target for R&D spend, and the system of awarding (meager) block funding in terms of how researchers are 'rated' by the National Research Foundation (NRF) seems rather silly. (It has not helped that the Ministry of Science and Technology has long been handed to minor political partners of the ANC-led government, which meant science lacked influential advocates in government. Luckily, Naledi Pandor, the new minister, is an ANC bigwig).
The most serious problem, however, is South Africa's dysfunctional education system, which is simply not producing enough capable students. Of particular concern is that this problem is especially acute for black students, largely due to a lack of qualified teachers and the legacy of Bantu education (which deliberately provided an inferior education). What is worse, retaining graduates (especially black graduates) is difficult, because more lucrative careers draw them away from academia and research. Cherry sums up these problems thus:
Stark racial differences in participation rates still exist across the board in higher education: in 2006, 59% of white and 42% of Indian 18–24-year-olds — but only 13% of coloureds and 12% of black Africans — were engaged in tertiary education. And although the NRF is justifiably proud that more than half the doctoral students it supported in 2008 were black, it declined to disclose how many of these were in science and engineering. Many black schools in the apartheid era did not offer mathematics and physical science as subjects, initially as a point of policy, and latterly on account of a teacher shortage. But the sad reality is that after almost 16 years of democracy, the proportion of black and white school leavers attaining good enough grades in these subjects to qualify for university courses in science has not risen significantly, in part because efforts to train and recruit schoolteachers in these subjects have failed.
The editorial I referred to above strikes a more positive note and suggests reforms:
However, there is reason for optimism. The South African research community's long alienation from the government, which emanated largely from former president Thabo Mbeki's denialist stand on AIDS, is at last a thing of the past. And in May 2009, newly elected president Jacob Zuma appointed Naledi Pandor as his minister of science and technology. Her role as education minister in the previous cabinet has given Pandor a firm grasp of the problems facing science in South Africa. And because she is the first incumbent at her ministry to be a member of the ruling African National Congress, she has the requisite clout to effect reform.
[...]
The best way to recruit good teachers and academics is by offering much better salaries, decent working conditions and good facilities. But this will require significant financial commitment by the government, as well as cooperation between the departments of education and higher education. Nonetheless, as South Africa emerges from the recession this year, it would be one of the wisest investments Zuma's government could make. Perhaps the National Planning Commission, which is headed by the highly regarded former finance minister Trevor Manuel, could provide a mechanism by which such a huge task might be achieved.
For all of its problems in science, South Africa has a solid and productive core of university-based researchers. And since the end of apartheid, the country's universities have been enriched by significant numbers of students from other African countries. South Africa thus has the potential to become not just a major player on the international research stage, but also a catalyst for the development of science throughout the continent. There is a huge pool of talent waiting to be tapped, and it is up to Zuma, Pandor and other political leaders to put in place the money and systems with which to tap it.
Fun with Buzz
Now, I'm not nearly as much of a Google fanboi as Owen Swart, but I'm a fanboi nonetheless. So I was unreasonably excited about the launch of Buzz, Google's latest social web product. I've played around with it a bit, and, essentially, it's Twitter + FriendFeed, nicely integrated into many of Google's products, and especially Gmail. If you're not subscribed to my mini-blog (which, for the uninitiated, is a stream of links I share via Google Reader), I suggest having a look at my Buzz profile. If you have a Google Account (and you should anyway, if only for Gmail) you can "follow" me from there, which will then stream my blog posts and shared items straight into the Buzz section of your inbox. (I'm not quite sure how the mechanics work yet, but I think this is how it will go. Buzz is also supposed to PageRank items, so whether it filters stories from people you follow, or only 'recommends' extra items based on a ranking system, I don't know).
We'll have to wait and see whether Buzz takes off or not, but it certainly looks promising. My major concern at this point is duplication. Say you're (1) following me in Google Reader and (2) subscribed to my blog, and now, (3), decide to "follow" me in Buzz. When you scroll through my shared items in Reader, will you again have to do so in Buzz? What about vice versa? And what about my blog RSS and Buzz? Will reading it in Buzz mark it as read in Reader? I have seen that a "like" in Buzz transfers (a while later) to Reader, so hopefully Google have found a way of avoiding such annoying possible duplications.
Anyway, enough off-topic fanboi ranting from me...
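For what it's worth, the obvious fix is for the reading apps to deduplicate by some canonical item id across streams. A minimal sketch of the idea (pure speculation on my part - this is not Google's actual implementation, and merge_streams is a hypothetical helper):

```python
def merge_streams(*streams):
    # Keep the first occurrence of each item, keyed on a canonical id
    # (e.g. the feed entry's GUID or URL), so the same shared post never
    # surfaces twice across Reader, Buzz and the blog's RSS.
    seen_ids = set()
    merged = []
    for stream in streams:
        for item in stream:
            if item["id"] not in seen_ids:
                seen_ids.add(item["id"])
                merged.append(item)
    return merged

reader_items = [{"id": "post-42", "title": "My blog post"}]
buzz_items   = [{"id": "post-42", "title": "My blog post"},
                {"id": "buzz-7",  "title": "A Buzz-only share"}]
print([i["title"] for i in merge_streams(reader_items, buzz_items)])
# -> ['My blog post', 'A Buzz-only share']
```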
See also: Ten Pressing Questions about Google Buzz at PC Magazine, the Wikipedia entry on Buzz (which will quickly evolve into something useful and informative) and ReadWriteWeb on Buzz's missing features.
Wednesday, February 10, 2010
Quotes: Galileo and Darwin
While writing my piece on intellectual deference, I came across a lot of awesome related quotes. Two of my favorites didn't make it into the final version of that post. These are they:
"The less people know and understand about [matters requiring thought], the more positively they attempt to argue concerning them, while… to know and understand a multitude of things renders men cautious in passing judgment upon anything new." - Galileo
"Ignorance more frequently begets confidence than does knowledge: it is those who know little, and not those who know much, who so positively assert that this or that problem will never be solved by science." - Charles Darwin
Tuesday, February 9, 2010
Calling Africa's science nerds
In light of all of our problems - poverty, witch hunts, anti-vaccinationism, quackery, religious obscurantism of various kinds, and so on - it has long seemed obvious to me that Africa badly needs skepticism, science, logic and reason. The great Sir Francis Bacon wrote in the Novum Organum that:
Human knowledge and human power meet in one; for where the cause is not known the effect cannot be produced. Nature to be commanded must be obeyed; and that which in contemplation is as the cause is in operation as the rule.
Knowledge, in the words of the popular corruption, is power. Achieving our ends depends (at least in part) on our understanding of how the world works. But, as Bacon also pointed out, (1) the world is exceedingly complicated ("the subtlety of nature is greater many times over than the subtlety of the senses and understanding") and (2) the human mind is prone to error ("for the mind of man is far from the nature of a clear and equal glass, wherein the beams of things should reflect according to their true incidence, nay, it is rather like an enchanted glass, full of superstition and imposture"). Making sensible decisions in a complex world, then, depends (in part) on us applying science to our problems.
Science, however, is not merely a matter of 'applications', not only relevant to policy makers, and certainly not only a way of fostering economic development. Or, to again borrow from (and somewhat adapt) Bacon, scientists are not merely concerned with "the relief of man's estate", they are also "merchants of light". Scientific and skeptical thinking - the commitment to submit all ideas (especially our own) to severe critical scrutiny, keep an open mind, aim at unified knowledge, resist obscurantism, and rely on reason and experimentation (among other things) - is the only reliable way of answering the deep questions of our origins, place in the universe, and ultimate fate. To understand the universe and ourselves, in short, we need to apply the 'technology of truth': science.
Africa, then, needs skeptical, reasoned, and scientific voices, not only to foster development and growth, but to serve as merchants of light: to hold out a candle in the dark in a demon-haunted world. It is for this reason that I have long been trying to organize, promote and otherwise advance the skeptical/scientific blogging community in South Africa, and latterly Africa as a whole. So if you are an African skeptical or scientific blogger (or know of such bloggers) please contact me on ionian.enchantment@gmail.com. Participate in our carnival, post and get listed on our blogroll, and join our email discussion group. And, of course, if you have a blog, keep up the good work! If you don't, start one!
I'll give the final word to E. O. Wilson (who I quoted in the very first post on my blog, and who gave me the idea for its name):
Such, I believe, is the source of the Ionian Enchantment. Preferring a search for objective reality over revelation is another way of satisfying religious hunger. It is an endeavor almost as old as civilization and intertwined with traditional religion, but it follows a very different course – a stoic’s creed, an acquired taste, a guidebook to adventure plotted across rough terrain. It aims to save the spirit, not by surrender but by liberation of the human mind. Its central tenet, as Einstein knew, is the unification of knowledge. When we have unified enough certain knowledge, we will understand who we are and why we are here. (Consilience: The Unity of Knowledge, p. 7)
Labels:
Africa,
Critical Thinking,
Skepticism,
South Africa
Friday, February 5, 2010
Times Online's Top 30 Science Blogs
Note: as PZ Myers points out, the list includes one Anthony Watts, a crank global warming denialist. That's rather dumb of The Times.
Times Online has released their choice of the 30 best science blogs. Several of my favorite blogs got the nod, including Carl Zimmer's The Loom, Vaughan Bell's Mind Hacks, Ed Yong's Not Exactly Rocket Science, Orac's Respectful Insolence, and Phil Plait's Bad Astronomy.
The picks are, I think, pretty biased towards British blogs, but maybe that's understandable. There are a couple of oversights, though: among the overlooked science blogs are Steven Novella's NeuroLogica, Mo's Neurophilosophy, David Colquhoun's Improbable Science, John Hawks' Anthropology Weblog and Science-Based Medicine. Arguably, some of these deserved a spot more than blogs that made it onto the list (*cough*Intersection*cough*).
Wednesday, February 3, 2010
In Praise of Deference
It is common to hear that you should make up your own mind and not let other people make it up for you. While I wholeheartedly agree with the sentiment and believe it’s silly to be obsequious to arbitrary authority, I nonetheless think intellectual deference is both unavoidable and a virtue. This notion may seem repugnant, an affront to liberal values, and perhaps even unnecessary, so I’ll defend it in some detail. (I want to stress, however, that while I’ll be defending my considered opinion, my argument for it will be sketchy – a full, precise, treatment would require far more time and space). My argument rests on (among others) the following two premises: (1) the world is staggeringly complicated and regularly in conflict with our intuitions, and (2) intellectual responsibility requires you to know what you’re talking about before you form an opinion. (The picture, by the way, is a high-resolution map of science, from this PLoS One paper).
It’s a cliché that the more you learn the more you realize how little you know, but it’s true: things are nearly always more complicated than they seem, and thus peeling away layers of ignorance usually reveals yet more complexities, uncertainties and undreamt-of intricacy. Or as Douglas Adams is supposed to have said: "the universe is a lot more complicated than you might think even if you start from a position of thinking that it’s pretty damn complicated to begin with". If you’re at all in doubt that the world really is as complex as I make out, I invite you to examine the work of those whose job it is to understand it. Have a look at the contents of any respected peer-reviewed journal, Science, say, or Nature Neuroscience or Physical Review Letters. Or, if you prefer, examine an advanced-level textbook in any explanatorily successful field such as molecular biology, astrophysics, or neuroscience. (It has to be a field that is at least partially explanatorily successful because incorrect theories can be arbitrarily simple given they need not reflect reality). Explanations of empirical phenomena almost invariably require us to employ sophisticated techniques and equipment, set aside ingrained or natural ways of thinking, invoke vast bodies of previously established knowledge, and express or establish our findings with abstruse mathematics or statistics. Given that our models and theories at least partially mirror reality, the intricacy and opacity of the former reflects the complexity and opacity of the latter. (Though there are probably exceptions, explanations tend to get more complicated – in the sense of arcane or difficult to understand – as they become more successful. There thus seems to be little reason to attribute the complexity of our theories to the incompleteness of our knowledge. Also, while successful explanations may be simple in the sense of being elegant or parsimonious, they’re extremely unlikely to be simple in the sense of easy to understand or straightforward).
My second premise is that ‘making up one’s own mind’ responsibly requires, among other things, understanding what the hell is going on before settling on an opinion. It might be news to some people, but really ‘understanding what is going on’ requires much more than ‘I have a strong inner conviction’ or ‘I once read a book on this’ or ‘someone told me so’. It requires, in general and speaking roughly, proficiency in logic and critical thinking, competence in general and domain-specific scholarship and research, knowledge of the relevant facts and how sure we are of them, mastery of the relevant techniques, a familiarity with possible alternative explanations, knowledge of at least a large proportion of the relevant literature, and more. And if (1) is true – if the universe really is extremely complicated – fulfilling these requirements is awfully demanding. Carl Sagan provided an excellent example:
But no one person could possibly understand or discover everything about a topic, I hear you protest. Exactly! That’s exactly my point. The universe is chock full of all sorts of preposterously complicated phenomena, and no one person could ever hope to understand all or even a significant proportion of it in full. No one person could single-handedly maintain a modern lifestyle (grow, harvest and process all his own food, construct his own home, build his own computer from scratch, generate his own electricity, etc.); everyone is forced to rely on the division of labor. Similarly, no one person could hope to understand everything about even tiny bits of the universe – the causes of climate change on a particular planet, say – and is thus forced to rely on the intellectual division of labor. And such a cognitive division of labor means intellectual deference. Even an expert doesn't (and can’t) know everything about her field or sometimes even her own area of specialization. It is not unheard of for scientific papers to be published where none of the authors understand all of the methods and findings. Experts defer to other experts. Experts have to defer to other experts. Deference, then, is a non-optional virtue, the only alternatives to which are agnosticism and being intellectually irresponsible. Or, to be less fancy about it, there are inevitably a large number of topics about which you can either (1) form or express unjustified opinions (i.e. be irresponsible), (2) say "I don't know" or (3) defer to the experts.
But, it may be objected, experts are sometimes wrong! Experts can be bought! Experts disagree! Indeed they are, indeed they can be and indeed they do. Experts, of course, are human beings and are therefore subject to all the familiar human failings: they are as fallible, quarrelsome, susceptible to cognitive biases and illusions, prone to social climbing, self-interested, biased, driven by ideology and whatnot as the rest of us. (Well, maybe this isn’t quite true: people self-select into science and must jump through various hoops like defending a thesis, so perhaps individuals best adapted to the ideals of science are more likely to become scientists. What is clear, though, is that scientists are not immune to these human failings). Expertise – the mastery of the techniques and in-depth knowledge of the scholarship on some subject – is not itself a huge improvement over “making up your own mind”. (By the way, I don't suggest expertise requires formal education or credentials; non-PhDs who have mastered a subject certainly still count as experts). Given the complexity of the universe and the limitations of the human mind, expertise is (for many subjects) a necessary but not sufficient condition for having justified opinions. (For one thing, it is possible to be a kind of expert in utter bollocks: there is, for example, a huge alchemical literature, complete with rival schools, arcane jargon, different techniques and so on. And don’t get me started on postmodernism). So individual expertise in some cases doesn’t seem particularly reliable (though, ceteris paribus, it’s certainly better than nothing) and deferring to an individual expert thus isn’t necessarily such a good idea. Help is at hand, however.
The scientific method, far from denying human failings like the ones I enumerated above, exists exactly because of them: it is because the human mind is so prone to error and bias that we need this vast, expensive and seemingly inefficient set of institutions, norms and practices we call “science”. Science, roughly and to first approximation, is a collaborative enterprise aimed at a unified description and explanation of natural phenomena where the ultimate arbiter of truth is empirical experimentation, the reliability and quality of which is evaluated by a community of scholars through peer-review and replication. A scientist, then, is a person who attempts to describe and explain the natural world by testing empirical hypotheses in collaboration with a group of other researchers by reviewing their work, and producing work that is in turn reviewed. Convincing your peers – who will criticize your ideas harshly and subject them to industrial-strength skepticism – by publishing in peer-reviewed journals, presenting papers at conferences, and, more informally, debating in seminars and pubs, is at the very heart of science. (The mark of a crank is not being embedded in such a system of cooperation, dismissing criticism as some conspiracy or another, and claiming the mantle of Galileo). The point of this collaborative enterprise is to minimize bias: an individual wants her ideas to be true, is limited by peculiar psychological traits and a particular background, knows only some fraction of the relevant facts, and suffers from a whole menagerie of other cognitive biases and illusions. A community dedicated to collaboration (and competition) – whose members aim at rigorous explanation and consensus, and who agree on the primacy of empirical demonstration – can overcome many (though obviously not all) of these biases because, in a sense, one individual’s biases cancel out another individual’s biases. Manipulating the world in such a way as to hold certain variables constant while varying others – i.e. doing controlled experiments – is the most powerful technique ever invented to discover nature’s secrets (Daniel Dennett aptly called it the “technology of truth”), and having an entire social system (complete with attending values) and a supporting set of institutions (universities, granting agencies, journals, professional organizations etc.) multiplies these powers by minimizing human failings in interpreting and conducting the experiments. Obviously, science is not perfect, but because of how it is organized – and crucially, because there is a way of falsifying hypotheses – it is a self-correcting process: explanations are tested, discarded and repeatedly refined, which then slowly ratchets our theories closer to the truth over time. (If you have doubts about the success of science, please stop reading, apply head to desk and repeat until you come to your senses).
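To make the bias-cancelling intuition concrete, here's a minimal simulation (in Python, with made-up numbers, so a cartoon rather than a model of real science): give every 'researcher' her own idiosyncratic bias plus some noise, and the community's average lands far closer to the truth than a typical individual does.

```python
import random

random.seed(1)
TRUE_VALUE = 100.0
COMMUNITY_SIZE = 1000

estimates = []
for _ in range(COMMUNITY_SIZE):
    personal_bias = random.gauss(0, 5)   # idiosyncratic: differs from person to person
    noise = random.gauss(0, 2)           # ordinary measurement error
    estimates.append(TRUE_VALUE + personal_bias + noise)

consensus = sum(estimates) / COMMUNITY_SIZE
typical_error = sum(abs(e - TRUE_VALUE) for e in estimates) / COMMUNITY_SIZE
print(f"Typical individual error: {typical_error:.2f}")
print(f"Error of the community average: {abs(consensus - TRUE_VALUE):.2f}")
# Caveat: the cancellation works only because the biases here are independent
# and centered on zero; a shared, systematic bias would survive the averaging -
# which is why science also needs experiments, not just head-counting.
```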
So why is this whole story about science relevant in a post about deference? Simple: because when the relevant experts agree that some theory or explanation is correct, you can be reasonably confident that the theory is in fact correct. In other words, given the nature of the scientific method – given that claims are peer-reviewed, subject to intense scrutiny, tested, re-tested, refined, and so on – when there is a consensus among the relevant experts, it is reasonable to believe they are right. Of course and again, experts are people, so you can’t be certain the theory or proposition is true just because there is consensus; you can only be rather confident. At a minimum, individuals who disagree with the consensus have the burden of proof – they must show it is false; the majority does not need to refute the alternatives. (Though they often do). Since showing some consensus theory in science is false (or incomplete) is going to be extremely difficult, those who wish to disagree damn well better be experts themselves. (Consensus theories are of course sometimes overturned: witness plate tectonics). In general laypeople are not qualified to have independent opinions about complex topics – they lack the means to come to justified beliefs – and it is especially unreasonable for a non-expert to take a stance contrary to consensus. The upshot is that, firstly, it is reasonable to defer to the consensus opinion of the relevant experts – so I can justifiably say ‘E=mc2’, ‘DNA carries heritable information’, ‘there is a supermassive black hole at the center of the Milky Way’, etc. And, secondly, a layperson who disagrees with expert consensus – denying evolution by natural selection, anthropogenic global warming, that the Earth is about 4.5 billion years old, etc. – is unreasonable in the extreme. Experts get to have opinions on scientifically controversial questions, experts get to disagree with consensus; laypeople get to defer to consensus or reserve judgment. Doing otherwise, I think, shows what Bertrand Russell called (in another context) an “impertinent insolence toward the universe”. Scientists are often accused of arrogance and maybe I’ll be accused of this vice as well for telling laypeople who disagree to shut up. But I think the opposite is true. It is extraordinarily arrogant to have (independent) opinions on complex questions without being willing to pay your dues first – that is, without studying the question for years, reading the scholarly literature, mastering the relevant techniques and mathematics, and so on. Thinking you are entitled to an opinion without paying your dues is the very epitome of intellectual arrogance. And it is especially arrogant – mind-bogglingly so – for a non-expert to have opinions that contradict the consensus of the tens of thousands of intelligent, diligent and dedicated people who have spent decades studying, debating, doing research on and thinking deeply about their respective disciplines. The bottom line: be an expert, defer, or suspend judgment. (To be clear: I’m making an epistemic and not a political claim. People have a right of free speech and conscience, so they can form and express any opinion they like. But that doesn’t mean they have an intellectual warrant to do so).
To be sure, there are a whole bunch of complications here (how very appropriate, no?). For one, scientists are not the only people worth deferring to: if there is a consensus among plumbers, for example, that the best way to fix problem P is to do X, Y and Z, I’d be inclined to say that’s quite reliable. Nevertheless, while there are other groups (‘communities of practice’, etc.) that can reasonably be deferred to, for my purposes it is simply worth noting that scientists are one such group. Secondly, there are certainly degrees of expertise and deciding when someone has crossed the threshold to expert status is fraught with difficulty (though beware the false continuum). More problematical is the question of what topics are ‘sufficiently complicated’ that laypeople shouldn’t have independent opinions. Saying, for example, that only psychologists are qualified to determine whether Bob has a crush on Tamba is preposterous. The (partial) solution here, I think, is to invoke Richard Dawkins’ notion of the “Middle World” and to distinguish between explicit and implicit knowledge. Let's start with the former. The human mind, Dawkins convincingly argues, evolved to deal with and understand the everyday world we inhabit: of medium-sized objects, operating at low velocities, including animals and other people. “Folk biology”, “folk psychology”, and “folk physics”, for example, are regularly wrong in detail (sometimes spectacularly so), but they are often reliable when in our ‘natural environment’. The fact that we can, say, play football (which requires sophisticated ballistics), navigate a cluttered room (which requires sophisticated optics and physics), and cooperate and compete with other humans (which requires a complex theory of mind) and so on, shows we are far from cognitively incompetent. The human mind is (largely) good at solving the problems we encountered often in our evolutionary past: it is good, in other words, at Middle-World problems. But our ancient ancestors never traveled near the speed of light, never lived in large-scale complex societies, never interacted directly with the quantum world, never needed to understand the nature of stars, and so on. On certain topics, then, laypeople are reliably (though almost always incompletely) competent. There is a fundamental difference, in other words, between the statements “Bob is upset at Mary for cheating on him with John” and “E equals mc2”: the mind evolved to deal with the former, but not the latter. Important, also, is the difference between implicit knowledge (or behavioral competence) and explicit knowledge (i.e. justified true belief, with some modifications). While being a football quarterback, say, requires a brain capable of solving complex physics problems, this does not mean football players explicitly understand the relevant physics. When I move my arm to pick up my cup of coffee, my brain does damn complicated trigonometry, but I don't know that trigonometry explicitly - my brain's calculations are not consciously accessible to me. What this means is that behavioral competence or implicit knowledge in some domain (seeing, interacting with people and non-human animals, walking about) does not imply explicit knowledge of the underlying science. (A slam dunk argument for this, by the way, is our inability to build robots even remotely as competent as we are).
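To see how much mathematics hides in a simple reach, here is the textbook toy version made explicit: inverse kinematics for a two-joint planar 'arm' (my own illustration, and a vast simplification of whatever brains actually do implicitly).

```python
import math

def reach(x, y, upper_arm=0.3, forearm=0.3):
    # Return (shoulder, elbow) angles, in radians, that place the 'hand' at (x, y).
    d_squared = x * x + y * y
    # The law of cosines gives the elbow angle...
    cos_elbow = (d_squared - upper_arm**2 - forearm**2) / (2 * upper_arm * forearm)
    if not -1.0 <= cos_elbow <= 1.0:
        raise ValueError("target out of reach")
    elbow = math.acos(cos_elbow)
    # ...and the shoulder angle follows from the direction to the target.
    shoulder = math.atan2(y, x) - math.atan2(forearm * math.sin(elbow),
                                             upper_arm + forearm * math.cos(elbow))
    return shoulder, elbow

s, e = reach(0.4, 0.2)
print(f"shoulder: {math.degrees(s):.1f} degrees, elbow: {math.degrees(e):.1f} degrees")
```

None of this, of course, is consciously accessible when you actually reach for the coffee - which is precisely the point.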
There are several more complications but I'll only mention one more. It is regularly extremely difficult to determine whether there is a scientific consensus on some topic and, if so, what it actually is, especially so when ideologically committed pseudoscientists muddy the waters. For example, the overwhelming majority of the relevant experts agree that evolution by natural selection – the fact of evolution and the theory of natural selection (etc.) – is established beyond all reasonable doubt. Creationists, however, have tried to argue there is no such consensus and have even compiled lists of scientists who 'disagree with evolution' (c.f. Project Steve). Laypeople who do not understand the scientific method might see two sides 'debating' and have real difficulty figuring out who to believe. They might not realize, for example, that scientific consensus is not about lists of people who agree or disagree, that only the relevant experts are important (engineers who 'disagree with evolution' qua engineers tell us nothing), that no paper critical of evolution has appeared in a mainstream peer-reviewed journal (save by fraud) for several decades, and so on. Even a layperson convinced it is important to defer to scientific consensus, then, will sometimes have real trouble determining whether there is a consensus and what the consensus actually is. There are two ways of dealing with this problem, I think. The first is to employ the much underused phrase "I don't know". Agnosticism isn't particularly popular, but I think openly admitting what you do and do not know is one of the most important intellectual virtues. So when you can't figure out what the consensus is (or whether there is one), it doesn't suddenly become reasonable to form opinions in the absence of knowledge; agnosticism is then the reasonable course. The second answer to the above problem is having certain metacognitive skills - an understanding of the scientific method and the academic process, familiarity with cognitive biases, skepticism and an ability to assign onus appropriately, finely-honed critical reasoning skills, a basic understanding of statistics, and so on - that are useful for evaluating any claim. While these are not sufficient to understand the details of any area of science, it allows for a 'popular level' grasp of the field, which in turn enables one to identify what the findings are (and some of the reasons why they're established), and determine, with some (hard) work, whether there is a consensus on some question.
"The fundamental cause of the trouble," wrote Russell, "is that in the modern world the stupid are cocksure while the intelligent are full of doubt". While dividing the world into 'the stupid' and 'the intelligent' is probably going too far, I think Russell is on to something: it is those who are ignorant of science who are certain they're right - even when they're not. The Dunning-Kruger effect suggests why: the intellectually unskilled lack the intellectual skills needed to recognize that they are unskilled. They are, in other words, unskilled and unaware of it. Dunning and Kruger also showed, however, that people could be trained to become somewhat more competent, which then allows them to recognize the depth of their incompetence. What I have shown in this post, I hope, is that, in a sense, we are all cognitively incompetent relative to the stupendous complexity of the universe. It is science (or, more broadly, the project of secular reason) that holds out a candle in the dark: we have uncovered nature's secrets only because we invented this 'technology of truth' and those who wish to advance our knowledge or understand a particular phenomenon deeply must approach it humbly and pay their dues in long and intensive study. Those of us who have not paid our dues in a particular field can only defer to those who have or remain agnostic. There is no reasonable alternative.
The last word goes to the great Bertrand Russell:
It’s a cliché that the more you learn the more you realize how little you know, but it’s true: things are nearly always more complicated than they seem, and thus peeling away layers of ignorance usually reveals yet more complexities, uncertainties and undreamt-of intricacy. Or as Douglas Adams is supposed to have said: "the universe is a lot more complicated than you might think even if you start from a position of thinking that it’s pretty damn complicated to begin with". If you’re at all in doubt that the world really is as complex as I make out, I invite you to examine the work of those whose job it is to understand it. Have a look at the contents of any respected peer-reviewed journal – Science, say, or Nature Neuroscience or Physical Review Letters. Or, if you prefer, examine an advanced-level textbook in any explanatorily successful field such as molecular biology, astrophysics, or neuroscience. (It has to be a field that is at least partially explanatorily successful because incorrect theories can be arbitrarily simple, given that they need not reflect reality). Explanations of empirical phenomena almost invariably require us to employ sophisticated techniques and equipment, set aside ingrained or natural ways of thinking, invoke vast bodies of previously established knowledge, and express or establish our findings with abstruse mathematics or statistics. Given that our models and theories at least partially mirror reality, the intricacy and opacity of the former reflect the complexity and opacity of the latter. (Though there are probably exceptions, explanations tend to get more complicated – in the sense of arcane or difficult to understand – as they become more successful. There thus seems to be little reason to attribute the complexity of our theories to the incompleteness of our knowledge. Also, while successful explanations may be simple in the sense of being elegant or parsimonious, they’re extremely unlikely to be simple in the sense of easy to understand or straightforward).
My second premise is that ‘making up one’s own mind’ responsibly requires, among other things, understanding what the hell is going on before settling on an opinion. It might be news to some people, but really ‘understanding what is going on’ requires much more than ‘I have a strong inner conviction’ or ‘I once read a book on this’ or ‘someone told me so’. It requires, in general and speaking roughly, proficiency in logic and critical thinking, competence in general and domain-specific scholarship and research, knowledge of the relevant facts and how sure we are of them, mastery of the relevant techniques, a familiarity with possible alternative explanations, knowledge of at least a large proportion of the relevant literature, and more. And if (1) is true – if the universe really is extremely complicated – fulfilling these requirements is awfully demanding. Carl Sagan provided an excellent example:
Imagine you seriously want to understand what quantum mechanics is about. There is a mathematical underpinning that you must first acquire, mastery of each mathematical subdiscipline leading you to the threshold of the next. In turn you must learn arithmetic, Euclidian geometry, high school algebra, differential and integral calculus, ordinary and partial differential equations, vector calculus, certain special functions of mathematical physics, matrix algebra, and group theory. For most physics students, this might occupy them from, say, third grade to early graduate school – roughly 15 years. Such a course of study does not actually involve learning any quantum mechanics, but merely establishing the mathematical framework required to approach it deeply. (The Demon-Haunted World, p. 249)
In other words, before you can hope to begin to understand quantum mechanics, you need to master a vast body of often-difficult mathematics even before learning masses of mind-bending physics. Sagan goes so far as to say there are no successful popularizations of quantum mechanics: because it is so in conflict with our intuitions, dealing with it mathematically is our only option. If you think you understand quantum mechanics without having mastered the mathematics, in short, you’re confused about what the word ‘understand’ means.
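To give a flavour of what Sagan means – purely as an illustration of mine, not something he cites – here is the time-dependent Schrödinger equation for a single particle in a potential V, one of the first equations a student of quantum mechanics meets. Merely reading it already presupposes complex numbers, vector calculus and partial differential equations:

\[ i\hbar\,\frac{\partial \psi(\mathbf{x},t)}{\partial t} = -\frac{\hbar^2}{2m}\nabla^2 \psi(\mathbf{x},t) + V(\mathbf{x})\,\psi(\mathbf{x},t) \]

And actually solving it for anything beyond a handful of textbook potentials requires the special functions and matrix methods on Sagan's list.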
But no one person could possibly understand or discover everything about a topic, I hear you protest. Exactly! That is precisely my point. The universe is chock full of all sorts of preposterously complicated phenomena, and no one person could ever hope to understand all or even a significant proportion of it in full. No one person could single-handedly maintain a modern lifestyle (grow, harvest and process all his own food, construct his own home, build his own computer from scratch, generate his own electricity, etc.); everyone is forced to rely on the division of labor. Similarly, no one person could hope to understand everything about even tiny bits of the universe – the causes of climate change on a particular planet, say – and so is forced to rely on the intellectual division of labor. And such a cognitive division of labor means intellectual deference. Even an expert doesn't (and can’t) know everything about her field or sometimes even her own area of specialization. It is not unheard of for scientific papers to be published where none of the authors understands all of the methods and findings. Experts defer to other experts. Experts have to defer to other experts. Deference, then, is a non-optional virtue; the only alternatives to it are agnosticism and intellectual irresponsibility. Or, to be less fancy about it, there are inevitably a large number of topics about which you can (1) form or express unjustified opinions (i.e. be irresponsible), (2) say "I don't know" or (3) defer to the experts.
But, it may be objected, experts are sometimes wrong! Experts can be bought! Experts disagree! Indeed they are, indeed they can be, and indeed they do. Experts, of course, are human beings and are therefore subject to all the familiar human failings: they are as fallible, quarrelsome, susceptible to cognitive biases and illusions, prone to social climbing, self-interested, biased, driven by ideology and whatnot as the rest of us. (Well, maybe this isn’t quite true: people self-select into science and must jump through various hoops like defending a thesis, so perhaps individuals best adapted to the ideals of science are more likely to become scientists. What is clear, though, is that scientists are not immune to these human failings). Expertise – the mastery of the techniques and in-depth knowledge of the scholarship on some subject – is not itself a huge improvement over “making up your own mind”. (By the way, I don't suggest expertise requires formal education or credentials; non-PhDs who have mastered a subject certainly still count as experts). Given the complexity of the universe and the limitations of the human mind, expertise is (for many subjects) a necessary but not sufficient condition for having justified opinions. (For one thing, it is possible to be a kind of expert in utter bollocks: there is, for example, a huge alchemical literature, complete with rival schools, arcane jargon, different techniques and so on. And don’t get me started on postmodernism). So individual expertise in some cases doesn’t seem particularly reliable (though, ceteris paribus, it’s certainly better than nothing), and deferring to an individual expert thus isn’t necessarily such a good idea. Help is at hand, however.
The scientific method, far from denying human failings like the ones I enumerated above, exists exactly because of them: it is because the human mind is so prone to error and bias that we need this vast, expensive and seemingly inefficient set of institutions, norms and practices we call “science”. Science, roughly and to a first approximation, is a collaborative enterprise aimed at a unified description and explanation of natural phenomena where the ultimate arbiter of truth is empirical experimentation, the reliability and quality of which is evaluated by a community of scholars through peer review and replication. A scientist, then, is a person who attempts to describe and explain the natural world by testing empirical hypotheses in collaboration with a group of other researchers, reviewing their work and producing work that is in turn reviewed. Convincing your peers – who will criticize your ideas harshly and subject them to industrial-strength skepticism – by publishing in peer-reviewed journals, presenting papers at conferences, and, more informally, debating in seminars and pubs, is at the very heart of science. (The mark of a crank is not being embedded in such a system of cooperation, dismissing criticism as some conspiracy or another, and claiming the mantle of Galileo). The point of this collaborative enterprise is to minimize bias: an individual wants her ideas to be true, is limited by peculiar psychological traits and a particular background, knows only some fraction of the relevant facts, and suffers from a whole menagerie of other cognitive biases and illusions. A community dedicated to collaboration (and competition) – whose members aim at rigorous explanation and consensus, and who agree on the primacy of empirical demonstration – can overcome many (though obviously not all) of these biases because, in a sense, one individual’s biases cancel out another individual’s biases. Manipulating the world in such a way as to hold certain variables constant while varying others – i.e. doing controlled experiments – is the most powerful technique ever invented to discover nature’s secrets (Daniel Dennett aptly called it the “technology of truth”), and having an entire social system (complete with attendant values) and a supporting set of institutions (universities, granting agencies, journals, professional organizations, etc.) multiplies these powers by minimizing human failings in interpreting and conducting the experiments. Obviously, science is not perfect, but because of how it is organized – and crucially, because there is a way of falsifying hypotheses – it is a self-correcting process: explanations are tested, discarded and repeatedly refined, which slowly ratchets our theories closer to the truth over time. (If you have doubts about the success of science, please stop reading, apply head to desk and repeat until you come to your senses).
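To see the logic of bias cancellation, here is a toy simulation (my own sketch, not drawn from any study; all the numbers are made up). Each 'researcher' measures a true value with her own idiosyncratic systematic error plus random noise; the community's pooled estimate lands far closer to the truth than a typical individual's does:

import random

random.seed(1)

TRUE_VALUE = 42.0        # the fact of the matter (hypothetical)
N_RESEARCHERS = 1000

estimates = []
for _ in range(N_RESEARCHERS):
    personal_bias = random.gauss(0, 5)   # systematic error peculiar to this researcher
    noise = random.gauss(0, 2)           # random measurement error
    estimates.append(TRUE_VALUE + personal_bias + noise)

typical_individual_error = sum(abs(e - TRUE_VALUE) for e in estimates) / N_RESEARCHERS
community_error = abs(sum(estimates) / N_RESEARCHERS - TRUE_VALUE)

print(f"Typical individual error:     {typical_individual_error:.2f}")
print(f"Error of the pooled estimate: {community_error:.2f}")

The catch, of course, is that the cancellation only works if the biases are independent and do not all point the same way – which is precisely why science prizes independent replication and adversarial peer review.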
So why is this whole story about science relevant in a post about deference? Simple: because when the relevant experts agree that some theory or explanation is correct, you can be reasonably confident that the theory is in fact correct. In other words, given the nature of the scientific method – given that claims are peer-reviewed, subject to intense scrutiny, tested, re-tested, refined, and so on – when there is a consensus among the relevant experts, it is reasonable to believe they are right. Of course and again, experts are people, so you can’t be certain a theory or proposition is true just because there is a consensus; you can only be rather confident. At a minimum, individuals who disagree with the consensus bear the burden of proof: they must show it is false; the majority does not need to refute the alternatives. (Though they often do). Since showing that some consensus theory in science is false (or incomplete) is going to be extremely difficult, those who wish to disagree damn well better be experts themselves. (Consensus theories are of course sometimes overturned: witness plate tectonics). In general, laypeople are not qualified to have independent opinions about complex topics – they lack the means to come to justified beliefs – and it is especially unreasonable for a non-expert to take a stance contrary to consensus. The upshot is that, firstly, it is reasonable to defer to the consensus opinion of the relevant experts – so I can justifiably say ‘E = mc²’, ‘DNA carries heritable information’, ‘there is a supermassive black hole at the center of the Milky Way’, etc. And, secondly, a layperson who disagrees with expert consensus – denying evolution by natural selection, anthropogenic global warming, that the Earth is about 4.5 billion years old, etc. – is unreasonable in the extreme. Experts get to have opinions on scientifically controversial questions, and experts get to disagree with consensus; laypeople get to defer to consensus or reserve judgment. Doing otherwise, I think, shows what Bertrand Russell called (in another context) an “impertinent insolence toward the universe”. Scientists are often accused of arrogance, and maybe I’ll be accused of this vice as well for telling laypeople who disagree to shut up. But I think the opposite is true. It is extraordinarily arrogant to have (independent) opinions on complex questions without being willing to pay your dues first – that is, without studying the question for years, reading the scholarly literature, mastering the relevant techniques and mathematics, and so on. Thinking you are entitled to an opinion without paying your dues is the very epitome of intellectual arrogance. And it is especially arrogant – mind-bogglingly so – for a non-expert to hold opinions that contradict the consensus of the tens of thousands of intelligent, diligent and dedicated people who have spent decades studying, debating, doing research on and thinking deeply about their respective disciplines. The bottom line: be an expert, defer, or suspend judgment. (To be clear: I’m making an epistemic and not a political claim. People have a right of free speech and conscience, so they can form and express any opinion they like. But that doesn’t mean they have an intellectual warrant to do so).
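A back-of-the-envelope way to see why consensus deserves this extra weight is Condorcet's jury theorem (a sketch under an admittedly unrealistic assumption: that experts judge independently). If each expert individually gets a yes/no question right with probability only 0.7, the probability that a majority of them gets it right climbs rapidly with the size of the community:

from math import comb

def majority_correct(n: int, p: float) -> float:
    """Probability that a strict majority of n independent experts,
    each correct with probability p, reaches the right verdict."""
    return sum(comb(n, k) * p**k * (1 - p)**(n - k)
               for k in range(n // 2 + 1, n + 1))

for n in (1, 11, 101):
    print(f"{n:>3} experts: P(majority correct) = {majority_correct(n, 0.7):.5f}")

Real experts read each other's work, so their errors are never fully independent – but peer review, replication and methodological diversity exist partly to stop errors from becoming correlated, pushing science closer to the conditions under which the theorem holds.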
To be sure, there are a whole bunch of complications here (how very appropriate, no?). For one, scientists are not the only people worth deferring to: if there is a consensus among plumbers, for example, that the best way to fix problem P is to do X, Y and Z, I’d be inclined to say that’s quite reliable. Nevertheless, while there are other groups (‘communities of practice’, etc.) that can reasonably be deferred to, for my purposes it is simply worth noting that scientists are one such group. Secondly, there are certainly degrees of expertise, and deciding when someone has crossed the threshold to expert status is fraught with difficulty (though beware the false continuum). More problematic is the question of which topics are ‘sufficiently complicated’ that laypeople shouldn’t have independent opinions about them. Saying, for example, that only psychologists are qualified to determine whether Bob has a crush on Tamba is preposterous. The (partial) solution here, I think, is to invoke Richard Dawkins’ notion of the “Middle World” and to distinguish between explicit and implicit knowledge. Let's start with the former. The human mind, Dawkins convincingly argues, evolved to deal with and understand the everyday world we inhabit: a world of medium-sized objects, operating at low velocities, including animals and other people. “Folk biology”, “folk psychology” and “folk physics”, for example, are regularly wrong in detail (sometimes spectacularly so), but they are often reliable in our ‘natural environment’. The fact that we can, say, play football (which requires sophisticated ballistics), navigate a cluttered room (which requires sophisticated optics and physics), and cooperate and compete with other humans (which requires a complex theory of mind) shows we are far from cognitively incompetent. The human mind is (largely) good at solving the problems we encountered often in our evolutionary past: it is good, in other words, at Middle-World problems. But our ancient ancestors never traveled near the speed of light, never lived in large-scale complex societies, never interacted directly with the quantum world, never needed to understand the nature of stars, and so on. On Middle-World topics, then, laypeople are reliably (though almost always incompletely) competent; beyond Middle World, they are not. There is a fundamental difference, in other words, between the statements “Bob is upset at Mary for cheating on him with John” and “E = mc²”: the mind evolved to deal with the former, but not the latter. Important, also, is the difference between implicit knowledge (or behavioral competence) and explicit knowledge (i.e. justified true belief, with some modifications). While being a football quarterback, say, requires a brain capable of solving complex physics problems, this does not mean football players explicitly understand the relevant physics. When I move my arm to pick up my cup of coffee, my brain does damn complicated trigonometry, but I don't know that trigonometry explicitly - my brain's calculations are not consciously accessible to me. What this means is that behavioral competence or implicit knowledge in some domain (seeing, interacting with people and non-human animals, walking about) does not imply explicit knowledge of the underlying science. (A slam-dunk argument for this, by the way, is our inability to build robots even remotely as competent as we are).
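To make the implicit/explicit distinction concrete, here is a sketch (illustrative only; real motor control is vastly more complicated, and the limb lengths are made up) of one small piece of 'reaching for a cup' written out explicitly: the inverse kinematics of a planar two-joint arm, solved with the law of cosines. Your brain solves a far richer version of this problem effortlessly, yet almost nobody can state the trigonometry:

from math import acos, atan2, cos, sin, sqrt, degrees

def reach(x, y, upper=0.30, forearm=0.25):
    """Return (shoulder, elbow) angles in degrees that place the hand of a
    planar two-link arm on the target point (x, y). Lengths in meters."""
    d2 = x * x + y * y
    d = sqrt(d2)
    if d > upper + forearm or d < abs(upper - forearm):
        raise ValueError("cup is out of reach")
    # Elbow bend (0 = arm fully extended), from the law of cosines.
    cos_elbow = (d2 - upper**2 - forearm**2) / (2 * upper * forearm)
    elbow = acos(max(-1.0, min(1.0, cos_elbow)))
    # Shoulder: angle to the target, corrected for the bend at the elbow.
    shoulder = atan2(y, x) - atan2(forearm * sin(elbow), upper + forearm * cos(elbow))
    return degrees(shoulder), degrees(elbow)

print(reach(0.35, 0.25))  # a cup 35 cm forward and 25 cm up, say

Note what is still missing: grip planning, muscle dynamics, visual depth estimation, continuous feedback correction. Behavioral competence at reaching implies explicit knowledge of none of it.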
There are several more complications, but I'll mention only one more. It is often extremely difficult to determine whether there is a scientific consensus on some topic and, if so, what it actually is – especially when ideologically committed pseudoscientists muddy the waters. For example, the overwhelming majority of the relevant experts agree that evolution by natural selection – the fact of evolution and the theory of natural selection (etc.) – is established beyond all reasonable doubt. Creationists, however, have tried to argue there is no such consensus and have even compiled lists of scientists who 'disagree with evolution' (cf. Project Steve). Laypeople who do not understand the scientific method might see two sides 'debating' and have real difficulty figuring out whom to believe. They might not realize, for example, that scientific consensus is not about lists of people who agree or disagree, that only the relevant experts matter (engineers who 'disagree with evolution' qua engineers tell us nothing), that no paper critical of evolution has appeared in a mainstream peer-reviewed journal (save by fraud) for several decades, and so on. Even a layperson convinced it is important to defer to scientific consensus, then, will sometimes have real trouble determining whether there is a consensus and what the consensus actually is. There are two ways of dealing with this problem, I think. The first is to employ the much underused phrase "I don't know". Agnosticism isn't particularly popular, but I think openly admitting what you do and do not know is one of the most important intellectual virtues. When you can't figure out what the consensus is (or whether there is one), it doesn't suddenly become reasonable to form opinions in the absence of knowledge; agnosticism is then the reasonable course. The second answer is to cultivate certain metacognitive skills – an understanding of the scientific method and the academic process, familiarity with cognitive biases, skepticism and an ability to assign onus appropriately, finely-honed critical reasoning skills, a basic understanding of statistics, and so on – that are useful for evaluating any claim. While these are not sufficient to understand the details of any area of science, they allow for a 'popular level' grasp of a field, which in turn enables one to identify what the findings are (and some of the reasons why they're established) and to determine, with some (hard) work, whether there is a consensus on some question.
"The fundamental cause of the trouble," wrote Russell, "is that in the modern world the stupid are cocksure while the intelligent are full of doubt". While dividing the world into 'the stupid' and 'the intelligent' is probably going too far, I think Russell is on to something: it is those who are ignorant of science who are certain they're right - even when they're not. The Dunning-Kruger effect suggests why: the intellectually unskilled lack the intellectual skills needed to recognize that they are unskilled. They are, in other words, unskilled and unaware of it. Dunning and Kruger also showed, however, that people could be trained to become somewhat more competent, which then allows them to recognize the depth of their incompetence. What I have shown in this post, I hope, is that, in a sense, we are all cognitively incompetent relative to the stupendous complexity of the universe. It is science (or, more broadly, the project of secular reason) that holds out a candle in the dark: we have uncovered nature's secrets only because we invented this 'technology of truth' and those who wish to advance our knowledge or understand a particular phenomenon deeply must approach it humbly and pay their dues in long and intensive study. Those of us who have not paid our dues in a particular field can only defer to those who have or remain agnostic. There is no reasonable alternative.
The last word goes to the great Bertrand Russell:
The demand for certainty is one which is natural to man, but is nevertheless an intellectual vice. So long as men are not trained to withhold judgment in the absence of evidence, they will be led astray by cocksure prophets, and it is likely that their leaders will be either ignorant fanatics or dishonest charlatans. To endure uncertainty is difficult, but so are most of the other virtues.
Carnival of the Africans #13: The Zombie Edition
The lovely Angela of The Skeptic Detective has brought the Carnival of the Africans back from the dead! (She prefers to call it "The Phoenix Edition". Zombies are cooler. Evidence at right). A couple of picks: James of Acinonyx Scepticus on why playing the lotto is a really bad idea, Bongi of other things amanzi on an amazing case of a sangoma's neglected breast cancer, and Angela herself on why canola oil is not dangerous.
Tuesday, February 2, 2010
Video: Witch Trials in Africa
Reuters on witch trials in the Central African Republic. Idiocy. (Here's the direct link).
Please sign this petition against witch hunting in Africa. This evil must stop.
Telegraph Science Journalism Fail: Or, ARRRRGHHHH!!!111!
I was alerted to an absolutely daft article in the Telegraph via Derren Brown's Blog (who, disappointingly, didn't seem to notice it's daft). Basically, the article completely misrepresents a paper, "Bonobos Exhibit Delayed Development of Social Behavior and Cognition Relative to Chimpanzees", in press at Current Biology. The paper showed, roughly and among other things, that both bonobos and chimps are cooperative when they’re young, but then chimps become progressively less cooperative and more competitive with age, whereas bonobos don’t. The authors hypothesize that this may be due to pedomorphosis, that is, evolutionary changes to the developmental pattern such that juvenile characteristics persist into adulthood.
The 'science correspondent' at the Telegraph, one Richard Alleyne, however, would have you believe the researchers involved "now believe that being aggressive, intolerant and short-tempered could be a sign of a more advanced nature." How the hell Alleyne got from the paper to THAT conclusion is utterly beyond me; the researchers never even hinted at a connection between 'civilization' and their findings. Alleyne goes on to commit a bunch of science howlers: among other things, saying chimps are "more evolved" and that chimps and bonobos are monkeys (ARGH). Anyway, I was going to blog about this in more detail, but luckily Alison Campbell at BioBlog has a most excellent take-down of the article, so go there for more (and more competent) analysis.
By the way, this is not the first time Alleyne has gotten it spectacularly wrong. Ben Goldacre has exposed his breathtaking misinterpretation of climate science (which he refused to correct) and his shameful distortion of a graduate student's MSc thesis, which, he claimed, concluded that women who get raped were essentially asking for it (at least this was half-heartedly and partially corrected).
In conclusion: