Showing posts with label Research Blogging. Show all posts

Saturday, July 24, 2010

Are most experimental subjects in behavioral science WEIRD?

Note: here is a follow up post.

My supervisor David Spurrett and I have a commentary on an important paper - "The weirdest people in the world?" (pdf) - in the most recent edition of Behavioral & Brain Sciences. The authors of the paper, Canadian psychologists Joseph Henrich, Steven Heine and Ara Norenzayan, argue that most experimental subjects in the behavioral sciences are WEIRD - Western, Educated, Industrialized, Rich, and Democratic - and thus weird - not representative of most human beings. And this, if true, is a very serious problem indeed. Behavioral scientists (anthropologists, psychologists, behavioral economists and so on) are often interested in explaining the brains, minds and behavior of Homo sapiens as a species. (Some scientists, of course, are only interested in understanding specific cultures or what makes us different, but one important goal of the behavioral sciences has long been to explain universal human behavior). As evolutionary psychologists John Tooby and Leda Cosmides have put it, they "seek to characterize the universal, species-typical architecture of [the information-processing mechanisms that generate behavior]".

But... Henrich and his colleagues review a large body of literature that seems to show that, across several domains, Western undergraduates - the workhorses of the behavioral sciences - are extreme outliers. In other words, if they are correct, most of the data behavioral scientists have used to test hypotheses and to drive theorizing derives from subjects who are possibly the least suited for generalizing about the human race. Take as an example the Müller-Lyer illusion. In the diagram below, the lines labeled "a" and "b" are exactly equal in length, but many subjects perceive "b" as longer than "a".


This finding (which goes back all the way to 1889) has been used to make deductions about how the human visual system works. The Wikipedia article on the illusion, for example, states that one possible explanation for the effect is that "the visual system processes that judge depth and distance assume in general that the 'angles in' configuration corresponds to an object which is closer, and the 'angles out' configuration corresponds to an object which is far away". Plausible enough. Except that for some people - San foragers, for example - the illusion does not exist, and in many other non-WEIRD societies the effect size is significantly smaller. Henrich and his colleagues cite the work of Segall et al. (1966), who worked out the magnitude of the illusion across 16 societies by varying the relative lengths of "a" and "b" and then asking subjects to indicate when they thought the lines were equal. The percentage by which "a" must be longer than "b" before the lines are adjudged equal - what they call the "point of subjective equality" (PSE) - varies substantially between subjects from different cultures - and, importantly, WEIRD subjects are extreme outliers. The results are summarized in the following graph:


Both WEIRD adults and children (aged 5-11) require "a" to be 18%+ longer than "b" before the lines are perceived as equal, but for the San and South African miners, the illusion simply does not exist - their PSEs are not statistically distinguishable from 0. Why this difference arises is unknown, but Segall et al. claim it is due to WEIRD people's visual systems developing differently because modern environments expose them to ("unnatural") shapes like 'carpentered corners', thus calibrating their visual systems in a way that favors the emergence of the illusion. Whatever the true explanation, however, it is clear that it is not permissible to use the existence of the illusion among WEIRD subjects to make inferences about the visual system. This is especially true since the San subjects were hunter-gatherers, just like all people for the vast majority of human evolutionary history. Given that species-typical features of the visual system would have evolved in this period, it is particularly telling that PSE seems to be positively correlated with the 'modernity' of the societies in question. (Warning: this is an "eyeball" observation; I haven't done a proper statistical analysis. Caveat emptor).
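To make the PSE measure concrete, here is a minimal sketch (in Python) of the adjustment procedure just described. The numbers are invented purely for illustration - they are not Segall et al.'s actual data:

```python
# Sketch of the point-of-subjective-equality (PSE) measure: subjects
# adjust line "a" until it looks equal to a fixed line "b"; the PSE is
# the mean percentage by which "a" overshoots "b" at the match point.
# All data below are hypothetical, not the study's measurements.

def pse(matches_a, true_b):
    """Mean % by which "a" exceeds "b" at perceived equality."""
    overshoots = [100.0 * (a - true_b) / true_b for a in matches_a]
    return sum(overshoots) / len(overshoots)

# Hypothetical match settings (mm) against a fixed 100 mm "b" line.
strong_illusion = [118, 120, 122, 119]  # ~20% overshoot, WEIRD-like
no_illusion = [100, 101, 99, 100]       # PSE near 0, San-like

print(round(pse(strong_illusion, 100), 1))  # ≈ 19.8
print(round(pse(no_illusion, 100), 1))      # ≈ 0.0
```

A PSE near zero means the two configurations are seen veridically; the larger the PSE, the stronger the illusion.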

This is one example from an extremely long paper, but it conveys a flavor of the kind of evidence the authors present. (For much more, see "We agree it's WEIRD, but is it WEIRD enough?" over at Neuroanthropology). Having read the article very carefully, and despite some concerns, I think Henrich, Heine and Norenzayan are right: the Western undergraduate is often unrepresentative of humanity, and the behavioral science literature needs a lot of fixing as a result. (Most obviously, we need far more large, highly-powered, globally representative, prospectively designed, cross-cultural studies). Serious as this is, unfortunately, it gets worse... Since David and I worked extremely hard to present our argument clearly and concisely in our commentary (pdf - our piece starts on p. 44 of the pdf, paginated by BBS as p. 104), and I doubt I could improve on it, what follows is a slightly edited - simplified and somewhat de-academicized - version of the meat of our argument. (Note: each issue of BBS consists of a "target article" - in this case, Henrich et al. - and 20 or so short peer-commentaries).

Henrich et al. underplay – to the point of missing – that how the behavioural sciences research community itself is constituted introduces biases. That the subject-pool of behavioural science is so shallow is indeed a serious problem, but so is the fact that the majority of behavioural researchers are themselves deeply WEIRD. People in Western countries have, on average, a remarkably homogeneous set of values compared to the full range of worldwide variability (Inglehart & Welzel 2005), and the data Henrich and his colleagues present suggest similar population-level homogeneity in cognitive styles. Moreover, academics are more uniform than the populations from which they are drawn, so it is likely behavioral scientists are even WEIRDer than their most common subjects. Henrich and his colleagues review a bunch of studies and experiments that did not strike those who designed and conducted them as focused on outliers. Intelligent scientists acting in good faith conducted, peer-reviewed, and published this research, in many cases honestly believing that it threw light on human nature. This forcefully illustrates the power of the biases on the part of researchers themselves. It also suggests that, besides widening the pool of subjects, there are significant gains to be made by broadening the range of inputs to the scientific process, including in the conception, design, and evaluation of empirical and theoretical work. Given that diverse groups are demonstrably better at some kinds of problem solving, as things stand, the WEIRD-dominated literature is robbed of potentially worthwhile perspectives, critiques, and hypotheses that a truly global research community could provide. Clearly, simply increasing the number of behavioural sciences researchers will, in general, be beneficial.
Our key contention, though, is that the marginal benefits of additional Western researchers are much smaller than the marginal benefits of more non-Western researchers, among other things, just because they are non-Western.

The non-Western world, in short, can contribute not only additional subjects to experiment upon – the main focus of the target article’s recommendations – but also additional researchers, with novel perspectives and ideas and who are less affected by WEIRD biases. (Naturally, these researchers will have biases of their own. Our claim is not that there is someone who consistently knows better; it is that diverse groups of investigators can avoid some kinds of error.) Clearly, these researchers will have to be educated, will likely be middle class, and, since science flourishes in politically open societies, they will tend to be concentrated in liberal countries. Nevertheless, additional non-Western researchers, even if they are educated and relatively wealthy, could be a boon to the behavioural sciences.

A direct and powerful way to remedy both sources of bias – too many WEIRD subjects and too few non-WEIRD researchers – is to foster research capacity in the non-Western world. Non-WEIRD researchers tend to study non-WEIRD subjects, so increasing their number will deepen the subject pool and widen the range of inputs to the scientific process at the same time. Building research capacity, however, should not merely involve collaborations led by WEIRD researchers; it should aim to generate studies led and initiated by non-Western researchers. Committed and long-term inter-institutional collaboration between Western and non-Western universities focused on remedying the deficits in the behavioral sciences literature should include internships at Western universities for non-Western researchers, stints at non-Western universities for WEIRD researchers, and extensive student exchange programs (especially for graduate students). Unlike many existing scholarship and exchange programs in the sciences, a key point of the necessary programs should be for the learning to proceed in both directions.

----------------------------------
ResearchBlogging.org Henrich, J., Heine, S., & Norenzayan, A. (2010). The weirdest people in the world? Behavioral and Brain Sciences, 33 (2-3), 61-83 DOI: 10.1017/S0140525X0999152X

Meadon, M., & Spurrett, D. (2010). It's not just the subjects – there are too many WEIRD researchers Behavioral and Brain Sciences, 33 (2-3), 104-105 DOI: 10.1017/S0140525X10000208

Wednesday, April 7, 2010

Guest post: Neuroscience through Optogenetics

A guest post from Hugh Pastoll, my good friend and long-time intellectual sparring partner. Hugh's introduction follows, and then his article.

Michael and I met while studying PPE at the University of Cape Town. Like him, I’ve completely changed direction since then and am now doing a PhD in Computational Neuroscience at the University of Edinburgh.

As part of my postgraduate studies I’ve been fortunate enough to use an exciting new and truly revolutionary technology known as optogenetics. Optogenetics permits fine-grained control of brain activity with light, dramatically increasing the range of interesting experiments we can do. Since it is likely that it will soon become the technology of choice for investigating brain function, Michael has invited me to give a short primer on optogenetics in general and channelrhodopsins in particular.

----------------------------------------------------------------------------

As a computational neuroscientist I am ultimately motivated by understanding how neural activity determines behavior. Frustratingly, for a long time even attempting to answer this sort of question has been pretty much impossible. This has been a major barrier to understanding how brains work… until recently.

To see why we have been stuck, imagine that I want to test the hypothesis that some pattern of neural activity causes a particular behavior. In order to test this hypothesis I’d need to conduct an experiment where I manipulated the animal’s neural activity and observed its behavior (simply noticing that the pattern and behavior both occur when I give the animal a stimulus only establishes correlation, not causality). Now, we’ve been able to control neural activity for quite a while - the sticking point was that we weren’t able to do it with the millisecond fidelity, neuron type specificity and sub-cubic-millimeter spatial precision we need to test most of our important hypotheses.

To illustrate, say I hypothesize that synchronized firing of excitatory neurons in the subthalamic nucleus at 20 Hz is responsible for akinesia (a deficit in movement initiation). Testing this typical hypothesis would require me to synchronize only subthalamic excitatory neurons without changing their overall firing rate or affecting activity in nearby brain areas while the animal is behaving. I can’t think of any way we could have accomplished this with drugs, electrical stimulation or any other standard technique for controlling neural activity.

Thanks to the recent development of optogenetics, though, such control is not only possible, but relatively easy. I can’t really exaggerate how completely cool this is - it is going to allow the field of computational neuroscience to hit its stride and start delivering the kinds of insights we need to understand what’s really going on in the brain.

So how does optogenetics work? To understand this, you need to know how ion channels control action potentials in neurons. Very briefly, ion channels are specialized protein channels that, when open, conduct ions (charged atoms or molecules) across cell membranes. The brief rise in membrane potential during an action potential is due to positive ions rapidly moving from the outside to the inside of a cell. Channelrhodopsins are ion channels that open when you shine blue light on them! This means we can force the membrane potential of a neuron to become more positive and generate an action potential. This is the ‘opto’ part of optogenetics.

Channelrhodopsin-2 (ChR2 - the most useful of the original kind) was first described in a species of green algae called Chlamydomonas reinhardtii in 2002 and later found to work in mammalian neurons. Since then, genetic engineers have found that strategically mutating different amino acids changes the kinetics of the channel (how quickly it opens and closes). So, now there are different versions that allow different types of control. The fastest type (named ChETA) opens in about 2 milliseconds and closes after about 5 ms; fast enough to pulse blue light at 200 Hz and have the neuron fire at virtually every pulse. Another type (ChR2-C128S) usually takes minutes to close but shuts off very quickly if you shine green light on it. This means it can act as a kind of bi-stable on-off neuron switch. With such fine-grained control we can manipulate neuron spiking in pretty much any way we like.
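To get a feel for what those kinetics imply, here is a toy simulation in Python. The simple first-order model and the exact constants are my own simplification for illustration - not a fit to real photocurrents - but they use the rough opening and closing times quoted above:

```python
# Toy two-state (closed/open) channel model driven by a 200 Hz light
# pulse train, using the ~2 ms opening and ~5 ms closing times quoted
# above for a fast channelrhodopsin variant. Illustrative only.

TAU_OPEN, TAU_CLOSE = 2.0, 5.0   # ms, rough time constants
STEPS_PER_MS = 20
DT = 1.0 / STEPS_PER_MS          # 0.05 ms Euler time step

def simulate(light, p0=0.0):
    """Open-probability trace for a 0/1 light waveform."""
    p, trace = p0, []
    for on in light:
        target, tau = (1.0, TAU_OPEN) if on else (0.0, TAU_CLOSE)
        p += (target - p) * DT / tau   # first-order relaxation
        trace.append(p)
    return trace

# 200 Hz pulse train: 1 ms of light, 4 ms dark, repeated for 50 ms.
period = 5 * STEPS_PER_MS
light = ([1] * STEPS_PER_MS + [0] * (period - STEPS_PER_MS)) * 10
trace = simulate(light)

# The channel partially opens on every pulse and mostly recloses between
# pulses, so the response stays pulsatile instead of saturating.
print(max(trace), min(trace[period:]))
```

The point of the sketch is qualitative: because the closing time is comparable to the inter-pulse gap, each light pulse produces a distinct depolarizing kick, which is what lets the neuron follow the pulse train spike for spike.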

Now for the ‘genetic’ part of optogenetics: Different kinds of neurons make different kinds of proteins. Since channelrhodopsin is a protein, we can use the cellular machinery that determines whether a protein is expressed in a neuron to restrict channelrhodopsin expression to a specific type of neuron.
This allows us to make one type of neuron in an area fire, without directly disrupting the normal activity of other types in the same area, giving us the neuron sub-type specificity we need for our experiments.

Furthermore, we can restrict channelrhodopsin expression to a very small area of the brain. Since only cells carrying the channelrhodopsin gene can make the protein, only cells in the area that receives the gene will respond to light. We can accomplish this by infecting a group of neurons with a non-replicating retrovirus that carries the channelrhodopsin gene. This gene will then be integrated into the genome of the infected neurons and expressed, introducing channelrhodopsins with spatial specificity.

However, although this combination of temporal, neuron sub-type and spatial specificity will enable a wide range of experiments, even more is possible. Another class of membrane proteins, known as halorhodopsins, have the opposite effect to channelrhodopsins. Halorhodopsins are not passive channels - they actively pump negative ions into a cell when illuminated with yellow light, making it more negative and stopping it from firing. Additionally, proteins that pump positive hydrogen ions out of cells to make their interior more negative have been described recently. These proteins are more effective than some types of halorhodopsins at preventing neurons from firing, and different types respond to different light colors – allowing researchers to pick colors that don't interfere with other rhodopsins the animal may also be expressing.

With such powerful optogenetic tools at our disposal we can imagine performing complex experiments, orchestrating neural activity with an array of intracranial LEDs of different colors. Although such experiments will be technically challenging, at the moment it feels like we are only limited by our imagination.

Selected references:
Nagel, G. et al. (2002) "Channelrhodopsin-2, a directly light-gated cation-selective membrane channel," PNAS, doi:10.1073/pnas.193619210.
Boyden, E. et al. (2005) "Millisecond-timescale genetically targeted optical control of neural activity," Nature Neuroscience, doi:10.1038/nn1525
Berndt, A. et al. (2008) "Bi-stable neural state switches," Nature Neuroscience, doi:10.1038/nn.2247
Gradinaru, V. et al. (2009) "Optical deconstruction of Parkinsonian neural circuitry," Science, doi:10.1126/science.1167093
Chow, B. et al. (2010) "High-performance genetically targetable optical neural silencing by light-driven proton pumps," Nature, doi:10.1038/nature08652
Gunaydin, L. et al. (2010) "Ultrafast optogenetic control," Nature Neuroscience, doi:10.1038/nn.2495

Thursday, November 12, 2009

Adaptations for the visual assessment of formidability: Part II

In Part I of this series, I summarized the experiments and findings of Aaron Sell and colleagues' paper "Human adaptations for the visual assessment of strength and fighting ability from the body and face". In Part II, I evaluate their claims.

The evidence Sell et al. present seems compelling with regard to proposition (i): adults appear to be able to make remarkably accurate estimates of upper-body strength from even degraded cues such as static images of faces. As I noted in Part I, however, the truth of propositions (ii) (that this ability is an adaptation) and (iii) (that upper-body strength determines formidability) is more doubtful. I will assess the evidence for each of these claims, starting with the latter.

Tuesday, November 10, 2009

Adaptations for the visual assessment of formidability: Part I

In the last couple of years there has been an explosion in research on faces and what can be inferred from them. It turns out, for example, that you can predict electoral outcomes from rapid and unreflective facial judgments, that women can (partially) determine a man's level of interest in infants from his face alone, that the facial expression of fear enhances sensory acquisition, and much, much else. A particularly interesting addition to this literature is Aaron Sell and colleagues' paper, "Human adaptations for the visual assessment of strength and fighting ability from the body and face". Sell et al. hypothesized that human beings possess evolved psychological mechanisms 'designed' to estimate the fighting ability (or physical formidability) of conspecifics - i.e. other Homo sapiens sapiens - from minimal visual information. An ancillary, but important, claim the authors also make is that formidability is largely a function of upper-body strength and thus the latter is a suitable proxy for the former. To summarize for clarity, Sell et al. claim that:
  • (i) people can estimate the formidability of others from visual cues of their bodies and faces, 
  • (ii) this ability is an adaptation, and thus evolved by natural selection, and
  • (iii) upper-body strength is the single most important determining factor of fighting ability. 
The authors’ rationale for the first two hypotheses stems from the observation that in social species such as humans, ‘the magnitude of the costs an individual can inflict on competitors largely determines its negotiating position’ (p. 575). That is, formidability is often an important component of an organism’s ability to compete in zero-sum games (notably, access to limiting resources). Given the dangers of physical confrontation, a rapid visual assessment of the formidability of an opponent could be extremely beneficial because it would allow an individual to weigh up its chances of success, and thus choose to fight only when there is a reasonable prospect of victory. Indeed, Sell et al. note that the widespread phenomenon of so-called ritualized animal contests is best interpreted as joint demonstrations and assessments of formidability, with physical violence usually ensuing only when individuals are closely matched. If the ability to visually estimate a competitor’s formidability was indeed adaptive, and if violence was frequent and recurrent throughout human evolutionary history (as is likely the case), it is not unreasonable to expect natural selection to have forged mechanisms to make such estimates. Sell and his colleagues tested hypothesis (i) empirically in a number of studies and the evidence seems to bear it out overall. While the truth of (ii) is more doubtful, I will argue that, pending further research, it is reasonable to accept it preliminarily for a number of reasons. Finally, I will argue that the lack of empirical evidence in the study for (iii) is problematic but not decisively so: it is clear that there is a correlation between upper-body strength and formidability, but we do not know how strong this correlation is, so it is difficult to judge how good a proxy the one is for the other.


After the jump, I summarize Sell et al.'s primary findings (though I leave out one of their experiments). In Part II - coming later in the week - I evaluate their paper.


Wednesday, September 2, 2009

Silver fox domestication

I recently linked to an extract from Richard Dawkins’ new book in which he mentions a fascinating long-term experiment on silver foxes. The short version: starting in the late 1950s, the Russian geneticist Dmitry Belyaev selectively bred a population of silver foxes for tameness, and, surprisingly, they acquired a dog-like morphology as a by-product (floppy ears, turned-up tails, and so on). In other words, determining which foxes got to breed based solely on how tame and friendly they were produced not only successively tamer foxes, but dog-like physical traits as well. Belyaev believed (and Dawkins concurs) that the reason for this link is pleiotropy, the phenomenon of a single gene having multiple and seemingly unconnected phenotypic effects. As Lyudmila Trut, Belyaev’s successor as head of the Institute of Cytology and Genetics, explains (pdf):
Behavioral responses, [Belyaev] reasoned, are regulated by a fine balance between neurotransmitters and hormones at the level of the whole organism. The genes that control that balance occupy a high level in the hierarchical system of the genome. Even slight alterations in those regulatory genes can give rise to a wide network of changes in the developmental processes they govern. Thus, selecting animals for behavior may lead to other, far-reaching changes in the animals’ development. Because mammals from widely different taxonomic groups share similar regulatory mechanisms for hormones and neurochemistry, it is reasonable to believe that selecting them for similar behavior—tameness—should alter those mechanisms, and the developmental pathways they govern, in similar ways.
Now, this may be entirely correct but I can think of a fairly obvious alternative explanation: subtle biases in the researchers that meant the foxes were not really selected based purely on tameness. (A bit like Clever Hans in reverse). There is an Olympus Mons-sized literature on how human decision-making is influenced, entirely subconsciously, by a dizzying array of crazy things. To take one random example (also previously linked to), holding a heavier clipboard affects judgments of value and importance. Given the ubiquity of such latent biases, are we really to believe that some mutation (unconnected to behavior) that merely made the affected fox look tame – made it look a bit more like a dog, say – didn't influence judgments of tameness? To flesh this thought out a bit more, consider how the foxes were classified. Trut again:
At seven or eight months, when the foxes reach sexual maturity, they are scored for tameness and assigned to one of three classes. The least domesticated foxes, those that flee from experimenters or bite when stroked or handled, are assigned to Class III… Foxes in Class II let themselves be petted and handled but show no emotionally friendly response to experimenters. Foxes in Class I are friendly toward experimenters, wagging their tails and whining. In the sixth generation bred for tameness we had to add an even higher-scoring category. Members of Class IE, the “domesticated elite,” are eager to establish human contact, whimpering to attract attention and sniffing and licking experimenters like dogs. 
Class III seems unambiguously defined and it’s likely pretty straightforward to spot animals that belong to this category. The differences between the other classes, though, are significantly more subjective, and thus liable to all sorts of subtle biases. What, exactly, is an ‘emotionally friendly response’ to an experimenter? What, exactly, is ‘eagerness to establish human contact’? It seems entirely possible – indeed likely – that animals that just looked tamer, had stereotypically domesticated features, were more likely to be assigned to Class I than to Class II. If so, the foxes were not really selectively bred for “tameness and tameness alone”. No matter how scrupulous and honest the experimenters tried to be, I find it very hard to believe that they succeeded, continuously and without fail, to assign animals objectively to categories. Indeed, the researchers working on the foxes (including Trut) outlined a new scoring method in a 2007 paper, in which they admitted that a cross-breeding experiment “clearly demonstrates that the traditional scoring systems established for selection of foxes for behavior has limited resolution for measuring behavior as a continuous variable”. Assuming, as seems likely, that tameness-aggressiveness forms a continuous behavioral axis, we cannot be confident that Belyaev and his colleagues invariably selected for tameness alone. If this is correct, the pleiotropy story is somewhat undermined, though by no means refuted, of course. It seems significant, however, that the alternative explanation is more parsimonious: it need not posit nearly infallible experimenters, nor a priori unlikely pleiotropic linkages.

Of course, I’m no expert on this topic, so maybe I’ve misunderstood the protocols, or perhaps the alternative I sketch has been refuted somewhere in the literature. I would, however, be very interested to find out how the researchers ruled out this alternative hypothesis...

-------------
Trut, L. (1999). Early Canid Domestication: The Farm-Fox Experiment American Scientist, 87 (2) DOI: 10.1511/1999.2.160

Kukekova, A., Trut, L., Chase, K., Shepeleva, D., Vladimirova, A., Kharlamova, A., Oskina, I., Stepika, A., Klebanov, S., Erb, H., & Acland, G. (2007). Measurement of Segregating Behaviors in Experimental Silver Fox Pedigrees Behavior Genetics, 38 (2), 185-194 DOI: 10.1007/s10519-007-9180-1

Tuesday, September 1, 2009

Fun with a local homeopath

Note: Prinsloo has edited his website in light of our criticisms, but the version of his site that I responded to is still available on Google cache. 

A Pretoria-based homeopath, one Dr. JP Prinsloo, has taken on some local skeptics, including Owen and Angela. I'll have more to say about him in the next while, but for the moment I want to do three things: point to Owen's superb (and damn funny) response, address one of Prinsloo's arguments and demonstrate he misinterprets the medical literature on homeopathy.

In a section of his website "Answering the Skeptics", Prinsloo makes the following argument:
Let me begin this page by stating quite emphatically that;

It is against my principles to debate the validity and efficacy of Homeopathy with ignorants (sic).

On this page, reference to the word ignorant (sic) shall mean: Any so-called scientist or "expert" that expresses him/herself on the subject of Homeopathy, it's validity or efficacy, but who -

* Is not a qualified Homeopath;
* Has not studied Homeopathy to the extent that a Homeopath does;
* Has not conducted extensive research on Homeopathy in accordance with the scientific principles of Homeopathy under the supervision of a qualified Homeopath;
* Does not possess sufficient experience in the practical application of Homeopathy in a clinical setting;
* Who is not registered as a Homeopathic Practitioner in South Africa and / or does not meet the requirements for such registration;
* Who is not an expert on applied Homeopathy.(*)

With respect to Homeopathy, that is an ignorant (sic) in my opinion and someone not worthy of my time.

(*) Howard Stephen Berg, The World's Fastest Reader, defines an expert as "someone who has read at least 25 books on a particular subject".
This is a really bad argument. But first, even if we accept these absurd requirements, there is a person who, as a former homeopath, fulfills these criteria and is nevertheless a prominent and respected critic thereof: Edzard Ernst. The key point, though, is that people self-select into homeopathy, so saying only homeopaths are qualified to say anything about it is a transparent attempt to shield it from criticism. Are only astrologers possibly qualified to say anything about astrology? Shall we dismiss all criticism of parapsychology unless it comes from a qualified parapsychologist? Am I an ignorant (sic) for dismissing the flat earth theory despite not having read 25 books about it? Of course not; doing so would unnecessarily cede the field to the woos. Prinsloo simply misunderstands how and when to defer to experts. (A topic I'm currently writing a lengthy post about, by the way). Furthermore, the most relevant question about homeopathy is: does it work? Do large, well-designed, double-blind, placebo-controlled trials demonstrate that it has a statistically and clinically significant effect? That is, when you take care not to fool yourself, does homeopathy work? (Hint: the answer is no). And, as Simon also points out in a comment to Owen's post, the most relevant expertise in answering that question is in research methodology and medical statistics. Is Prinsloo a qualified medical statistician? Has he read 25 books on medical research methods and statistics?

Prinsloo also manifestly misunderstands the medical literature. (Alternatively, he's a liar -- but that would be uncharitable. Keep Hanlon's Razor always in mind). In "Homeopathy in Perspective" (based on a journal article of Prinsloo's, apparently), he states:
A state of the art meta analysis reviewed 186 studies, 89 of which fit pre-defined criteria, showed that patients taking homeopathic medicines were 2.45 times more likely to experience a positive therapeutic effect than placebo.(19)
Sounds promising. So let's follow the reference, shall we? "Are the clinical effects of homoeopathy placebo effects?" (The Lancet, 2005). Prinsloo has a slight problem: this study simply doesn't conclude what he says it does. Here is an excerpt from the Discussion section:
We assumed that the effects observed in placebo-controlled trials of homoeopathy could be explained by a combination of methodological deficiencies and biased reporting. Conversely, we postulated that the same biases could not explain the effects observed in comparable placebo-controlled trials of conventional medicine. Our results confirm these hypotheses: when analyses were restricted to large trials of higher quality there was no convincing evidence that homoeopathy was superior to placebo, whereas for conventional medicine an important effect remained. Our results thus provide support for the hypothesis that the clinical effects of homoeopathy, but not those of conventional medicine, are unspecific placebo or context effects.
Prinsloo continues:
Another meta-analysis reviewed 107 studies of homeopathic medicines, 81 of which (77%) showed positive effect. Of the best 22 studies, 15 showed efficacy. The researchers concluded: "The evidence presented in this review would probably be sufficient for establishing homeopathy as a regular treatment for certain indications." Further, "The amount of positive evidence even among the best studies came as a surprise to us." (20) 
And what is this meta-analysis (sic)? "Clinical Trials of Homeopathy" (BMJ, 1991). Prinsloo does report the details accurately, but then conveniently ignores the authors' conclusion: "At the moment the evidence of clinical trials is positive but not sufficient to draw definitive conclusions because most trials are of low methodological quality and because of the unknown role of publication bias." (It goes on to say that there is a legitimate case for further research). Nor does Prinsloo mention that this study (which was a systematic review, not a meta-analysis) has subsequently been rubbished. The positive result, it seems fair to conclude, was due to inappropriate weightings of trial quality (the exclusion of peer-review status, for one) and biased selection. Moreover, subsequent better-designed systematic reviews (like this one by Ernst) have concluded that "there was no homeopathic remedy that was demonstrated to yield clinical effects that are convincingly different from placebo".

More on Prinsloo later...

-----------
Shang, A., Huwiler-Müntener, K., Nartey, L., Jüni, P., Dörig, S., Sterne, J., Pewsner, D., & Egger, M. (2005). "Are the clinical effects of homoeopathy placebo effects? Comparative study of placebo-controlled trials of homoeopathy and allopathy," The Lancet, 366(9487): 726-732. DOI: 10.1016/S0140-6736(05)67177-2

Thursday, January 24, 2008

Research blogging

I have been participating in the Bloggers for Peer-Reviewed Research Reporting (BPR3) initiative, which attempts to highlight serious, thoughtful blog entries on peer-reviewed research (using icons). The aggregation system I mentioned before has now gone live on the great site Researchblogging.org. Basically, whenever a blog author wants to make use of the new system, they go to the Researchblogging site, enter citation meta-data into a form which then spits out code that gets included in the relevant blog entry. Then Researchblogging.org aggregates all the blog entries that contain the code, lists them on the website and categorizes them into subject-areas.

Have a look at the site; it's a fantastic way to discover more academic blogs and entries on serious research.

Wednesday, November 28, 2007

Your brain on politics: the bad and the better

The bad
A disturbingly bad article, entitled "This is Your Brain on Politics", appeared recently in the New York Times. It presented purported "research" about the brains of swing voters in the 2008 US Presidential Elections but, unfortunately, the article does little but illustrate the dangers of circumventing the peer-review process and the shocking state of science journalism in the mainstream media. Luckily, the NYT published an angry letter by a group of cognitive neuroscientists condemning the article, and the blogosphere responded forcefully; among the blogs that attacked the piece were Bad Science, Neurocritic, Mindhacks, Brainethics and Natural Rationality. Subsequently, Nature published an editorial also condemning the article, and even Slate joined in.

The better

Thankfully there has also been some better recent research concerning 'the brain on politics', and good media coverage thereof to boot. The subject of last week's edition of ABC Radio National's fantastic radio show/podcast, All in the Mind, was "The Political Brain", and the show discussed, among other things, an interesting study in Nature Neuroscience entitled "Neurocognitive correlates of liberalism and conservatism" (see also the supplementary materials). The study, led by NYU assistant professor of psychology David Amodio, evoked considerable interest and was widely discussed by the science blogging community. (See links below). I suspect the study has been somewhat misunderstood, so, despite it being stale by web standards, I'll look at it in some detail.

The hypothesis the authors defend is that political orientation (conservative vs. liberal) is "associated with individual differences in a basic neurocognitive mechanism involved broadly in self-regulation" (Amodio et al., 2007: 1246). They go about testing this proposition in a somewhat tortuous way: previous research had shown that conservatives are "more structured and persistent in their judgments and approaches to decision-making" whereas liberals "report higher tolerance of ambiguity and complexity, and greater openness to new experiences". Other research showed that psychological differences between liberals and conservatives "map onto the... self-regulatory process of conflict monitoring" (the system that detects a mismatch between habitual responses and the response required in the current situation), which in turn has been "associated with neurocognitive activity in the anterior cingulate cortex" (ACC). So, to test whether liberals and conservatives differ in their patterns of self-regulation, the authors measured the activity of the ACC in a situation requiring conflict monitoring.

Amodio et al. conducted this test by using an electroencephalogram to record the ACC activity of 43 subjects who were asked to complete a go/no-go association task (Nosek & Banaji, 2001). For the task, participants were placed in a sound-proof room in front of a computer screen, in the center of which either an "M" or a "W" appeared. Half the subjects were instructed to "go" (i.e. hit a key) when they saw an "M" and do nothing ("no-go") when they saw a "W", while the other half were asked to do the opposite. The task consisted of 500 trials, 80% of which presented the "go" stimulus and 20% the "no-go" stimulus. This meant that for half the subjects responding to "M" became habitual (and had to be inhibited when they saw a "W"), and for the other half responding to "W" became habitual (and had to be inhibited when they saw an "M"). Additionally, before the task was administered, subjects confidentially reported their political attitudes on a scale ranging from -5 (very liberal) to +5 (very conservative).
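To make the design concrete, the trial structure can be sketched in a few lines of Python (my own illustration; the function name and parameters are not from the paper):

```python
import random

def make_trials(go="M", nogo="W", n_trials=500, go_fraction=0.8, seed=1):
    """Build a randomized go/no-go trial sequence: 80% 'go' stimuli
    (respond with a keypress) and 20% 'no-go' stimuli (withhold the
    response). The skewed ratio is what makes responding habitual,
    so that no-go trials require active inhibition."""
    rng = random.Random(seed)
    n_go = int(n_trials * go_fraction)
    trials = [go] * n_go + [nogo] * (n_trials - n_go)
    rng.shuffle(trials)
    return trials

# One subject's sequence: 400 habitual "go" trials, 100 "no-go" trials
trials = make_trials()
print(trials.count("M"), trials.count("W"))  # 400 100
```

For the other half of the subjects the assignment is simply reversed, i.e. `make_trials(go="W", nogo="M")`.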

The results were very suggestive. First, however, it is important to note that there are in fact two types of finding in this study: the behavioral findings (which the authors do not focus on) and the cognitive neuroscience findings (which the authors emphasized and around which most of the subsequent discussion revolved). The behavioral finding - which is interesting all by itself - is that liberals were more accurate than conservatives on the no-go trials (r(41) = 0.30, P < 0.05), which "suggests that a more conservative orientation is related to greater persistence in a habitual response pattern, despite signals that this response pattern should change".

The neurocognitive findings were (among other things) that the response-locked error-related negativity (ERN) - a measure of conflict between a habitual tendency and an alternative response - was strongly correlated (r(41) = 0.59, P < 0.001) with political attitudes:

Additionally, liberalism was strongly associated with greater conflict-related neural activity when a habitual response had to be inhibited:


Subsequently, localization analysis was performed, which confirmed that the above-mentioned ERN activity originated in the ACC. Amodio et al. conclude that, "taken together, our results are consistent with the view that political orientation, in part, reflects individual differences in the functioning of a general mechanism related to cognitive control and self-regulation".
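A note on the statistics: r(41) denotes a Pearson correlation with 41 degrees of freedom (43 subjects minus 2), and its significance is assessed by converting r into a t statistic. A quick sketch of that conversion (my own illustration, not the authors' code):

```python
import math

def r_to_t(r, df):
    """t statistic for testing a Pearson correlation against zero:
    t = r * sqrt(df / (1 - r^2))."""
    return r * math.sqrt(df / (1.0 - r * r))

# The ERN finding, r(41) = 0.59, gives t of roughly 4.7, comfortably
# past the two-tailed critical value for P < 0.001 at 41 df (about 3.5)
t_ern = r_to_t(0.59, 41)
print(round(t_ern, 2))  # 4.68
```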

A couple of observations. The study is clearly preliminary, and a good deal of the reporting on it in the lay press went far beyond the evidence. The authors, however, obviously cannot be blamed for this - they were careful not to stray from the evidence in their paper. Furthermore, only 43 subjects took part in the study and, worse, only 7 of those self-reported as conservative. The findings would have to be replicated by a different team in a different part of the US with a larger number of participants before too much stock can be placed in them. For now this can be filed under "interesting and suggestive but preliminary". We'll have to wait and see how the literature develops.


Links

Bibliography

Amodio, D.M., Jost, J.T., Master, S.L., & Yee, C.M. (2007). "Neurocognitive correlates of liberalism and conservatism," Nature Neuroscience, 10: 1246-1247. DOI: 10.1038/nn1979

Nosek, B. A., & Banaji, M. R. (2001) "The go/no-go association task," Social Cognition, 19(6): 161-176.

Tuesday, November 27, 2007

Blogging about peer-reviewed research

Some of you might have noticed that in my post about the recent BMJ article about active parents raising active children I started using BPR3's icons to indicate which of my posts are about peer-reviewed research. The icons (which can be seen here) are designed to draw attention to thoughtful, serious blog posts on papers that have been published in journals reviewed by relevant experts. (See BPR3's guidelines for more information). An upcoming feature - which I hope to participate in soon - will aggregate all blog posts using the icons at a central location on the BPR3 website, which in turn will draw more attention to quality science blogs. (Something I certainly hope this blog counts as an example of...).

Monday, November 26, 2007

Peer-reviewed nonsense: Active Parents Raise Active Children

The British Medical Journal - which is highly respected and has the 6th highest impact factor of all general medical journals - has just published an almost entirely worthless study on the effect of parental physical activity on the physical activity of their 11-12 year old children (Mattocks et al., 2007). The study is worthless, in short, because it proceeds as if the entire field of behavioral genetics does not exist; the authors simply assume their conclusions are not confounded by genetic factors. It astonishes me that such a fatally flawed article can get past peer review in such a prestigious journal. That so obvious a confound as genetics can be overlooked is a testament to the continuing detrimental effect of the blank slate on modern science (Pinker, 2002).

First a bit more about the study itself. The authors used data from the Avon longitudinal study of parents and children, which collected (and is continuing to collect) a wealth of data from 14,061 families. The specific question addressed was which factors in the child's early life (defined as before age 5) influenced the objectively measured physical activity of the same children at ages 11-12. The authors collected the physical activity data with uniaxial actigraph accelerometers from 5,451 11-12 year old children in the Avon cohort and then looked at data collected when the children were aged 5 or younger for causal variables. In other words, the researchers wanted to know which early life variables predicted physical activity at age 11-12. The conclusion of the research was:
We have shown that children are slightly more active if their parents are active early in the child's life. This suggests that encouraging physical activity in parents may also influence their children to become more active, with the added advantage that physically active parents are healthier (Mattocks et al., 2007: 7).
So, in other words, active parents socialize their children to be active themselves. (It's clear the authors are thinking in terms of socialization, something the following quotation perhaps illustrates a bit better: "in our study, maternal activity during pregnancy... was positively associated with physical activity in the children. It is unlikely that this is due to biological factors in utero but is more likely that physical activity during pregnancy is a marker for later maternal physical activity and that this in turn influences children's physical activity" [Mattocks et al., 2007: 6].)

A slight problem...

Children share 50% of their genes with each parent, and since all human behavioral traits are heritable (the so-called First Law of Behavioral Genetics, Turkheimer, 2000), genetic factors are always possible confounds when relating parenting style (or other parental behavior) to outcomes in the children. As Turkheimer explains:

It is no longer possible to interpret correlations among biologically related family members as prima facie evidence of sociocultural causal mechanisms. If the children of depressed mothers grow up to be depressed themselves, it does not necessarily demonstrate that being raised by a depressed mother is itself depressing. The children might have grown up equally depressed if they had been adopted and raised by different mothers, under the influence of their biological mother's genes (2000: 162).
The exact same problem holds for the Mattocks study: one can't simply assume parental physical activity (or lack thereof) influences children to be active (or inactive) because it's possible that sedentary children inherit sedentary genes from their sedentary parents and active children inherit active genes from their active parents. Or, to put it differently, the fact that the physical activity of parents when the children were young is correlated with the children's degree of activeness later on simply does not constitute evidence of a socialization effect.
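The genetic confound is easy to demonstrate with a toy simulation (entirely my own illustration, with made-up effect sizes): let each child inherit half of a parental "activity gene" value, let both generations' behavior depend only on their own genes plus noise, and a parent-child activity correlation appears despite zero socialization.

```python
import math
import random

def pearson(xs, ys):
    """Plain Pearson correlation coefficient."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = math.sqrt(sum((x - mx) ** 2 for x in xs))
    sy = math.sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (sx * sy)

rng = random.Random(42)
n = 5000

# Each parent has a latent genetic disposition toward physical activity
parent_gene = [rng.gauss(0, 1) for _ in range(n)]
# The child inherits half the parental value plus its own variation;
# parental *behavior* never enters the child's equation
child_gene = [0.5 * g + rng.gauss(0, math.sqrt(0.75)) for g in parent_gene]

# Observed activity = own genes + environmental noise, in both generations
parent_activity = [g + rng.gauss(0, 1) for g in parent_gene]
child_activity = [g + rng.gauss(0, 1) for g in child_gene]

r = pearson(parent_activity, child_activity)
print(round(r, 2))  # clearly positive (about 0.25 in expectation)
```

A purely correlational design like the Mattocks study cannot distinguish this scenario from genuine socialization; that is the whole point.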

To be clear, I'm not claiming children are not socialized in this way; my point is that we cannot tell one way or the other from the data presented, because it fails to distinguish between the relevant causal hypotheses. I really hope I've somehow been daft and missed how the authors controlled for genetic factors. The alternative is that a leading medical journal has published an article that is scientifically illiterate, that overlooks obvious possible confounds, and that is thus worthless for deciding what causes 11-12 year old children's degree of physical activity. Frankly, that I've made a mistake is far more palatable to me.

(See also: ScienceDaily's report on this research).

----------------
Mattocks, C., Ness, A., Deere, K., Tilling, K., Leary, S., Blair, S., & Riddoch, C. (2008). "Early life determinants of physical activity in 11 to 12 year olds: cohort study," BMJ, 336(7634): 26-29. DOI: 10.1136/bmj.39385.443565.BE

Turkheimer, E. (2000) "Three Laws of Behavior Genetics and What They Mean," Current Directions in Psychological Science, 9(5): 160-164.

Pinker, S. (2002) The Blank Slate: The Modern Denial of Human Nature (London: Penguin).