Sunday, June 24, 2012

Causal models of developmental disorders: the perils of correlational data








Experimental psychology depends heavily on statistics, but
psychologists don’t always agree about the best ways of analyzing data. Take
the following problem:


I have two groups each of 30 children, dyslexics and
controls. I give them a test of auditory discrimination and find a significant
difference between the groups, with the dyslexic mean being lower. I want to
see whether reading ability is related to the auditory task. I compute the
correlation between the auditory measure and reading, and find it is .42, which
in a sample of 60 cases is significant at the .001 level.
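
For concreteness, the quoted p-value follows from the standard t-test for a correlation coefficient; here is a quick check in R, using the fictitious numbers above:

# Significance of the pooled correlation in the fictitious example: r = .42, n = 60
r <- 0.42
n <- 60
t <- r * sqrt(n - 2) / sqrt(1 - r^2)   # t statistic with n - 2 degrees of freedom
2 * pt(-abs(t), df = n - 2)            # two-tailed p, around .0008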


I write up the results, concluding that poor auditory skill
is a risk factor for poor reading. But reviewers are critical.
So what’s wrong with this?


I’ll deal quickly with two obvious points. First, there is
the well-worn phrase that correlation does not equal causation. The correlation
could reflect a causal link from auditory deficit to poor reading,
but we also need to consider other causal routes, as I'll illustrate further
below. This is an issue of interpretation rather than data analysis.


A second point concerns the need to look at the data rather
than just computing the correlation statistic. Correlations can be sensitive to
distributional properties of the data and can be heavily influenced by
outliers. There are statistical ways of checking for such effects, but a good
first step is just plotting a scatterplot to see whether the data look orderly.
A tip for students: if your supervisor
asks to see your project data, don’t just turn up with numerical output from
the analysis: be ready to show some plots.
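
For example, a quick plot in base R might look like this (the data below are made up purely so that the snippet runs):

# Always look at the data before trusting a correlation coefficient
set.seed(3)
group    <- rep(c("dyslexic", "control"), each = 30)
auditory <- rnorm(60, mean = ifelse(group == "dyslexic", -1, 0))
reading  <- 0.5 * auditory + rnorm(60)
plot(auditory, reading, col = ifelse(group == "dyslexic", "red", "blue"),
     pch = 19, xlab = "Auditory discrimination", ylab = "Reading")
legend("topleft", legend = c("dyslexic", "control"),
       col = c("red", "blue"), pch = 19)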




Figure 1: Fictitious data showing spurious correlation between height and reading ability


A less familiar point concerns the pooling of data across
the dyslexic and control groups. Some people have strong views about this, yet,
as far as I’m aware, it hasn’t been discussed much in the context of
developmental disorders. I therefore felt it would be good to give it an airing
on my blog and see what others think.


Let’s start with a fictitious example that illustrates the
dangers of pooling data from two groups. Figure 1 is a scatterplot showing the
correlation between height and reading ability in groups of 6-year-olds and
10-year-olds. If I pool across groups, I’m likely to see a strong correlation
between height and reading ability, whereas within any one age group the
correlation is negligible. This is a clear case of spurious correlation, as
illustrated in Figure 2. Here the case against pooling is unambiguous, and it's
clear that if you look at the correlation within either age band, there is no
relationship between reading ability and height.
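
Here is a minimal R sketch of this scenario; the numbers are invented, but the structure matches Figure 2, with age driving both height and reading:

# Pooling two age groups creates a correlation that is absent within each group
set.seed(1)
age     <- rep(c(6, 10), each = 50)             # two age bands, 50 children each
height  <- 100 + 6 * age + rnorm(100, sd = 4)   # height (cm) driven mainly by age
reading <- 10 * age + rnorm(100, sd = 8)        # reading score driven mainly by age

cor(height, reading)                            # pooled: large, but spurious
by(data.frame(height, reading), age,
   function(d) cor(d$height, d$reading))        # within each age band: near zero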




Figure 2: Model showing how a spurious correlation between height and reading arises because both are affected by age





Examples such as this have led some people to argue that you
shouldn’t pool data in studies such as the dyslexic vs. control example. Or, to
be more precise, the recommendation is usually that you should check the
correlations within each group, and
avoid pooling if they don’t look consistent with the pooled correlation. I’ve
always been a bit uneasy about this logic and have been giving some thought as
to why.


First, there is the simple issue of power. If you halve your
sample size, then you increase the standard error of estimate for a correlation
coefficient, making it more likely that it will be nonsignificant. Figure 3
shows the 95% confidence intervals around a correlation of .5 depending on
sample size, and you can readily see that the intervals are wider for small
samples than for large ones. There's a nice website by Stan Brown that gives
the relevant formulae in Excel.
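
If you would rather compute such intervals in R, here is a small sketch using the standard Fisher z transformation (the sample sizes below are just illustrative):

# 95% confidence interval for a correlation, via the Fisher z transformation
r_ci <- function(r, n, conf = 0.95) {
  z    <- atanh(r)                    # Fisher z transform of r
  se   <- 1 / sqrt(n - 3)             # standard error of z
  crit <- qnorm(1 - (1 - conf) / 2)   # 1.96 for a 95% interval
  tanh(c(lower = z - crit * se, upper = z + crit * se))  # back to the r scale
}

# Interval around r = .5 for a range of sample sizes
for (n in c(15, 30, 60, 120, 240)) {
  ci <- r_ci(0.5, n)
  cat(sprintf("n = %3d: 95%% CI = %.2f to %.2f\n", n, ci["lower"], ci["upper"]))
}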




Figure 3: 95% confidence interval around estimated correlation of .5, with different sample sizes





A less obvious point is that the data in Figure 1 look
analogous to the dyslexic vs. control example, but there is an important
difference. We know where we are with age: it is unambiguous to define and measure.
But dyslexia is more tricky. Suppose we substitute dyslexia for age, and
auditory processing for height, in the model of spurious correlation in Figure
2. We have a problem: there is no independent diagnostic test for dyslexia. It
is actually defined in terms of one of our correlated variables, reading
ability. Thus, the criterion used to allocate children to groups is not
independent of the measures that are entered into the correlation. This creates
distortions in within-group correlations, as follows.


If we define our groups in terms of their scores on one
variable, we effectively restrict the range of values obtained by each group,
and this lowers the correlation.  Furthermore, the restriction will be less for
the controls than for the dyslexic group - who are typically selected as
scoring below a low cutoff, such as one SD below the mean. Figure 4 shows simulated
data for two groups selected from a population where the true correlation
between variables A and B is .5. Thirty individuals (dyslexics) are selected as
scoring more than 1 SD below average on variable A, and another 30 (controls)
are selected as scoring above this level. 




Figure 4: Correlations obtained in samples of dyslexics (red) and controls (blue) for 20 runs of the simulation, with N = 30 per group.


The Figure shows correlations from twenty
runs of this simulation. For both groups, the average correlation is less than
the true value of .5, because of the restricted range of scores on variable A.
However, because the range is more restricted for the dyslexic group, their
average correlation is lower than that of the controls. A correlation of .42 corresponds to the .05 significance level for a sample of
this size, and we can see that the controls are more likely to exceed this
value than the dyslexic group. All these results are just artefacts of the way
in which the groups were selected: both groups come from the same population
where r = .5.
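
Here is a minimal R sketch of that selection procedure, assuming the population is bivariate normal (the true correlation of .5, the cutoff, and the group sizes are as described above):

# Both groups are drawn from ONE population in which A and B correlate at .5;
# selecting on A is enough to lower, and unbalance, the within-group correlations
set.seed(1)
library(MASS)                                   # for mvrnorm

Sigma <- matrix(c(1, .5, .5, 1), nrow = 2)      # true correlation between A and B

results <- replicate(20, {                      # 20 runs, as in Figure 4
  pop <- mvrnorm(10000, mu = c(0, 0), Sigma = Sigma)
  A <- pop[, 1]; B <- pop[, 2]
  dys  <- sample(which(A < -1), 30)             # "dyslexics": > 1 SD below mean on A
  ctrl <- sample(which(A >= -1), 30)            # "controls": everyone above the cutoff
  c(dyslexic = cor(A[dys], B[dys]),
    control  = cor(A[ctrl], B[ctrl]))
})

rowMeans(results)   # both means fall below .5, and the dyslexic mean is lower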


What can we conclude from all this? Well, the bottom line is
that finding non-significant within-group
correlations does not necessarily invalidate a causal model. The
simulation shows that we may find that within-group correlations look quite
different in dyslexic and control groups, even if they come from a common
distribution.


So where does this leave us?! It would seem that in general,
within-group data are unlikely to help us distinguish between causal and
non-causal models: they may be compatible with both. So how should we proceed?


There’s no simple solution, but here are some suggestions:


1. If considering correlational data, always report the 95%
confidence interval. Usually people (including me!) just report the correlation coefficient,
degrees of freedom and p-value. It's so uncommon to add confidence intervals
that I suspect most psychologists don't know how to compute them. Do not assume
that, because one correlation is significant and another is not, the two are
meaningfully different. This website can be used to test the significance of
the difference between two correlations. I would, however, advise against
interpreting such a comparison
if your data are affected by the kinds of restriction of range discussed above.
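
If you do want to make such a comparison, the usual method for two independent correlations is based on Fisher's z; a minimal R version, with purely illustrative numbers:

# Test for a difference between two independent correlations (Fisher z method)
cor_diff_test <- function(r1, n1, r2, n2) {
  z1 <- atanh(r1)
  z2 <- atanh(r2)
  se <- sqrt(1 / (n1 - 3) + 1 / (n2 - 3))
  z  <- (z1 - z2) / se
  c(z = z, p = 2 * pnorm(-abs(z)))              # two-tailed p-value
}

# e.g. r = .42 in one group of 30, r = .15 in another group of 30
cor_diff_test(r1 = .42, n1 = 30, r2 = .15, n2 = 30)

With these values one correlation would be significant at the .05 level and the other not, yet the difference between them is nowhere near significant - which is exactly the point.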


2. Study the relationship between key variables in a large unselected
sample covering a wide range of scores. This is a more tractable solution, but
is seldom done. Typically, people recruit an equivalent number of cases and
controls, with a sample size that is inadequate for getting a precise estimate
of a correlation in either group. If your underlying model predicts a
relationship between, say, auditory processing and phonological awareness, then
a sample of 200 cases will give a fairly precise estimate of its strength. With
this approach, one can also check whether the relationship is actually linear.


3. More generally, it’s important to be explicit about what
models you are testing. For instance, I’ve identified four underlying models of
the relationship between auditory deficit and language impairment, as shown in Figure
5. In general, correlational data on these two skills won’t distinguish between
these models, but specifying the alternatives may help you think of other data
that could be informative. 




Figure 5: Models of causal relationships underlying observed correlation between auditory deficit and language impairment


For instance:


  • We
    found that, when studying heritable conditions, it is useful to include
    data on parents or siblings. Models differ in predictions about how
    measures of genetic risk - for instance, family history, or presence of
    specific genetic variants - relate to A (auditory deficit) and B (language impairment) in the child. This approach is
    illustrated in this paper. Interestingly, we found that the causal model that is often implicitly assumed, which we termed the Endophenotype model, did not fit the data, but nor did the spurious correlation model, which corresponds here to the Pleiotropy model.

  • There
    may be other groups that can be informative: for instance, if you think
    auditory deficits are key in causing language problems, it may be worth
    including children with hearing loss in a study - see this paper for an example of this approach using converging evidence.

  • Longitudinal
    data can help distinguish whether A causes B or B causes A.

  • Training
    studies are particularly powerful, in allowing one to manipulate A and see
    if it changes B.



So what’s the bottom line? In general, correlational data
from small samples of clinical and control groups are inadequate for testing
causal models. They can produce Type I errors, where pooling data creates a
spurious association between variables, but also Type II errors, where a
genuine association is discounted because it isn’t evident within subject
groups. For the field to move forward, we need to go beyond correlational data.



P.S. 9th July 2012

I've written a little tutorial on simulating data using R to illustrate some of these points. No prior knowledge of R is required. See: http://tinyurl.com/d2868cg



Bishop, D. V. M., Hardiman, M. J., & Barry, J. G. (2012). Auditory deficit as a consequence rather than endophenotype of specific language impairment: electrophysiological evidence. PLoS ONE, 7(5). PMID: 22662112



If you liked this post, you may also be interested in my other posts on statistical topics:

Getting genetic effect sizes in perspective
The joys of inventing data
A short nerdy post about the use of percentiles
The difference between p < .05 and a screening test
