When I was
starting out on a doctorate, I’d look at the senior people in my field and
wonder if I’d ever be like them. It must be great, I thought, to reach the
advanced age of 40. By then you’d have learned everything you needed to know to
do great science, and you could just focus on doing it. I suspect today’s crop
of grad students are a bit more savvy than I was, but all the same, I wonder if
they realise just how wrong that picture is – for two reasons.
First, you never stop learning. The field moves on. Instead of getting easier, it gets harder. I
remember when techniques such as functional brain imaging first came along. The
most competent people in that area were either those who had developed the
methods, or young people who learned them as grad students. If you were of the
generation above, you had three choices: ignore the methods, spend time
learning them, or hire junior people who knew what they were doing. As the
methods evolve, they get ever more complex, and meanwhile, your own brain
starts to shrink. So if you are anticipating making it to a tenured post and
then settling down in your armchair, think again.
Second, the more
senior you get, the more of your time is spent, not on doing your own research,
but on evaluation. You learn that an email entitled ‘invitation’ should not
make your spirits rise: it’s just a desperate attempt to put a positive spin on
a request for you to do more work for no reward. You get regular ‘invitations’ to review
papers and grants, write job references, appraise promotion bids, sit on
interview panels and examine theses. If you are involved in teaching, you’ll
also be engaged in numerous other forms of appraisal.
I was prompted to
think about this when someone asked on an electronic forum what was a
reasonable number of doctoral theses to examine each year. The general consensus was two, though it will
obviously depend on what other commitments someone has. It also varies from
country to country. There are some jolly
places in Europe where a PhD viva is just an excuse for a boozy party with a
lot of dressing up in funny gowns and hats. In UK psychology, the whole thing
is no fun at all: you have to read a document of 50,000-70,000 words reporting
a body of work based on a series of experimental studies. You then write a
report on it and see the candidate for a face-to-face viva, which typically
lasts two to three hours. Although failure is uncommon, it is not assumed that the
candidate will pass (unlike in the viva-as-party countries), and weeping or
catatonic candidates are not unheard of. Taking into account travel, etc., if
you are going to do a proper job, you are probably talking about three days’
work. For this you get paid around the minimum wage: the fee for examining is
typically somewhere between £120 and £200, which over three eight-hour days works out at roughly £5 to £8 an hour.
So why do we do
it? The main reason is that the entire academic enterprise depends on
reciprocity: we want people to examine our students and review our papers and
grants. In addition, it’s important to maintain standards, and to ensure that
degrees, promotions, publications and grants go to those who merit them. But the demands keep growing. In the first 37 weeks of this year I’ve been asked
to review 76 papers and six grants. I agreed to review 16 papers and three of
the grants. This, of course, is nothing compared with being a journal editor or
serving on a grants board, something that most of us will do at some point.
Clearly, if I
agreed to do everything I was asked, I’d have no time for anything else. Of course, one learns to say no. But
awareness of these pressures has made me look with rather a critical eye at how
we use evaluation. There is, for instance, research suggesting that job interviews aren’t very effective at identifying good candidates: we tend to be seduced by immediate
impressions, which may not be a good indicator of a person’s suitability. Like
most people, I’d be reluctant to take on an employee I hadn’t interviewed, but
if Daniel Kahneman is to be believed, this is just because I am a victim of the
Illusion of Validity.
I’m a supporter of the peer review system used by
journals, and here I feel I’m on more
solid ground, because I can point to instances where my papers have been
improved by input from reviewers. Nevertheless, where reviewing is used simply
to accept or reject papers or grant proposals,
and where fine-grained decisions have to be made between many
high-quality submissions, agreement between experts may be little better than chance
(e.g. Fogelholm et al., 2012). Even so, we stick with it, because it’s hard
to know what to put in its place.
I’ve written a fair bit about that expensive and time-consuming evaluation process that UK academics
engage in, the Research Excellence Framework (REF). It requires experts to
judge whether, for instance, papers are of 3* or 4* quality, a
distinction based on whether the research is "world leading" or "internationally
excellent… but falls short of the highest standards of excellence." The reliability of such judgements has not, to my knowledge, been evaluated, yet large amounts of funding depend on them. Those on REF committees are in the same situation as Pavlov’s poor dogs, having
to make distinctions that are on the one hand impossible (discriminating
between circles and ellipses that become increasingly similar) and on the other hand
very important (get it wrong and you get a shock).
There is one good
thing about doing so much evaluation. You have the opportunity to see what
others are doing – you may be the first person to read an important new paper,
or examine a ground-breaking thesis. You may be forced to engage with different
ways of thinking, and confronted with new topics and ideas. You may be able to provide useful input to authors. And since you
yourself will be evaluated, it can be useful to see life from the other side of
the table, as the person doing the evaluating. But all too often, even these
advantages fail to compensate for the fact that as a senior academic you will
spend more and more time evaluating others and less and less doing your own
own research.
Reference
Fogelholm, M., Leppinen, S., Auvinen, A., Raitanen, J., Nuutinen, A., & Väänänen, K. (2012). Panel discussion does not improve reliability of peer review for medical research grant proposals. Journal of Clinical Epidemiology, 65(1), 47-52. DOI: 10.1016/j.jclinepi.2011.05.001