Measuring the impact of research, then. Seems a noble enterprise, at first glance. Nobody likes the idea of taxpayer-funded navel-gazing, so we obviously need to show that what we’re doing is useful, or informative, or at least interesting to a lot of people who aren’t us.
The REF won’t help to do any of that, of course. On current showing, it will probably just expend a titanic level of administrative energy pretending to turn subjective judgments into numbers. The numbers will not particularly represent anything, but will at least be reassuringly numerical. They will therefore be accepted as a substitute for actual insight — or, indeed, a means of defeating it.
On the general mess, I’ve got nothing to say that hasn’t been said better by Stefan Collini, James Ladyman, Ross McKibbin, Iain Pears and others. (One of the problems of voicing concerns as an academic is that the profession necessarily encourages keeping your damned mouth shut if you’ve got nothing original to say. Opponents can thus represent as an Awkward-Squad minority those who are in fact merely the most articulate exponents of a strong consensus. For the record, I don’t believe I know anyone in the academic humanities who seriously doubts that the “impact” principle is wildly incoherent and inherently corrosive. But you try writing to HEFCE or the Times Higher saying “I agree with Ladyman, and I’ve got more sensible hair than he has”, and see where that gets you.)
One point which I don’t think has been covered elsewhere, however, is this: The “research impact” agenda is pre-programmed to miss most of the useful work which humanities academics do for public audiences.
If I’m right, this is a serious problem. Check the marvel that is Annex J of the REF pilot report – which is the closest thing we’ve had so far to a concrete indication of how on earth this business is supposed to work – and you’ll find a heavy focus on two factors. Firstly, trade books (inevitably, as one of the few enterprises where the humanities generate anything you can turn into folding cash money); secondly, public engagement as traditionally defined.
Now then. By the standards of my (mainly research-oriented) group at Manchester, I do quite a lot of work for public audiences, directly and as an advisor. I think it’s an essential part of the job, and I rarely turn it down. Here are a few edited highlights of recent activities.
- October 2009. Local tour guide asks me to fact-check a Darwin-themed walk. This principally entails finding evidence to nail a few misconceptions on our old friend the Science-Religion Conflict. These, note, are questions anyone who teaches introductory hist of sci should be able to cover with minimal prep, but they are nowhere near my research area.
Query that pops up during this process: is it true that Darwin’s proposed knighthood was kyboshed by Church opposition? Thereby hangs a surprisingly complicated tale which I wasn’t able to unravel at the time, so I sought expert advice from a Serious Darwin Scholar. You may recall that it was impossible to get a minute alone with a Serious Darwin Scholar for love or money in 2009, such were the pressures of anniversary-themed lecturing, interviews and book-signing. In the end I cobbled together what I guessed was a reasonable historicist account and emailed it off with an “Is this right?” to the SDS I know best. A few days later the message came back with the electronic equivalent of a scribbled “Yup!”, which I duly forwarded.
- May 2010. Contact at the British Council asks if I can advise on a Czech radio series about Manchester’s history and culture. I suggest various people who work in this area, one of whom (Terry Wyke) they end up using. However, they still want someone for a broad overview of science and technology. So I meet the producer and talk through some of the standard areas. Two topics make it to broadcast: early computers (which I know mainly via Campbell-Kelly and Lavington), and John Dalton’s atomic theory (for which, though I’ve skimmed the Greenaway, Cardwell and Thackray volumes at some point, I’m leaning heavily on the excellent synopsis in Bill Brock’s Fontana survey).
- July 2010. BBC researcher contacts me about a Horizon on “the concept of one degree of temperature.” I suggest Hasok Chang: they’ve already got him. I also stress the brewery angle, suggesting my own eighteenth-century Boerhaavians and Otto Sibum’s work on James Joule, the latter of which ends up in the running order. They’re looking for someone to interview on camera: I doubt there’s much chance of their getting Otto, but tell them to try him first. It ends up being me. I accordingly swot up from the Cardwell biog before talking through it in detail with the producer. The finished product runs through Otto’s insight briefly in the narration, and has me in vision giving some very general background.
Now, what do we notice? Correct! It’s not my research. It’s not even my institution’s research, in most cases; and it may be ten, twenty or thirty years old (though it is, on every single occasion, new and interesting to the people I’m delivering it to). Any “impact statement” I’m obliged to write about my research is going to miss most of the public engagement (PE) element of the general argument for keeping me on the payroll.
Isn’t this merely an indication that I should switch my research attention to fields that resonate better? No: that’s a recipe for fossilising the field. The world is fascinated by Darwin, yet it’s also glutted with Darwin scholarship (and believe me, some of the Serious Darwin Scholars find this more frustrating than anyone). The ideal researcher knows how to take those pre-existing interests, and use them to lead audiences on into areas they didn’t know they’d be interested in. We are very much in the hands of the mediators, here: usually, we don’t get to do this. Sometimes we do.
So why can’t we all just agree to focus on promoting our own particular research? Because the researchers responsible for a lot of important work tend to be busy, or thousands of miles away, or at least moderately dead. (The other scenario that often crops up is the one where a decent overview of the field must acknowledge the work of seven different authors who each revile each of the others with a homicidal passion. In this case, it’s often best to seek an integrated view from someone positively too junior to be on any of their radars.)
But do we need to take up an active scholarly researcher’s time on describing other people’s research? Yes! Otherwise the TV producers and schoolteachers and so forth will go off and find someone to talk to who will convince them that Thomas Henry Huxley invented the Breville sandwich toaster. (I exaggerate. Faintly.) What they need is someone who knows the literature: its shape, its direction, its controversies, its holes. And you can only know the literature to that level if you are, yourself, writing bits of it. Funding research in the history of science certainly does foster useful public work in the history of science – but usually not in the atomistic, linear fashion which the whole “impact” agenda insists is the only way anything ever gets done.
I should clarify that, while I was doing all the stuff above, I was also developing PE work specifically out of my own research, chiefly through Drinking Up Time. This work has not, as I write this, picked up anything like the audience levels of the examples above. Perhaps some of it will. Perhaps in sixteen years’ time (Otto’s Joule paper is from 1995). You certainly can’t plan this stuff, except at the broad aggregate level.
The problem goes deeper. Anyone concerned with “economic” as well as “social” “impact” should note that, if anything, the work we’re competent to do gains in measurable earning potential the further away it gets from useful new scholarship. Textbook example: textbooks. How much cutting-edge research do you think we can smuggle into a work whose very purpose is to introduce the established field? (Probably up to about 10%, if the author is mightily, mightily ingenious.) Bonus literary example: consider the standard thought-experiment for hard-headed application of soft scholars’ skills, namely the industrial signage text consultancy proposed (in passing) in David Lodge’s Nice Work. The scholars in question could, arguably, have turned out superior signs through being researchers in English. This would in no sense have been an “impact” of the research they were doing when they weren’t signwriting.
The public role of conscientious humanities researchers is to disseminate, not the outcomes of atomised, cost-coded research projects, but the insights due to the whole of their professional experience and to that of the people they work with (most of whom, in my case, know more than I do). NB: this is not a plea for the right to be exceptionally woolly and floaty and expressive. It is a plea against randomly bashing together bits of assorted auditing approaches to produce a process that will “work” only in the tangible but unhelpful sense of reliably using up time and money.
So what would you do instead then, eh?
Well, on this issue, obviously, I’d tie any attempt at auditing public engagement to the contribution of the research group, rather than to the research itself. More generally, I’d bin the whole proposed edifice in favour of a national light-touch peer review system mapped to much smaller discipline areas and their overlaps, with measures to acknowledge and document the inherent subjectivity of the whole process as far as possible (minority reports, institutional response statements). Why? What would you do?