It’s clear that impact is growing swiftly within international research agendas. I’ve had many discussions recently with colleagues across various ponds for whom the dark cloud of impact is looming. Many seem to be looking to the UK to learn from our REF experience, and to be frank that’s not a bad idea at all. Where impact is concerned, it’s fair to say the UK is both specialised and battle-worn in equal measure. Unlike many of our international peers, our sector has been driven by centralised impact assessment, rather than broader dialogues of ‘benefits’ and ‘knowledge mobilisation’. It is an approach with pros and cons, many of which we’re still unpicking. Certainly the wonderfully engaged discussion at the recent ARMA Impact Special Interest Group (SIG) meeting at the annual conference shows just how much we still need to do to integrate, normalise and support impact in its most meaningful terms.
Impact for many of us is a good thing. We welcome the focus on positively influencing the world beyond the university walls, and let’s face it, this in itself is not a new agenda. Applied researchers (myself very much included) have always sought to contribute solutions to social problems. However, the more formal, assessment-driven (REF) impact agenda shifts such virtuous rhetoric towards reductionism and selectivity. REF is a bit like your mother-in-law, who manages to completely overlook the six hours of cleaning you’ve done and focus instead on the speck of dust you’ve left behind the TV. It’s a one-off assessment which ignores how frantically you’ve cooked, ironed, and incentivised-your-children-to-behave-less-like-chimps. Which, like REF, usually results in a large glass of wine.
I don’t say this to dismiss REF. If anything, REF has accelerated the importance of impact within academia, and for that I am thankful. Puerile analogy aside, I strongly urge those for whom impact is emerging to really take time to consider how impact ‘works’. A formal impact agenda raises challenges across the academic sector, arguably posing most difficulties for fundamental research and for work with less easily measurable endpoints (e.g. the arts and humanities). Assessment-driven approaches risk reducing impact value to a small subset of narrowly demonstrated effects. Unless we approach impact literately* and meaningfully, we will only ever firefight paths towards social effects.
In all of this, it’s crucial too that we don’t ignore the people. Obviously it’s vital that we engage stakeholders and consider wider public benefit, and there’s excellent thought-leadership in these areas. However, here I’m referring to a different group – impact practitioners themselves, be they academics driving their own work or research managers supporting a broader programme of work. The impact sector has grown rapidly within the UK, and – as demonstrated through the wealth of experience and expertise in the ARMA Impact SIG – the sector would be foolish not to recognise the skills and capabilities so fundamental to translating research into effects.
Reducing impact to a measured subset of effects obscures the expertise needed for knowledge brokerage, culture change, partnership management, strategic planning, reconfiguration and much more besides, often in combination. If we are to create ‘good impact’ we need to recognise and invest in professional development amongst all those supporting this agenda. And avoid bolting impact on as an afterthought. And understand how assessment models may drive behaviour. And how this may be judged by a mother-in-law-dust-seeking review**.
Let’s make the research count. Properly.
*Impact literacy paper to come with the brilliant Dr David Phipps!! (@Researchimpact)
**My mother-in-law likes me. At least, she hasn’t said otherwise.