Using the Research Impact Framework as a tool for reflection
published 28 April 2015
We have Google Analytics, Altmetric.com, Sprout Social, MailChimp, Bitly, My Top Tweet, and any number of other funkily-named applications whose tentacles gather data to tell us how we are doing at changing the world. The rationale underpinning this emphasis on measuring impact is sound: what is the point of doing research, especially in an applied field like public health, if the results aren’t used and it doesn’t benefit anyone?
In IDEAS, we have engaged with the challenge of measuring research impact by applying the Research Impact Framework[1] (RIF) in conversations with implementation projects working in maternal and newborn health in Ethiopia, Northeast Nigeria and Uttar Pradesh, India. The RIF was developed by researchers, for researchers, to aid reflection on the impacts of their work. It’s a ‘light touch’ tool – we didn’t want to add burden to projects’ already intensive measurement regimes! – but it could be combined with other methods for added rigour. It has four impact categories (research, policy, service and societal) and encourages its users to think broadly about how their work has made a difference.
What research impacts were reported?
Projects reported impacts in all four categories. Research impacts included academic articles, conference presentations, reports and policy briefs. Policy impact was shown through pilot interventions being scaled-up by a state or national government, and policy impacts often linked directly to service impacts, for example a policy decision leading to change in health service delivery. Societal impacts were typically described through anecdotes, which would benefit from verification.
We followed up with projects 12 months after our initial conversations, in an attempt to capture impacts realised more recently, even after a project’s completion. This brought new examples and depth to the work. Sadly, in several cases key contacts had moved on, or new challenges were occupying their attention: the spotlight had moved forward, and projects had little incentive to reflect on the ‘long tail’ of impact.
However, we plan to persevere and return in a year’s time in an attempt to uncover new impacts and contribute to the discussion about impact and how to measure it.
Shirine Voller and Agnes Becker led the study on Dissemination activity and impact of maternal and newborn health projects, and produced an updated report in April 2015.