June 2017 - Impacts Data Collection Review - NOW CLOSED


#21

Thanks to the Impact WG for all your hard work on this. I am supportive of the overall structure - it reflects the way we already capture impact information and there is some healthy discussion around some of the nuances from others in the community.

To get the most out of this work it would be good to bear in mind the perspective of academics who will, after all, be doing most of the recording and have a big stake in how the information is used. Though this perspective is coming through in some of the feedback from the community, it doesn’t feel like it has been given systematic consideration.

If the aim of this work is to harmonise impact capture across organisations/systems then that’s one thing. If it’s about designing effective systems that work for all stakeholders (including those we are relying on to enter their information) then it would be worth thinking about how we bring in other perspectives. For example, how important is it to academics to distinguish between description and narrative? Do they want to use these kinds of systems to record planned impacts? It might well be that drivers such as simplicity and Researchfish compatibility are more important factors in securing academic engagement. If we don’t get the engagement bit right, we won’t have much information to manage!

Again, I think this is great (and important) work and hopefully my questions/comments will help you consider how to get the best from it.


#22
  1. Yes, it is good to explain in a few words what the impact is in a separate Description field, whilst the Narrative would give a better understanding of what was researched and what led to the impact.
  2. This does not seem sensible, as users are unlikely to interpret a definition in the same way or to provide an objective assessment.
  3. This does not make sense for future-proofing, as REF guidelines are likely to change and the categories could therefore become irrelevant. We should also consider that impact may occur beyond the listed categories; limiting impact categories to the REF guidelines would affect the quality of reporting.
  4. Any generic information that might be accessed/shared, such as Outputs and Activities.

#23

Thank you for the opportunity to comment on the Working Group’s draft. As a funder, we recognise the need for a glossary of terms so that there is a collective understanding, and data and information can be easily shared and mined between researchers, universities, funders and other organisations. NIHR Impact representatives have provided responses to the five specific questions posed, along with some other general comments.

Responses to the five specific questions posed:

  1. Are separate Description and Narrative elements necessary?
    This will depend on the purpose of collecting the research impact information and who the audience is – different individuals or groups, HEIs, funders, Government, or Trusts require different levels of information in various formats. So, with this in mind, there needs to be some flexibility built in. Under ‘Description’, as it stands, ‘Paragraph’ is left open to interpretation. We suggest that it might be more helpful to provide a ‘Headline’ summarising the key impact, rather than a ‘Description’, and then provide the ‘Narrative’. Also, under ‘Narrative’ we suggest inserting ‘time periods’/estimated ‘timescales’.

  2. Is the proposal to record impact stage sensible/feasible? How many impact stages should we have? How is each one defined? This will depend on 1) the purpose for which the information is being collected; 2) the audience; and 3) how the information is going to be used. If you want to track progress over time it can be helpful to take the stage of the research and/or impact into consideration. However, attempting to define impact stage is inherently problematic – there are many diverse ways in which impact can arise or be achieved, it can take a significant amount of time for impact to be realised, and it can comprise several feedback loops. Feedback loops are complex in themselves and even more complex for the NIHR (and other medical research funders) when considered as part of a complex integrated health system. We suggest that the focus here should be on actual impact that can be verified, evidenced and/or triangulated – obviously, how it is evidenced will depend on the nature of the impact claimed. Where actual impact is being claimed, evidence of external validation should be included, where appropriate. The introduction of ‘potential’ will add noise into the data systems.

  3. Do you agree with categorising impact and the WG suggestion to use the REF definitions (including the plan to extend to academic learning/teaching)? We recognise the potential that the REF has to help unpick research impact; however, as a multidisciplinary research funder, we have different impact requirements, depending on our stakeholders’ and organisation’s needs – REF-related impact represents only part of the NIHR impact story. We believe that impact is a collaborative process and mainly co-produced (i.e. research users are actively engaged as partners throughout the entire research process), and taking an approach that encompasses patient outcomes and longer-term, indirect impacts will ensure the wider picture of NIHR impact is captured and demonstrated.

While defaulting to the broad REF categories makes sense, these definitions are likely to change and evolve, which would limit such systems to being optimal only for HEIs. We believe it would be more beneficial to have a more open definition of impact for capture, one that can be tailored to any appropriate assessment or use, so that the REF is catered for but so is other reporting. Furthermore, part of our funding approach, as with other funders, is to support individuals along their career paths. Hence, impacts on people and careers also need to be captured and taken into consideration.

  4. Should any of the related content elements be part of the core definition? If so, which ones and why? Yes, ‘links to underlying research – particularly links to research outputs and projects’ should be included, along with the associated finding. Standardised unique identifiers – for example, award/grant references, the International Standard Registered Clinical/social study Number (ISRCTN) registry, the Global Research Identifier Database (GRID), Open Researcher and Contributor ID (ORCID) and Digital Object Identifiers (DOIs) – should be used, as appropriate. The use of such identifiers will help improve interoperability within the research ecosystem (a minimal sketch of how they might be attached to an impact record follows these responses).

  5. Which, if any, of the Parked elements should be included in the core definition? Please say why and provide a definition; where we have suggested they could be included in the Narrative, please indicate why they should be separate items. If the accompanying guidance is clear and ensures their inclusion, then there is no need to include the other ‘parked elements’. We agree that the inclusion of ‘time period/scale’ warrants further discussion. We also suggest that the guidance should consider engagement activities, knowledge translation activities, resources, activities or support needed to further maximise the impact (i.e. ensure change continues and is sustained), unintended consequences of the research, and any factors that may block or reduce the impact and how these might be overcome. In addition, a plan for how the potential impact of a piece of work is to be communicated to stakeholders would be useful; we need to ensure that as wide an audience as possible is aware of the work and adopts it.
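
Purely for illustration, the sketch below shows how such standardised identifiers might hang together in a single machine-readable impact record. The field names and values are hypothetical, not taken from the WG draft, and the example assumes a simple key/value serialisation:

```python
# Hypothetical impact record illustrating how standardised identifiers could
# link an impact back to its underpinning research. Field names and values
# are invented for this sketch; they are not part of the WG draft.
impact_record = {
    "headline": "Trial findings adopted into national clinical guidance",
    "narrative": "Longer free-text account of the research and the change it led to.",
    "stage": "actual",                      # as opposed to 'potential'
    "category": "Health impacts",           # e.g. a REF-style category
    "underpinning_research": [
        {"type": "output", "doi": "10.1234/example"},         # publication DOI
        {"type": "project", "grant_reference": "NIHR-0000"},  # award/grant reference
        {"type": "study", "isrctn": "ISRCTN00000000"},        # trial registration
    ],
    "contributors": [{"orcid": "0000-0000-0000-0000"}],       # researcher identifier
    "organisations": [{"grid": "grid.000000.0"}],             # institutional identifier
}

# With identifiers like these in place, records could be matched across
# funder, university and registry systems without string-matching on names.
print(impact_record["underpinning_research"][0]["doi"])
```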

Other points:

‘Preamble’
• It’s not just about scrutiny or justifying our investment. As a public funder, it is about proving that the research we fund and support brings meaningful, real-world benefits to patients, the public and the wider society. It is also about social responsibility.
• It would be helpful to highlight and emphasise that impact is about provable change(s) in the real-world – this is not explicit in the document as it stands.
• The longer-term, non-linear, highly contextual nature of research impact should be explicitly acknowledged.
• As a funder, we recognise the need for a glossary of terms so that everyone understands and data can be easily shared and mined.

‘Elements currently agreed for inclusion’ –
• Who is affected – we also need to ask how they benefitted/were affected
• Associated Ids – This section needs to give more specific details, such as reflecting the use of standardised unique identifiers as appropriate, for example DOIs, award/grant reference numbers, GRID, ORCID, and ISRCTN.

Impact Evidence
• Given the indirect and complex nature of research impact, the underpinning mechanisms, efforts and activities associated with achieving longer-term impact need to be recognised (i.e. differentiating between attribution and contribution to impact, developing connections, knowledge mobilisation, evidence of co-production, etc.) and captured.
• Again, as mentioned above, this element should reflect and use standardised unique identifiers, where possible, to be useful (e.g. DOIs, ORCID, etc.).

Participants – This element needs reframing and requires further clarification. ‘Participants’ could be assumed to be the participants in a research study. Is this component intended to be more about the collaborations and partnerships involved (who? how many? rationale for their involvement?) to enable/facilitate the impact? If so, the guidance should be explicit.

• What about asking researchers to provide some information on the ‘Context’ of the research impact?
• It is important that the impact on people, training pathways and careers is also considered: the guidance should reflect this.

‘Reporting to funders’:
• We recognise, as do other funders, that no one size fits all. Different funders, universities, NHS Trusts and other organisations have different drivers and different views of the value and benefits of their research on healthcare outcomes and of what the end result should be, and the types of impacts expected vary.

• ‘Keywords’ – the data should be made available in a format that allows it to be analysed by software as required.
• ‘Impact statement’ – there is currently no such field in Researchfish.

• We like the idea of impact prizes
• Over-claiming impact is a concern. This is an aspect that could be addressed by, for example:
• Peer review or independent triangulation to verify the claims made by researchers.
• Mechanisms to evaluate and capture the relative attribution that the researcher could reasonably claim for generating the impact, e.g. was the researcher pioneering in the world, or bringing into the nation ideas which had matured outside the nation.
• Economists to provide return on investment/ cost-benefit calculations


#24

Just a semantic point, but one that is increasingly frustrating me… the Impact we are talking about here is increasingly being muddled with impact in the context of bibliometrics, or as I prefer to put it, ‘indicators of academic influence’. Perhaps the glossary just needs to make it explicit which impact we are talking about. At INORMS last year, sessions on academic and non-academic impact ran side by side, and yet we are fundamentally talking about very different things. Each agenda needs a more distinct identity in my view.


#25

Thanks to everyone for a great discussion and many extremely valid points. Just a few extra comments from me:

  1. I agree that a separate ‘Summary’ (probably clearer than ‘Description’) and Narrative elements would be very useful, possibly with Summary being mandatory and Narrative optional. ‘Summary’ would make it possible to quickly review records/run concise reports, whereas the longer narrative could contain details of impact not captured anywhere else.
  2. I see the point of having several impact stages such as ‘in progress’ and ‘complete’. ‘Planning/in development’ could be useful at the institutional level for flagging things up for the attention of impact administrators; however, one main challenge I see here is how we save researchers from having to rewrite the summary/narrative once the impact has occurred. Or would we expect researchers to create separate impact records for the planning stage (essentially ‘Pathways to impact’) and for when impact has happened? This might be too far-fetched, but perhaps it would be useful to have a system that would allow them to ‘tick off’ achieved impact goals and add information about additional/unexpected impacts? Creating an impact template that would be perceived by academics as helpful in monitoring their progress against initial impact goals could potentially have a positive effect on researchers’ engagement.
  3. I do agree with an earlier suggestion that it should be possible to assign more than one category to any impact record.
  4. Yes, definitely underpinning research outputs, preferably also activities, press clippings and projects (e.g. to allow reporting on all research outcomes from a project/grant).
  5. I don’t think ‘period’ can only be referenced in the narrative; there should be a way to report on impacts that occurred within a specified time period (though I agree that, instead of indicating a period for the whole impact record, it might be more practical to do it based on linked activities and evidence dates).
