June 2017 - Impacts Data Collection Review

This Working Group (WG) documented how UK universities are currently collecting impact information in their various local software solutions, including commercial and in-house systems and spreadsheets. The WG also documented the various impact reporting information requirements of the UK funders, including REF2014 (the next REF is likely to be reasonably similar) and the Researchfish tool. Once we have community input on the output of this work (see report linked below), the next cycle of work will be to recommend how to standardise/harmonise the divergent information requirements.

See this post for background on the Open Review. This review is open until June 30, 2017. The draft report is NOW READY for review below.

To comment on the content below either click the blue “Reply” button at the bottom or select a passage of text in the report and click the “Quote” pop-up to create a comment about that section only.

1) Preamble

The impact of research is an increasingly important agenda for universities, with funders keen to speed up the translation of funded research into measurable effects on the economic, cultural and social lives of people and organisations, as well as their health and well-being.

This applies particularly to public funders, who need to justify their expenditure to government and to society at large, particularly where budgets are coming under increasing scrutiny. REF2014 (http://www.ref.ac.uk/) introduced Impact Case Study Reporting as an element of the UK national research assessment exercise, with the outcome directly affecting funding, and was hailed as a success by both the government funders and, broadly, the research organisations involved.

This has galvanised the acceptance of impact as a recognised area of research strategy, policy and management at universities, which has, in turn, led to a call by these universities, represented by this CASRAI-UK working group, to improve the efficiency and effectiveness of impact reporting tools.

The first phase of work by the group has been to survey and document the current systems in use by UK universities for recording impact information. This is described in more detail below including a proposal for a set of information elements to record impact which we are making available for feedback from the wider research community, and a specific set of questions to help target this feedback.

The Working Group (WG) documented how UK universities currently record impact information in their various local systems. These systems include: Pure, Converis, Elements, EPrints, Researchfish, VV Impact Tracker, in-house systems and individual spreadsheets. The WG also documented the various impact requirements against which universities are asked to provide data.

The WG used the above information to propose an initial list of information elements for recording impact and draft definitions for these elements, plus items requiring further consideration (planned for the continued work in 2017).
The WG welcomes feedback on this draft from all stakeholders including other research organisations, research funders and system suppliers. With the recommendation in the Stern review for funders to harmonise their requirements for Impact and the evidence from this WG of a convergence across systems currently in use for recording impact, this seems an ideal opportunity to work collaboratively on a clear research information template for Impact. The plan is for the WG, including new members who wish to be involved, to reconvene following the review process to produce an Impact Glossary for inclusion in the CASRAI dictionary.

2) Early Draft Glossary of Impact Elements

The following list includes the terms (and any available definitions) being used in our various impact capture tools. In its next work phase the WG will refine this list, supply missing definitions and draft a proposed harmonised data collection template that can meet all the above purposes.

Elements currently agreed for inclusion

  • Title - Short title for the impact.
  • Description - A single-paragraph summary description of the impact.
  • Narrative - A detailed narrative of the impact. A textual/human readable form of the details of the activities which have been undertaken to create the impact and the combined result. This should be able to be read independently of other parts of the impact record. Include references to relevant periods and future developments.
  • Who is Affected? - Details of the audiences, beneficiaries and end-users.
  • Category of Impact - The area in which the change or effect is realised. (The WG agreed to default to the existing REF list of single broad categories, with a wider proposed definition adding academia/learning and teaching.)
  • Impact Stage - The degree to which the impact has developed, been adopted and been realised. Potential candidates for a standard ‘list of stages’: Planning or developing - early stage; Involvement - mid or active stage; Change adopted - end stage; Public benefited - end stage; In progress; Impact occurred - awaiting evidence; Impact occurred - evidence obtained; Planning and early development; Involvement and engagement; Influence and change (awaiting evidence); Influence and change (evidence obtained)
  • Associated IDs - Identifier/s for associated impact in an external system. E.g. REF2014 or Researchfish
  • Impact Evidence - Demonstrable proof of the effect or change. Multiple examples of Impact Evidence can be associated with an Impact. A possible structure for each instance of Impact Evidence: Type e.g. quantitative or qualitative; Period; Evidence Title; Evidence Summary; Evidence Contact Info; Links - urls, documents
  • Participants - The people and/or organisations who contribute to the Impact. This needs to be structured further to include, for example, persistent identifier and role/contribution. The CASRAI CRediT taxonomy could be applied to this - or extended to cover additional categories for Impact contribution.
  • Restrictions - Any restrictions on impact (and associated evidence) visibility; where information may need to be withheld from public domain due to data protection, commercial, political or other sensitivity.
  • Geographical extent of the influence - Geographical reach of the impact: local, regional, national, international or multinational.
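
To make the proposed template concrete, the following is a minimal sketch of how the agreed elements might be represented as a data structure. This is purely illustrative: the class and field names, the enumerated stage values (a subset of the candidates listed above) and the use of a list for Category are assumptions of this sketch, not part of the WG's agreed specification.

```python
from dataclasses import dataclass, field
from enum import Enum
from typing import List, Optional

class ImpactStage(Enum):
    # Illustrative subset of the candidate 'list of stages' above
    PLANNING = "Planning or developing - early stage"
    IN_PROGRESS = "In progress"
    AWAITING_EVIDENCE = "Impact occurred - awaiting evidence"
    EVIDENCE_OBTAINED = "Impact occurred - evidence obtained"

@dataclass
class ImpactEvidence:
    # Mirrors the possible Impact Evidence structure above
    evidence_type: str                 # e.g. "quantitative" or "qualitative"
    title: str
    summary: str = ""
    period: str = ""
    contact_info: str = ""
    links: List[str] = field(default_factory=list)   # URLs, documents

@dataclass
class ImpactRecord:
    title: str                         # short title for the impact
    description: str                   # single-paragraph summary
    narrative: str                     # detailed, standalone narrative
    who_is_affected: str               # audiences, beneficiaries, end-users
    category: List[str]                # REF broad categories; a list is an assumption here
    stage: ImpactStage
    geographical_extent: str           # local/regional/national/international/multinational
    associated_ids: List[str] = field(default_factory=list)   # e.g. REF2014 or Researchfish IDs
    evidence: List[ImpactEvidence] = field(default_factory=list)
    participants: List[str] = field(default_factory=list)     # could carry an ORCID iD + CRediT role
    restrictions: Optional[str] = None                        # visibility/sensitivity constraints
```

Under this sketch, a record could be created with only the core fields and have evidence attached as it is obtained, mirroring the ‘awaiting evidence’/‘evidence obtained’ stages above.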

Elements parked or excluded (but open for reconsideration)

  • Related Content - Agreed to leave these out of the core elements for now - although likely to be required as link to the underlying research - particularly Links to Research outputs and Projects. E.g. Press/Media; Equipment; Research output/publications; Activities; Projects; Datasets; Prizes; Student thesis
  • Impact Hierarchy - This can provide a hierarchy or sequence of Impact, i.e. this impact built on top of this other impact. How then do we determine where one impact ends and another starts?
  • Period - Agreed to take this out altogether because the elements would be referenced in the narrative: date of engagement - referenced by linked activities; date impact evidenced - referenced by linked evidence. More discussion would be needed.
  • Future Developments - What further impacts are possible/likely because this change has happened, and what the organisation intends to do in the future to extend the impact. Suggestion: include this in the Narrative element above.
  • Impact Classification - One or more ways of classifying Impact. No agreement on whether to keep this or not - similarly for Keywords.
  • Keywords - See classification
  • Links - urls, documents, etc - Agreed to park this for now.
  • Reach - Extent to which the research has impacted on the potential beneficiaries of the research. Park for now - could be included in Narrative?
  • Significance - Explanation of the importance of the impact(s) that have occurred as a result of the research. Park for now - could be included in Narrative?

Specific questions on the above information elements:

  1. Are separate Description and Narrative elements necessary? If so, how would you define each clearly and distinctly?
  2. Is the proposal to record Impact Stage sensible/feasible? How many Impact Stages should we have? How is each one defined?
  3. Do you agree with categorising impact and the WG suggestion to use the REF definitions (including the plan to extend to academia learning/teaching)?
  4. Should any of the related content elements be part of the core definition? If so, which ones and why?
  5. Which, if any, of the Parked elements should be included in the core definition? Please say why and provide a definition, and where we have suggested they could be included in the Narrative - please indicate why they should be separate items.

Appendix) Impact Data Collection in the UK

Higher Education Institutions collect and report on impact activities for a number of purposes, in a number of ways, and using a range of metrics. These are listed below.

1) Research Excellence Framework (REF)

  • The Research Excellence Framework (REF) is the system used to assess the quality of research in UK higher education institutions. The REF is undertaken every five to seven years, with the last REF results published in 2014 (REF2014) and the next due to be published in 2021 (REF2021).
  • Peer review is at the heart of the REF assessment, covering research outputs, research environment and impact. Impact assessment accounted for 20% of the REF2014 score.
  • Impact assessment was based on an Impact Template (20% of the impact score), which described the Unit of Assessment (discipline) approach to impact overall; and a selection of Impact Case Studies, dependent on number of staff FTE submitted (80% of the impact score).
  • Impact in REF2014 was defined as ‘an effect on, change or benefit to the economy, society, culture, public policy or services, health, the environment or quality of life, beyond academia’.
  • Impact Case Studies required evidence of both the link between research in the discipline and the impact created; and evidence of the impact itself. Impact was measured in terms of reach and significance. There is an expectation that the next REF - probably REF2021 - will have a similar requirement.
  • Since the introduction of impact to the REF, various systems have been developed for curation of impact evidence – including impact modules for CRIS systems (such as Pure and Symplectic), Vertigo Ventures Impact Tracker, Researchfish, and a range of bespoke internally developed systems and approaches.
  • Many institutions will run REF mock exercises on a regular basis throughout the REF cycle, collating impact narrative statements or plans.

2) Applying to Funders

  • Many funders require a Pathway to Impact statement, or similar statements around how the project will engage with end users. The definition of the “Pathway to Impact” is evolving with the RCUK.
  • Current RCUK definition: “The demonstrable contribution that excellent research makes to society and the economy.” But the REF consultation proposed that HEFCE (http://blog.hefce.ac.uk/2017/02/09/research-is-all-about-impact-2/) and RCUK (http://www.rcuk.ac.uk/innovation/impacts/) align their definitions of impact to be adopted for REF 2021. Proposed aligned definition: “Wider impact: an effect on, change to or benefit to the economy, society, culture, public policy or services, health, the environment, or quality of life, beyond academia.”
  • H2020 (https://www.ffg.at/sites/default/files/downloads/page/horizon2020indicators.pdf) talks about different categories of impact as indicators and includes “impact broadly defines the wider societal, economic or environmental cumulative changes over a longer period of time” and “impact indicators represent what the successful outcome should be in terms of impact on the economy/society beyond those directly affected by the intervention”

3) Reporting to Funders

4) Impact Prizes

  • There are a number of impact prizes where applicants enter the impact that they have created, potentially in a number of categories. Examples:
  • ESRC
  • RCUK/PraxisUnico
  • Others
  • Internal prizes/awards

5) Higher Education Business and Community Interaction Survey (HE-BCI)

  • The annual HE-BCI survey (https://www.hesa.ac.uk/support/definitions/hebci) examines the exchange of knowledge between universities and the wider world, and informs the strategic direction of ‘knowledge exchange’ activity that funding bodies and higher education institutions (HEIs) in the UK undertake.
  • This is an annual return and drives the allocation of Higher Education and Innovation Funding (HEIF - http://www.hefce.ac.uk/ke/heif/) to universities.
  • The HE-BCI survey collects a range of information, though these are indicators of impact-generating activity, and not necessarily evidence of impact. These include:
  • Collaborative research
  • Consultancy
  • Contract research
  • CPD and CE
  • Facilities and equipment related services
  • Intellectual property income
  • Regeneration and development programmes
  • Spin-off companies established
  • Spin-off companies active for three or more years
  • Number of new patent applications filed/granted

6) Key Performance Indicators (KPIs)

  • Aspects of impact activity may be measured as institutional KPIs, including volume/quality of REF case studies, industrial income, numbers of partnerships, etc.

7) Internal Case for Investment

  • Demonstration of return on investment may be required to secure ongoing investment for impact both within academic departments and Professional Services.
  • Linked to KPIs possibly.

8) Resource planning

  • Institutions may wish to understand the volume of impact activity being carried out and likely resource requirements in order to plan support appropriately, including distribution of resources (human/financial).

9) Case Studies (not REF)

  • Case studies showing impact stories may be requested by funders, and internally, to demonstrate impact in particular areas, for marketing activity, etc.

10) Marketing

  • Examples of impact may be presented on institutional websites, in newsletters and other marketing material to demonstrate to end users the potential value of engaging with universities.

11) Sharing of Best Practice

  • Nationally and internally, there is activity around identifying and sharing best practice in impact.

12) Performance Review

  • Increasingly, institutions are using impact generation as a criterion in performance review and promotion.

13) Customer Relationship Management Systems (CRMs)

  • Institutions may use CRMs to collect information on networks between the institution (individuals/departments/Colleges/institution) and external customers/partners.
  • This may enable identification of potential partners for both bidding and sharing of research to create impact; and for targeting marketing activity.
  • Possible link to KPIs.

This reply is to notify those who expressed an interest in reviewing the Impacts outputs of CASRAI-UK. Hoping all in the @uk-impacts interest group can look at the above post and provide their comments to the WG.

This taxonomy covers the contribution of researchers very well. But impact is often achieved in collaborative projects with non-scientific actors. Is it planned to make their roles transparent? If so, additional role types would be needed for them.


Responses to specific questions on the above information elements:

  1. The distinction between narrative and description is not necessarily a requirement but it is useful to some HEIs as far as I am aware. Whilst I believe the description box to be essential in terms of understanding the incident of impact reported, the narrative should be optional, as it has proven counterproductive to push for a more detailed narrative in cases where impacts are at an early stage of development.

  2. I believe that it is useful to record the impact stage, as this can be helpful in terms of assessing the maturity of impact developments and the potential for growth / development. However, it would be most beneficial to boil this down to a few defined categories rather than offer too much detailed choice. These should be as generic as possible and indicate maturity of impact. Suggested options: Planning or developing - early stage, In progress – engagement/ stakeholder involvement, Impact occurred - awaiting evidence, Impact occurred - evidence obtained. It would also be useful to have a status for impacts that are abandoned, never fully developed or on hold (i.e. ‘parked’?).

  3. I agree with the suggestion that use of the REF definitions would be sensible criteria, given that the ESRC impact prize and the NERC impact awards are already using the REF definition to assess impact, and the related Stern recommendation is likely to bring funders’ impact definitions in the UK closer to the REF. However, would it be worth utilising REF2021 criteria rather than REF2014 criteria, as there are likely to be updates and/or changes in the forthcoming iteration of REF guidance? Also, I would be cautious about the term ‘academia learning/teaching’ when talking about impact. Whilst it is true that REF2014 did allow for impact on higher education and/or teaching in HEIs outside of the submitting institution, the terminology used was actually ‘impact on education’ and it was only explicitly included as an impact type in Panel D. It might be worth checking how this features in the consultation results and the first iteration of REF2021 guidance, which will provide an indication as to what extent HEFCE is taking Stern’s recommendation further - and in what form.

  4. Don’t think so. Would potentially make the core template too convoluted.

  5. Albeit not the only purpose for the impact data collection, the REF is undeniably one of the main drivers (if not THE main driver) for the impact agenda in the UK. To be able to judge whether any specific impact data is fit for inclusion in a REF impact case study heavily depends on WHEN the impact occurred. I would be hard-pressed to trust that an indication of a relevant time period or dates will be added in impact narratives or descriptions as a matter of course (even if the data collection template explicitly asks for it). In fact, during the REF 2014 submission cycle (and since) the dates when the impacts (and/or related activities) occurred were amongst the most nebulous areas of narratives or impact statement/ drafts submitted for monitoring or data collection purposes. Suggested definition: ‘When did the impact occur? (period of time or date/ date range)’.

Other comments:

Associated IDs – not sure this is useful for developing incidents of impacts. This seems to be more applicable for impacts that have already been reported. Is this intended for imports from other systems? I don’t find it necessarily relevant. Maybe this could be parked?

Reach and Significance (along with Geographical Reach) - could easily be included in the narrative. I don’t see the benefit of including this as a separate definition.

Who is affected – we might want to term this ‘Who/ what is affected’ as impact on non-human entities is acceptable (both for funders and the REF) – i.e. the impactee can be the environment or a corporate entity. Maybe this can be taken into account?

Participants – might it be worth terming this definition ‘External Participants’? I read this definition as focusing on external organisations as we would normally assume that any internal participants would be researchers whose work or expertise is underpinning.


Hello. I just had one comment related to ‘Category of Impact’ - often impact might fit into two categories equally say ‘health’ and ‘economic’ so maybe it could be possible to choose more than one? Having it labelled as one category can be limiting and mean academics don’t think about the multiple impacts their work has.


Responses to the 5 questions posed, drawing on experience of working with information submitted to a research council (EPSRC) via researchfish:

  1. Are separate Description and Narrative elements necessary? If so how would you define each clearly and distinctly?
    The answer depends on how the data collected is used. Title and Description alone are sufficient if the purpose is primarily to allow what I would call ‘outline claims’ of impact to be recorded, so that specialists can then review the data and identify those with sufficient potential to justify allocation of additional resource to be worked up into a fully fledged Case Study suitable for inclusion in e.g. a REF. Including a longer (several page) Narrative element could generate a huge amount of wasted effort. On that basis, if a narrative element is included then it should be optional, while Title and Description should be mandatory. Title: not more than 100 characters incl. spaces; include at least one term or phrase to uniquely and memorably identify the record in the mind of a reader. Description: more ‘what’ than ‘how’, i.e. describe the principal output/activity leading to the impact(s) AND the impact(s), but don’t dwell on how the one led to the other - that can be done if a full Case Study is commissioned.

  2. Is the proposal to record Impact Stage sensible/feasible? How many Impact Stages should we have? How is each one defined?
    I have misgivings about impact stages - in particular, a stage labelled ‘planning’ or ‘developing’ would, I think, lead to innumerable submissions of ‘potential’ rather than actual impact and make it much harder to focus subsequent effort. But once again, it’s about what the data is collected for…

  3. Do you agree with categorising impact and the WG suggestion to use the REF definitions (including plan to extend to academia learning/teaching)?
    I think the REF categories are useful, but there are huge potential overlaps that will lead to endless debate about which category is ‘correct’ for many records - see the FAQ headed ‘WHAT IS SUMMARY IMPACT TYPE?’ at http://impact.ref.ac.uk/CaseStudies/FAQ.aspx. A more descriptive, lower-level categorisation of impact would be useful, e.g. in researchfish some outcomes can be flagged as having led to certain types of impact; for example, a record describing an ‘Influence on Policy’ can be flagged as having led to one or more of the following:

  • Improvements in survival, morbidity or quality of life
  • Changes in efficiency and effectiveness of public service delivery
  • Improved accessibility of public services
  • Improved regulatory environment
  • Economic impacts
  • Improved educational and skill level of workforce
  • Changed public attitudes on social issues
  • Effective solutions to societal problems
  • Improved environmental sustainability
  • No impacts yet
  • Not known

If agreement can be reached on suitable short ‘statements’ such as these (excluding the last two :wink: ) which summarise different kinds of impact, it will make the process of recording impact less burdensome for those creating the records, and make the records themselves much more efficiently accessible to those seeking to extract information from them.

4. Should any of the related content elements be part of the core definition? If so, which ones and why?
Yes, an impact record should always include a link to the underpinning research output/activity, but the system should avoid being prescriptive about the type of output/activity.

5. Which, if any, of the Parked elements should be included in the core definition? Please say why and provide a definition, and where we have suggested they could be included in the Narrative - please indicate why they should be separate items.

‘Period’ - no, but key dates should be core elements (i.e. not left to be extracted from free-text narrative) and should not be constrained by undue precision - DD/MM/YY is seldom required, often MM/YY is sufficient, and occasionally YY alone is OK. They should be recorded for (i) when the underpinning research was ‘completed’ and (ii) when the impact being recorded was first identified and linked positively to the underpinning research.

‘Reach’ should be a core element, but there is potential overlap between it and the core element ‘Who is affected’ - needs to be clearer that one (as I read them) refers to geography and the other to people. In both cases I recommend constrained authority lists be used, not free text.

I don’t think the other ‘parked’ elements should form part of the core.


Christine - many thanks for your comments - all extremely well made. On the Participants question, we do actually mean those who are contributing to the Impact in terms of the underpinning research or follow-on work. But this obviously needs clarifying - so thank you.
The CRediT taxonomy, in case you haven’t seen it, has the following contributor roles at http://dictionary.casrai.org/Contributor_Roles


Agree that more than one category should be possible and we will make that explicit in the spec.

Many thanks, Ben, for your detailed and expert response.

On Impact Stages and Researchfish - it could be then that only Impact beyond a certain stage is asked for and relevant?

On categories we discussed having a hierarchy - would this work in your scenario? So we could use the top-level for REF, for example, and more detailed level for Researchfish?

On the link then that is a good point - thank you.

Seems that date/s are important - so good to get feedback on this.

I tend to agree about reach and who affected … but these would need to be optional elements and better defined as you say.


Many thanks for your message, Anna. The contributor roles look great - and yes - with some clarification, I can see this would be valuable for both research monitoring purposes and for academics trying to piece together an impact narrative for any purpose (REF/ funders). I guess if the roles were presented as a drop-down menu this section would be fairly self-explanatory.

Question 2 - I think there might be a need for an ‘impact closed’ stage or similar. Reasons can be many and varied: work has been superseded, work came to its natural close, lead academic(s) left and the work [research and/or case study] did not continue, work was delayed for some reason, and more.

This helps selection of cases for REF/other purposes and, after a while, will provide trend data on progress (or not!), longevity of impacts being felt, etc.


Thanks - and yes, I see the logic for having this. I suppose it depends how much detail we need as to how many reasons we include. Certainly something for further discussion, I think.

  1. I think the distinction should be between a (short) summary, and the longer narrative. This mirrors the REF template, and in any other write-up you are also likely to start with a short summary of what is to come. This should also be possible to extract in reports.
  2. I would raise caution against more than two, possibly three stages: “in progress”, “complete” and possibly “in planning”. I prefer not to reserve the impact module to only have projects where some impact has taken place, but I understand some institutions also want to record planned impact. A more granular approach is going to be so subjective that it won’t have any additional benefits. Crucially - it must be possible to reopen a “complete” record if even more impact occurs.
  3. I think it makes sense to use the REF definition (once we know what this is). We can hope that this definition stays, but there needs to be flexibility in changing it if the definition changes.
  4. I am not sure I understand this question.
  5. As we are talking about impact from research I think it is important that impact records link to research outputs and projects. In our system (PURE) we also distinguish between activities to create impact and the result which is the impact, and needs to be evidenced. I think this is important, otherwise we are in danger of colleagues logging public lectures etc as impact.
    For reporting purposes I think dates (period) are important. But if it is possible to run reports on, say, records that have been updated within the last 6 months, then that should cover it.
    Agree reach and significance are best included in the narrative.

I have some reservations about early/mid-stage assignment - from experience this promotes potential/aspiration-based records. For those that do not develop further, this leaves a history of proposed impact that never occurred, something I would consider to be more negative in the long term than no record at all. I am of the opinion that an impact record should record tangible change, in which case stage is perhaps not required at all.

I mainly agree with Christine’s suggestions.

I find value in having an early/‘planning’ stage but also a ‘parked’ label. Impact tends to be non-linear, and so having a holder for impact that is developing is important so that information isn’t necessarily lost if it takes a pause.

I would prefer period or start and end dates to be a separate field so one doesn’t need to search for it in the narrative and to point out to individuals to include it.

Links to research are also important if this is being based on REF so some fields for DOI or other identifiers - and for grants. These wouldn’t need to be required fields.

Reach and significance are subjective and comparative measures, so I’m not sure they should be fields.

Description and narrative do not clearly identify what is to be written. Perhaps ‘summary of impact’ and ‘research to impact narrative’?

Many thanks, Anne Sophie, for your comments.
I think you are right about the number of stages - if we have too many then these will become too subjective.
Q4 - you answered this in 5 anyway, as it was about whether we should include a requirement to link to other content, e.g. outputs or projects. Ben Ryan pointed out that this should be included, since it is a fundamental part of any impact to be able to trace it back to the original research outputs/outcomes or activity.

Many thanks for your comments, Laura.
On Dates then this came up at the CASRAI conference last week too … so I think we need some kind of date but need to define exactly what the date (or dates if we have > 1) refer to.

For Description and Narrative then agree we need to define these more clearly and indicate what we expect to see included. Does seem to be agreement though that we do need the 2 fields.

I’d agree - it seems to depend on how an institution is using the data. We use the description field to mean a summary of the underpinning research (ideally in lay terms), but we are looking specifically at how we can map the data captured to potential REF case studies.

I’m taking the same approach as Christine, addressing (some of) the specific questions above.

  1. I can see the logic in the separate Description and Narrative elements - to me, the ‘description’ field matches the ‘impact summary’ element of a REF case study. In practice, however, users can’t discern a difference between them and, while recording an instance of early or mid-stage impact, for example, they just find themselves repeating the same information in both boxes. I find the most important field to be ‘Narrative’ - everything that needs to be recorded, including quantitative data, can be placed there. It only becomes an issue when there is so much impact that it exposes the limitations of the recording system, and two boxes won’t answer the problem. Impact IS narrative, and often its best curator is a human! In such cases, an impact recording system is not perhaps the best place to record impact info other than a brief outline - which can be handled adequately by the Narrative field.

  2. Recording Impact Stages: I find this a useful indicator, but think it best not to go overboard. The main issue arises when/if users are employing the system to plan for impact rather than recording impact that has already occurred - I almost feel I want a different type of record altogether for that. We have adopted a ‘Record Closed’ stage - as others have pointed out, impact can be/has to be abandoned for various reasons, and it’s sometimes quite different from an instance of ‘Impact Completed’.

  3. Certainly I think there needs to be a greater consistency of terminology around impact, and streamlining with the REF definitions makes sense. I, too, am uneasy about including learning/teaching in the definitions.

  4. The only ‘related content’ elements that should potentially form part of the core definition are ‘research project’ and ‘outputs’ - simply because, under REF definitions, there is no impact without research, and there is still widespread misunderstanding about this.

  5. The only Parked elements I would suggest for potential inclusion would be Reach and Significance; but, as indicated, I think they could/should most usefully be included in the Narrative. This is what I already advise users to do: keeping things simple is the better course.