The Metric Tide: Independent Review of the Role of Metrics in Research Assessment and Management
Publication Year: 2015
Metrics evoke a mixed reaction from the research community. A commitment to using data and evidence to inform decisions makes many of us sympathetic, even enthusiastic, about the prospect of granular, real-time analysis of our own activities. Yet we only have to look around us, at the blunt use of metrics to be reminded of the pitfalls. Metrics hold real power: they are constitutive of values, identities and livelihoods. How to exercise that power to positive ends is the focus of this book. Using extensive evidence-gathering, analysis and consultation, the authors take a thorough look at potential uses and limitations of research metrics and indicators. They explore the use of metrics across different disciplines, assess their potential contribution to the development of research excellence and ...
- Front Matter
- Back Matter
- Chapter 1: Measuring up
- Chapter 2: The Rising Tide
- Chapter 3: Rough Indications
- Chapter 4: Disciplinary Dilemmas
- Chapter 5: Judgement and Peer Review
- Chapter 6: Management by Metrics
- Chapter 7: Cultures of Counting
- Chapter 8: Sciences in Transition
- Chapter 9: Reflections on REF
- Chapter 10: Responsible Metrics
SAGE Publications Ltd
1 Oliver's Yard
55 City Road
London EC1Y 1SP
SAGE Publications Inc.
2455 Teller Road
Thousand Oaks, California 91320
SAGE Publications India Pvt Ltd
B 1/I 1 Mohan Cooperative Industrial Area
New Delhi 110 044
SAGE Publications Asia-Pacific Pte Ltd
3 Church Street
#10-04 Samsung Hub
Introduction © James Wilsdon
Report © HEFCE 2015, where indicated.
The parts of this work that are © HEFCE are available under the Open Government Licence 2.0: www.nationalarchives.gov.uk/doc/open-government-licence/version/2
This report was originally published in 2015
This version of the report published in 2016
Apart from any fair dealing for the purposes of research or private study, or criticism or review, as permitted under the Copyright, Designs and Patents Act, 1988, this publication may be reproduced, stored or transmitted in any form, or by any means, only with the prior permission in writing of the publishers, or in the case of reprographic reproduction, in accordance with the terms of licences issued by the Copyright Licensing Agency. Enquiries concerning reproduction outside those terms should be sent to the publishers.
Library of Congress Control Number: 2015960254
British Library Cataloguing in Publication data
A catalogue record for this book is available from the British Library
ISBN 978-1-47397-306-0 (pbk)
Typeset by: C&M Digitals (P) Ltd, Chennai, India
Printed and bound by CPI Group (UK) Ltd, Croydon, CR0 4YY
Metrics evoke a mixed reaction from the research community. A commitment to using data and evidence to inform decisions makes many of us sympathetic, even enthusiastic, about the prospect of granular, real-time analysis of our own activities. If we as a sector can't take full advantage of the possibilities of big data, then who can?
Yet we only have to look around us, at the blunt use of metrics such as journal impact factors, h-indices and grant income targets to be reminded of the pitfalls. Some of the most precious qualities of academic culture resist simple quantification, and individual indicators can struggle to do justice to the richness and plurality of our research.
Too often, poorly designed evaluation criteria are “dominating minds, distorting behaviour and determining careers.”2 At their worst, metrics can contribute to what Rowan Williams, the former Archbishop of Canterbury, calls a “new barbarity” in our universities.3 The tragic case of Stefan Grimm, whose suicide in September 2014 led Imperial College to launch a review of its use of performance metrics, is a jolting reminder that what's at stake in these debates is more than just the design of effective management systems.4 Metrics hold real power: they are constitutive of values, identities and livelihoods.
How to exercise that power to positive ends is the focus of The Metric Tide. Based on fifteen months of evidence gathering, analysis and consultation, we propose here a framework for responsible metrics, and make a series of targeted recommendations.
Together these are designed to ensure that indicators and underlying data infrastructure develop in ways that support the diverse qualities and impacts of research. Looking to the future, we show how responsible metrics can be applied in research management, by funders, and in the next cycle of the UK's Research Excellence Framework (REF).
From REF to TEF
When The Metric Tide was first published in July 2015, it sparked an energetic debate between researchers, managers, funders and metrics providers.5 But despite the spread of opinion and evidence that we encountered over the course of the review, we were also encouraged by the degree of consensus in support of our main recommendations. From editorials in Nature, Times Higher Education and Research Fortnight, to formal reactions by Elsevier, PLOS, Jisc, Wellcome Trust and many universities, the idea of ‘responsible metrics’ has been widely endorsed. Internationally too, there has been interest in the review's findings from policymakers and funders who are grappling with similar dilemmas in their own research systems.
In the UK, recent months have seen a raft of proposed reforms to the higher education and research system. A November 2015 Green Paper outlines a new regulatory architecture, including the replacement of HEFCE with a new Office for Students, and (most controversially) the introduction of a Teaching Excellence Framework (TEF) to “identify and incentivise the highest quality teaching.”6 Metrics are portrayed as crucial to the TEF, albeit with some scope for expert judgement alongside, and there are now fierce arguments raging across the sector about whether we need a TEF at all, and if so, how it should be designed, and what mix of quantitative indicators it should employ.
On the research side of the system, the Green Paper revisits the question of whether metrics should be used in future cycles of the Research Excellence Framework (REF) – an issue we explore in some depth in The Metric Tide. A further, more comprehensive review of the REF, led by Lord Stern, President of the British Academy, is now underway and is expected to report its findings in July 2016.7
So whether for research or for teaching, the metric tide continues to rise. But unlike King Canute, we have the agency and opportunity – and in this report, a serious body of evidence – to influence how that tide washes through higher education and research.
Efforts over the next decade should focus on improving the robustness, coverage and interoperability of the indicators that we have, and applying them responsibly. We should make sure that lessons learned on the research side are used to properly inform any uses of metrics for teaching. And we should build stronger connections between recent initiatives in this area – of which the San Francisco Declaration on Research Assessment, the Leiden Manifesto, and The Metric Tide are just three examples. Plans by the European Commission to examine metrics in 2016 as part of its Open Science Policy Platform provide a further opportunity to build responsible metrics into whatever framework for European research funding follows Horizon 2020.
Let me end on a note of personal thanks to my steering group colleagues, to the team at HEFCE, and to all those across the community who have contributed to our deliberations.
December 2015
1 This foreword was updated and expanded in December 2015 for the book edition of The Metric Tide.
2 Lawrence, P.A. (2007). The mismeasurement of science. Current Biology, 17 (15): R583–R585.
3 Annual Lecture to the Council for the Defence of British Universities, January 2015.
5 A good range of responses were published by the LSE Impact Blog at http://blogs.lse.ac.uk/impactofsocialsciences/hefcemetrics-review/
6 Department for Business, Innovation and Skills (2015) Fulfilling our Potential: Teaching Excellence, Social Mobility and Student Choice. November 2015.
The steering group would like to extend its sincere thanks to the numerous organisations and individuals who have informed the work of the review. Metrics can be a contentious topic, but the expertise, insight, challenge and open engagement that so many across the higher education and research community have brought to this process have made it both enjoyable and instructive.
Space unfortunately limits us from mentioning everyone by name. But particular thanks to David Willetts for commissioning the review and provoking us at the outset to frame it more expansively, and to his ministerial successors Greg Clark and Jo Johnson for the interest they have shown in its progress and findings. Thanks also to Dr Carolyn Reeve at BIS for ensuring close government engagement with the project.
The review would not have been possible without the outstanding support that we have received from the research policy team at HEFCE at every stage of research, evidence-gathering and report drafting; notably Jude Hill, Ben Johnson, Alex Herbert, Kate Turton, Tamsin Rott and Sophie Melton-Bradley. Thanks also to David Sweeney at HEFCE for his advice and insights.
We are indebted to all those who responded to our call for evidence; attended, participated in and spoke at our workshops and focus groups; and contributed to online discussions. Thanks also to those organisations who hosted events linked to the review, including the Universities of Oxford, Sheffield, Sussex, UCL and Warwick, the Higher Education Policy Institute and the Scottish Funding Council.
The review has hugely benefited from the quality and breadth of these contributions. Any errors or omissions are entirely our own.
Steering Group and Secretariat
The review was chaired by James Wilsdon FAcSS, Professor of Research Policy at the University of Sheffield (orcid.org/0000-0002-5395-5949; @jameswilsdon).
Professor Wilsdon was supported by an independent steering group with the following members:
- Dr Liz Allen, Head of Evaluation, Wellcome Trust (orcid.org/0000-0002-9298-3168; @allen_liz);
- Dr Eleonora Belfiore, Associate Professor in Cultural Policy, Centre for Cultural Policy Studies, University of Warwick (orcid.org/0000-0001-7825-4615; @elebelfiore);
- Sir Philip Campbell, Editor-in-Chief, Nature (orcid.org/0000-0002-8917-1740; @NatureNews);
- Professor Stephen Curry, Department of Life Sciences, Imperial College London (orcid.org/0000-0002-0552-8870; @Stephen_Curry);
- Dr Steven Hill, Head of Research Policy, HEFCE (orcid.org/0000-0003-1799-1915; @stevenhill);
- Professor Richard Jones FRS, Pro-Vice-Chancellor for Research and Innovation, University of Sheffield (orcid.org/0000-0001-5400-6369; @RichardALJones) (representing the Royal Society);
- Professor Roger Kain FBA, Dean and Chief Executive, School of Advanced Study, University of London (orcid.org/0000-0003-1971-7338; @kain_SAS) (representing the British Academy);
- Dr Simon Kerridge, Director of Research Services, University of Kent, and Chair of the Board of the Association of Research Managers and Administrators (orcid.org/0000-0003-4094-3719; @SimonRKerridge);
- Professor Mike Thelwall, Statistical Cybermetrics Research Group, University of Wolverhampton (orcid.org/0000-0001-6065-205X; @mikethelwall);
- Jane Tinkler, Social Science Adviser, Parliamentary Office of Science and Technology (orcid.org/0000-0002-5306-3940; @janetinkler);
- Dr Ian Viney, MRC Director of Strategic Evaluation and Impact, Medical Research Council head office, London (orcid.org/0000-0002-9943-4989; @MRCEval);
- Paul Wouters, Professor of Scientometrics & Director, Centre for Science and Technology Studies (CWTS), Leiden University (orcid.org/0000-0002-4324-5732; @paulwouters).
The following members of HEFCE's research policy team provided the secretariat for the steering group and supported the review process throughout: Jude Hill, Ben Johnson, Alex Herbert, Kate Turton, Tamsin Rott and Sophie Melton-Bradley. Hannah White and Mark Gittoes from HEFCE's Analytical Services Directorate also contributed, particularly to the REF2014 correlation exercise (see Supplementary Report II). Vicky Jones from the REF team also provided advice.
Executive Summary
This report presents the findings and recommendations of the Independent Review of the Role of Metrics in Research Assessment and Management. The review was chaired by Professor James Wilsdon, supported by an independent and multidisciplinary group of experts in scientometrics, research funding, research policy, publishing, university management and administration.
Scope of the review
This review has gone beyond earlier studies to take a deeper look at potential uses and limitations of research metrics and indicators. It has explored the use of metrics across different disciplines, and assessed their potential contribution to the development of research excellence and impact. It has analysed their role in processes of research assessment, including the next cycle of the Research Excellence Framework (REF). It has considered the changing ways in which universities are using quantitative indicators in their management systems, and the growing power of league tables and rankings. And it has considered the negative or unintended effects of metrics on various aspects of research culture.
Our report starts by tracing the history of metrics in research management and assessment, in the UK and internationally. It looks at the applicability of metrics within different research cultures, compares the peer review system with metric-based alternatives, and considers what balance might be struck between the two. It charts the development of research management systems within institutions, and examines the effects of the growing use of quantitative indicators on different aspects of research culture, including performance management, equality, diversity, interdisciplinarity, and the ‘gaming’ of assessment systems. The review looks at how different funders are using quantitative indicators, and considers their potential role in research and innovation policy. Finally, it examines the role that metrics played in REF2014, and outlines scenarios for their contribution to future exercises.
The review has drawn on a diverse evidence base to develop its findings and conclusions. These include: a formal call for evidence; a comprehensive review of the literature (Supplementary Report I); and extensive consultation with stakeholders at focus groups, workshops, and via traditional and new media.
The review has also drawn on HEFCE's recent evaluations of REF2014, and commissioned its own detailed analysis of the correlation between REF2014 scores and a basket of metrics (Supplementary Report II).
Headline findings8
There are powerful currents whipping up the metric tide. These include growing pressures for audit and evaluation of public spending on higher education and research; demands by policymakers for more strategic intelligence on research quality and impact; the need for institutions to manage and develop their strategies for research; competition within and between institutions for prestige, students, staff and resources; and increases in the availability of real-time ‘big data’ on research uptake, and the capacity of tools for analysing them.
Across the research community, the description, production and consumption of ‘metrics’ remains contested and open to misunderstandings. In a positive sense, wider use of quantitative indicators, and the emergence of alternative metrics for societal impact, could support the transition to a more open, accountable and outward-facing research system. But placing too much emphasis on narrow, poorly-designed indicators – such as journal impact factors (JIFs) – can have negative consequences, as reflected by the 2013 San Francisco Declaration on Research Assessment (DORA), which now has over 570 organisational and 12,300 individual signatories.9 Responses to this review reflect these possibilities and pitfalls. The majority of those who submitted evidence, or engaged in other ways, are sceptical about moves to increase the role of metrics in research management. However, a significant minority are more supportive of the use of metrics, particularly if appropriate care is exercised in their design and application, and the data infrastructure can be improved.
Peer review, despite its flaws and limitations, continues to command widespread support across disciplines. Metrics should support, not supplant, expert judgement. Peer review is not perfect, but it is the least worst form of academic governance we have, and should remain the primary basis for assessing research papers, proposals and individuals, and for national assessment exercises like the REF. However, carefully selected and applied quantitative indicators can be a useful complement to other forms of evaluation and decision-making. One size is unlikely to fit all: a mature research system needs a variable geometry of expert judgement, quantitative and qualitative indicators.
Research assessment needs to be undertaken with due regard for context and disciplinary diversity. Academic quality is highly context-specific, and it is sensible to think in terms of research qualities, rather than striving for a single definition or measure of quality.
Inappropriate indicators create perverse incentives. There is legitimate concern that some quantitative indicators can be gamed, or can lead to unintended consequences; journal impact factors and citation counts are two prominent examples. These consequences need to be identified, acknowledged and addressed. Linked to this, there is a need for greater transparency in the construction and use of indicators, particularly for university rankings and league tables. Those involved in research assessment and management should behave responsibly, considering and pre-empting negative consequences wherever possible, particularly in terms of equality and diversity.
Indicators can only meet their potential if they are underpinned by an open and interoperable data infrastructure. How underlying data are collected and processed – and the extent to which they remain open to interrogation – is crucial. Without the right identifiers, standards and semantics, we risk developing metrics that are not contextually robust or properly understood. The systems used by higher education institutions (HEIs), funders and publishers need to interoperate better, and definitions of research-related concepts need to be harmonised. Information about research – particularly about funding inputs – remains fragmented. Unique identifiers for individuals and research works will gradually improve the robustness of metrics and reduce administrative burden.
At present, further use of quantitative indicators in research assessment and management cannot be relied on to reduce costs or administrative burden. Unless existing processes, such as peer review, are reduced as additional metrics are added, there will be an overall increase in burden. However, as the underlying data infrastructure is improved and metrics become more robust and trusted by the community, it is likely that the additional burden of collecting and assessing metrics could be outweighed by the reduction of peer review effort in some areas – and indeed by other uses for the data. Evidence of a robust relationship between newer metrics and research quality remains very limited, and more experimentation is needed. Indicators such as patent citations and clinical guideline citations may have potential in some fields for quantifying impact and progression.
Our correlation analysis of the REF2014 results at output-by-author level (Supplementary Report II) has shown that individual metrics give significantly different outcomes from the REF peer review process, and therefore cannot provide a like-for-like replacement for REF peer review. Publication year was a significant factor in the calculation of correlation with REF scores, with all but two metrics showing significant decreases in correlation for more recent outputs. There is large variation in the coverage of metrics across the REF submission, with particular issues with coverage in units of assessment (UOAs) in REF Main Panel D (mainly arts & humanities). There is also evidence to suggest statistically significant differences in the correlation with REF scores for early-career researchers and women in a small number of UOAs.
Within the REF, it is not currently feasible to assess the quality of UOAs using quantitative indicators alone. In REF2014, while some indicators (citation counts, and supporting text to highlight significance or quality in other ways) were supplied to some panels to help inform their judgements, caution needs to be exercised when considering all disciplines with existing bibliographic databases. Even if technical problems of coverage and bias can be overcome, no set of numbers, however broad, is likely to be able to capture the multifaceted and nuanced judgements on the quality of research outputs that the REF process currently provides.
Similarly, for the impact component of the REF, it is not currently feasible to use quantitative indicators in place of narrative impact case studies, or the impact template. There is a danger that the concept of impact might narrow and become too specifically defined by the ready availability of indicators for some types of impact and not for others. For an exercise like the REF, where HEIs are competing for funds, defining impact through quantitative indicators is likely to constrain thinking around which impact stories have greatest currency and should be submitted, potentially constraining the diversity of the UK's research base. For the environment component of the REF, there is scope to enhance the use of quantitative data in the next assessment cycle, provided they are used with sufficient context to enable their interpretation.
There is a need for more research on research. The study of research systems – sometimes called the ‘science of science policy’ – is poorly funded in the UK. The evidence to address the questions that we have been exploring throughout this review remains too limited; but the questions being asked by funders and HEIs – ‘What should we fund?’ ‘How best should we fund?’ ‘Who should we hire/promote/invest in?’ – are far from new and can only become more pressing. More investment is needed as part of a coordinated UK effort to improve the evidence base in this area. Linked to this, there is potential for the scientometrics community to play a more strategic role in informing how quantitative indicators are used across the research system, and by policymakers.
Responsible metrics
In recent years, the concept of ‘responsible research and innovation’ (RRI) has gained currency as a framework for research governance. Building on this, we propose the notion of responsible metrics as a way of framing appropriate uses of quantitative indicators in the governance, management and assessment of research. Responsible metrics can be understood in terms of the following dimensions:
- Robustness: basing metrics on the best possible data in terms of accuracy and scope;
- Humility: recognising that quantitative evaluation should support – but not supplant – qualitative, expert assessment;
- Transparency: keeping data collection and analytical processes open and transparent, so that those being evaluated can test and verify the results;
- Diversity: accounting for variation by field, and using a range of indicators to reflect and support a plurality of research and researcher career paths across the system;
- Reflexivity: recognising and anticipating the systemic and potential effects of indicators, and updating them in response.
This review has identified 20 specific recommendations for further work and action by stakeholders across the UK research system. These draw on the evidence we have gathered, and should be seen as part of broader attempts to strengthen research governance, management and assessment which have been gathering momentum, and where the UK is well positioned to play a leading role internationally. The recommendations are listed below, with targeted recipients in brackets:
Supporting the effective leadership, governance and management of research cultures
- The research community should develop a more sophisticated and nuanced approach to the contribution and limitations of quantitative indicators. Greater care with language and terminology is needed. The term ‘metrics’ is often unhelpful; the preferred term ‘indicators’ reflects a recognition that data may lack specific relevance, even if they are useful overall. (HEIs, funders, managers, researchers)
- At an institutional level, HEI leaders should develop a clear statement of principles on their approach to research management and assessment, including the role of quantitative indicators. On the basis of these principles, they should carefully select quantitative indicators that are appropriate to their institutional aims and context. Where institutions are making use of league tables and ranking measures, they should explain why they are using these as a means to achieve particular ends. Where possible, alternative indicators that support equality and diversity should be identified and included. Clear communication of the rationale for selecting particular indicators, and how they will be used as a management tool, is paramount. As part of this process, HEIs should consider signing up to DORA, or drawing on its principles and tailoring them to their institutional contexts. (Heads of institutions, heads of research, HEI governors)
- Research managers and administrators should champion these principles and the use of responsible metrics within their institutions. They should pay due attention to the equality and diversity implications of research assessment choices; engage with external experts such as those at the Equality Challenge Unit; help to facilitate a more open and transparent data infrastructure; advocate the use of unique identifiers such as ORCID iDs; work with funders and publishers on data interoperability; explore indicators for aspects of research that they wish to assess rather than using existing indicators because they are readily available; advise senior leaders on metrics that are meaningful for their institutional or departmental context; and exchange best practice through sector bodies such as ARMA. (Managers, research administrators, ARMA)
- HR managers and recruitment or promotion panels in HEIs should be explicit about the criteria used for academic appointment and promotion decisions. These criteria should be founded in expert judgement and may reflect both the academic quality of outputs and wider contributions to policy, industry or society. Judgements may sometimes usefully be guided by metrics, if they are relevant to the criteria in question and used responsibly; article-level citation metrics, for instance, might be useful indicators of academic impact, as long as they are interpreted in the light of disciplinary norms and with due regard to their limitations. Journal-level metrics, such as the JIF, should not be used. (HR managers, recruitment and promotion panels, UUK)
- Individual researchers should be mindful of the limitations of particular indicators in the way they present their own CVs and evaluate the work of colleagues. When standard indicators are inadequate, individual researchers should look for a range of data sources to document and support claims about the impact of their work. (All researchers)
- Like HEIs, research funders should develop their own context-specific principles for the use of quantitative indicators in research assessment and management and ensure that these are well communicated, easy to locate and understand. They should pursue approaches to data collection that are transparent, accessible, and allow for greater interoperability across a diversity of platforms. (UK HE Funding Bodies, Research Councils, other research funders)
- Data providers, analysts and producers of university rankings and league tables should strive for greater transparency and interoperability between different measurement systems. Some, such as the Times Higher Education (THE) university rankings, have taken commendable steps to be more open about their choice of indicators and the weightings given to these, but other rankings remain ‘black-boxed’. (Data providers, analysts and producers of university rankings and league tables)
- Publishers should reduce emphasis on journal impact factors as a promotional tool, and only use them in the context of a variety of journal-based metrics that provide a richer view of performance. As suggested by DORA, this broader indicator set could include 5-year impact factor, EigenFactor, SCImago, editorial and publication times. Publishers, with the aid of Committee on Publication Ethics (COPE), should encourage responsible authorship practices and the provision of more detailed information about the specific contributions of each author. Publishers should also make available a range of article-level metrics to encourage a shift toward assessment based on the academic quality of an article rather than JIFs. (Publishers)
Improving the data infrastructure that supports research information management
- There is a need for greater transparency and openness in research data infrastructure. A set of principles should be developed for technologies, practices and cultures that can support open, trustworthy research information management. These principles should be adopted by funders, data providers, administrators and researchers as a foundation for further work. (UK HE Funding Bodies, RCUK, Jisc, data providers, managers, administrators)
- The UK research system should take full advantage of ORCID as its preferred system of unique identifiers. ORCID iDs should be mandatory for all researchers in the next REF. Funders and HEIs should utilise ORCID for grant applications, management and reporting platforms, and the benefits of ORCID need to be better communicated to researchers. (HEIs, UK HE Funding Bodies, funders, managers, UUK, HESA)
- Identifiers are also needed for institutions, and the most likely candidate for a global solution is the ISNI, which already has good coverage of publishers, funders and research organisations. The use of ISNIs should therefore be extended to cover all institutions referenced in future REF submissions, and used more widely in internal HEI and funder management processes. One component of the solution will be to map the various organisational identifier systems against ISNI to allow the various existing systems to interoperate. (UK HE Funding Bodies, HEIs, funders, publishers, UUK, HESA)
- Publishers should mandate ORCID iDs and ISNIs and funder grant references for article submission, and retain this metadata throughout the publication lifecycle. This will facilitate exchange of information on research activity, and help deliver data and metrics at minimal burden to researchers and administrators. (Publishers and data providers)
- The use of digital object identifiers (DOIs) should be extended to cover all research outputs. This should include all outputs submitted to a future REF for which DOIs are suitable, and DOIs should also be more widely adopted in internal HEI and research funder processes. DOIs already predominate in the journal publishing sphere – they should be extended to cover other outputs where no identifier system exists, such as book chapters and datasets. (UK HE Funding Bodies, HEIs, funders, UUK)
- Further investment in research information infrastructure is required. Funders and Jisc should explore opportunities for additional strategic investments, particularly to improve the interoperability of research management systems. (HM Treasury, BIS, RCUK, UK HE Funding Bodies, Jisc, ARMA)
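One practical benefit of the unique identifiers recommended above is that they can be validated mechanically before they enter a research information system. As a brief illustration (a minimal sketch, not taken from the report itself): ORCID iDs embed a check digit computed with the ISO 7064 MOD 11-2 algorithm, which lets grant and publication platforms catch mistyped iDs at the point of entry:

```python
def orcid_checksum(base_digits: str) -> str:
    """Compute the ISO 7064 MOD 11-2 check digit for the first
    15 digits of an ORCID iD ('X' represents the value 10)."""
    total = 0
    for d in base_digits:
        total = (total + int(d)) * 2
    result = (12 - total % 11) % 11
    return "X" if result == 10 else str(result)

def is_valid_orcid(orcid: str) -> bool:
    """Validate a hyphenated ORCID iD such as '0000-0002-1825-0097'."""
    digits = orcid.replace("-", "")
    if len(digits) != 16 or not digits[:15].isdigit():
        return False
    return orcid_checksum(digits[:15]) == digits[15].upper()

# ORCID's own documented example iD passes; a one-digit typo fails.
print(is_valid_orcid("0000-0002-1825-0097"))  # True
print(is_valid_orcid("0000-0002-1825-0098"))  # False
```

A similar check applies to ISNIs, which use the same ISO 7064 checksum; DOIs, by contrast, have no check digit and are verified by resolution against the doi.org handle system.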
Increasing the usefulness of existing data and information sources
- HEFCE, funders, HEIs and Jisc should explore how to leverage data held in existing platforms to support the REF process, and vice versa. Further debate is also required about the relative merits of local data collection within HEIs and collection at the national level. (HEFCE, RCUK, HEIs, Jisc, HESA, ARMA)
- BIS should identify ways of linking data gathered from research-related platforms (including Gateway to Research, Researchfish and the REF) more directly to policy processes in BIS and other departments, especially around foresight, horizon scanning and research prioritisation. (BIS, other government departments, UK HE Funding Bodies, RCUK)
Using metrics in the next REF
- For the next REF cycle, we make the following specific recommendations to HEFCE and the other UK HE Funding Bodies. (UK HE Funding Bodies)
- In assessing outputs, we recommend that quantitative data – particularly around published outputs – continue to have a place in informing peer review judgements of research quality. This approach was used successfully in REF2014, and we recommend that it be continued and enhanced in future exercises.
- In assessing impact, we recommend that HEFCE and the UK HE Funding Bodies build on their analysis of the impact case studies from REF2014 to develop clear guidelines for the use of quantitative indicators in future impact case studies. Without being prescriptive, these guidelines should suggest data that could evidence specific types of impact. They should also include standards for the collection of metadata, so that the characteristics of the research being described are captured systematically – for example, by using consistent monetary units.
- In assessing the research environment, we see scope for enhancing the use of quantitative data, provided these data are given sufficient context to enable their interpretation. At a minimum, this context should include the total size of the unit of assessment (UOA) to which the data refer. In some cases, collecting data that relate specifically to the staff submitted to the exercise may be preferable, albeit more costly. In addition, data on the structure and use of digital information systems to support research (or research and teaching) may be crucial to the further development of excellent research environments.
Coordinating activity and building evidence
- The UK research community needs a mechanism to carry forward the agenda set out in this report. We propose the establishment of a Forum for Responsible Metrics, which would bring together research funders, HEIs and their representative bodies, publishers, data providers and others to work on issues of data standards, interoperability, openness and transparency. UK HE Funding Bodies, UUK and Jisc should coordinate this forum, drawing in support and expertise from other funders and sector bodies as appropriate. The forum should have preparations for the future REF within its remit, but should also look more broadly at the use of metrics in HEI management and by other funders. This forum might also seek to coordinate UK responses to the many initiatives in this area across Europe and internationally – and those that may yet emerge – around research metrics, standards and data infrastructure. It can ensure that the UK system stays ahead of the curve and continues to make real progress on this issue, supporting research in the most intelligent and coordinated way, influencing debates in Europe and the standards that other countries will eventually follow. (UK HE Funding Bodies, UUK, Jisc, ARMA)
- Research funders need to increase investment in the science of science policy. There is a need for greater research and innovation in this area, to develop and apply insights from computing, statistics, social science and economics to better understand the relationship between research, its qualities and wider impacts. (Research funders)
- One positive aspect of this review has been the debate it has generated. As a legacy initiative, the steering group is setting up a blog (www.ResponsibleMetrics.org) as a forum for ongoing discussion of the issues raised by this report. The site will celebrate responsible practices, but also name and shame bad practices when they occur. Researchers will be encouraged to send in examples of good or bad design and application of metrics across the research system. Adapting the approach taken by the Literary Review's “Bad Sex in Fiction” award, every year we will award a “Bad Metric” prize to the most egregious example of an inappropriate use of quantitative indicators in research management. (Review steering group)
8 These are presented in greater detail in Section 10.1 of the main report.
9 www.ascb.org/dora. As of July 2015, only three UK universities are DORA signatories: Manchester, Sussex and UCL.
Annex of Tables
Chapter 1
Table 1 Independent Metrics Review Workshops
Table 2 Sector consultation/engagement activities

| Date | Event | Location |
| --- | --- | --- |
| 12 May 2014 | Scientometrics workshop | Paris |
| 9–11 June 2014 | ARMA Conference | Blackpool |
| 26 June 2014 | Roundtable with Minister for Universities and Science | London |
| 10–13 August 2014 | National Council of University Research Administrators (NCURA) (USA) | Washington DC |
| 15 August 2014 | Roundtable at Melbourne University | Melbourne |
| 3–5 September 2014 | Science and Information Technology meeting | Leiden |
| 8–9 September 2014 | Higher Education Institutional Research Conference | Oxford |
| 9–10 September 2014 | Vitae Researcher Development International Conference | Manchester |
| 25–26 September 2014 | Wellcome Trust altmetrics conference | London |
| 18–22 October 2014 | Society of Research Administrators International (SRA) 47th Annual Meeting | San Diego |
| 30 October 2014 | Russell Group meeting | London |
| 30 October 2014 | HEPI dinner | London |
| 6 November 2014 | Science 2.0 – Science in Transition event | London |
| 14 November 2014 | SpotOn London event | London |
| 21 November 2014 | UKSG 2014 Forum | London |
| 21 November 2014 | British Sociological Association event | London |
| 26 January 2015 | Vitae Every Researcher Counts conference | London |
| 2 February 2015 | Heads of Chemistry UK, REF2014 Review Meeting | London |
| 23–24 February 2015 | Middle East and North Africa (MENA) Summit | Doha |
| 9 March 2015 | The Political Studies Association and British International Studies Association REF meeting | London |
| 10 March 2015 | Humanities and Social Sciences Learned Societies and Subject Associations Network Meeting | London |
| 25 March 2015 | HEFCE REFlections conference | London |
| 30 March 2015 | UKSG conference | Glasgow |
| 31 March 2015 | HEPI-Elsevier Annual Research Conference ‘Reflections on REF2014 – Where Next?’ | London |
| 23 April 2015 | Westminster Higher Education Forum on ‘The Future of the REF’ | London |
| 6 May 2015 | ‘In Metrics We Trust? Impact, indicators and the prospect for social sciences’, Oxford Impact seminar series | Oxford |
| 18–19 May 2015 | ORCID-CASRAI joint conference | Barcelona |
| 1–3 June 2015 | ARMA conference | Brighton |
| 7 June 2015 | Consortium of Humanities Centers and Institutes annual meeting | Madison, US |
| 10 June 2015 | ‘Approaches to Facilitating Research Impact’, Oxford Impact seminar series | Oxford |
| 11 June 2015 | IREG Forum on university performance | Aalborg, Denmark |
| 22–23 June 2015 | European Commission conference on ‘A new start for Europe: opening up to an ERA of innovation’ | Brussels |
| 24 June 2015 | Thomson Reuters 3rd Annual Research Symposium | Tokyo |
| 28 June–1 July 2015 | EARMA conference: Global Outreach: Enabling Cultures and Diversity in Research Management and Administration | Leiden |

Chapter 3
Table 3 Summary of alternative indicators
Chapter 4
Table 4 Output types submitted to the REF across the 36 units of assessment
Chapter 5
Table 5 Summary of studies correlating indicators and outcomes of peer review (using data from the 2001 and 2008 RAEs). Note: all references are provided in the references section of Supplementary Report I.
Chapter 9
Table 6 Background information on the use of citation data by REF2014 sub-panels. The units of assessment (UOAs), grouped by main panel, are listed below; those that were provided with citation data within the REF process are highlighted.
Main Panel A
1 Clinical Medicine
2 Public Health, Health Services and Primary Care
3 Allied Health Professions, Dentistry, Nursing and Pharmacy
4 Psychology, Psychiatry and Neuroscience
5 Biological Sciences
6 Agriculture, Veterinary and Food Science
Main Panel B
7 Earth Systems and Environmental Sciences
8 Chemistry
9 Physics
10 Mathematical Sciences
11 Computer Science and Informatics
12 Aeronautical, Mechanical, Chemical and Manufacturing Engineering
13 Electrical and Electronic Engineering, Metallurgy and Materials
14 Civil and Construction Engineering
15 General Engineering
Main Panel C
16 Architecture, Built Environment and Planning
17 Geography, Environmental Studies and Archaeology
18 Economics and Econometrics
19 Business and Management Studies
20 Law
21 Politics and International Studies
22 Social Work and Social Policy
23 Sociology
24 Anthropology and Development Studies
25 Education
26 Sport and Exercise Sciences, Leisure and Tourism
Main Panel D
27 Area Studies
28 Modern Languages and Linguistics
29 English Language and Literature
30 History
31 Classics
32 Philosophy
33 Theology and Religious Studies
34 Art and Design: History, Practice and Theory
35 Music, Drama, Dance and Performing Arts
36 Communication, Cultural and Media Studies, Library and Information Management
List of Abbreviations and Glossary
AHRC – Arts & Humanities Research Council
AMRC – Association of Medical Research Charities
ANVUR – National Agency for the Evaluation of the University and Research Systems (Italy)
ARMA – Association of Research Managers and Administrators
BIS – Department for Business, Innovation and Skills
CASRAI – Consortia Advancing Standards in Research Administration Information
CERIF – Common European Research Information Format
COPE – Committee on Publication Ethics
CRIS – Current Research Information System
CWTS – Centrum voor Wetenschap en Technologische Studies (Centre for Science and Technology Studies)
DOI – Digital Object Identifier
DORA – (San Francisco) Declaration on Research Assessment
EARMA – European Association of Research Managers and Administrators
ERA – Excellence in Research for Australia (also European Research Area in annex)
ERC – European Research Council
ERIH – European Reference Index for the Humanities
ESRC – Economic & Social Research Council
GERD – Gross expenditure on research and development
HEFCE – Higher Education Funding Council for England
HEI – Higher education institution
HEP – Higher education provider
HEPI – Higher Education Policy Institute
HESA – Higher Education Statistics Agency
ISBN – International Standard Book Number
ISNI – International Standard Name Identifier
ISSN – International Standard Serial Number
JIF – Journal Impact Factor
Jisc – Formerly the Joint Information Systems Committee
LSE – London School of Economics and Political Science
MENA – Middle East and North Africa
MRC – Medical Research Council
NCURA – National Council of University Research Administrators
NERC – Natural Environment Research Council
NIH – National Institutes of Health (USA)
NISO – National Information Standards Organization
ORCID – Open Researcher and Contributor ID
OSIP – Overview of System Interoperability Project
PLOS – Public Library of Science
PLOS ONE – A multidisciplinary open-access journal published by PLOS
PBRF – Performance-based research funding
PubMed – A free search engine accessing primarily the MEDLINE database of references and abstracts on life sciences and biomedical topics
PVC – Pro Vice-Chancellor
RAE – Research Assessment Exercise
RCUK – Research Councils UK
REF – Research Excellence Framework
RRI – Responsible Research and Innovation
SCI – Science Citation Index
Scopus – Bibliographic database containing abstracts and citations for academic journal articles, owned by Elsevier
SJR – SCImago Journal Rank
SPRU – Science Policy Research Unit
SRA – Society of Research Administrators International
SSH – Social Sciences and Humanities
STEM – Science, Technology, Engineering and Mathematics
UKPRN – UK Provider Reference Number
UOA – Unit of Assessment
WoS – Web of Science