Case Study: Library Research Metrics Service

This is the first of a series of qualitative case studies exploring the work and impact of Library Research Support activities and services. This case study focuses on the Library Research Metrics Service.

What we do

The Library Research Metrics Service supports individuals with research metrics queries, provides training on a range of research metrics platforms, and delivers education and outreach to ensure the university's commitments to the responsible use of research metrics are upheld. It is designed to complement the support offered by the Department of Research, Enterprise and Innovation's Research Information and Evaluation team, which has a wider remit covering strategic research intelligence and support for large grant bids.

As well as an email enquiry service and web guidance, the Library Research Metrics Service provides training via online workshops, open to all academics and postgraduate researchers. These serve as an introduction to the concept of citation metrics and alternative metrics, what they can and cannot be used for, the principles of responsible metrics, and the importance of data accuracy – including how this may be improved through the use of ORCID researcher identifiers. Sessions also include live demonstrations of the tool, platform, or process of attendees' choice: for example, how to create bespoke reports in SciVal, how to find alternative metrics in Scopus or Altmetric Explorer, or how to clean up author profiles in Scopus and other bibliographic databases.

Outreach activities are a key part of the support service; currently the main focus is an ORCID promotion campaign. This campaign seeks to increase ORCID sign-up rates among research staff and PGRs and, with support from Faculty Research Directors, will be delivered in a variety of ways:

  1. Direct communication with the small subset of researchers that have an ORCID but have not fully synchronised it to their Pure profile
  2. Talks at School assemblies and other relevant gatherings
  3. PGR-led promotion activities
  4. Passive communication via posters and banners in key locations
  5. Active encouragement via a prize draw for new ORCID signups

Enquiry types

The email enquiry service receives a range of enquiry types: primarily these relate to 1) use of specific metrics platforms, 2) requests for metrics to support grant or promotion bids, and 3) queries about the use of metrics to support decisions on journal choice. Often a large part of the response to these enquiries is educational rather than direct provision of the resources requested. For example, both DORA and the University's own statement on Responsible Research Evaluation state that research outputs must be considered on their own merits rather than on the reputation or ranking of the journal or publisher. Therefore, a significant part of enquiry work is responding sensitively to researchers with these types of queries, explaining why metrics may not necessarily be helpful in making these decisions and signposting alternative tools and methods for journal selection. There are some instances where specific metrics can be useful: for example, establishing the proportions of article types published in a given journal to identify titles most likely to be receptive to submission of similar manuscripts. In these instances, the Library Research Metrics Service will demonstrate how these metrics can be obtained or provide bespoke reports.

Another common query category comes from researchers who are finding unexpected results when seeking metrics data on their own publications: typically, missing publications or missing citations. Support in these instances usually takes two formats: 1) an investigation into and explanation of any data inaccuracies and suggestions for how these may be addressed, and 2) education on the limitations of metrics platforms – which is particularly relevant for researchers working in disciplines that are not covered well by the main bibliometrics platforms (arts, humanities, and those working in languages other than English, to name a few).

Outcomes and next steps

Responses to these education and outreach activities have largely been positive, with researchers praising the service for providing “really helpful” information. Certain departments or units – ALSPAC, for example – are frequent users of the service, but generally users tend to have a single query only. It remains to be seen whether the raised profile of the Library Research Metrics Service provided by the ORCID promotion campaign will result in a larger volume of enquiries. In future, workshops will also be run in person, and online workshops will be provided asynchronously to enable wider uptake.

Library launches new researcher metrics support service

Why metrics support?

Research metrics, or indicators, are quantitative measures designed to evaluate research outputs. The term encompasses citation metrics (also known as bibliometrics), which are based on traditional scholarly citations, and ‘alternative’ metrics based on attention in social media, news, policy documents, code repositories and other online sources. These metrics are increasingly used to benchmark research performance and indicate research impact in funding applications and by promotion and progression boards, and they feed into university league table rankings and REF2021 assessments.

It’s attractive to think that the complexities of evaluating one piece of research against another could be simplified by using metrics, but these indicators have serious limitations that must be acknowledged if they are to be used effectively.  Metrics are significantly affected by differences in citation patterns across disciplines, sub-specialities, and researcher career stage, and can be subject to ‘gaming’ – deliberate inflation of citation counts.  As a result, qualitative review must always be used alongside a range of indicators to give a true picture of research impact.

“Carefully selected indicators can complement decision-making, but a ‘variable geometry’ of expert judgement, quantitative indicators and qualitative measures that respect research diversity will be required.”

Wilsdon, J., et al. (2015). The Metric Tide: Report of the Independent Review of the Role of Metrics in Research Assessment and Management. DOI: 10.13140/RG.2.1.4929.1363

With this in mind, the library’s Research Support team has launched a researcher metrics support service to help researchers access accurate metrics data, and select and interpret appropriate indicators.  The scope of the service was determined in consultation with Research and Enterprise Development (RED); library support will focus on individual researchers, whereas RED will retain support for strategic bids and projects requiring metrics information.

Service priorities

Useful research metrics are dependent on the quality of the source data: accurately attributed publications. A key task for the metrics service will be to help researchers correct attribution information for their publications. The University subscribes to SciVal for access to research management information based on Scopus citation data, so initially this support will focus on Scopus author profiles, although other systems will be added later if there is demand. Additionally, we promote the use of ORCID researcher identifiers to easily link author profiles in different systems, including Scopus and the University’s current research information system, Pure.
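As an aside for the technically minded: one reason ORCID iDs work well as linking identifiers is that they carry a built-in check digit, calculated with the ISO 7064 MOD 11-2 algorithm per ORCID's own documentation, so systems can catch mistyped iDs before they propagate. A minimal illustrative sketch (not part of any ORCID library):

```python
def orcid_checksum_valid(orcid: str) -> bool:
    """Return True if a 16-character ORCID iD has a valid check digit.

    ORCID iDs use ISO 7064 MOD 11-2: the final character is a check
    digit, with 'X' representing the value 10.
    """
    chars = orcid.replace("-", "")
    if len(chars) != 16:
        return False
    total = 0
    for ch in chars[:-1]:           # first 15 characters are base digits
        total = (total + int(ch)) * 2
    check = (12 - (total % 11)) % 11
    expected = "X" if check == 10 else str(check)
    return chars[-1].upper() == expected

# The example iD from ORCID's documentation:
print(orcid_checksum_valid("0000-0002-1825-0097"))  # True
print(orcid_checksum_valid("0000-0002-1825-0098"))  # False (bad check digit)
```

A transposed or mistyped digit changes the expected check digit, so simple data-entry errors are detectable without any network lookup.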

Other library support offered to researchers includes:

      • web guidance
      • workshops (in development)
      • SciVal deskside training
      • enquiry service

The online guidance covers a range of topics, including an overview of important indicators and where they can be accessed, suggested use cases for metrics, an introduction to different tools available to access and analyse indicators, and signposts to the support available from RED and other departments.

Access our guidance at bris.ac.uk/staff/researchers/metrics/ or email lib-metrics@bristol.ac.uk for support.

Research Support at the Jean Golding Institute Data Week 2019

The Jean Golding Institute’s Data Week 2019 (20th – 24th May) is a week of workshops, talks and other events on data science and reproducible research, including data analysis, visualisation, coding and more.  The Library’s Research Support team is running two events as part of Data Week – our regular “Introduction to Open Research” and a brand new workshop on sensitive data: “Managing ethically sensitive data: from planning to sharing”.  The full programme for Data Week is available at https://www.bristol.ac.uk/golding/get-involved/data-week-2019/.


The “Introduction to Open Research” workshop is aimed at postgraduate and early career researchers and acts as a basic introduction to Open Access (OA), research data management, and research metrics. Attendees will cover the OA and research data sharing requirements of the Research Excellence Framework (REF), the University, and major funding bodies. There will be an overview of the different research support systems in place at the University to help researchers meet these requirements, including the Research Data Storage Facility (RDSF), the data.bris Research Data Repository, and a live demonstration of how to add publications to Pure. Finally, attendees will be given a brief introduction to key research metrics, how these are calculated, and how they can access and benchmark their personal research metrics data.
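To give a flavour of the kind of calculation involved, the widely used h-index can be computed from a list of citation counts in a few lines. This sketch is purely illustrative and not part of the workshop materials:

```python
def h_index(citations: list[int]) -> int:
    """A researcher has index h if h of their papers have
    at least h citations each (Hirsch's definition)."""
    h = 0
    # Rank papers from most- to least-cited; h is the largest rank
    # whose citation count is at least equal to the rank itself.
    for rank, cites in enumerate(sorted(citations, reverse=True), start=1):
        if cites >= rank:
            h = rank
        else:
            break
    return h

print(h_index([10, 8, 5, 4, 3]))  # 4
print(h_index([25, 8, 5, 3, 3]))  # 3
```

Note how the second researcher's single highly cited paper barely moves the index – a small example of why any one indicator gives an incomplete picture.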


“Managing ethically sensitive data: from planning to sharing” is a new workshop aimed at researchers at any career stage who are dealing with ethically sensitive data; that is, data involving humans or at-risk species. In practice, it is likely to be most relevant for researchers in health and social sciences working with human research participants. Attendees will learn how to safely deal with personal data in a research context, including participant rights and researcher responsibilities with regard to the General Data Protection Regulation (GDPR) and the Data Protection Act 2018 (DPA 2018), what services the University provides to help researchers collect and store sensitive data, and how to construct consent forms and patient information sheets that will permit data sharing at the end of a project. Finally, attendees will be given an introduction to methods for preparing sensitive data for sharing: this will include an overview of documentation requirements and the different data sharing platforms and mechanisms available to researchers, as well as an introduction to the concepts of formal, statistical and functional anonymisation and how these can be applied to datasets to reduce disclosure risk.
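To give a flavour of what statistical anonymisation involves: k-anonymity is one widely used disclosure-risk measure, requiring every combination of quasi-identifier values (attributes such as age band or location that could be linked to external data) to be shared by at least k records. A minimal illustrative sketch follows; the column names are invented for the example, and real assessments use dedicated tools rather than hand-rolled code:

```python
from collections import Counter

def k_anonymity(rows: list[dict], quasi_identifiers: list[str]) -> int:
    """Smallest group size when records are grouped by their
    quasi-identifier values; the dataset is k-anonymous for this k."""
    groups = Counter(
        tuple(row[q] for q in quasi_identifiers) for row in rows
    )
    return min(groups.values())

# Hypothetical records: the lone 40-49/BS8 participant is identifiable
# from age band and postcode area alone, so k = 1 here.
records = [
    {"age_band": "30-39", "postcode_area": "BS8", "diagnosis": "A"},
    {"age_band": "30-39", "postcode_area": "BS8", "diagnosis": "B"},
    {"age_band": "40-49", "postcode_area": "BS8", "diagnosis": "A"},
]
print(k_anonymity(records, ["age_band", "postcode_area"]))  # 1
```

Coarsening quasi-identifiers (wider age bands, broader regions) increases k at the cost of data utility – the central trade-off the workshop's anonymisation session addresses.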


We’ve had a great deal of interest in the latter workshop, and while this week’s event is sold out we do plan to repeat the sensitive data workshop later in the year. We’ll also be expanding our online sensitive data bootcamp to include some of the issues covered in the live workshop.

Library Carpentry Bristol

Bristol Research Data Service recently hosted a Library Carpentry workshop in collaboration with colleagues at the University of the West of England and the University of Bath – read on to find out what it was about and what’s happening next.

What is Library Carpentry?

Library Carpentry is a set of software skills lessons created by librarians, for librarians, covering a variety of subjects including Git, SQL, Python, and using the Unix command line. The course materials are developed and maintained by volunteers, and are intended to be taught by one or more instructors (also volunteers). Each lesson is also suitable for self-study. All the course materials are freely available online under a permissive use licence (CC-BY) and can be reused and remixed if needed.

Library Carpentry is an offshoot of Software Carpentry and Data Carpentry (both aimed at academic researchers) and follows the same basic lesson structure – short introductions to key concepts followed by examples to work through to solidify learning. The tools and software packages taught are typically open source to maximise opportunity of use.

Why Bristol?

Naked self-interest! Bristol Research Data Service staff were keen to learn more about data manipulation techniques, and an informal poll of the GW4 Alliance Data Services Working Group showed that there was interest at other local institutions as well. We ended up advertising the event to all GW4 and AULIC institutions.

Event specifics

We asked potential attendees which modules they would most be interested in, and on that basis selected the Introduction to data and OpenRefine lessons. Library Carpentry has a very active Gitter group, so after submitting a plea there and contacting the organisers of past workshops (thank you to Tabitha Witherick!) we had two instructors volunteer: Dave Rowe (introduction to data) and Owen Stephens (OpenRefine).

We also needed helpers: people to assist on the day with general troubleshooting (for example, network connection issues). We had volunteers from both library staff and IT Services across Bristol, Bath and UWE. It’s safe to say that the workshop would not have run anywhere near as effectively as it did without them, so thank you very much to all the helpers – and of course, our instructors!

Over seven hours we covered the two chosen modules, with time for questions, general discussion and working through examples using library-related data.
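For readers unfamiliar with OpenRefine, its headline feature is clustering variant spellings of the same value – exactly the kind of messiness library catalogue data is full of. A simplified re-implementation of its documented ‘fingerprint’ keying method gives a flavour of what the lesson covers (this sketch is illustrative, not the lesson code):

```python
import re
from collections import defaultdict

def fingerprint(value: str) -> str:
    """Simplified version of OpenRefine's 'fingerprint' key:
    lowercase, strip punctuation, then sort and de-duplicate tokens."""
    tokens = re.sub(r"[^\w\s]", "", value.lower()).split()
    return " ".join(sorted(set(tokens)))

def cluster(values: list[str]) -> list[list[str]]:
    """Group values that share the same fingerprint key;
    return only groups containing likely duplicates."""
    groups = defaultdict(list)
    for v in values:
        groups[fingerprint(v)].append(v)
    return [g for g in groups.values() if len(g) > 1]

publishers = [
    "Oxford University Press",
    "oxford university press.",
    "University Press, Oxford",
    "Cambridge University Press",
]
print(cluster(publishers))
# [['Oxford University Press', 'oxford university press.',
#   'University Press, Oxford']]
```

Because the key ignores case, punctuation and word order, all three Oxford variants collapse into one cluster ready for merging, while Cambridge stays separate.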

Outcomes and follow-up

Feedback from attendees has so far been excellent, with several people indicating that they’ll be using OpenRefine for specific tasks in the near future, and even teaching colleagues how to use it. At Bristol we had several requests for a second workshop from people unable to attend the first one; whilst another formal workshop might not be possible, we’re looking into a ‘flipped classroom’ or collaborative learning approach of short 1-2 hour sessions to pass our learning on to other colleagues and keep our skills fresh.

All in all, it was a very useful day, and we’d highly recommend attending or hosting a Library Carpentry workshop if you have a chance.


Thesis data in Russell Group universities

Guest blog post from the University of Bristol Research Data Service (data.bris.ac.uk), describing a recent investigation into retention of thesis data in Russell Group universities.

Thesis data at the University of Bristol

The University of Bristol research data repository, data.bris, has been running since 2014, and has been core funded since 2015. Use of the repository is becoming well-integrated into the research cycle, but whilst we accept deposits from postgraduate researchers (PGRs) as well as academics, the vast majority of these are data underpinning journal articles. Increasing the intake of data relating to other types of publications is an obvious next step.

Currently, Bristol PGRs may only deposit a hardbound copy of their thesis with the University Library; supporting data may be attached on physical storage media, but there is no requirement to make this available (in any format) as part of the final deposit process. Similarly, the University’s Research Data Management and Open Data Policy applies to PGRs and research staff, but doesn’t specifically address retention of thesis data. An investigation into how electronic final deposit of theses may be managed is underway, and we wanted to both support the wider investigation and increase use of the repository by extending the data.bris remit to data underpinning PhD theses.

The question

We wanted to understand how peer institutions across the UK handled thesis data, in order to see whether we could adopt an existing policy and/or infrastructure model. I was tasked with finding out whether relevant institutions had policies on thesis data retention, and if so, how this was managed and enforced. For brevity, I focussed my query on Russell Group universities.

Existing information

Some work had already been done in this area – the Unlocking Thesis Data (UTD) project [1] phase 1 survey included a question on depositing thesis data:
Q4. “If supplementary data files relating to the thesis are also deposited, where are these held?”
The survey results included responses from thirteen Russell Group universities [2], including Bristol, which have been summarised in the infographic [Figure 1]. Most respondents indicated that thesis data could be deposited alongside the thesis itself, with some institutions having multiple storage locations both linked and unlinked to the thesis record. At the time of the survey, no Bristol PGRs had attempted to deposit thesis data; we now have a number of PGR data deposits in the repository but, as noted above, these are linked to journal articles rather than theses. Whilst UTD was a very useful starting point, it focussed largely on the thesis as an information object, rather than the data underpinning it. More information was needed.

Figure 1: Thesis data in Russell Group universities, 2015-2016


New information

Twenty of the twenty-three other Russell Group (RG) universities responded to an informal email enquiry on their thesis data deposit policies. Their responses are discussed below; anonymised aggregated results are shown in [Figure 2].

Figure 2: Thesis data retention in Russell Group universities – 2016 responses

Of the eleven universities with an explicit or implicit policy on thesis data, five had optional deposit of thesis data, and six mandated deposit (thesis data ‘should’, ‘must’ or ‘is expected to’ be retained in a suitable repository for a minimum time). However, the majority of universities with a mandatory retention policy reported that the policies were unenforced and compliance was very low, and/or that there was no infrastructure for retention in an institutional repository. Seven universities had policies on management of thesis data, either as an explicit part of their general RDM policy, their good research practice policy, or a separate thesis submission policy. Four universities indicated that their general research data policies were intended to cover thesis data although this was not explicitly mentioned in the policy text. Nine had no policy on thesis data.

Conclusion

There is little uniformity across RG institutions in their approach to thesis data retention. In particular, there is a discordance between policy and the infrastructure required to support it. When both exist, only a minority of institutions report that the policy is enforced, meaning that in this instance there isn’t a great deal of sector knowledge to draw upon! However, many institutions reported that they were, like Bristol, actively considering the topic, so it seems likely that this situation will change over the next year or two.

Next steps

As noted previously, we are currently investigating ways to facilitate electronic deposit of theses at Bristol. Alongside this, we’re also looking to pilot a process for depositing thesis data and are planning to assess requirements for this based on the input of departments participating in the trial. Both projects are in their infancy at the moment, but I hope to provide further updates as they progress.

Acknowledgements

With thanks to colleagues at responding institutions and the Unlocking Thesis Data project for supporting information.

References

[1] Grace, S., Whitton, M., Gould, S. and Kotarski, R.: Unlocking Thesis Data phase one. DOI: 10.15123/PROJECT.15

[2] Cardiff University, Durham University, King’s College London, London School of Economics and Political Science, University of Birmingham, University of Bristol, University of Cambridge, University of Edinburgh, University of Liverpool, Newcastle University, University of Oxford, University of Southampton, University of Glasgow