How Libraries Can Get Started with Impact Metrics: Workshop Report

Despite the excitement around bibliometrics over the past few years, it has taken some time for research libraries to develop sustainable services around them. Impact metrics are a major focus for LIBER this year: it has set up a working group, led by LIBER president Kristiina Hormia-Poutanen (National Library of Finland), to help deliver its strategic priority “Enabling Open Science”. One of the key issues is what the Leiden Manifesto on research metrics means for libraries.

The LIBER 2017 workshop on impact metrics began by presenting developments in four libraries as a pointer to getting started on the four areas identified as priorities for libraries: Discoverability, Showcasing achievements, Research(er) assessments, and Service development. Each set me thinking about the implications for Cambridge: is the Library the right place for impact metrics (yes), do we have all the competencies required (some, but we need to grow them), where would they sit in the structure, and would a distributed model involving affiliated libraries but with a centralised infrastructure work? And of course, how would it be resourced?

In the discoverability strand Peter Kraker (Open Knowledge Maps) outlined his use of Mendeley as a database to create interactive maps of publication use that relate papers through co-readership and allow the viewer to drill down by research topic. The maps were created by analysing users’ clickstreams. He turned this into the basis of Open Knowledge Maps, which visualises research topics. Kraker is also using visualisations for Project Lending History to show the most borrowed (i.e. downloaded) papers.
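To make the co-readership idea concrete, here is a minimal sketch, with made-up reader lists standing in for Mendeley data; it is not Open Knowledge Maps’ actual pipeline. Papers saved by many of the same readers are treated as related and clustered into topic-like groups.

```python
# Minimal sketch of co-readership clustering (illustrative only).
# Papers saved by the same readers are treated as related; the reader
# sets below are hypothetical stand-ins for Mendeley library data.
from itertools import combinations

import numpy as np
from sklearn.cluster import AgglomerativeClustering

# Hypothetical data: paper id -> set of reader ids (e.g. from clickstreams)
readers = {
    "paper_a": {"r1", "r2", "r3"},
    "paper_b": {"r2", "r3", "r4"},
    "paper_c": {"r7", "r8"},
    "paper_d": {"r7", "r8", "r9"},
}

papers = sorted(readers)
n = len(papers)

# Pairwise co-readership similarity (Jaccard index of the reader sets)
sim = np.eye(n)
for (i, p), (j, q) in combinations(enumerate(papers), 2):
    inter = len(readers[p] & readers[q])
    union = len(readers[p] | readers[q])
    sim[i, j] = sim[j, i] = inter / union if union else 0.0

# Cluster papers on co-readership distance; clusters approximate topics
labels = AgglomerativeClustering(
    n_clusters=2, metric="precomputed", linkage="average"
).fit_predict(1 - sim)

for paper, label in zip(papers, labels):
    print(paper, "-> topic cluster", label)
```

Here paper_a/paper_b and paper_c/paper_d share readers, so they fall into two clusters; on a real corpus the clusters become the drill-down topics on the map.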

The Universitätsbibliothek Wien has been developing services around bibliometrics since 2006, starting out with a working group and maturing into a dedicated department with a staff equivalent of 3.5 people plus occasional interns. Their initial challenge was to convince the university that the library was the right place for this activity and that it had the competencies; departments were concerned about potential misuse of metrics. In the course of development the team’s name changed from “bibliometrics” to “bibliometrics and strategies”, underlining the fact that it is not just about assessment.

Bibliometric services are offered to three audiences: research departments and individuals, research management, and the library itself.

Departmental services are always tailored to the researchers themselves, who are offered basic training, individual consultations, personal bibliometric profiles, and bibliometric reports for individuals.

Research managers are provided with reports for individual institutions within the university, reports for faculty evaluations, and reports to support professorial appointments, e.g. for faculty administrators to analyse applications for vacant chairs.

The main service to the Library is data to drive decisions, which I take to relate to collection development, particularly journal cancellations.

Framework requirements centre on the selection of appropriate data sources and the use of multiple databases, including subject-specific ones.

Feedback from researchers and departments has been good. It has increased the library’s service portfolio and the visibility of library services. The department has the capacity to get involved in a number of national and international OA projects, runs the OA journal BibCal, and is running a (sold-out) summer school on bibliometrics in Berlin in 2017.

The Universitätsbibliothek Duisburg-Essen (Anja Lopez) has automated bibliometric reports based on the university bibliography and Scopus, building its own tool to do so. Why build their own tool? 17% of Duisburg-Essen professors and a large number of researchers are in Engineering, but the subject is not well represented in Web of Science (WoS), especially civil engineering. Engineering also uses a wider range of publication media than just articles: conference proceedings in particular.

Another factor is the strong resistance to evaluation in Germany. There is therefore no central control, no centralised data structure, and a lack of CRIS systems, although this may be about to change as German universities become more aware of the value of research rankings.

Why the library? It runs the university bibliography and the university publication service/repository, and licenses databases such as Scopus and WoS. The university bibliography is fed automatically from Scopus and similar sources, publication lists, the EVALuna bibliography, and a web front end where researchers can add their own publications.
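For illustration, an automated feed of this kind could be sketched against the Scopus Search API. The affiliation ID, API key, and upsert logic below are placeholders; this is a guess at the general shape, not UB Duisburg-Essen’s actual tool.

```python
# Hedged sketch: harvesting an institution's publications from the Scopus
# Search API into a local bibliography. Affiliation ID, API key, and the
# upsert step are placeholders, not the real Duisburg-Essen pipeline.
import requests

API_KEY = "YOUR-ELSEVIER-API-KEY"   # issued via dev.elsevier.com
AFFIL_ID = "60000000"               # hypothetical Scopus affiliation ID

def fetch_affiliation_records(start: int = 0, count: int = 25) -> list[dict]:
    """Fetch one page of Scopus records for the affiliation."""
    resp = requests.get(
        "https://api.elsevier.com/content/search/scopus",
        params={
            "query": f"AF-ID({AFFIL_ID})",
            "start": start,
            "count": count,
            "sort": "-coverDate",   # newest first
        },
        headers={"X-ELS-APIKey": API_KEY, "Accept": "application/json"},
        timeout=30,
    )
    resp.raise_for_status()
    return resp.json()["search-results"].get("entry", [])

def upsert_into_bibliography(record: dict) -> None:
    """Placeholder for merging a record into the local bibliography,
    deduplicating on DOI so researcher-submitted entries are not doubled."""
    doi = record.get("prism:doi")
    title = record.get("dc:title")
    print(f"upsert: {doi or 'no DOI'} | {title}")

if __name__ == "__main__":
    for rec in fetch_affiliation_records():
        upsert_into_bibliography(rec)
```

Deduplication on DOI (or a similar identifier) is what lets the automatic feed coexist with the researcher-facing front end without creating duplicate entries.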

The service currently on offer from the library consists of database training, bibliometrics consultations, project-based further analysis, and bibliometric reports for individual researchers. UB Duisburg-Essen has a bibliometrics team of three people, but this equates to 1 FTE.

They want to add further databases such as IEEE to the service, and are developing the web front end so that researchers can run their own bibliometric analyses. This is going live in summer and comes with warnings about the scope of the data and the Leiden Manifesto principles, i.e. “data needs interpretation”. It also recognises the fifth principle: “Allow those evaluated to verify data and analysis”.

The Staats- und Universitätsbibliothek Göttingen (Najko Jahn) has a new Scholarly Communication Analytics post, both to support researchers and to guide decision-making in the university. The university supported the post because it is interested in improving its rankings. Researchers do not record their institutional addresses consistently, so a lot of data work is involved.

The library is extending its existing data infrastructure for the study of scholarly communications, especially in providing analytics. Jahn sees analytics as a data science practice: discovering and communicating meaningful patterns in data to produce actionable recommendations, as distinct from measurement, which is metrics.

His work includes:

  • Exploring collaborations, e.g. between European and US universities, using Web of Science and similar sources to identify co-authors and their institutions. Disaggregating author address fields was the main task (see the first sketch after this list). Mapping took one day out of the two weeks for the project.
  • Monitoring the transition to Open Access – analysed research groups to produce a chart of OA proportion by PI (see the second sketch after this list).
  • Creating open and reproducible tools.
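The first sketch below suggests what disaggregating a WoS-style address field might look like. The record format and names are assumptions on my part, not Jahn’s actual code: WoS plain-text exports carry affiliations in a “C1” field of roughly the form “[Author1; Author2] Institution, City, Country”.

```python
# Hedged sketch of disaggregating a Web of Science-style address field
# (the "C1" field in WoS plain-text exports). Assumed format per
# affiliation: "[Author1; Author2] Institution, City, Country."
import re

C1 = ("[Jahn, Najko; Smith, Jane] Univ Gottingen, Gottingen, Germany; "
      "[Smith, Jane] Harvard Univ, Cambridge, MA USA.")

# One match per affiliation: bracketed author list, then the address
pattern = re.compile(r"\[(?P<authors>[^\]]+)\]\s*(?P<address>[^;]+)")

pairs = []
for m in pattern.finditer(C1):
    institution = m.group("address").split(",")[0].strip()
    for author in m.group("authors").split(";"):
        pairs.append((author.strip(), institution))

for author, inst in pairs:
    print(f"{author} -> {inst}")
# Each author is now linked to an institution: the raw material for
# mapping, e.g., European-US co-authorship networks.
```

The second sketch computes an OA proportion per PI from a hypothetical publication table; with real data the is_oa flag might come from a source such as Unpaywall.

```python
# Minimal sketch of an OA-share-by-PI analysis (made-up data, not Jahn's
# actual tooling). Each row is one publication attributed to a group.
import pandas as pd

pubs = pd.DataFrame({
    "pi":    ["Müller", "Müller", "Schmidt", "Schmidt", "Schmidt", "Weber"],
    "doi":   ["10.1/a", "10.1/b", "10.1/c", "10.1/d", "10.1/e", "10.1/f"],
    "is_oa": [True, False, True, True, False, False],
})

# Share of openly available publications per principal investigator
oa_share = (
    pubs.groupby("pi")["is_oa"]
        .mean()
        .sort_values(ascending=False)
        .rename("oa_proportion")
)
print(oa_share)
# A bar chart of this series is essentially the "OA proportion by PI"
# plot described above: oa_share.plot.bar()
```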
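Both sketches are deliberately small: the point Jahn made is that the analytics work is mostly data wrangling (cleaning addresses, matching records) rather than the final chart, and that publishing such scripts openly is what makes the analyses reproducible.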