Understanding Academics at York

Vanya Gallimore’s presentation to LIBER 2018 on the University of York UX project that she led with Michelle Blake fitted well with the theme of cultural change for the research community.  Rather than focussing specifically on researchers’ relationship with library services, the project looked at how academics at York approach their research and teaching activities.  The library was then able to consider how its services currently facilitated and supported those activities, and how to integrate the ‘academic voice’ into future service planning and development of support.

York librarians selected two ethnographic techniques that put academics at the centre of the process: cognitive mapping followed by semi-structured interviews.  The resulting data was coded and analysed in the NVivo qualitative analysis software against a set of key themes.
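
As a rough illustration of what that coding step does – NVivo is interactive software rather than a code library, so this Python sketch, with invented theme names, keywords, and excerpt, simply shows the idea of tagging interview excerpts against a theme framework:

```python
# Illustrative only: the theme framework and keywords below are invented.
# The idea is the one behind qualitative coding: tag each excerpt of an
# interview transcript with the themes it speaks to, then analyse by theme.
themes = {
    "motivations": ["curiosity", "impact", "teaching"],
    "frustrations": ["time", "admin", "workload"],
    "library": ["collections", "space", "discovery"],
}

def code_excerpt(excerpt: str) -> list[str]:
    """Return the themes whose keywords appear in an interview excerpt."""
    text = excerpt.lower()
    return [theme for theme, keywords in themes.items()
            if any(keyword in text for keyword in keywords)]

print(code_excerpt("The admin around teaching eats all my time."))
# ['motivations', 'frustrations']
```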

Researchers talked about their motivations around research and teaching, their frustrations and pressures, and their aspirations, shedding light not only on their own ways of working but also on the changing nature of students and how they were adjusting to teaching them.  Largely unprompted, they also described how their interactions with the library have changed.

The research results have fed into both some ‘quick-wins’ for the library and a series of longer-term recommendations, captured in the new Library Strategy for 2018-21.

The presentation will be available shortly, but the research has been written up in an article for the New Review of Academic Librarianship.  Understanding academics: a UX ethnographic research project at the University of York is available on open access at https://doi.org/10.1080/13614533.2018.1466716

LIBER 2018: Leading the Transition to Open Science

Paul Ayris, Pro-Vice-Provost (UCL Library Services), and Tiberius Ignat (Scientific Knowledge Services) addressed the need for wider cultural change in universities to deliver the transition to Open Science.  They argued that this starts with university leadership and with Open Science being embedded in university vision and strategy.  It should be implemented transparently, with accountability, monitoring, and agreed measures and targets, and the vision should be shared.

The cultural change is about moving researchers from a purely competitive mindset to one that recognises the value of combining competition and collaboration, e.g. through shared infrastructure.  The key to this is leadership: Paul argued for a national co-ordinator of Open Science, echoing the French national strategy; for national task forces; for the HR Strategy for Researchers to reflect Open Science principles; and for universities to embark on cultural change programmes with designated leaders at senior level and advocacy programmes.

Their presentation can be found at https://zenodo.org/record/1306140#.W1jtVthKjVp

Maastricht University, which aims to become a “FAIR University” by 2023 or 2025, made the case for Open Science on the basis of greatly improved research data management and the ability to gain data-driven insights, accelerating scientific discoveries.

Their Community for Data-Driven Insights (CDDI) brings together researchers, the University Library, the Institute for Data Science, the DataHub, and the ICT Service Center, in a partnership to deliver Open Science.  The Library’s roles included linking open access data to the publications that draw on it (FAIR), training “data stewards”, running the CRIS system (PURE in this case), and managing smaller datasets.
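
In FAIR terms, that data-to-publication linking boils down to machine-readable related identifiers.  A minimal sketch, loosely following the DataCite metadata schema (field names simplified, DOIs are placeholders, and this is not Maastricht’s actual implementation):

```python
# Simplified, DataCite-style record linking a dataset to the article it
# underpins. All identifiers are placeholders for illustration.
dataset_record = {
    "identifier": "10.1234/example-dataset",  # placeholder dataset DOI
    "creators": [
        {
            "name": "Researcher, A.",
            "nameIdentifier": "0000-0002-1825-0097",  # example ORCID iD
        }
    ],
    "relatedIdentifiers": [
        {
            "relatedIdentifier": "10.1234/example-article",  # placeholder DOI
            "relatedIdentifierType": "DOI",
            "relationType": "IsSupplementTo",  # the dataset supplements the article
        }
    ],
}
```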

Their approach appeared to be based on the “if we build it, the researchers will come” principle, but they have embarked on a community engagement programme, looked at the potential benefits of combining RDM and e-science in different disciplines, and are running a number of pilots which they plan to extend into the humanities and areas of qualitative data.

It will be instructive to follow the progress of these two institutions, which are taking contrasting approaches.

Henk Van den Hoogen’s presentation on Maastricht: Towards a “FAIR” university is at https://zenodo.org/record/1306148#.W1jt8thKjVp

Taking research skills to the next level: librarians teaching data literacy

I accidentally caught a presentation at LIBER 2014 by Don McMillan at the University of Calgary Library that showcased a great example of deep collaboration between a library and academic departments (Developing data literacy competencies to enhance faculty collaborations).  At Calgary, science support librarians collaborated with faculty members in genetics and biochemistry to develop instructional sessions in which library staff taught students how to extract data from bioinformatics databases and protein repositories and use it to answer structured questions.
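
To give a flavour of that kind of exercise – this is my own minimal sketch rather than Calgary’s teaching material; the UniProt REST URL and the accession P69905 (human haemoglobin subunit alpha) are simply a convenient public example:

```python
# Minimal sketch of extracting data from a protein repository: fetch a
# UniProt record in FASTA format and answer a simple structured question.
# P69905 (human haemoglobin subunit alpha) is just a convenient example.
import urllib.request

url = "https://rest.uniprot.org/uniprotkb/P69905.fasta"
with urllib.request.urlopen(url) as response:
    fasta = response.read().decode("utf-8")

header, *sequence_lines = fasta.splitlines()
sequence = "".join(sequence_lines)

print(header)  # the record's description line
print(f"Length: {len(sequence)} residues")
# Example structured question: what fraction of the residues are leucine?
print(f"Leucine fraction: {sequence.count('L') / len(sequence):.1%}")
```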

The students gained real-life experience of working with data while the librarians developed domain expertise.  One reason for the success of the programme was that the data skills sessions were fully integrated with the students’ courses and participation earned them credits.  The second was that the programme built on previous collaboration between the library and the faculties, integrating genetics and biochemistry content into an existing information literacy programme and taking it a step further.

Libraries meeting the challenge of research support

At the LIBER 2014 conference, as so often, one of the most thought-provoking contributions to the discussion on how libraries can develop their workforces came from Professor Sheila Corrall, in this case suggesting a fresh approach to looking at how existing strengths can be mapped to the new challenges of supporting research in innovative ways (Mobilizing Invisible Library Assets for Innovative Research Support in the 2020 Information Landscape).  The main challenges are familiar: networked data-driven science, digital humanities, interdisciplinary research, and dealing with policy developments and funding body mandates – open access, data sharing, and research impact.  Libraries need to change their offering and move to fill gaps in research support, moving from “service as support” to a deeper and more collaborative relationship.

She put forward two propositions: first, that libraries should use their “intangible” or invisible assets to gain strategic advantage; second, that they should overextend themselves, undertaking activities that require more than their current capabilities.  What she meant by “intangible assets” became clearer when they were broken down into human, relational, and structural assets and she looked at how they were used at case sites:

  • Human assets: in library terms this might be expertise in collection development and archives administration, information organization and retrieval know-how, teaching and training abilities, and reference interviewing skills
  • Relational assets: professional networks, trust and credibility built from previous interactions with researchers, liaison librarians, and cross-unit collaborations, e.g. with the Research Office and Computing Services
  • Structural assets: institutional-level committees and groups that endorse the library’s role in the research process, and a hybrid structure of subject liaison librarians and (new) functional specialists used to provide subject-related support

This sounds pretty familiar, as does the idea of over-extending the library’s role.  The challenges of the previous decade – digital preservation projects, establishing an institutional repository, publishing linked data, digital humanities – all required us to re-deploy existing skills and learn by doing, as well as bringing in staff with new skills.  Sheila Corrall argues that “In a dynamic, technology-driven environment, libraries cannot afford to wait until they are completely ready to act”, though without advising us to be reckless in the process.  We already have some of the assets we will need to meet research support needs, just not all of them.

OAPEN deposit service: is it time to build a central infrastructure for Open Access monographs?

Last week’s JISC workshop on next steps for OAPEN explored the possibility of a European deposit service for OA monographs, potential benefits for participants and institutions, and the features that the latter, particularly libraries, would like to see in such a service.

The OAPEN Library, founded in 2010, has been very successful, attracting 60 publishers so far, with around 2,000 monographs.  On the reasonable assumption that the volume and value of OA monographs will continue to grow and that researchers may be required by funders to comply with OA mandates, the workshop looked ahead at what infrastructure would be needed to support these developments and the role that OAPEN might play at European level.   Areas of potential benefit to libraries in managing higher volumes of OA monographs include:

  • Content aggregation and discovery: OAPEN already aggregates content and promotes discovery, but for libraries it could offer the facility to harvest metadata from a single source in a variety of formats – ONIX, MARC XML, CSV, and MARC 21, through OAI-PMH and FTP – and metadata conversion and enhancement (adding DOIs, ORCID iDs, and grant information); see the harvesting sketch after this list
  • Integration with library catalogues and services: including OCLC WorldCat and commercial LMS suppliers
  • Preservation: through the e-Depot at the Koninklijke Bibliotheek and a partner such as CLOCKSS
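
As a rough sketch of what “harvesting metadata from a single source” looks like in practice – OAI-PMH is a standard protocol, but the endpoint URL below is an assumption for illustration rather than a confirmed OAPEN address, and oai_dc is the baseline metadata format every OAI-PMH server must support:

```python
# Minimal OAI-PMH harvesting sketch. The endpoint URL is assumed; check
# OAPEN's own documentation for the real address and for richer
# metadataPrefix values (e.g. MARC XML alongside the mandatory oai_dc).
import urllib.request
import xml.etree.ElementTree as ET

BASE_URL = "https://library.oapen.org/oai/request"  # assumed endpoint
OAI = "{http://www.openarchives.org/OAI/2.0/}"
DC = "{http://purl.org/dc/elements/1.1/}"

with urllib.request.urlopen(BASE_URL + "?verb=ListRecords&metadataPrefix=oai_dc") as response:
    tree = ET.parse(response)

# Print title and identifier for each record on the first response page;
# a full harvester would follow resumptionToken elements to fetch the rest.
for record in tree.iter(OAI + "record"):
    title = record.find(".//" + DC + "title")
    identifier = record.find(".//" + DC + "identifier")
    if title is not None:
        print(title.text, "|", identifier.text if identifier is not None else "")
```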

One of the most valuable roles that OAPEN might play is in the deposit and publication workflows: ensuring that publishers are aware of funder mandate requirements, contributing to the quality assurance process, and supporting communications between funders, authors, and publishers.  A variety of workflows can be supported depending on how funders prefer to work – examples were shown from existing participants, including the European and Austrian research councils.  A central register of funder requirements can be maintained, avoiding duplication of effort in explaining them to each publisher.  Discussions amongst participants revealed a wide variety of workflows in OA publishing, from experience of both journals and monographs, depending on publisher, funder, author, and institutional preferences.  There was a recognition that the author-publisher relationship is often more complex when dealing with monographs.
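
Conceptually, such a register is just a shared lookup from funder to deposit requirements.  A minimal sketch, with funder names and requirement fields invented for illustration:

```python
# Invented example of a central register of funder OA requirements, so that
# each publisher consults one shared source instead of being briefed one by one.
FUNDER_REQUIREMENTS = {
    "Example Research Council A": {
        "licence": "CC BY",
        "embargo_months": 0,
        "deposit_version": "version of record",
    },
    "Example Foundation B": {
        "licence": "CC BY-NC",
        "embargo_months": 12,
        "deposit_version": "accepted manuscript",
    },
}

def requirements_for(funder: str) -> dict:
    """Look up the deposit requirements for a funder; raises KeyError if unknown."""
    return FUNDER_REQUIREMENTS[funder]

print(requirements_for("Example Research Council A")["licence"])  # CC BY
```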

A number of institutional participants are interested in taking part at the launch stage in the UK, Netherlands, and Austria.  Possible business models were explored.  Workshop participants supported the idea of a UK pilot with JISC in a lead role without committing to a particular business model.

National Monograph Strategy: scoping the problems

[Image: The Shard.  Credit: Ben Griffin.  Creative Commons Attribution-Share Alike 3.0 Unported]

I spent last Friday in the shadow of The Shard with an assorted group of librarians, publishers (commercial and Open Access), JISC programme managers and project team, and representatives of RLUK and SCONUL, working jointly on one of the strands in the National Monograph Strategy project led by JISC.  For those new to the project, it is described as “exploring the potential for a national approach to the collection, preservation, supply and digitisation of scholarly monographs”.  The project blog page at http://monographs.jiscinvolve.org/wp/about-the-project/ provides more background, but the three main outputs will be:

  1. A landscape study: A report that provides a coherent picture of the monographs issue.  This is complete and very comprehensive.
  2. The monograph problem: A report defining, and assigning value to, the problems that need to be addressed by a national monograph strategy.
  3. The monograph solutions: An outline of the possible solutions which could address the problems identified.

Last week’s workshop focussed on identifying the problems.  A number clustered around the unstable nature of publishing: the problem of sustaining monograph publishing, both commercial and OA, but particularly small university presses, when business models are breaking down; fragmentation of the monograph itself and its changing value in the eyes of academics; demand for publishing outstripping supply; pressure from the REF.

Business models also featured in discussion of the problem areas around shared acquisition, licensing, and access. While shared acquisition has the potential to deliver a better return on public investment there are significant challenges – how would we build a collaborative shared model acceptable to all stakeholders, particularly publishers? A true national collection needs to be widely accessible.  Licensing and copyright legislation were seen as barriers, although it’s possible to see licensing as part of the solution.  National licences for substantial electronic collections already exist and there are models beyond the UK that would be worth exploring, particularly in Scandinavia.

Having seen collaborative collection development initiatives come and go over the years, I put down a marker for sustainability as one of the key problems, both in terms of long-term commitment from the libraries to maintain collections and in terms of preservation.  There was still a surprising amount of scepticism about digital preservation, which seems like a short- to medium-term issue.  Sustaining the storage, maintenance, and acquisition of print collections is a costly problem with no easy solutions.  In a response to an earlier document from the project, John Tuck and I pointed out that not all UK libraries are purchasing only to meet local needs.  The legal deposit libraries, those with a national research support role, and the specialised libraries with unique and distinctive collections take a much wider view. Any funding model to support a national monograph strategy has to recognise the long-term costs of sustaining their role, which by implication would be enhanced.

The workshop concluded with participants voting on their personal top 3 priority problems and explaining why they had been chosen.  At a glance, voting was heaviest for defining the scope and purpose of a national monograph strategy and discovering what stakeholders want from it, followed by that hardy perennial, “uncatalogued stuff” (understanding what we have), and the lack of co-ordination in digitisation, where there is a danger of expensive duplication.  The absence of reliable methods of knowing what has been digitised and whether it can be re-used was lamented. If we are moving to large-scale supply of digital surrogates the problem will need to be overcome.