Weighing up opportunities and challenges: The potential for a grant application analytics service

By Andrea Chiarelli, Rob Johnson and Ian Carter 

In the context of the Jisc Research Analytics project, Research Consulting and Carter Research Navigation have been engaging universities, funders and technology providers to investigate views around a grant application analytics service leveraging artificial intelligence (AI) and text and data mining (TDM) techniques. We used a mix of desk research, interviews and a survey to gather stakeholder views and understand what role Jisc might play in this area.  

This work was the second phase of a project looking at research analytics more broadly, with the ultimate aim of supporting universities in making better decisions. We chose to focus on grant applications in particular because of the interest expressed by research leaders and managers at higher education institutions during the first phase of work, and because this area appeared to offer concrete opportunities in the near future.

The fact that Jisc is seeking to play a role in this landscape is in line with their aim to support the education sector with digital resources and technology services, particularly in light of the ongoing digital transformation of research. Their recently commissioned report ‘Research 4.0’ notes that “research has been changed by […] new technologies and in particular the growing use of machine learning and other artificial intelligence techniques to augment the work of research staff and alter their experience of research”. This statement isn’t surprising: researchers have been looking into AI for the last 70 years, and almost 330,000 documents on the topic have been indexed online to date (source: Scopus).

In the following sections, we share the most interesting findings of our research on the potential of grant application analytics for the higher education sector, leaving it to Jisc’s Christopher Brown to discuss next steps.

Pains and gains 

Before delving into the details of what analytics services on grant applications could do, we focused on people’s experiences and tried to form a view of what issues were most commonly experienced at UK universities.  

In our research, we identified two main perspectives: 

  • Analytics on grant applications in aggregate could support institutional strategy and financial forecasting, as limited information is currently available on success rates and benchmarking (whether inward- or outward-facing) is not easy in practice. 
  • Analytics on individual grant applications could help applicants and the staff supporting them improve the quality of submissions, particularly in terms of meeting technical criteria and identifying appropriate academic or industry partners. 

The potential users of analytics services could include individual researchers, research support staff and senior managers, from both higher education institutions and research funders. Even though this did not emerge directly from our research, we also note that academic libraries may play a role in the research analytics landscape, chiefly thanks to their significant expertise in dealing with metrics, data management, software vendors and licensing. This shows that any solutions considered for further development need to be carefully assessed in terms of what tasks they would perform in practice and who the target audience would be.

Opportunities and challenges 

Based on our interviews and survey, which reflect the views of over 50 individuals, the two use cases above face different opportunities and challenges. Figures 1 and 2 include a range of considerations shared by project contributors: every opportunity appears to hide some challenges but, at the same time, almost every challenge seems possible to overcome.

Figure 1. Opportunities and challenges for analytics in aggregate form. 

  • Opportunity: HEIs are happy to outsource complex analysis tasks if the price is right and the value delivered is sufficiently clear.
    Challenge: the analysis process is likely to require the amalgamation of various datasets, which in turn requires fuzzy data matching and data cleaning. This is complex to implement in practice (a sketch of this kind of matching follows the figure).
  • Opportunity: collaboration with existing stakeholders and initiatives such as Snowball Metrics could provide focus and a solid starting point.
    Challenge: as Snowball Metrics already play a role in the landscape, there is limited space for Jisc to offer an independent, standalone solution.
  • Opportunity: Jisc could explore the potential to secure access to applications (content and metadata) directly from UKRI. However, UKRI are reviewing their funding service by looking at international comparators and existing software solutions (some of which include TDM).
    Challenge: institutions and researchers may be reluctant to share the full texts of grant applications with competitors, even if the sharing of metadata appears possible.
  • Opportunity: existing tools fail to meet HEIs’ needs in this area, so analytics could allow a better understanding of performance compared to peers or comparators.
    Challenge: cross-sector analytics do not generally offer actionable insights, particularly if focused only on grant applications, as these are only one dimension of university research performance.
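
The fuzzy matching mentioned above can be made concrete. The sketch below shows, in Python, the kind of record linkage an aggregate analytics service would need when different datasets spell institution names differently. It is a minimal illustration only: the helper names, the noise-word list and the 0.8 threshold are our own assumptions, not part of any existing Jisc system.

```python
# Minimal sketch: fuzzy matching of institution names across two datasets,
# the kind of record linkage needed when amalgamating funder and HEI data.
# Helper names, noise words and the threshold are illustrative assumptions.
from difflib import SequenceMatcher

def normalise(name: str) -> str:
    """Basic cleaning: lower-case and strip common noise words."""
    noise = {"the", "of", "university", "univ."}
    tokens = [t for t in name.lower().replace(",", " ").split() if t not in noise]
    return " ".join(tokens)

def best_match(name: str, candidates: list, threshold: float = 0.8):
    """Return the candidate most similar to `name`, or None below the threshold."""
    scored = [(SequenceMatcher(None, normalise(name), normalise(c)).ratio(), c)
              for c in candidates]
    score, match = max(scored)
    return match if score >= threshold else None

funder_records = ["Univ. of Sheffield", "Sheffield Hallam University"]
print(best_match("The University of Sheffield", funder_records))
# -> 'Univ. of Sheffield'
```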

When it comes to grant applications in aggregate, project participants highlighted the concern that these are but one of the many dimensions of university research management: analytics considering only grant applications at the institutional or organisational unit level could only ever paint a partial picture unless they are combined with additional, richer information.

Furthermore, we highlight three additional considerations: 

  • Some stakeholders and initiatives are already active in this area, particularly Snowball Metrics and, potentially, UKRI, with their recent work on a revised grant applications management solution. 
  • Analytics in aggregate form raise the question of which definitions will be accepted by all or most prospective users: for example, what is a ‘successful application’, and at what stage in the process can an application be marked as such? (A sketch of how much this choice matters follows this list.)
  • Analytics in aggregate form would require a relatively large corpus of data, and it isn’t clear where this should come from. The information currently provided to all institutions by funders was criticised by project participants because it is incomplete (e.g. because it is based on the PI only, hence under-representing collaborators such as smaller institutions, which are more likely to be partners than leaders). On the other hand, data held by institutions tends to be of limited value due to the lack of standards and the fact that it is often used to look inwards rather than for benchmarking purposes. 
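
To illustrate how much the definition matters, the sketch below computes a headline ‘success rate’ from the same records under two different rules. The field names and sample records are invented for illustration only.

```python
# Minimal sketch: the same application records yield different 'success
# rates' under different definitions of success. All data is invented.
applications = [
    {"stage": "awarded", "funds_received": True},
    {"stage": "awarded", "funds_received": False},  # awarded, later withdrawn
    {"stage": "shortlisted", "funds_received": False},
    {"stage": "rejected", "funds_received": False},
]

definitions = {
    "awarded at panel": lambda a: a["stage"] == "awarded",
    "funds received": lambda a: a["funds_received"],
}

for label, is_success in definitions.items():
    rate = sum(map(is_success, applications)) / len(applications)
    print(f"{label}: {rate:.0%}")
# 'awarded at panel' gives 50%, 'funds received' gives 25%: benchmarking on
# one definition against data collected under the other would mislead.
```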

Clearly, if analytics on applications in aggregate form are pursued, liaising with numerous stakeholders will be a priority to develop shared definitions and secure buy-in. The timeframe of these analytics would need discussion too, as a prospective system could look at either live or historical data, which further complicates the matter.

Figure 2. Opportunities and challenges for analytics on individual grant applications. 

  • Opportunity: many applications of AI and TDM are likely to be ‘quick wins’ and could be implemented within a relatively short timeframe.
    Challenge: some applications of AI and TDM would require sophisticated analysis, and a critical mass of applications would be needed to kick things off.
  • Opportunity: there is significant room to provide automatically generated recommendations to authors and research support staff.
    Challenge: cultural resistance might play a role, as people tend to trust computers only in cases where humans are manifestly unfit, such as scanning many documents.
  • Opportunity: a wide range of technology providers appear to have the systems and skills to build this type of service (at different levels of maturity).
    Challenge: if third-party providers are involved, some concerns will likely arise in terms of IP ownership, anonymity and GDPR compliance.
  • Opportunity: feedback on individual applications would be highly actionable and could lead to measurable time savings and practical improvements.
    Challenge: cooperation with funders would be desirable to develop the compliance checks and algorithms underpinning the analytics service (a sketch of such a check follows the figure).
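
To make the last row concrete, the sketch below validates a draft application against simple funder rules. The required sections and word limit are invented examples rather than any real funder’s criteria, which is precisely why cooperation with funders would matter.

```python
# Minimal sketch of a rule-based funder-compliance check. The rules
# (required sections, word limit) are invented for illustration.
REQUIRED_SECTIONS = ["Summary", "Objectives", "Budget"]
MAX_WORDS = 4000

def check_compliance(application: dict) -> list:
    """Return a list of human-readable issues; an empty list means compliant."""
    issues = [f"Missing required section: {s}"
              for s in REQUIRED_SECTIONS if s not in application]
    total_words = sum(len(text.split()) for text in application.values())
    if total_words > MAX_WORDS:
        issues.append(f"Over the {MAX_WORDS}-word limit ({total_words} words)")
    return issues

draft = {"Summary": "We propose ...", "Objectives": "Our aims are ..."}
print(check_compliance(draft))  # -> ['Missing required section: Budget']
```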

The main message we took from looking at the two approaches to grant application analytics side by side is that analytics on individual applications might solve more practical issues and deliver some quick wins. In particular, universities would be able to see the impact of a potential service more easily, as time saved and more successful applications would be measurable gains.

The possible role of analytics on individual grant applications 

As analytics on individual grant applications appear the more desirable option, it is worth discussing what they could achieve in practice. An analytics service working with individual applications would follow a relatively simple process (sketched in code after the list below):

  1. the user (e.g. a researcher or a research manager) uploads an application; 
  2. the application is analysed by the system; 
  3. insights are generated, for example in the form of scores; and 
  4. insights are explored or exported, as appropriate. 
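
A minimal sketch of this four-step flow follows, assuming a hypothetical GrantAnalytics service. The class, the method names and the placeholder ‘conciseness’ score are illustrative only and do not describe any existing system.

```python
# Minimal sketch of the upload -> analyse -> insights -> export flow.
# GrantAnalytics, Insights and the scoring rule are all hypothetical.
from dataclasses import dataclass, field

@dataclass
class Insights:
    scores: dict = field(default_factory=dict)

    def export(self) -> dict:
        """Step 4: expose the insights for exploration or export."""
        return dict(self.scores)

class GrantAnalytics:
    def analyse(self, application_text: str) -> Insights:
        """Steps 2-3: analyse the uploaded text and generate scores."""
        # Placeholder scoring only; a real service would apply TDM/NLP here.
        word_count = len(application_text.split())
        return Insights(scores={"conciseness": min(1.0, 500 / max(word_count, 1))})

service = GrantAnalytics()
insights = service.analyse("Our project will investigate ...")  # step 1: upload
print(insights.export())
```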

In this context, some forms of analysis are likely to require only limited use of AI and natural language processing techniques, for example:

  • identifying similar awards, applications or research in the area; 
  • identifying duplicate or plagiarised work; 
  • identifying the themes and topics of an application; 
  • identifying partners, reviewers or collaborators; 
  • improving the quality of writing; and 
  • checking cost breakdowns/pricing. 

The above use cases would require the definition of rules for the algorithms to follow, but are relatively simple to implement; a sketch of one such rule-based check follows.
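
As an illustration of the first two use cases in the list, the sketch below flags a new application that closely resembles an earlier submission, using simple bag-of-words cosine similarity. The corpus and the 0.6 threshold are illustrative assumptions only.

```python
# Minimal sketch: flagging similar or potentially duplicated applications
# with bag-of-words cosine similarity. Corpus and threshold are invented.
import math
from collections import Counter

def cosine_similarity(a: str, b: str) -> float:
    va, vb = Counter(a.lower().split()), Counter(b.lower().split())
    dot = sum(va[t] * vb[t] for t in va)
    norm = (math.sqrt(sum(v * v for v in va.values()))
            * math.sqrt(sum(v * v for v in vb.values())))
    return dot / norm if norm else 0.0

previous = ["We will study machine learning for protein folding.",
            "A survey of funding trends in the arts and humanities."]
new_application = "We will study machine learning methods for protein folding."

for doc in previous:
    score = cosine_similarity(new_application, doc)
    if score > 0.6:
        print(f"Possible overlap (similarity {score:.2f}): {doc}")
```

A production service would use more robust representations (stemming, embeddings) and a curated corpus, but the rule-based shape of the check would be the same.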

On the other hand, more sophisticated approaches are needed where the analytics service might need to understand and interpret the text, which is closer to what a human reviewer would do. This might include more subtle forms of analysis aiming to improve individual sections from a technical standpoint or conquer the holy grail of understanding the characteristics of successful applications. 
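
For completeness, a heavily simplified sketch of that ‘holy grail’ follows: training a text classifier on labelled past applications. The toy corpus and outcome labels are invented, scikit-learn is assumed to be available, and a real system would need a large labelled corpus and far more careful modelling and validation.

```python
# Purely illustrative sketch: learning textual features of 'successful'
# applications. Toy data and labels are invented; requires scikit-learn.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

texts = [
    "clear objectives measurable outcomes strong track record",
    "novel interdisciplinary approach with detailed work plan",
    "vague aims no methodology unclear budget",
    "ambitious idea but no evidence of feasibility",
]
outcomes = [1, 1, 0, 0]  # 1 = funded, 0 = rejected (invented labels)

model = make_pipeline(TfidfVectorizer(), LogisticRegression())
model.fit(texts, outcomes)

draft = ["measurable outcomes and a clear, detailed work plan"]
print(model.predict_proba(draft)[0][1])  # estimated probability of success
```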

Encouraging users to ‘trust the machine’ 

Ernst & Young discussed the issues of acting on the outputs and decisions of AI algorithms in their 2018 report “How do you trust the machine?”. In many areas, AI does show game-changing potential, but the greatest barrier remains the extent to which end users will value algorithmically generated insights. 

In particular, research development is seen by most as a human-centred process rather than a system-driven one. It is therefore not surprising that the idea of a machine doing the job of a researcher or a research manager will be viewed with scepticism, at least initially. Some suggested that a good starting point would be to use AI and TDM to tackle challenges where machines are ‘obviously’ better than humans: these would include, for example, sifting through a vast number of documents to identify potential collaborators or to check for plagiarism and similarity.

Cultural resistance and scepticism appear to be the most critical roadblocks when it comes to research analytics more broadly. We also note the issue of data interpretation: quantitative insights can easily be represented using charts and tables, and this might create the illusion that they are more actionable. However, a degree of qualitative interpretation is usually needed to fully understand the meaning of numbers and financial figures, which indicates that current technological solutions are still far from replacing people and human insight.

Next steps 

By Christopher Brown, Jisc 

The discovery phase consisted of two parts: defining the problem and refining the solution. The results from these two distinct pieces of work have been published in this and the previous blog post. The purpose of the overall discovery phase was to give us a better understanding of what a research analytics service could and should do, and how such a service would support universities in making better, more informed decisions.

The focus on grant applications has allowed us to narrow down to a specific area rather than attempting a much broader and harder-to-deliver solution. In the long term, a service could provide support in other areas. The assessment of the latest interviews and surveys has provided us with a number of options for delivering a research analytics service, initially focussed on grant applications. These include potential framework agreements, partnerships with suppliers and start-ups, Jisc providing a trusted unified platform for data (similar to learning analytics) and Jisc in-house developments.

The objective of the discovery phase was to deliver the evidence that could be put together in a solution pitch, which could be put to Jisc’s investment committee for funds and resources to develop a research analytics service. The fact that 26 institutions have expressed an interest in trialling any prototype or pilot service shows there is real appetite for such a Jisc service. We are now assessing the information gathered from this work and finalising the solution pitch. If Jisc decides that this service should be developed, we will share the timeline and plan with the institutions that expressed an interest in piloting it, and then provide an update via this blog.

Christopher Brown is Product Manager (Research) at Jisc.