Three essential changes to evidence that will drive adoption of digital innovation


Digital innovation can create something that addresses a real need and that can be put into practice. The missing link between ideation and innovation is application.

To be able to put something into practice, and to do so at scale, it’s vital to know that it does in fact work. This is especially true of digital tools designed to help support vulnerable people. Robust evidence standards, then, are key to digital innovation making a real difference: without a way of knowing whether a tool does what it claims to do, it is understandably difficult to justify ever implementing that tool.

Brain in Hand’s self-management system, which combines digital tools with human support, underwent two invaluable studies while still in its pilot stages. An independent study of autistic adults by Devon Partnership Trust and a National Autistic Society study of students both found that Brain in Hand had a positive impact on people’s lives, including improvements in confidence and in the ability to implement coping strategies. The former study also provided early evidence that using Brain in Hand could help service users reduce their level of contact with clinical support, delivering savings for services.

Digital innovation in support of autistic people

More recently, an independent clinical study funded by the Small Business Research Initiative (SBRI Healthcare) showed that our approach to digital innovation and support had significant benefits for autistic people in the areas of anxiety, self-injurious behaviour, and quality of life. Following this evidence for the effectiveness of our support system, we have also achieved NHSX Digital Technology Assessment Criteria (DTAC) compliance, which looks to ensure that digital tools meet standards for clinical safety, data protection, technical security, interoperability, and usability and accessibility.

In practice, meeting set standards and providing evidence of impact is often not sufficient to enable decisions to be made on commissioning a new tool at scale. This can be frustrating: even when there is evidence of both need and impact, and a solution is available that can ease the burden on stretched resources, barriers to wide-scale adoption of digital innovation remain.

We believe this practice could be changed if the following three things were to happen:

  1. Standards became more agile and flexible, keeping pace with evolving needs and remaining fit for purpose.
  2. Incentives for researchers focused on resulting action and change rather than publications per se.
  3. Decision-makers acted on evidence-based information.

Digital innovation standards fit for purpose

The purpose of evidence standards is to help decision-makers trust a product or service. In support services, that trust needs to extend to confidence in the solution’s scalability.

For some time, randomised controlled trials (RCTs) have been considered the gold standard of research. Although they are undeniably powerful when used correctly, recent thinking suggests there is perhaps an assumption that any RCT is inherently useful by virtue of its format, when in fact inadequate planning and reporting of RCT studies are contributing to avoidable research waste. Studies are being conducted in a vacuum rather than with consideration for their practical application.

The value of an evidence standard is not simply that it ticks a box to say “yes, X requirement has been met”, but that it provides proof of the work that went into meeting it. For our DTAC accreditation, for example, we compiled an extensive suite of evidence not only on our technical systems but on our teams’ ways of working and on how we provide a genuine solution to our users’ needs. Compiling this information was a valuable exercise, and the fact that it is now easily available to potential purchasers is enormously positive.


Ultimately, those responsible for setting the standards must collaborate across the board to ensure that they are meaningful. This means working with the innovators who need to meet those standards, as well as with the services who will use them to make commissioning decisions. If a standard is unachievable for a solution, or not of genuine use to a decision-maker, it is not serving its purpose.


Incentives for researchers to innovate

In the current academic system, research influence is often measured indirectly through citation data. The Times Higher Education (THE) World University Rankings weight citations at 30% of a university’s total score, promoting this as a metric of how well a university is “spreading new knowledge and ideas”. But a more meaningful indicator is the influence of research on positive change, not its mere existence. We want evidence to be meaningful: the purpose of studies should be to yield actionable insights and innovation that make things better.

This might involve a move towards embracing different types of evidence. The RCT has its place, of course, but it is far from the only type of evidence that could prove useful. The type of research ought to depend on the type of product or service; in mental health support, for example, the single-case study is a powerful tool for demonstrating effectiveness. Researchers, therefore, need to be open to exploring different ways of testing and proving what works.

There is some indication that things may be moving in this direction already. SBRI and Innovate UK, for example, centre their focus on the impact of their funding, looking for digital innovation to go as wide and deep as possible.

Decision-makers acting on evidence

A culture of ‘pilotisation’ is endemic in UK health and social care, and understandably so. Every service wants to be certain that a solution works before committing funds to it, and before rolling it out to potentially vulnerable people. This results in great caution in purchasing decisions, with solutions commonly trialled for a year with only, say, ten end users.

The biggest issue here is that new ways of doing things cannot make a difference unless implemented at scale. Small, limited pilots often fail to provide sufficiently powerful evidence for decision-makers to go bigger, precisely because the sample is too small to generate the evidence commissioners are looking for. It becomes a cycle of continually seeking more and more evidence, never progressing.

If the perception of risk is a barrier to adoption, commissioners and solution providers need to build evidence thresholds into the system, whereby decision-makers can commit in advance to scaling up to the next level. Digital innovation that has demonstrated it can meet these standards at level one ought to go from ‘a new thing to try’ to ‘a proven solution we can immediately roll out at the next level’. In so doing, services can move away from endless trials, establishing early on what would count as sufficient evidence and then acting on it as soon as that threshold is reached.

Evidence standards will soon become must-haves rather than nice-to-haves

We think that evidence standards will soon become must-haves rather than nice-to-haves, with solutions only commissioned if they meet them, in turn unlocking rapid deployment at scale. Decision-makers will need to be brave, breaking out of the cycle of pilotisation and instead proactively seeking out tools that can demonstrate a high level of evidence.

Robust, meaningful standards that motivate action

We believe that having robust, meaningful standards that motivate action is the clearest path to truly doing things differently – and better. Evidence that decision-makers accept and act on can be the difference between ‘innovation’ and ‘proven solutions’ that can be deployed at scale. Only by embracing new but proven ways of doing things at scale can real progress be made.

Some of the work is incumbent on those who produce and sell solutions, of course. Anecdotal evidence in the form of end-user stories and lived experiences appeals powerfully to the emotions, and the value of these less rigorous forms of evidence ought not to be underestimated.

Evidencing effectiveness should be a continual journey. Solution providers should continually strive to learn, never assuming the work is complete; there are always limitations of context, population, methodology, equipoise, application, and any number of other things. Innovators who keep adding to their evidence base earn further trust. Meanwhile, services should regularly review whether their current tools still meet standards and make changes where necessary.

When evidence standards enable innovations to prove their effectiveness, and when services act to adopt these new, proven solutions, we should truly start to see large-scale change for the better.

This piece was written and provided by Helen Guyatt, Head of Research, Evaluation, and Insight, Brain in Hand.
