Professor Duncan McHale discusses neurodegenerative disease research…
Neurodegenerative diseases are a growing global challenge as medical advances ensure more individuals live longer. By 2020 there will be more than 40 million people in the world with Alzheimer’s disease, and by 2040, without the development of truly disease-modifying drugs, this will exceed 80 million. Similar trends are seen for Parkinson’s disease. The annual cost of treatment and social care for individuals with neurodegenerative diseases is estimated to exceed 1 trillion dollars by 2050, making this one of the most important socioeconomic challenges of this century. Research spending in this area currently runs into billions of dollars per annum, yet discovering and developing disease-modifying drugs, i.e. those that prevent progression of the disease, has proved very challenging, with many programmes failing.
This growing healthcare challenge has been a focus of biomedical research in most developed nations, leading to large investments from both public and private funders. Despite billions of dollars of research funding, we still lack safe and effective therapies that modify the progression of these diseases. Significant biological findings have been reported, and new diagnostic tools have been developed that improve our ability to identify the disease early and even to flag individuals at high risk before symptoms begin. Early diagnosis is important because it offers the opportunity at least to delay the onset of symptoms and ideally to prevent them occurring at all.
Despite these breakthroughs in identifying individual biomarkers, or small panels of biomarkers, that predict diagnosis or progression, our understanding of the disease mechanisms remains poor. It is likely that in almost all people the amyloid–tau pathway is a final common pathway resulting in cell death and disease symptoms, but it is far less clear which early pathways drive the pathogenesis of the disease. It is this understanding that will allow us to prevent the disease or delay its symptoms. So why, after all of this research investment, do we still have such a limited understanding of what is causing Alzheimer’s disease?
It is true that genetic research has highlighted major genetic causes in familial AD and PD, confirming the key role of amyloid beta processing in AD and alpha synuclein in PD, as well as some other key biology. However, these autosomal dominant familial cases still account for only a small fraction of neurodegenerative disease. Moreover, even in families where we understand the key cause of the disease, there is considerable variability in the symptoms people develop, the age at which the disease starts and how quickly it progresses. Clearly, other factors are also important.
The need to share data
The question, therefore, is how to reach a genuinely deeper understanding of the causes of these diseases. One option is to increase research spending even further to see whether this generates the answers. However, given the very large sums already being spent in this area, it is hard to imagine that increasing funding by 10 or 20 per cent would produce a step change in our understanding, and doubling or tripling research spending is unfeasible. So if the answer is not more money, it must be better use of current research funds and knowledge.
The strength, and the weakness, of how current research is done is that funded laboratories and groups tend to focus on one area of research or approach, such as the genetics of Alzheimer’s disease or the role of ApoE4 in AD risk. This gives a great deal of depth in that area but reduces the breadth of hypotheses that can be studied. Even in studies where large cohorts are collected, the number of hypotheses that individual labs or collaborators can investigate is limited and often based on prior knowledge. This means we are frequently looking to reinforce current knowledge, which can lead to bias. The effect is amplified by funding bodies, which are keen to see good value for their research in terms of publications and patents and so more readily fund ideas based on established hypotheses that may themselves be flawed. The weakness in all of this is that if the answer is very novel, and/or cuts across many of the established areas and hypotheses, it can be much harder to see.
The answer, then, is not to double or triple funding levels to see whether the current way of working can get to the answer, but to develop a culture of sharing research data in a secure way, one that allows researchers to access patient-level data and apply novel analytical approaches and hypotheses to add to the body of knowledge. We need to move from a culture of data generation to one of knowledge generation.

There is a huge amount of support for this at the patient level, which is probably the most important stakeholder group, and at the funder and political levels. Meetings of the G7 and OECD have espoused this direct aim as part of the “War on AD”. The conversation is more challenging with ethics committees and researchers, who have understandable concerns. From an ethics committee’s perspective, the concern is how this personal data will be used and, importantly, how it could be abused to the detriment of the individual. This is an understandable concern, but in an era of transparency and big data, is it too paternalistic? We need to protect people from anyone using their data to their detriment, but locking the data away runs counter to current trends and, more importantly, often prevents the generation of the very research value that led participants to share their data in the first place.

Clinical researchers have often spent many years collecting valuable patient cohorts and datasets and will rightly feel proud, and proprietorial, about them. These datasets will often be the basis of their current and future academic success. However, it is time to step back and say that this is about driving the science forwards for the benefit of patients, and to ask how we can do that whilst ensuring our academic colleagues are rightly rewarded and recognised for their contributions.
AETIONOMY is based almost entirely on the premise that we can share data and develop approaches that extract knowledge across multiple datasets, knowledge that cannot be seen in any one of them. We have built the infrastructure to do this, and we have tested and applied it: it can be done, and it works. However, to really make a difference we need to populate the knowledge base with as many high-quality datasets as possible. Our colleagues in EMIF-AD (European Medical Informatics Framework – AD) and the Human Brain Project have all been working to solve these issues of sharing patient-level data. The technical IT issues can all be solved, and many have been. We are looking to work together as a community to address the other barriers, and we would welcome support and help from others in the field who would like to join us on this journey. We aim to build integrated datasets in a federated model that would allow researchers to access tens to hundreds of times more data than they could from their own resources, to look for new answers and hypotheses about the causes of these debilitating diseases. This will lead to new insights and new therapies, and it can be achieved with minimal or no increase, rather than a several-fold increase, in current research spending in this area. Contact AETIONOMY if you want to join us on this journey.
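The essence of a federated model of the kind described above can be illustrated with a minimal sketch. All class names, fields and cohorts below are hypothetical illustrations, not the AETIONOMY infrastructure itself; the point is simply that patient-level records never leave the site that holds them, while aggregate statistics can be combined across the federation.

```python
# Minimal sketch of a federated query. Patient-level rows stay inside each
# Site object; only aggregate counts cross the "site" boundary.
# All names and data here are hypothetical, for illustration only.

from dataclasses import dataclass
from typing import Callable

@dataclass
class Site:
    """One data-holding institution; records are held locally and never shared."""
    name: str
    records: list  # patient-level rows (dicts), held locally

    def aggregate(self, predicate: Callable[[dict], bool]):
        """Return only (matching_count, total_count) -- no row-level data leaves."""
        matching = sum(1 for r in self.records if predicate(r))
        return matching, len(self.records)

def federated_prevalence(sites, predicate):
    """Combine per-site aggregates into one estimate across the federation."""
    matched = total = 0
    for site in sites:
        m, t = site.aggregate(predicate)
        matched += m
        total += t
    return matched / total if total else 0.0

# Two hypothetical cohorts at different institutions
site_a = Site("Cohort-A", [{"apoe4": True}, {"apoe4": False}, {"apoe4": True}])
site_b = Site("Cohort-B", [{"apoe4": False}, {"apoe4": True}])

# Prevalence of a marker across both cohorts: 3 carriers out of 5 records
rate = federated_prevalence([site_a, site_b], lambda r: r["apoe4"])
print(rate)  # 0.6
```

A real federation would add authentication, governance checks and privacy safeguards (such as minimum cell counts on the aggregates), but the design choice is the same: the query travels to the data, and only knowledge travels back.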