Matthew Griffin, described as "The Adviser behind the Advisers" and a "Young Kurzweil," is the founder and CEO of the World Futures Forum and the 311 Institute, a global Futures and Deep Futures consultancy working between 2020 and 2070, and is an award-winning futurist and author of the "Codex of the Future" series. Regularly featured in the global media, including AP, BBC, Bloomberg, CNBC, Discovery, RT, Viacom, and WIRED, Matthew's ability to identify, track, and explain the impacts of hundreds of revolutionary emerging technologies on global culture, industry, and society is unparalleled. Recognised for the past six years as one of the world's foremost futurists and innovation and strategy experts, Matthew is an international speaker who helps governments, investors, multi-nationals, and regulators around the world envision, build, and lead an inclusive, sustainable future. A rare talent, Matthew's recent work includes mentoring Lunar XPrize teams, re-envisioning global education and training with the G20, and helping the world's largest organisations envision and ideate the future of their products and services, industries, and countries. Matthew's clients include three Prime Ministers, several governments including the G7, and companies such as Accenture, Aon, Bain & Co, BCG, Credit Suisse, Dell EMC, Dentons, Deloitte, E&Y, GEMS, Huawei, JPMorgan Chase, KPMG, Lego, McKinsey, PWC, Qualcomm, SAP, Samsung, Sopra Steria, T-Mobile, and many more.
WHY THIS MATTERS IN BRIEF
Animal testing is controversial: on the one hand it helps get drugs to market, while on the other it kills and harms millions of animals. AI could be a solution.
The use of animals in scientific experimentation is a double-edged sword: the research has saved countless human lives and advanced our scientific understanding, but it has come at a high cost paid by the animals involved. Animal testing has been carried out for centuries, with the first references appearing as early as the 4th and 3rd centuries BC in the writings of Greek physician-philosophers such as Aristotle, who used animals to reveal important internal features of mammals. Without animal tests, it would be hard to imagine the advent of life-changing medical innovations such as vaccines, antibiotics, drugs like insulin to treat diabetes, and treatments for a myriad of other conditions and diseases.
With the advent of genetic engineering and lab-on-a-chip devices, scientists have been able to better model human diseases and disorders such as diabetes, Alzheimer's disease, various cancers, and Parkinson's disease, to name just a few. But while immensely important from the standpoint of modern medicine and research, the fact remains that in the European Union alone around 9 million animals are used in laboratories every year.
This data is disturbing and is the subject of continuous controversy because of the ethical implications of the suffering involved, giving rise to many initiatives against animal testing and in favour of alternative methods. Until now, though, those alternatives have been very few and far between.
Aside from the complex ethical questions being raised, there is also concern about how accurately animal models mimic what a specific drug would do in a human body, especially when it comes to toxicology.
Data shows that many animal models have limited reliability when their results are extrapolated to humans to predict how a drug will behave. This wastes time and money, and causes unnecessary suffering, during drug discovery: a drug will perform well in animal models and then fail in clinical trials, or vice versa.
Even within the context of toxicity tests for new drugs or chemical compounds, there is growing pressure to avoid animal experimentation. This is a positive development if we accept that we must balance our own progress with our responsibility to act as stewards of our planet. And it turns out that Artificial Intelligence (AI) could provide a better alternative, with one solution being to use computer models instead. This field has grown exponentially in recent years and integrates the perspectives of statisticians, toxicologists, biologists, chemists, engineers, and mathematicians to analyse existing data and generate reliable predictions, as explored in a recent article published in the journal WIREs Computational Molecular Science.
With the enormous progress made in AI and its application in fields such as medicine and chemistry, this alternative to animal models is revealing its full potential. This is especially evident with new deep learning techniques that are capable of self-learning from millions of known chemical compounds and from data collected in a growing number of freely accessible databases, which allows them to predict how a new substance will behave in humans or in the environment, boosting already impressive predictive capabilities.
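The core idea behind these predictive models can be illustrated with a simple "read-across" approach from computational toxicology: compounds with similar structures tend to have similar properties, so a new compound's toxicity can be estimated from its similarity to compounds with known outcomes. The sketch below uses pure Python with invented binary "fingerprint" bit-sets and labels purely for illustration; real systems use large curated databases, richer molecular fingerprints, and learned models rather than a single nearest neighbour.

```python
# A toy "read-across" sketch: predict a new compound's toxicity from the
# Tanimoto similarity of its binary fingerprint to compounds with known
# outcomes. All fingerprints and labels here are hypothetical.

def tanimoto(a, b):
    """Tanimoto similarity between two binary fingerprints (sets of 'on' bits)."""
    union = len(a | b)
    return len(a & b) / union if union else 0.0

# Hypothetical reference data: fingerprint bit-sets with known toxicity labels.
known = [
    ({1, 4, 7, 9}, "toxic"),
    ({1, 4, 8, 9}, "toxic"),
    ({2, 3, 5, 6}, "non-toxic"),
    ({2, 3, 5, 10}, "non-toxic"),
]

def predict(query):
    """Label a query fingerprint with the label of its most similar neighbour."""
    best_fp, best_label = max(known, key=lambda pair: tanimoto(query, pair[0]))
    return best_label

print(predict({1, 4, 7, 10}))  # closest to the "toxic" cluster -> toxic
print(predict({2, 3, 6, 10}))  # closest to the "non-toxic" cluster -> non-toxic
```

Deep learning systems replace the hand-built similarity measure with representations learned from millions of compounds, but the underlying principle of inferring behaviour from structurally similar, already-tested chemicals is the same.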
Early in the pandemic caused by the SARS-CoV-2 virus, a group at BenevolentAI identified a potential therapeutic for COVID-19 using a machine learning approach that scoured the medical literature and known disease mechanisms. The kinase inhibitor baricitinib, commonly used to treat arthritis, was selected because of its effects on endocytosis, which it was hypothesised could also disrupt the virus's entry into healthy cells. It was given emergency authorisation by the FDA and used in clinical trials, showing a mild degree of improvement in COVID-19 patients. The question among experts is whether the drug works via the mechanism predicted, and how accurate this search really was.
Regardless, this was an exciting example of using database searches to identify potential drug candidates for pressing real-world problems without having to run animal studies first. Granted, this example did not completely circumvent the need for animals, as baricitinib had been tested on them before being used to treat arthritis, but the hope is that one day we will realise a world where the only tests performed are done using AI.
These preliminary applications are just the beginning, and with quantum computing now making waves thanks to recent breakthroughs and enhanced computational capabilities, deep learning applications will only get better, allowing us one day to move away from what many see as a divisive and unethical, albeit often unfortunately necessary, practice.