Matthew Griffin, described as “The Adviser behind the Advisers” and a “Young Kurzweil,” is the founder and CEO of the 311 Institute, a global futures and deep futures consultancy working between 2020 and 2070, and is an award-winning futurist and author of “Codex of the Future.” Regularly featured in the global media, including AP, BBC, CNBC, Discovery, RT, and Viacom, Matthew’s ability to identify, track, and explain the impacts of hundreds of revolutionary emerging technologies on global culture, industry, and society is unparalleled. Recognised for the past six years as one of the world’s foremost futurists and innovation and strategy experts, Matthew is an international speaker who helps governments, investors, multi-nationals, and regulators around the world envision, build, and lead an inclusive, sustainable future. A rare talent, Matthew’s recent work includes mentoring Lunar XPrize teams, re-envisioning global education and training with the G20, and helping the world’s largest organisations envision and ideate the future of their products and services, industries, and countries. Matthew's clients include three Prime Ministers and several governments, including the G7, Accenture, Bain & Co, BCG, BOA, Blackrock, Bentley, Credit Suisse, Dell EMC, Dentons, Deloitte, Du Pont, E&Y, GEMS, HPE, Huawei, JPMorgan Chase, KPMG, McKinsey, PWC, Qualcomm, SAP, Samsung, Sopra Steria, UBS, and many more.
WHY THIS MATTERS IN BRIEF
- There are many controversial and polarised views on the impact AI will have on society; this new business unit, one of the first of its kind, will hopefully help clear some of the weeds
In a move to counter the run-in it had with the UK government last year, Google-owned DeepMind has decided to form a new business unit called DeepMind Ethics and Society (DMES) that will look into the ethical and moral implications and impact that Artificial Intelligence (AI) is having, and will have, on society, in the hope that it will help the company better judge when AI is pushing the boundaries too hard, too fast, and too far.
DeepMind is considered to be one of the heavyweights in the AI industry, and in the past eighteen months it’s taught its platform how to beat the world champion Go player and annihilate online Go players, think like a human, dream and fight other AIs for resources, lipread, talk and translate, and much, much more. But that said, it’s also had a few run-ins with government, particularly the UK government.
In 2015 it agreed a data-sharing deal with three hospitals in the UK to process the data of 1.6 million patients for “new methods of clinical detection, diagnosis and prevention application,” but the project went awry when the NHS regulator said the deal had breached data protection laws.
In response, DeepMind said it was “working hard” on data transparency both internally and externally, and the new committee seems to be the fruit of its labours. In the blog post explaining the formation of DMES, DeepMind notes that, “As scientists developing AI technologies, we have a responsibility to conduct and support open research and investigation into the wider implications of our work.”
That indicates that it believes it needs to take a stronger role in regulating how it applies its powerful algorithms to situations where it doesn’t have much expertise. This particularly comes into play when considering the NHS debacle, where DeepMind said it “underestimated the rules around patient data” and the “complexity of the NHS.”
The new business unit is DeepMind’s attempt to demonstrate that the company cares about AI’s impact on society, and not just the tech industry, in both the long and the short term, and as the company says in its blog post, the ethical considerations surrounding AI are not new. It highlights Julia Angwin’s groundbreaking study of racial bias in criminal justice algorithms, which examined how algorithms discriminate against people of colour in criminal justice settings, and Kate Crawford and Ryan Calo’s in-depth paper on how AI impacts society on a wider scale.
To help DeepMind with this issue it’s enlisted the help of six fellows, including Nick Bostrom, the University of Oxford professor who wrote the book on existential risk that influenced both Elon Musk and Bill Gates.