Matthew Griffin, described as "The Adviser behind the Advisers" and a "Young Kurzweil," is the founder and CEO of the World Futures Forum and the 311 Institute, a global Futures and Deep Futures consultancy spanning 2020 to 2070, and is an award-winning futurist and author of the "Codex of the Future" series. Regularly featured in the global media, including AP, BBC, Bloomberg, CNBC, Discovery, RT, Viacom, and WIRED, Matthew's ability to identify, track, and explain the impacts of hundreds of revolutionary emerging technologies on global culture, industry, and society is unparalleled. Recognised for the past six years as one of the world's foremost futurists and innovation and strategy experts, Matthew is an international speaker who helps governments, investors, multinationals, and regulators around the world envision, build, and lead an inclusive, sustainable future. A rare talent, Matthew's recent work includes mentoring Lunar XPrize teams, re-envisioning global education and training with the G20, and helping the world's largest organisations envision and ideate the future of their products and services, industries, and countries. Matthew's clients include three Prime Ministers and several governments, including the G7, as well as Accenture, Aon, Bain & Co, BCG, Credit Suisse, Dell EMC, Dentons, Deloitte, E&Y, GEMS, Huawei, JPMorgan Chase, KPMG, Lego, McKinsey, PWC, Qualcomm, SAP, Samsung, Sopra Steria, T-Mobile, and many more.
WHY THIS MATTERS IN BRIEF
There is no doubt that AI is a powerful tool that can help improve outcomes, but using it to rank and rate individuals and organisations is still seen by many as a step too far.
Artificial Intelligence (AI), it seems, is quickly making its way into almost every corner of our lives, but now a plan to use AI to help identify failing schools in the UK has been criticised by the country's National Association of Head Teachers (NAHT).
Over the past year a data science unit, which is part-owned by the UK government, has been using machine learning to train algorithms that rate schools, with the aim of helping the UK's education watchdog Ofsted prioritise school inspections. The NAHT, however, has said that "effective inspection of schools should not be based on data."
"We need to move away from a data-led approach to school inspection," they said in a statement, adding, "it's important that the whole process is transparent and that schools can understand and learn from any assessment. Leaders and teachers need absolute confidence that the inspection system will treat teachers and leaders fairly."
Given the black-box nature of AI and algorithms, whose decision making is often opaque, something that several companies, including Google and Nvidia, as well as several universities, including MIT and Columbia University, are trying to solve, it could be argued the union has a point.
“When it’s in the field it will be used to prioritise which schools should be inspected and we’re hoping to work with Ofsted over the next 12 months to improve the algorithm and tailor it to suit that purpose,” said lead author of the report Michael Sanders.
The data used to train the AI includes past Ofsted inspections, other data from schools and census information, all of which is publicly available, and it’s also been analysing responses about individual schools that have been provided by parents via Ofsted’s Parent View review system.
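To make the idea concrete, here is a purely illustrative sketch of how a risk-scoring model might rank schools for inspection. The feature names, weights, and scoring function below are all hypothetical assumptions for illustration; the actual model, its inputs, and its parameters have not been published.

```python
import math

# Hypothetical learned weights for three illustrative features:
# the school's previous inspection grade (1 = outstanding .. 4 = inadequate),
# the share of negative Parent View responses, and the pupil-teacher ratio.
# These numbers are invented for the example, not taken from the real system.
WEIGHTS = {"last_grade": 0.9, "negative_reviews": 2.0, "pupil_teacher_ratio": 0.05}
BIAS = -3.0

def risk_score(school):
    """Logistic score in (0, 1): higher means higher inspection priority."""
    z = BIAS + sum(WEIGHTS[k] * school[k] for k in WEIGHTS)
    return 1 / (1 + math.exp(-z))

def prioritise(schools):
    """Order schools from highest to lowest predicted risk."""
    return sorted(schools, key=risk_score, reverse=True)

schools = [
    {"name": "A", "last_grade": 1, "negative_reviews": 0.05, "pupil_teacher_ratio": 18},
    {"name": "B", "last_grade": 4, "negative_reviews": 0.40, "pupil_teacher_ratio": 25},
    {"name": "C", "last_grade": 2, "negative_reviews": 0.15, "pupil_teacher_ratio": 20},
]
ranking = [s["name"] for s in prioritise(schools)]
print(ranking)  # the previously "inadequate" school B comes out on top
```

The key design point this illustrates is that such a system outputs a priority ordering rather than a verdict: inspectors still make the judgement, the model only suggests where to look first.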
However, for all the NAHT's calls for transparency, something that I agree with, it turns out that the data produced by the algorithm won't be shared with any of the schools, and Sanders says it wouldn't be helpful to do so.
"If we chased down the findings of the algorithms and offered five things that would make a school better, that would be disingenuous," he said. "Ofsted inspectors who do holistic inspections are in a much better place to provide schools with advice."
Currently the algorithms are designed purely as a tool to help Ofsted and its inspectors, but Sanders acknowledges there could be additional applications in the future.
“Predictive grades for GCSEs are based on teachers’ judgements, but there is research that suggests they aren’t all that accurate,” he said, “and using data to give a better picture [of a school] might be a better way to help young people [with] their education, but any additional applications like these would first require ethical and practical oversight.”
When we look at the benefits that AI is bringing to other industries and fields, such as healthcare, it's hard to argue against its use as a tool to help improve a country's education system. Naturally, though, there will be resistance, as well as pros and cons, so governments must tread carefully.