Matthew Griffin, award-winning Futurist and Founder of the 311 Institute, a global futures think tank working between the dates of 2020 and 2070, is described as "The Adviser behind the Advisers." Regularly featured on AP, CNBC, Discovery and RT, his ability to identify, track, and explain the impacts of hundreds of revolutionary emerging technologies on global culture, industry and society is unparalleled. Recognised for the past five years as one of the world's foremost futurist, innovation and strategy experts, Matthew is an international speaker who helps governments, investors, multi-nationals and regulators around the world envision, build and lead an inclusive future. A rare talent, Matthew sits on the Technology and Innovation Committee (TIAC) for Centrica, Europe's largest utility company, and his recent work includes mentoring XPrize teams, building the first generation of biocomputers, re-inventing global education, and helping the world's largest manufacturers envision, design and build the next 20 years of devices, smartphones and intelligent machines. Matthew's clients are the who's who of industry and include Accenture, Bain & Co, BCG, BOA, Blackrock, Bentley, Credit Suisse, Dell EMC, Dentons, Deloitte, Du Pont, E&Y, HPE, Huawei, JPMorgan Chase, KPMG, McKinsey, PWC, Qualcomm, SAP, Samsung, Sopra Steria, UBS, the USAF and many others.
WHY THIS MATTERS IN BRIEF
There is no doubt that AI is a powerful tool that can help improve outcomes, but using it to rank and rate individuals and organisations is still seen by many as a step too far.
Artificial Intelligence (AI) is quickly, it seems, making its way into almost every corner of our lives, but now a plan to use AI to help identify failing schools in the UK has been criticised by the country's National Association of Head Teachers (NAHT).
Over the past year a data science unit, which is part-owned by the UK government, has been training algorithms to rate schools using machine learning. These ratings will be used to help the UK's education watchdog Ofsted prioritise school inspections, but the NAHT has said that "effective inspection of schools should not be based on data."
“We need to move away from a data led approach to school inspection,” they said in a statement, adding “it’s important that the whole process is transparent and that schools can understand and learn from any assessment. Leaders and teachers need absolute confidence that the inspection system will treat teachers and leaders fairly.”
Given the black box nature of AI and algorithms, where their decision making is often opaque, something that several companies, including Google and Nvidia, as well as several universities, including MIT and Columbia University, are trying to solve, it could be argued the union has a point.
“When it’s in the field it will be used to prioritise which schools should be inspected and we’re hoping to work with Ofsted over the next 12 months to improve the algorithm and tailor it to suit that purpose,” said lead author of the report Michael Sanders.
The data used to train the AI includes past Ofsted inspections, other data from schools and census information, all of which is publicly available, and it’s also been analysing responses about individual schools that have been provided by parents via Ofsted’s Parent View review system.
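To make the mechanism concrete, here is a purely illustrative sketch of how such a system might rank schools for inspection: a tiny logistic regression trained on historical outcomes, then used to sort schools by predicted risk. The features, weights and school names below are all hypothetical assumptions for illustration; the actual data and model used by the unit have not been published.

```python
import math

# Hypothetical features per school (NOT the real Ofsted features):
# [prior inspection grade (1 = outstanding .. 4 = inadequate),
#  year-on-year change in attainment,
#  average Parent View score (1-5)]

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

def train(X, y, lr=0.1, epochs=2000):
    """Fit logistic-regression weights with plain gradient descent."""
    w = [0.0] * len(X[0])
    b = 0.0
    for _ in range(epochs):
        for xi, yi in zip(X, y):
            p = sigmoid(sum(wj * xj for wj, xj in zip(w, xi)) + b)
            err = p - yi  # gradient of the log loss w.r.t. the logit
            w = [wj - lr * err * xj for wj, xj in zip(w, xi)]
            b -= lr * err
    return w, b

def risk(w, b, x):
    """Predicted probability that a school would be rated inadequate."""
    return sigmoid(sum(wj * xj for wj, xj in zip(w, x)) + b)

# Toy training set: label 1 = rated inadequate at the last inspection.
X = [[4, -0.3, 2.1], [1, 0.2, 4.8], [3, -0.1, 3.0],
     [2, 0.1, 4.0], [4, -0.2, 2.5], [1, 0.0, 4.5]]
y = [1, 0, 1, 0, 1, 0]

w, b = train(X, y)

# Rank unseen (made-up) schools by risk so inspectors can prioritise visits.
schools = {"School A": [4, -0.25, 2.2],
           "School B": [1, 0.15, 4.6],
           "School C": [2, 0.0, 3.5]}
ranked = sorted(schools, key=lambda s: risk(w, b, schools[s]), reverse=True)
print(ranked)  # highest-risk school first
```

Note that this is exactly the kind of model the NAHT's transparency complaint applies to: the ranking it produces is only as explainable as the weights behind it.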
However, for all the calls from the NAHT for transparency, something that I agree with, it turns out that the data produced by the algorithm won’t be shared with any of the schools, and Sanders says it wouldn’t be helpful to do so.
“If we chased down the findings of the algorithms and offered five things that would make a school better that would be disingenuous,” he said. “Ofsted inspectors who do holistic inspections are in a much better place to provide schools with advice.”
Currently the algorithms are designed purely as a tool to help Ofsted and its inspectors but Sanders acknowledges there could be future additional applications.
“Predictive grades for GCSEs are based on teachers’ judgements, but there is research that suggests they aren’t all that accurate,” he said, “and using data to give a better picture [of a school] might be a better way to help young people [with] their education, but any additional applications like these would first require ethical and practical oversight.”
When we look at the benefits that AI is bringing to other industries and fields, such as healthcare, it's hard to argue against its use as a tool to help improve a country's education system. But naturally there will be resistance, as well as pros and cons to weigh, so governments must tread carefully.