Matthew Griffin, described as “The Adviser behind the Advisers” and a “Young Kurzweil,” is the founder and CEO of the World Futures Forum and the 311 Institute, a global Futures and Deep Futures consultancy working between 2020 and 2070, and is an award-winning futurist and author of the “Codex of the Future” series. Regularly featured in the global media, including AP, BBC, Bloomberg, CNBC, Discovery, RT, Viacom, and WIRED, Matthew’s ability to identify, track, and explain the impacts of hundreds of revolutionary emerging technologies on global culture, industry, and society is unparalleled. Recognised for the past six years as one of the world’s foremost futurists and innovation and strategy experts, Matthew is an international speaker who helps governments, investors, multi-nationals, and regulators around the world envision, build, and lead an inclusive, sustainable future. A rare talent, Matthew’s recent work includes mentoring Lunar XPrize teams, re-envisioning global education and training with the G20, and helping the world’s largest organisations envision and ideate the future of their products and services, industries, and countries. Matthew's clients include three Prime Ministers and several governments, including the G7, Accenture, Aon, Bain & Co, BCG, Credit Suisse, Dell EMC, Dentons, Deloitte, E&Y, GEMS, Huawei, JPMorgan Chase, KPMG, Lego, McKinsey, PWC, Qualcomm, SAP, Samsung, Sopra Steria, T-Mobile, and many more.
WHY THIS MATTERS IN BRIEF
Today your boss is human, probably, but tomorrow he or she could be an Artificial Intelligence.
“You’re fired!” So said Donald Trump during his days heading The Apprentice. Now, though, as companies become more automated and more reliant on data and technology, more of them are using technologies like Artificial Intelligence (AI) to hire, manage, monitor, and fire human staff without characters like Donald, who now has better things to do, needing to get involved. And at some level some of these AIs are also starting to run the companies themselves as well … but that’s another story for another post.
In an age of remote working especially, new tools are being peddled that enable employers to monitor staff activity, enabling a top-down, metric-based view of performance. No longer just a virtual “peek over the shoulder,” as it would have been in the olden days, these tools can generate data that, in some cases, feeds automated decision-making systems that can, say, award a performance bonus in the form of a $25 voucher, or fire someone on the spot.
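To make the mechanism concrete, here is a minimal, purely illustrative sketch of the kind of metric-threshold decision logic such systems embody. Every name, metric, and threshold below is an assumption invented for illustration, not any real employer’s system:

```python
# Hypothetical sketch of a metric-driven "automated management" rule:
# a decision is produced from tracked numbers alone, with no human
# review -- exactly the pattern the article describes.
from dataclasses import dataclass


@dataclass
class WorkerMetrics:
    worker_id: str
    tasks_per_hour: float  # measured productivity
    idle_minutes: float    # "time off task" logged by monitoring software


def automated_decision(m: WorkerMetrics,
                       bonus_rate: float = 30.0,
                       termination_rate: float = 10.0,
                       max_idle: float = 60.0) -> str:
    """Return an action based purely on tracked metrics (illustrative thresholds)."""
    if m.tasks_per_hour >= bonus_rate and m.idle_minutes <= max_idle:
        return "award_bonus"           # e.g. the $25 voucher mentioned above
    if m.tasks_per_hour < termination_rate:
        return "flag_for_termination"  # fired "on the spot", no human in the loop
    return "no_action"


print(automated_decision(WorkerMetrics("w1", 35.0, 20.0)))  # award_bonus
print(automated_decision(WorkerMetrics("w2", 5.0, 90.0)))   # flag_for_termination
```

The point of the sketch is how little context such a rule sees: a handful of numbers stand in for a person’s working day, and the output is consequential anyway.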
For better or worse, performance monitoring and surveillance have the potential to drive a rift into the power dynamics between workers and employers – a report by Clutch found that just 10 percent of workers would trust their employer’s use of performance tracking and monitoring software. And that’s before we get anywhere near the subject of what happens when AI bias creeps into these systems …
For years now, retail juggernaut Amazon has been notorious for using data and a stringent set of metrics to not only track the performance of workers in its warehouses but to automatically terminate their employment if they fail to meet certain standards.
In one facility in Baltimore, an investigation found that approximately 300 full time workers were fired for inefficiency between 2017 and 2018 – a figure that represents 10 percent of the factory’s overall workforce.
Now, in the first legal challenge of its kind, a group of former Uber drivers from the UK and Portugal is set to sue the ride-hailing company for using an algorithm to automatically fire them.
The recently highlighted automated dismissals come as a test of the protections of GDPR Article 22. The regulation, which came into force in 2018, imposes obligations on companies that collect people’s personal information and applies to all EU consumers regardless of where the company is located.
Considered the biggest case on Article 22 of the GDPR to have reached the courts, the ruling could set an unprecedented outcome following thousands of complaints from Uber drivers all over the world who say they were dismissed automatically, without explanation, for allegedly committing fraud.
The case has been backed by both the International Alliance of App-based Transport Workers (IAATW), and the App Drivers and Couriers Union (ADCU), a trade union for app-based workers in the gig economy.
“Uber has been allowed to violate employment law with impunity for years and now we are seeing a glimpse into an Orwellian world of work where workers have no rights and are managed by a machine. If Uber is not checked, this practice will become the norm for everyone,” said ADCU president Yaseen Aslam.
In response to the claims, however, Uber has said that the drivers’ accounts had been deactivated following a manual review by humans.
“Uber provides requested personal data and information that individuals are entitled to. We will give explanations when we cannot provide certain data, such as when it doesn’t exist or disclosing it would infringe on the rights of another person under GDPR. As part of our regular processes, the drivers, in this case, were only deactivated after manual reviews by our specialist team,” said an Uber spokesperson.
The provisions of Article 22 have been debated for years; however, the right to an explanation of why an automated decision was made could give people more power to challenge it if it is incorrect.
“This case demonstrates the impact of automated decision making for the millions of people that are working in the platform economy. For the first time, Uber drivers are challenging these decisions based on the GDPR,” said the four drivers’ lawyer, Anton Ekker.
A study published in 2017 by Prof Lilian Edwards and Dr. Michael Veale of University College London, called “Slave to the Algorithm? Why a ‘Right to an Explanation’ is Probably Not the Remedy You Are Looking For,” argues that the legal conception of explanations as “meaningful information about the logic of processing” may not be satisfied by the kind of machine learning “explanations” computer scientists have developed in response.
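To see the gap the paper points at, consider a deliberately simplified, assumed example (not taken from the paper or from Uber): a linear fraud-score model can readily report its per-feature contributions, which is literally “the logic of processing,” yet still tells the affected driver very little:

```python
# Illustrative toy fraud-scoring model. All feature names, weights, and
# the threshold are invented assumptions for demonstration only.
WEIGHTS = {"cancelled_trips": 0.8, "gps_anomalies": 1.5, "refund_requests": 0.6}
THRESHOLD = 3.0


def fraud_score(features: dict) -> tuple[float, dict]:
    """Return the total score plus per-feature contributions (the 'explanation')."""
    contributions = {k: WEIGHTS[k] * v for k, v in features.items()}
    return sum(contributions.values()), contributions


score, why = fraud_score({"cancelled_trips": 2, "gps_anomalies": 1, "refund_requests": 1})
flagged = score >= THRESHOLD
print(flagged, why)
# The contribution list is a complete account of the model's arithmetic,
# yet it doesn't say what behaviour counts as fraud, how the weights were
# chosen, or what the driver could do differently -- the mismatch between
# technical and "meaningful" explanations the study describes.
```

The design point: transparency about a model’s internals and an explanation a person can actually contest are not the same thing, which is why the paper’s authors are sceptical that a bare “right to an explanation” is the remedy.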
While companies everywhere are racing to benefit from new technologies like AI, this latest news underscores just how important it is to remember the dangers of relying too heavily on performance monitoring tech, and more specifically what dangers lie in wait for us all as more of our livelihoods and futures are monitored, managed, and impacted by the ubiquitous black-box algorithms that “empower” our increasingly algorithmic society. And then there’s the impact these increasingly impersonal, data-driven decisions have on workers’ daily lives and on employer-employee trust which, if things go too far, can damage the reputation and integrity of the companies using them. But that’s another story for another time.