Matthew Griffin, described as “The Adviser behind the Advisers” and a “Young Kurzweil,” is the founder and CEO of the World Futures Forum and the 311 Institute, a global Futures and Deep Futures consultancy working between 2020 and 2070, and is an award-winning futurist and author of the “Codex of the Future” series. Regularly featured in the global media, including AP, BBC, Bloomberg, CNBC, Discovery, RT, Viacom, and WIRED, Matthew’s ability to identify, track, and explain the impacts of hundreds of revolutionary emerging technologies on global culture, industry, and society is unparalleled. Recognised for the past six years as one of the world’s foremost futurists and innovation and strategy experts, Matthew is an international speaker who helps governments, investors, multi-nationals, and regulators around the world envision, build, and lead an inclusive, sustainable future. A rare talent, Matthew’s recent work includes mentoring Lunar XPrize teams, re-envisioning global education and training with the G20, and helping the world’s largest organisations envision and ideate the future of their products and services, industries, and countries. Matthew’s clients include three Prime Ministers and several governments, as well as the G7, Accenture, Aon, Bain & Co, BCG, Credit Suisse, Dell EMC, Dentons, Deloitte, E&Y, GEMS, Huawei, JPMorgan Chase, KPMG, Lego, McKinsey, PWC, Qualcomm, SAP, Samsung, Sopra Steria, T-Mobile, and many more.
WHY THIS MATTERS IN BRIEF
The future of work is an increasingly important topic, particularly as the spectre of AI-driven automation looms, and the ability to read employees’ emotions and thoughts fundamentally changes the employer-employee relationship.
Today I have the tech to stream images, movies and sentences directly from your mind… I even have a couple of videos of it that I use in my keynotes. Last year the UK bank RBS started reading the minds of its interviewees using Brain Machine Interface (BMI) tech, but this week the tech took another step into the limelight when it was announced that workers in China are being hooked up with brain-reading devices that feed information about their moods at work directly back to their employers. While this technology can be used for good, namely to help improve employees’ working conditions and well-being, it is also understandably raising significant privacy concerns. Coming on the back of China’s recent Social Credit Scoring development, where people with a low social credit score were banned from trains and planes, it simply adds to many people’s worries in China that some important societal boundary lines are being crossed – let the debate rage…
Electronic sensors from Deayea Technology that fit into hats and helmets are being used in China on an “unprecedented” scale to read employees’ emotions, the South China Morning Post reports, in what firms say is part of a drive to increase efficiency and productivity. But the efforts to tap into the data are sparking concerns that powerful companies are reading the minds of their employees, with one Chinese psychology professor warning that the systems could represent a “whole new level” of privacy abuse.
Although details about how the technology works are not clear, reports suggest devices use lightweight sensors and Artificial Intelligence (AI) algorithms to monitor brainwaves and detect spikes in emotions such as rage, anxiety and depression. They can be concealed in safety helmets or uniform hats, and stream data to computers accessed by employers.
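The article does not disclose how Deayea’s system actually classifies emotions, but a common approach in EEG research is to extract frequency-band power features from the brainwave signal and flag changes in arousal from band ratios. The sketch below is purely illustrative under that assumption: the `stress_flag` function, the sampling rate, and the beta/alpha threshold are all hypothetical stand-ins, not details of the deployed technology.

```python
import numpy as np

# Illustrative sketch only: one textbook way to derive an "arousal" signal
# from EEG - compare power in the beta band (alert/anxious rhythms) against
# the alpha band (relaxed rhythms). Threshold and sampling rate are assumed.

FS = 256  # sampling rate in Hz (assumed)

def band_power(signal, fs, low, high):
    """Mean spectral power of `signal` within the [low, high) Hz band."""
    freqs = np.fft.rfftfreq(len(signal), d=1.0 / fs)
    psd = np.abs(np.fft.rfft(signal)) ** 2
    mask = (freqs >= low) & (freqs < high)
    return psd[mask].mean()

def stress_flag(signal, fs=FS, threshold=1.5):
    """Flag a window as 'stressed' when beta power dominates alpha power.

    The beta/alpha ratio is a widely used, if crude, arousal proxy;
    the 1.5 threshold is arbitrary and for illustration only.
    """
    alpha = band_power(signal, fs, 8, 13)   # alpha band: 8-13 Hz
    beta = band_power(signal, fs, 13, 30)   # beta band: 13-30 Hz
    return (beta / alpha) > threshold

# Synthetic one-second windows: a "calm" trace dominated by a 10 Hz alpha
# wave, and a "tense" trace dominated by a 20 Hz beta wave.
t = np.arange(FS) / FS
calm = np.sin(2 * np.pi * 10 * t) + 0.2 * np.sin(2 * np.pi * 20 * t)
tense = 0.2 * np.sin(2 * np.pi * 10 * t) + np.sin(2 * np.pi * 20 * t)

print(stress_flag(calm))   # False
print(stress_flag(tense))  # True
```

A production system would of course use trained classifiers over many channels and artifact-cleaned data, but the pipeline shape — sensor stream in, per-window features, per-window emotion label out to the employer’s dashboard — is the part that raises the privacy questions discussed below.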
The Post reports that the technology is government-backed and is known to be used in the electronic equipment, electric power supply and telecommunications industries, plus in the military in China.
Some companies told the Post that they use them to monitor workers’ stress levels so they can adjust their shift patterns and breaks accordingly, increasing efficiency and potentially improving the livelihood of their staff.
Cheng Jingzhou, an official in charge of the monitoring system at the State Grid Zhejiang Electric Power in Hangzhou, in the eastern Zhejiang province, said the system had been hugely successful.
Cheng told the SCMP that his company, which has 40,000 employees, saw its profits boosted by around two billion yuan (£230 million) after the system was rolled out in 2014.
“There is no doubt about its effect,” he said.
Workers using the brain-scanning equipment have expressed concern that it represents an even more invasive privacy breach. Jia Jia, associate professor of brain science and cognitive psychology at Ningbo University, where one of the brain surveillance projects took place, said that some users “thought we could read their mind. This caused some discomfort and resistance in the beginning”.
“After a while they got used to the device. It looked and felt just like a safety helmet. They wore it all day at work,” she added.
Qiao Zhian, professor of management psychology at Beijing Normal University, said that the lack of regulation for the technology put users’ privacy at great risk.
“The employer may have a strong incentive to use the technology for higher profit, and the employees are usually in too weak a position to say no,” he said, before adding, “the selling of Facebook data is bad enough. Brain surveillance can take privacy abuse to a whole new level.”
The reports come amid a wider climate of Chinese authorities meddling in private data. Chinese messaging apps and social media sites such as WeChat and Weibo, which comply with government censorship and security demands, are widely viewed as being insecure. This month the ruling communist party’s anti-corruption watchdog revealed that it can access the deleted private WeChat messages of people it investigates, and punishment for posting material the party disapproves of in private messages is becoming increasingly common.