Matthew Griffin, described as “The Adviser behind the Advisers” and a “Young Kurzweil,” is the founder and CEO of the 311 Institute, a global futures think tank working between the dates of 2020 to 2070, and is an award-winning futurist and author of “Codex of the Future.” Regularly featured in the global media, including AP, BBC, CNBC, Discovery, RT, and Viacom, Matthew’s ability to identify, track, and explain the impacts of hundreds of revolutionary emerging technologies on global culture, industry, and society is unparalleled. Recognised for the past six years as one of the world’s foremost futurists and innovation and strategy experts, Matthew is an international speaker who helps governments, investors, multi-nationals, and regulators around the world envision, build, and lead an inclusive, sustainable future. A rare talent, Matthew’s recent work includes mentoring several Education and Lunar XPrize teams, building the first generation of biological computers, re-envisioning global education with the G20, and helping the world’s largest conglomerates ideate the next 20 years of intelligent devices and machines. Matthew’s clients include three Prime Ministers and several governments, including the G7, as well as Accenture, Bain & Co, BCG, BOA, Blackrock, Bentley, Credit Suisse, Dell EMC, Dentons, Deloitte, Du Pont, E&Y, HPE, Huawei, JPMorgan Chase, KPMG, McKinsey, PWC, Qualcomm, SAP, Samsung, Sopra Steria, UBS, and many more.
WHY THIS MATTERS IN BRIEF
It’s nearly impossible to predict the impact that competing, or even collaborating, algorithms will have on business, and as our society depends on them more, that’s a problem.
Last year Facebook’s Artificial Intelligence (AI) agents made up their own language to talk to one another, and elsewhere Google’s AI agents learned how to encrypt their communications so they couldn’t be spied on. Now there’s another area of concern for companies that want to embed AI into their digital fabrics – AIs that collude. When you’re browsing online, who sets the prices? Most likely an algorithm. A study from 2015 showed that a third of all items on Amazon had prices set by an algorithm, and chances are that percentage has only risen since. Now a new study shows how easily price-setting algorithms could learn to collude with one another and keep prices at a level that disadvantages customers.
This sort of collusion would stem from a certain type of algorithm, the researchers say. Reinforcement learning algorithms learn through trial and error: in the simplest terms, a walking robot takes a step, falls, and tries again. The same approach has been used to teach machines to win games like Go.
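The trial-and-error loop described above can be sketched as a Q-learning update, the textbook form of reinforcement learning. Everything here – the state names, actions, and parameter values – is illustrative, not taken from the study.

```python
# Minimal sketch of a Q-learning update: the core trial-and-error rule
# behind reinforcement learning. All names and numbers are illustrative.

def q_update(q, state, action, reward, next_state, alpha=0.1, gamma=0.95):
    """Nudge the estimated value of (state, action) toward the observed
    reward plus the discounted value of the best action available next."""
    best_next = max(q[next_state].values())
    q[state][action] += alpha * (reward + gamma * best_next - q[state][action])

# Tiny worked example: one state "s", two possible actions.
q = {"s": {"low": 0.0, "high": 0.0}}
q_update(q, "s", "high", reward=1.0, next_state="s")
print(q["s"]["high"])  # the value of "high" has moved toward the reward
```

Repeated over thousands of trials, updates like this let an agent discover profitable behaviour with no model of its environment – which is exactly why nobody has to tell it what strategy to follow.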
“From the antitrust standpoint,” say professors Emilio Calvano, Giacomo Calzolari, and others from the University of Bologna in Italy, “the concern is that these autonomous pricing algorithms may independently discover that if they are to make the highest possible profit, they should avoid price wars. That is, they may learn to collude even if they have not been specifically instructed to do so, and even if they do not communicate with one another.”
To test their theory they built two AI pricing agents and let them interact with each other. They found that “even relatively simple pricing algorithms systematically learn to play sophisticated collusive strategies.”
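The setup can be imitated in miniature: two independent Q-learning agents repeatedly pick prices and each observes only its own profit. The price grid, demand model, and learning parameters below are illustrative guesses, not the study’s actual design.

```python
import random

# Toy version of the experiment: two Q-learning pricing agents that never
# communicate, each keyed by the previous round's pair of prices.
# All parameters here are illustrative assumptions.

PRICES = [1, 2, 3, 4]                 # discrete price levels
ALPHA, GAMMA, EPS = 0.1, 0.9, 0.1    # learning rate, discount, exploration

def profit(mine, theirs):
    """Cheaper seller captures more demand; a tie splits the market."""
    share = 1.0 if mine < theirs else 0.5 if mine == theirs else 0.2
    return mine * share

def choose(table, state):
    """Epsilon-greedy choice over the price grid."""
    if random.random() < EPS:
        return random.choice(PRICES)
    return max(PRICES, key=lambda p: table.get((state, p), 0.0))

random.seed(0)
tables = [{}, {}]                     # one Q-table per agent
state = (PRICES[0], PRICES[0])        # state = last round's prices
recent = []

for step in range(20000):
    p0, p1 = choose(tables[0], state), choose(tables[1], state)
    nxt = (p0, p1)
    for table, (mine, theirs) in zip(tables, [(p0, p1), (p1, p0)]):
        r = profit(mine, theirs)
        best_nxt = max(table.get((nxt, p), 0.0) for p in PRICES)
        old = table.get((state, mine), 0.0)
        table[(state, mine)] = old + ALPHA * (r + GAMMA * best_nxt - old)
    state = nxt
    if step >= 19000:
        recent.append((p0 + p1) / 2)

avg_price = sum(recent) / len(recent)
print(f"average price over the last 1,000 rounds: {avg_price:.2f}")
```

Whether a toy pair like this settles above the competitive price depends heavily on the parameters; the study’s point is that nothing in code like this instructs the agents to coordinate, yet sophisticated agents can still discover that avoiding price wars pays.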
At the beginning of their trial, the two algorithms fought against each other, driving prices down. But the price war turned out to be temporary, and prices rose again. Eventually, one of the algorithms started displaying “self-reactive behavior,” which the researchers call “a distinctive sign of genuine collusion, and it would be difficult to explain otherwise.”
On the plus side, the researchers found that the collusion was “partial—the algorithms do not converge to the monopoly price but a somewhat lower one.” On the less positive side, the collusion was stubborn and left no trace of evidence.
“They learn to collude purely by trial and error,” the researchers say, “with no prior knowledge of the environment in which they operate, without communicating with one another, and without being specifically designed or instructed to collude.”
Their sample algorithms weren’t trying to break any rules; they were simply learning the basics of capitalism through trial and error. Mistakes like that are common in algorithms – Amazon recently scrapped a recruiting algorithm that had learned to discriminate against women – and preventing price collusion would require a similar sort of watchful eye.
Source: MIT Technology Review