
Coup prediction algorithms are here to save and wreck democracy


WHY THIS MATTERS IN BRIEF

Being able to predict people’s behaviour before it happens can be useful, but it can also be dangerous in the wrong government’s hands.

Love the Exponential Future? Join our XPotential Community, future-proof yourself with courses from XPotential University, read about exponential tech and trends, connect, watch a keynote, or browse my blog.

Having just marked the one-year anniversary of America’s Capitol riots, which almost destabilized the nation, many Americans are probably wondering how we can prevent such a terrible, violent event from ever happening again. Well, according to the Washington Post, those in the data science community believe they may have a solution.

While DARPA, the US military’s bleeding-edge research arm, is creating an Artificial Intelligence (AI) called KAIROS that can “see” chaos and predict its impact on society and terrorist activity, for example, other data researchers elsewhere are hard at work on something called “unrest prediction” – an effort to use algorithms to figure out when and where violence may break out in a given nation or community. And, unsurprisingly, it has dystopian overtones.

Key to this effort are organizations like CoupCast, a project at the University of Central Florida that uses a combination of historical data and machine learning to analyze the likelihood that a violent transition of power will take place in one country or another in any given month. According to Clayton Besaw, who helps run CoupCast, these forecasting models have traditionally been aimed at foreign countries but, unfortunately, America is looking more and more like a reasonable candidate for just such an event.

“It’s pretty clear from the model we’re heading into a period where we’re more at risk for sustained political violence – the building blocks are there,” said Besaw, speaking with the Post.
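
To make that idea a little more concrete, here is a rough sketch of what a CoupCast-style country-month forecast could look like in Python: a handful of historical features per country per month, fed into a simple classifier that spits out a probability of a coup attempt in the coming month. To be clear, the features, numbers and model choice below are purely illustrative assumptions on my part, not CoupCast’s actual data or methodology.

```python
# A purely illustrative sketch of a CoupCast-style country-month forecast.
# The features, training data, and model choice are my own assumptions and
# do NOT reflect CoupCast's actual data or methodology.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

# Hypothetical features for each country-month:
# [years_since_last_coup, gdp_growth_pct, polarization_index, months_since_election]
X_train = np.array([
    [2.0,  -1.5, 0.9,  3],
    [30.0,  2.1, 0.2, 18],
    [1.0,  -3.0, 0.8,  2],
    [25.0,  1.4, 0.3, 30],
    [4.0,  -0.5, 0.7,  6],
    [40.0,  3.0, 0.1, 24],
])
# 1 = a coup attempt followed in the next month, 0 = it did not.
y_train = np.array([1, 0, 1, 0, 1, 0])

# Standardise the features, then fit a simple logistic regression classifier.
model = make_pipeline(StandardScaler(), LogisticRegression())
model.fit(X_train, y_train)

# Score a new country-month and read off the probability of a coup attempt.
this_month = np.array([[3.0, -2.0, 0.85, 4]])
risk = model.predict_proba(this_month)[0, 1]
print(f"Estimated coup risk next month: {risk:.1%}")
```

In reality these systems juggle far more variables and far messier data, but the basic shape – features in, probability out – is the same.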

While this may all sound very novel, efforts to use data to predict unrest aren’t particularly new. They generally involve gathering immense amounts of information about specific populations and then feeding it into projection models. The real question isn’t how it all works but rather: “Does it actually work?” and “Do we really want it to?”

As far back as 2007 DARPA was working on the Integrated Crisis Early Warning System (ICEWS) – a data-driven program meant to predict social unrest in countries around the world. Produced with the help of researchers from Harvard University and Lockheed Martin, the program claimed to have created forecasting models for a majority of the world’s nations and could supposedly produce “highly accurate forecasts” as to whether a country would, say, witness a deadly riot or not. The program worked by feeding huge troves of open-source data – such as regional news stories – into its system, which would then use that data to calculate the likelihood of some sort of regional incident.
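
The front end of that pipeline – turning piles of open-source news into a usable signal – can be illustrated with a toy example. The keyword list and scoring below are a stand-in of my own; ICEWS’s real event coding is far more sophisticated.

```python
# A toy stand-in for the "open-source news in, risk signal out" step.
# ICEWS's real event coding is far more sophisticated; the keyword list
# and scoring here are invented purely for illustration.
UNREST_TERMS = {"protest", "riot", "clash", "strike", "coup", "mutiny"}

def unrest_signal(news_items: list[str]) -> float:
    """Fraction of news items that mention at least one unrest-related term."""
    if not news_items:
        return 0.0
    hits = sum(
        any(term in item.lower() for term in UNREST_TERMS)
        for item in news_items
    )
    return hits / len(news_items)

week_of_headlines = [
    "Opposition calls nationwide strike over fuel prices",
    "Central bank holds interest rates steady",
    "Police clash with protesters in the capital",
]
print(f"Unrest signal this week: {unrest_signal(week_of_headlines):.2f}")
```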

“The secret sauce in all of this is the fact that we use what’s called a mixed model approach,” said Mark Hoffman, senior manager at the Lockheed Martin Advanced Technology Laboratories, during a 2015 interview with Signal Magazine. “For any one event, say, a rebellion in Indonesia, we will turn around and have five models that are forecasting whether that’s going to happen.” According to Hoffman, the program was eventually adopted by “various parts of the government” (read: the intelligence community) and also attracted interest from “the insurance, real estate and transportation industries.”
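
Hoffman’s “mixed model approach” is essentially what machine learning people would call an ensemble: several independent models score the same question and their answers get combined. Here is a minimal sketch of that idea – the model names, probabilities and equal weighting are all invented for illustration and have nothing to do with ICEWS’s actual internals.

```python
# A minimal "mixed model" (ensemble) sketch: several models forecast the
# same event and their probabilities are combined into one number.
# The model names, probabilities, and weights are all invented for
# illustration and have nothing to do with ICEWS's actual internals.

def combine_forecasts(forecasts: dict[str, float], weights: dict[str, float]) -> float:
    """Weighted average of per-model event probabilities."""
    total_weight = sum(weights[name] for name in forecasts)
    return sum(prob * weights[name] for name, prob in forecasts.items()) / total_weight

# Five hypothetical models all scoring the same question:
# "Will country X see a rebellion event next quarter?"
forecasts = {
    "news_volume_model": 0.32,
    "economic_stress_model": 0.18,
    "elite_instability_model": 0.41,
    "historical_base_rate": 0.12,
    "social_media_model": 0.27,
}
# In practice the weights might come from each model's track record (backtesting);
# here they are simply equal.
weights = {name: 1.0 for name in forecasts}

print(f"Combined event probability: {combine_forecasts(forecasts, weights):.0%}")
```

The appeal of combining models this way is that no single model’s blind spot dominates the final forecast.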

Around the time ICEWS was in development, work was also being done on the EMBERS Project, a large data program launched in 2012 (once again with federal tax dollars) that uses gargantuan caches of open-source social media data to enable threat forecasting. According to a Newsweek article from 2015, “an average of 80 to 90 percent of the forecasts” EMBERS generates have “turned out to be accurate.” The algorithm was allegedly so good at its job that it predicted events like the 2012 impeachment of Paraguay’s president, an outbreak of violent student protests in Venezuela in 2014, and the 2013 protests in Brazil over the cost of the World Cup.

If you believe these claims, it’s truly stunning stuff, but it also inspires a pretty basic question: Uh, what the hell happened last year, guys? If this kind of algorithmic prediction exists – and is readily available (indeed, there’s currently an entire market devoted to it) – why didn’t anybody in the US intelligence community foresee a riot that was blatantly advertised all over Facebook and Twitter? If it’s so accurate, why wasn’t anyone using it on that fateful day in January? We have a word for that kind of technical fumble and it’s, uh… not “intelligence.”

According to the Post article, one thing that could explain this historic fumble is that most of these programs and products have been aimed at forecasting events in other countries – the ones that might pose a strategic threat to US interests overseas. They haven’t, for the most part, been turned inward on Americans.

On one hand, it feels like a good thing that these sorts of predictive powers aren’t being broadly aimed at us, because there’s a lot we still don’t know about how well they do or don’t work. Beyond the potential slippery slope of civil liberty violations this kind of algorithmic surveillance could spark, the most obvious concern with this sort of forecasting technology is that the algorithms might be wrong – and that they would send governments off to respond to things that were never going to happen in the first place. As the Post points out, this could lead to governments cracking down on people who would otherwise have just been peaceful protesters.

However, an even more concerning issue might be this: what if the algorithms are right? Isn’t it just as creepy to imagine governments using immense amounts of data to accurately calculate how populations will behave two weeks in advance? That puts us firmly in the territory of Minority Report and pre-crime technology, which is also a growing area. Either way, we probably need to think a little more about this kind of technology before we let it out of the barn.
