
Researchers taught an AI to successfully detect people’s moral principles

WHY THIS MATTERS IN BRIEF

Being able to determine a person's moral principles lays the foundation for a whole new way to pre-approve, censor, and vet people – and it's as dangerous as it is interesting.

 

Love the Exponential Future? Join our XPotential Community, future-proof yourself with courses from XPotential University, connect, watch a keynote, or browse my blog.

Today we can use Artificial Intelligence (AI) to gauge someone's character, health, personality, and trustworthiness, as well as to determine whether they'd be a good hire or be likely to commit a crime, just by analyzing a mixture of biometric cues. Now though, in another giant leap forward for the technology, researchers in Brazil say they've found a way to use AI to gauge your moral principles.

 

The death penalty, abortion, gun legislation: there's no shortage of controversial topics hotly debated on social media today. These topics are so important to us because they touch on an essential underlying force that makes us human: our morality.

To achieve their breakthrough, the team say they developed and analyzed three models that can describe the morality of individuals based on the language they use. The results were published last month in IEEE Transactions on Affective Computing.

Ivandré Paraboni, an associate professor at the School of Arts, Sciences and Humanities at the University of São Paulo, led the study. His team chose to focus on a theory commonly used by social scientists called Moral Foundations Theory, which postulates several key categories of morality, including care, fairness, loyalty, authority, and purity.

 

The aim of the new models, according to Paraboni, is to infer a person's values on those five moral foundations just by looking at their writing, regardless of what they are writing about.

“They may be talking about their everyday life, or about whatever they talk about on social media,” Paraboni says. “And we may still find underlying patterns that are revealing of their five moral foundations.”

To develop and validate the models, Paraboni's team gave questionnaires to more than 500 volunteers. Participants were asked to rate eight topics (e.g., same-sex marriage, gun ownership, drug policy) with sentiment scores, from 0 ('totally against') to 5 ('totally in favor'), and to write out explanations of their ratings.
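
To make the setup concrete, here is a minimal sketch, in Python, of what one record in such a dataset might look like; the field names and example values are illustrative assumptions, not the study's actual schema.

```python
# One participant's response to one topic. Field names and values here are
# hypothetical; the study's actual data format may differ.
response = {
    "participant_id": 17,
    "topic": "gun ownership",
    "sentiment": 2,  # 0 = 'totally against' ... 5 = 'totally in favor'
    "explanation": (
        "People should be free to defend their families, but only with "
        "proper training and background checks."
    ),
}

# The modelling task: predict the author's five moral foundation scores
# (care, fairness, loyalty, authority, purity) from the free text alone.
print(response["explanation"])
```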

 

Human judges then gave their own ratings to a subset of the participants' explanations. The exercise determined how well humans could infer the intended opinions from the text alone.

“Knowing the complexity of the task from a human perspective in this way gave us a more realistic view of what the computational models can or cannot do with this particular dataset,” says Paraboni.

Using the text opinions from the study participants, the research team created three machine learning models that assess the language used in each participant's statement. The three analyzed, respectively, psycholinguistic features such as the emotional context of words, the words themselves, and word sequences.
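
As a rough illustration of the word-based approach, the sketch below trains a bag-of-words regressor to predict the five foundation scores from text with scikit-learn. The pipeline, features, training examples, and hyperparameters are assumptions for illustration, not the paper's actual models.

```python
# A minimal sketch of a word-based model, assuming bag-of-words features
# and ridge regression; the study's actual models may differ.
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.linear_model import Ridge
from sklearn.multioutput import MultiOutputRegressor
from sklearn.pipeline import make_pipeline

FOUNDATIONS = ["care", "fairness", "loyalty", "authority", "purity"]

# texts: one opinion statement per participant (toy examples).
texts = [
    "Everyone deserves protection from harm, full stop.",
    "Breaking the law dishonors your community and your country.",
]
# scores: per participant, five questionnaire scores, one per foundation.
scores = [
    [4.5, 4.0, 2.0, 1.5, 2.0],  # leans care/fairness
    [2.0, 2.5, 4.5, 4.0, 3.0],  # leans loyalty/authority
]

model = make_pipeline(
    CountVectorizer(lowercase=True),          # word-count features
    MultiOutputRegressor(Ridge(alpha=1.0)),   # one regressor per foundation
)
model.fit(texts, scores)

predicted = model.predict(["We have a duty to respect our elders."])
for name, value in zip(FOUNDATIONS, predicted[0]):
    print(f"{name}: {value:.2f}")
```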

All three models were able to infer an individual’s moral foundations from the text. The first two models, which focus on individual words used by the author, were more accurate than the deep learning approach that analyzes word sequences.

 

Paraboni adds, “Word counts such as how often an individual uses words like ‘sin’ or ‘duty’ turned out to be highly revealing of their moral foundations, that is, predicting with higher accuracy their degrees of care, fairness, loyalty, authority, and purity.”
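
As a toy version of that idea, the sketch below counts occurrences of a few hand-picked cue words per foundation. The word lists are invented for illustration; real lexicons used in this kind of work, such as the Moral Foundations Dictionary, run to hundreds of entries per foundation.

```python
# Toy word-count features per moral foundation. The cue words below are
# illustrative assumptions, not the study's lexicon.
import re
from collections import Counter

CUE_WORDS = {
    "care":      {"harm", "protect", "suffer", "compassion"},
    "fairness":  {"fair", "equal", "rights", "justice"},
    "loyalty":   {"loyal", "betray", "community", "nation"},
    "authority": {"duty", "obey", "respect", "tradition"},
    "purity":    {"sin", "pure", "sacred", "disgust"},
}

def foundation_counts(text: str) -> dict:
    """Count how many cue words from each foundation appear in the text."""
    tokens = Counter(re.findall(r"[a-z']+", text.lower()))
    return {
        foundation: sum(tokens[word] for word in words)
        for foundation, words in CUE_WORDS.items()
    }

print(foundation_counts("It is our duty to protect the sacred rights of all."))
# {'care': 1, 'fairness': 1, 'loyalty': 0, 'authority': 1, 'purity': 1}
```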

He says his team plans to incorporate other forms of linguistic analysis into their models to improve accuracy. They're also exploring models that focus more on the text itself, independent of the author, as a way to analyze Twitter data – which could then be used to help companies and governments identify extremist and radical content, as well as misinformation campaigns.
