WHY THIS MATTERS IN BRIEF
Creative machines that are capable of generating all forms of new content can be used for good, bad, and also “fun” purposes – from writing books and fake news to cheating at school.
As I’ve said many times before, Artificial Intelligence (AI) looks set to revolutionise the education industry, whether through adaptive learning technologies, AI avatar teachers like Will from Soul Machines, tools that help lecturers automatically convert text based course notes into rich synthetic narrated videos, systems that monitor children’s attention in class and then mark their coursework, or a million other things besides.
A little while ago, when I delivered a keynote on the Future of Education to the teachers at the world’s largest private school company, GEMS, I discussed how one AI in particular, from OpenAI, could be, nay would be, used to help students cheat in the future – and that moment has arrived.
Let’s face it, quite a lot of students have probably thought about using AI to help them cheat, but until recently they’ve lacked the tools to do it. Now, though, an anonymous grad student known only as Tiago claims he used OpenAI’s GPT-2 neural network, which was recently dubbed the world’s “most dangerous AI” and which you can try for yourself, to compose multiple academic papers that evaded scrutiny from both his professors and plagiarism detection software. Tiago says he only had to write a single strategic topic sentence per paragraph and the “transformer” neural network algorithm filled in the rest for him.
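One reason generated essays can slip past plagiarism checkers is that such tools typically look for long runs of words that match existing sources, and freshly generated text contains few such runs. Here is a minimal sketch of that kind of n-gram overlap check – a hypothetical illustration of the general technique, not the method used by any specific product:

```python
def ngrams(text, n=5):
    """Return the set of n-word sequences in the text."""
    words = text.lower().split()
    return {" ".join(words[i:i + n]) for i in range(len(words) - n + 1)}

def overlap_score(submission, source, n=5):
    """Fraction of the submission's n-grams that also appear in the source."""
    sub = ngrams(submission, n)
    if not sub:
        return 0.0
    return len(sub & ngrams(source, n)) / len(sub)

source = "recycling reduces the amount of waste sent to landfills and conserves natural resources"
copied = "recycling reduces the amount of waste sent to landfills every single year"
fresh  = "critics argue that kerbside collection schemes consume more energy than they save"

# A copied run of five or more words is flagged; novel text is not.
assert overlap_score(copied, source) > 0.0
assert overlap_score(fresh, source) == 0.0
```

Because GPT-2 composes each sentence word by word rather than copying passages, its output rarely shares long word sequences with anything in a detector’s database.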
The idea of neural networks has been around since 1943 and has been applied to fields as diverse as machine vision, speech recognition, machine translation, and medical diagnosis. Originally conceived of as mirroring the interconnected nodes of neurons in the human brain, neural networks are a pillar of technological breakthroughs in artificial intelligence and deep learning.
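That 1943 reference is to Warren McCulloch and Walter Pitts’ model of an artificial neuron, which sums weighted inputs and “fires” if the total clears a threshold. A minimal Python sketch – the weights and threshold below are illustrative, not taken from any trained network:

```python
# A McCulloch-Pitts-style artificial neuron: it sums weighted inputs
# and outputs 1 ("fires") only if the total reaches a threshold.

def neuron(inputs, weights, threshold):
    """Return 1 if the weighted sum of inputs reaches the threshold, else 0."""
    total = sum(x * w for x, w in zip(inputs, weights))
    return 1 if total >= threshold else 0

# With the right weights and threshold, one neuron computes an AND gate,
# one of the logic functions the 1943 paper showed such units can express:
assert neuron([1, 1], [1, 1], threshold=2) == 1
assert neuron([1, 0], [1, 1], threshold=2) == 0

# Lowering the threshold to 1 turns the same unit into an OR gate:
assert neuron([0, 1], [1, 1], threshold=1) == 1
```

Modern deep learning stacks millions of these units into layers and learns the weights automatically, but the basic building block is still recognisably this one.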
OpenAI, the AI research organization founded by Elon Musk, has already been experimenting with neural networks for years and used them to develop a new fleet of “language modelling” algorithms, which includes state-of-the-art “zero-shot learning.”
To create their text-generating AI system, GPT-2, researchers used Reddit as a content aggregator, scraping some 8 million web pages linked from highly upvoted posts to build a 40GB text data set. They then prompted their system with a counter-intuitive thesis argument – that recycling is bad for the Earth.
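At its core, a “language modelling” system like GPT-2 is trained to predict the next word given the words so far, then generates text by sampling from those predictions one word at a time. A toy bigram counter – nothing like GPT-2’s transformer architecture or its 40GB of training data, just the bare principle – might look like this:

```python
import random
from collections import defaultdict

# Toy next-word predictor: count which word follows which in a tiny
# "training corpus", then generate by repeatedly sampling a successor.
# GPT-2 does the same job with a large neural network over 40GB of text.

corpus = "recycling is bad for the earth because recycling is costly".split()

following = defaultdict(list)
for current, nxt in zip(corpus, corpus[1:]):
    following[current].append(nxt)

def generate(start, length, seed=0):
    """Generate up to `length` words, starting from `start`."""
    random.seed(seed)
    words = [start]
    for _ in range(length - 1):
        options = following.get(words[-1])
        if not options:  # dead end: no word ever followed this one
            break
        words.append(random.choice(options))
    return " ".join(words)

print(generate("recycling", 6))
```

Scale the statistics up by many orders of magnitude, and replace the counting with a transformer network, and you get text coherent enough to pass for a student essay.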
“… and it wrote this really competent, really well-reasoned essay,” said David Luan, VP of engineering at the Californian lab who ran the first test of it. “This was something you could have submitted to the US SAT and get a good score on.”
Tiago claims he got the idea after reading about an algorithmic entry into a climate change essay contest. Inspired, he decided to bypass his business school’s “boring” curriculum and execute a high-tech entrepreneurial sleight-of-hand, tapping the website, “TalkToTransformer.com.”
This online iteration of the GPT-2 network allowed him to author his 3 to 5 page paper by merely dictating topic sentences to an AI algorithm.
“You couldn’t write an essay on science that could be anywhere near as convincing using the methods that I used,” said Tiago. “I wrote the structure and one sentence per paragraph. All the information that was in that final essay was in that structure, but the sentences were added by GPT-2.
“I did it for two essays in two different courses… I would write the first sentence of the paragraph, let’s say the point of the paragraph is ‘Starbucks has innovated by raising the quality of its coffee.’ I would write a sentence that encompasses the whole point, and then I would feed it to GPT-2, and then I would get a paragraph. I would generate again until I get something that I found more or less believable.”
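The workflow Tiago describes – one topic sentence per paragraph, regenerating until the output looks believable – can be sketched as a simple loop. The `generate_paragraph` and `looks_believable` functions below are placeholders: in his case GPT-2, via TalkToTransformer.com, did the generating, and he judged believability by eye.

```python
# Sketch of the regenerate-until-believable loop Tiago describes.
# Both helper functions are stand-ins, not real APIs.

def generate_paragraph(topic_sentence, attempt):
    # Placeholder generator; a real setup would query a GPT-2 model.
    return f"{topic_sentence} [generated continuation, attempt {attempt}]"

def looks_believable(paragraph):
    # Placeholder check; Tiago performed this step manually.
    return "attempt 3" in paragraph

def write_paragraph(topic_sentence, max_attempts=10):
    """Keep regenerating until a paragraph passes the believability check."""
    for attempt in range(1, max_attempts + 1):
        paragraph = generate_paragraph(topic_sentence, attempt)
        if looks_believable(paragraph):
            return paragraph
    return paragraph  # give up and keep the last attempt

outline = [
    "Starbucks has innovated by raising the quality of its coffee.",
    "Its loyalty programme deepened its relationship with customers.",
]
essay = "\n\n".join(write_paragraph(s) for s in outline)
```

The striking part is how little human input the loop requires: the outline carries all the actual content, and everything else is sampled and filtered.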
While seemingly user-friendly, the GPT-2 network is in some respects more advanced than even DeepMind’s AlphaGo program, which defeated the world champion Go player. Yet this same program also lost to a child at Monopoly, which indicates that language and communication remain a complex hurdle in AI deep learning research.
GPT-2 was a significant step forward – so much so that after initially announcing the neural network, OpenAI withheld the full model and its training data set from the public because developers feared it would be exploited to spread disinformation and fake news across the Internet.
As an example, The Verge ran tests using the prompt “Jews control the media,” which produced the following artificially authored manifesto:
“They control the universities. They control the world economy. How is this done? Through various mechanisms that are well documented in the book The Jews in Power by Joseph Goebbels, the Hitler Youth and other key members of the Nazi Party,” the system wrote.
Jack Clark, policy director at OpenAI, fears trolls could use GPT-2 to disrupt natural communication communities online: “They’ll make it so there’s enough weird information that outweighs the good information that it damages the ability of real people to have real conversations.”
Many scientists, however, believe the risk-reward payoff of advanced predictive text neural networks such as GPT-2 leans toward the positive. Such technology may produce revolutionary innovations in life sciences, manufacturing, banking, retail, and medical diagnostic research.
“Neural networks have the ability to identify anomalies,” says data scientist Leigh Ann Herhold. “In the future, we can use them to give doctors a second opinion – for example, if something is cancer, or what some unknown problem is. And we’ll be able to provide these second opinions faster and with more accuracy.”
Others fear the same technology that enables the gleeful production of automated fan fiction could also produce more advanced dystopian surveillance systems and mass media control mechanisms.
In the near-term future, it’s worth remembering that all citizens will benefit from having an intermediate to advanced understanding of programming. Students like Tiago have obviously weighed up whether it’s worth using neural networks to produce their papers so they can focus on what they’re actually interested in, and unless they’re caught, let’s face it, most academic establishments around the world won’t be any the wiser that their students are great at cheating, and less well educated than they think… And this is just the beginning, so expect more cheating from more people very soon.