
A GPT-3 bot posted comments on Reddit all week and users approved


WHY THIS MATTERS IN BRIEF

As Artificial Intelligence improves it’s going to be increasingly difficult to distinguish human-made content from machine-generated, or “fake,” content, and that will be an issue …

 

Interested in the Exponential Future? Join our XPotential Community, future-proof yourself with courses from XPotential University, connect, watch a keynote, or browse my blog.

The cat’s out of the bag and it’s now no secret that Artificial Intelligence (AI) is starting to match humans when it comes to the ability to create synthetic content such as art, blogs, books, coursework, imagery, and music – although admittedly it’s still a few years away from being able to create games and videos that match those made by real humans. Now a “blog bot” powered by OpenAI’s powerful GPT-3 language model has been unmasked after it spent a week posting comments on Reddit.

 

Using the handle /u/thegentlemetre, the bot was interacting freely with people on /r/AskReddit, a popular forum for general chat with 30 million users, where it was posting roughly once a minute.

It’s not the first time GPT-3 has fooled people into thinking what it writes comes from an actual human. In August a college student used the AI to create a blog that then went to the top of the charts on Hacker News and led a handful of people to subscribe to it. GPT-3 has also been used recently to create several articles about itself, although these typically end with a human-written disclaimer.

In the latest case the bot was pretending to be a regular Redditor and published hundreds of comments before it was finally spotted when its prodigious posting caught the attention of one Philip Winston, who describes on his blog how he unmasked it.

 

Winston then confirmed that the language generated by the bot matched the output of a GPT-3 powered tool called the Philosopher AI, which was originally set up to answer tongue-in-cheek questions such as “If a tree falls in the woods and nobody is there to hear it, do quantum mechanics still manifest classical reality without an observer?”

The developer of the Philosopher AI doesn’t allow automated use of their service, and once they found out about the bot they blocked it; the bot has since stopped posting.

Most of /u/thegentlemetre’s comments were harmless. Its most popular post was a story about a colony of humans living in elevator shafts. But it also engaged with conspiracy theories and sensitive topics, including suicide.

Responding to a request for advice from Redditors who said they had had suicidal thoughts in the past, it replied: “I think the thing that helped me most was probably my parents. I had a very good relationship with them and they were always there to support me no matter what happened. There have been numerous times in my life where I felt like killing myself but because of them, I never did it.”

 

The response was upvoted 157 times, which shows just how powerful the bot could be at influencing people’s opinions at scale as and when the technology behind it, GPT-3 and its descendants, finally breaks out into the world – which it undoubtedly will very soon.

This incident could also be seen to confirm the concerns that OpenAI raised over its previous language model, GPT-2, which it said was too dangerous to release to the public because of its potential for misuse. The AI lab is trying to keep GPT-3 under control as well, giving access, via a website, only to selected individuals and licensing the software exclusively to Microsoft, which recently gave the company a cool $1 billion to support its future Artificial General Intelligence (AGI) research. And yet if we want these systems to do no harm, then they need more scrutiny, not less, and many people argue that letting more researchers examine the code and explore its potential would be the safer option in the long run.

Source: MIT Technology Review
