OpenAI starts to test AI that automates tasks on devices

WHY THIS MATTERS IN BRIEF

OpenAI may well be trying to create a new kind of device OS that uses agents to automate tasks and perform actions, one that could replace the need for apps.

Love the Exponential Future? Join our XPotential Community, future proof yourself with courses from XPotential University, read about exponential tech and trends, connect, watch a keynote, or browse my blog.

OpenAI, the maker of ChatGPT, is now reportedly working on Artificial Intelligence (AI) agents that can execute tasks for users autonomously, after The Information reported that one type of agent software OpenAI is developing would effectively take over a user’s device to automate complex tasks within an environment – such as the person’s work setup. Normally, people have to move the cursor, click and type to move information between applications, but in this case, via a kind of Generative Operating System, ChatGPT could transfer the information in a document to a spreadsheet for analysis.
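Nothing concrete about how the agent works has been published, but the description points to an observe-decide-act loop: screenshot the display, ask a vision-capable model for the next click or keystroke, perform it, and repeat. A minimal sketch of that loop, assuming pyautogui for cursor and keyboard control and an illustrative JSON action format that is not OpenAI’s, might look like this:

```python
# A minimal sketch of the observe-decide-act loop a device-level agent would need.
# Assumptions: pyautogui for cursor/keyboard control, a vision-capable OpenAI model,
# and an illustrative JSON action format -- none of this is OpenAI's actual agent.
import base64
import io
import json

import pyautogui
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment


def screenshot_b64() -> str:
    """Capture the screen and return it as a base64-encoded PNG."""
    buf = io.BytesIO()
    pyautogui.screenshot().save(buf, format="PNG")
    return base64.b64encode(buf.getvalue()).decode()


def run_agent(goal: str, max_steps: int = 20) -> None:
    for _ in range(max_steps):
        reply = client.chat.completions.create(
            model="gpt-4o",  # any vision-capable model
            messages=[{
                "role": "user",
                "content": [
                    {"type": "text",
                     "text": f"Goal: {goal}. Look at the screenshot and reply with JSON only: "
                             '{"action": "click" | "type" | "done", "x": 0, "y": 0, "text": ""}'},
                    {"type": "image_url",
                     "image_url": {"url": f"data:image/png;base64,{screenshot_b64()}"}},
                ],
            }],
        )
        # Assumes the model replies with bare JSON, as instructed above.
        action = json.loads(reply.choices[0].message.content)
        if action["action"] == "done":
            break
        if action["action"] == "click":
            pyautogui.click(action["x"], action["y"])   # move the cursor and click for the user
        elif action["action"] == "type":
            pyautogui.write(action["text"])             # type into the focused field


if __name__ == "__main__":
    run_agent("Copy the totals from the open report into the open spreadsheet")
```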

Another type of AI agent OpenAI is developing handles web-based tasks, such as booking airfares or creating travel itineraries, without access to APIs. ChatGPT can already do agent-like tasks, as we saw quite a while ago with the likes of AutoGPT, but it has to use the relevant third party’s APIs.
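That dependency is visible in the tool-calling interface OpenAI exposes today: the model never books anything itself, it only asks the developer’s code to invoke an external API. A minimal sketch, with a hypothetical search_flights function standing in for an airline’s API:

```python
# A minimal sketch of how ChatGPT-style tool use leans on third-party APIs today.
# The search_flights tool and its schema are hypothetical stand-ins for an airline API.
import json

from openai import OpenAI

client = OpenAI()

tools = [{
    "type": "function",
    "function": {
        "name": "search_flights",
        "description": "Search for flights between two airports on a given date.",
        "parameters": {
            "type": "object",
            "properties": {
                "origin": {"type": "string", "description": "IATA code, e.g. LHR"},
                "destination": {"type": "string", "description": "IATA code, e.g. JFK"},
                "date": {"type": "string", "description": "YYYY-MM-DD"},
            },
            "required": ["origin", "destination", "date"],
        },
    },
}]

response = client.chat.completions.create(
    model="gpt-4-turbo",
    messages=[{"role": "user", "content": "Find me a flight from London to New York on 1 June."}],
    tools=tools,
)

# The model cannot call the airline itself; it returns a tool call for our code to execute.
for call in response.choices[0].message.tool_calls or []:
    print(call.function.name, json.loads(call.function.arguments))
```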

The Future of AI keynote, by Futurist Matthew Griffin

Last November at its developer conference, OpenAI launched its Assistants API, which lets developers build agent-like experiences into their applications. But Adept CEO David Luan, who used to lead engineering at OpenAI, told The Information that many enterprise apps do not have APIs. Agents can fill that gap, Luan said.
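For reference, a minimal sketch of that Assistants API flow using the openai Python SDK – the assistant’s name, instructions and model choice below are illustrative, not anything OpenAI ships:

```python
# A minimal sketch of the Assistants API flow (openai Python SDK, beta namespace).
# The assistant's name, instructions and model are illustrative choices.
from openai import OpenAI

client = OpenAI()

# 1. Define an assistant with a built-in tool (here, the code interpreter).
assistant = client.beta.assistants.create(
    name="Report analyst",
    instructions="Pull figures out of uploaded documents and analyse them.",
    model="gpt-4-turbo",
    tools=[{"type": "code_interpreter"}],
)

# 2. Conversations live on threads; add the user's request as a message.
thread = client.beta.threads.create()
client.beta.threads.messages.create(
    thread_id=thread.id,
    role="user",
    content="Summarise the quarterly totals in the attached report.",
)

# 3. A run executes the assistant against the thread; its status can then be polled.
run = client.beta.threads.runs.create(thread_id=thread.id, assistant_id=assistant.id)
print(run.id, run.status)
```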

AI agents differ from Robotic Process Automation (RPA), which still needs developers to manually code the steps required to complete a task, Luan said. In contrast, AI agents can do more complex, unstructured work with little guidance from users.

AI agents have been under development for years by various companies. In 2018, Google demonstrated a system that could call a hair salon to make an appointment for a user or contact a restaurant to book a reservation – all by itself. The staff at these businesses did not know they were talking to a computer. Google, however, did not launch it, in part due to fear of public backlash, the news outlet said.

Google is changing its tune now. CEO Sundar Pichai recently said that adding generative AI to search would make it “act more like an agent over time” to “go beyond answers and follow through for users even more” by autonomously executing on their search results.
