
WHY THIS MATTERS IN BRIEF

AIs might destroy other AIs, but at least humanity is safe…?

 

The chief scientist leading Facebook’s research into Artificial Intelligence (AI) doesn’t believe the technology is a threat to human existence – unless we are very stupid. So there’s still a chance then…

 


“I don’t think that AI will become an existential threat to humanity,” said Yann LeCun, the professor leading Facebook’s efforts in AI and machine learning.

“I’m not saying that it’s impossible, but we would have to be very stupid to let that happen. Others have claimed that we would have to be very smart to prevent that from happening, but I don’t think that’s true,” he said.

Several top scientists and technologists, including Stephen Hawking and Elon Musk, have warned that humanity faces a risk from AI.

“If we are smart enough to build machines with superhuman intelligence, chances are we will not be stupid enough to give them infinite power to destroy humanity,” he continued.

 


“Also, there is a complete fallacy due to the fact that our only exposure to intelligence is through other humans. There is absolutely no reason that intelligent machines will even want to dominate the world and/or threaten humanity. The will to dominate is a very human one.”

LeCun, who leads Facebook AI Research (FAIR), added that behaviours specific to humans, such as violence or jealousy, would only appear in intelligent machines if they were explicitly built into them. Even then, a second AI could be built with explicit instructions to destroy the dangerous one.

“If both AIs have access to the same amount of computing resources, the second one will win, just as a tiger, a shark, or a virus can kill a human of superior intelligence.”

The New York University professor also shared his view of how rivals are faring with their AI research, saying Google’s DeepMind and Facebook’s FAIR are attracting talent from competitors.

“Microsoft is doing some good work, but they are losing quite a lot of people to Facebook and Google. They did some good work on deep learning in speech, and on handwriting recognition back in the early 2000s before the current craze. But their recent efforts seem to have been less ambitious than at FAIR or DeepMind,” he said, when asked who was leading the way in research among the big tech firms.

 


“A number of top researchers from Google Brain have left for DeepMind, OpenAI, or FAIR,” he added, while Apple’s secretive nature meant they were “not a player in the AI research circuit”.

“You simply cannot do leading-edge research in secret. If you can’t publish, it’s not research. At best, it’s technology development.”

Asked whether the technology is experiencing a bubble, he said, “There is a lot of hype around AI and deep learning at the moment,” before adding, “That said, deep learning produces real results and is at the root of a real industry that makes money today. The promises of it in the near future are very exciting – even without the hype – in areas like self-driving cars, medical imaging, personalised medicine, content filtering and so on.”

About author

Matthew Griffin

Matthew Griffin, described as “The Adviser behind the Advisers” and a “Young Kurzweil,” is the founder and CEO of the World Futures Forum and the 311 Institute, a global Futures and Deep Futures consultancy working between the dates of 2020 to 2070, and is an award-winning futurist and author of the “Codex of the Future” series. Regularly featured in the global media, including AP, BBC, Bloomberg, CNBC, Discovery, RT, Viacom, and WIRED, Matthew’s ability to identify, track, and explain the impacts of hundreds of revolutionary emerging technologies on global culture, industry and society is unparalleled. Recognised for the past six years as one of the world’s foremost futurists and innovation and strategy experts, Matthew is an international speaker who helps governments, investors, multi-nationals and regulators around the world envision, build and lead an inclusive, sustainable future. A rare talent, Matthew’s recent work includes mentoring Lunar XPrize teams, re-envisioning global education and training with the G20, and helping the world’s largest organisations envision and ideate the future of their products and services, industries, and countries. Matthew’s clients include three Prime Ministers and several governments, including the G7, Accenture, Aon, Bain & Co, BCG, Credit Suisse, Dell EMC, Dentons, Deloitte, E&Y, GEMS, Huawei, JPMorgan Chase, KPMG, Lego, McKinsey, PWC, Qualcomm, SAP, Samsung, Sopra Steria, T-Mobile, and many more.

Comments
  • Fender#1

    8th February 2017

    Great article – it’s obvious Apple is not into AI. Just look at Siri; its voice-to-text recognition is a joke too. Furthermore, Apple’s new Pro series for Mac was another huge disappointment. Apple has never been the same since Steve Jobs, in my honest opinion.

    If brokers are smart enough to get the hint, stocks should already have been shifting to VR, AI and bionanotechnology, and now the emergence of quantum mechanics is seeing new breakthroughs.

