Matthew Griffin, described as “The Adviser behind the Advisers” and a “Young Kurzweil,” is the founder and CEO of the World Futures Forum and the 311 Institute, a global Futures and Deep Futures consultancy working between 2020 and 2070, and is an award-winning futurist and author of the “Codex of the Future” series. Regularly featured in the global media, including AP, BBC, Bloomberg, CNBC, Discovery, RT, Viacom, and WIRED, Matthew’s ability to identify, track, and explain the impacts of hundreds of revolutionary emerging technologies on global culture, industry and society is unparalleled. Recognised for the past six years as one of the world’s foremost futurists and innovation and strategy experts, Matthew is an international speaker who helps governments, investors, multi-nationals and regulators around the world envision, build and lead an inclusive, sustainable future. A rare talent, Matthew’s recent work includes mentoring Lunar XPrize teams, re-envisioning global education and training with the G20, and helping the world’s largest organisations envision and ideate the future of their products and services, industries, and countries. Matthew's clients include three Prime Ministers and several governments, including the G7, Accenture, Aon, Bain & Co, BCG, Credit Suisse, Dell EMC, Dentons, Deloitte, E&Y, GEMS, Huawei, JPMorgan Chase, KPMG, Lego, McKinsey, PWC, Qualcomm, SAP, Samsung, Sopra Steria, T-Mobile, and many more.
The best way, apparently, to train one of the world’s best AIs is to point it at two billion posts about cats
Elon Musk’s artificial intelligence company OpenAI just received a package that took $2 billion to develop when Nvidia CEO Jen-Hsun Huang delivered the first DGX-1 supercomputer to the non-profit organisation, which is dedicated to “advance digital intelligence in the way that is most likely to benefit humanity as a whole, unconstrained by a need to generate financial return.”
The “AI supercomputer in a box” is packed with 170 teraflops of computing power – that’s equivalent to 250 conventional servers. NVIDIA says it’s a very fitting match.
“The world’s leading non-profit artificial intelligence research team needs the world’s fastest AI system.”
“I thought it was incredibly appropriate that the world’s first supercomputer dedicated to artificial intelligence would go to the laboratory that was dedicated to open artificial intelligence,” Huang added.
The supercomputer will tackle the most difficult challenges facing the artificial intelligence industry by reading through Reddit forums. And apparently, Reddit’s size was not a hindrance. In fact, the site’s size was the main reason the online community was specifically chosen as DGX-1’s training ground.
“Deep learning is a very special class of models because as you scale up, they always work better,” says OpenAI researcher Andrej Karpathy.
The nearly two billion Reddit comments will be processed by DGX-1 in months instead of years, as the $129,000 desktop-sized box contains eight NVIDIA Tesla P100 GPUs, 7 terabytes of SSD storage, and two Xeon processors, in addition to the aforementioned 170 teraflops of performance.
DGX-1 will take on Reddit to learn faster and to chat more accurately.
“You can take a large amount of data that would help people talk to each other on the internet, and you can train, basically, a chatbot, but you can do it in a way that the computer learns how language works and how people interact,” Karpathy said.
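The idea Karpathy describes — a model picking up how language works purely from the statistics of a large pile of comments — can be illustrated with a toy sketch. This is not OpenAI’s actual code; it is a minimal, hypothetical character-bigram model over a tiny sample “comment” corpus, standing in for the billions of Reddit comments and the deep networks described above.

```python
from collections import defaultdict, Counter

def train_bigram_model(comments):
    """Count which character follows which across a corpus of comments.
    A toy stand-in for large-scale language modelling: the 'model' is
    learned entirely from the text, with no hand-written language rules."""
    counts = defaultdict(Counter)
    for text in comments:
        for a, b in zip(text, text[1:]):
            counts[a][b] += 1
    return counts

def most_likely_next(model, char):
    """Return the character most frequently seen after `char`."""
    if char not in model:
        return None
    return model[char].most_common(1)[0][0]

# Hypothetical sample comments; the real training data would be
# the ~2 billion Reddit comments mentioned in the article.
corpus = ["the cat sat", "the cat ran"]
model = train_bigram_model(corpus)
```

Scaling this idea up — deeper models, vastly more data — is exactly where Karpathy’s point about deep learning applies: more data and bigger models keep improving the learned statistics.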
The supercomputer is also equipped to make things easier for the developers at OpenAI.
“We won’t need to write any new code, we’ll take our existing code and we’ll just increase the size of the model,” says OpenAI scientist Ilya Sutskever, “and we’ll get much better results than we have right now.”
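Sutskever’s point — same code, bigger model — can be sketched with a hypothetical model-builder whose architecture code never changes; only the size hyperparameters in a config grow. The function and config names below are illustrative assumptions, not OpenAI’s API.

```python
def build_model(config):
    """Hypothetical model builder: the code is fixed, and scaling up
    means only changing the size numbers in `config`."""
    layers = [config["hidden_size"]] * config["num_layers"]
    # Rough parameter count: product of adjacent layer widths, summed.
    widths = [config["input_size"]] + layers
    n_params = sum(a * b for a, b in zip(widths, layers))
    return {"layers": layers, "approx_params": n_params}

# Same builder, larger numbers -- the "just increase the size" step.
small = build_model({"input_size": 128, "hidden_size": 256, "num_layers": 2})
large = build_model({"input_size": 128, "hidden_size": 1024, "num_layers": 8})
```

The hardware in the DGX-1 is what makes the larger configuration trainable in a practical amount of time; the research code itself stays the same.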