Matthew Griffin, described as “The Adviser behind the Advisers” and a “Young Kurzweil,” is the founder and CEO of the World Futures Forum and the 311 Institute, a global Futures and Deep Futures consultancy working between 2020 and 2070, an award-winning futurist, and the author of the “Codex of the Future” series. Regularly featured in the global media, including AP, BBC, Bloomberg, CNBC, Discovery, RT, Viacom, and WIRED, Matthew’s ability to identify, track, and explain the impacts of hundreds of revolutionary emerging technologies on global culture, industry, and society is unparalleled. Recognised for the past six years as one of the world’s foremost futurists and innovation and strategy experts, Matthew is an international speaker who helps governments, investors, multi-nationals, and regulators around the world envision, build, and lead an inclusive, sustainable future. A rare talent, Matthew’s recent work includes mentoring Lunar XPrize teams, re-envisioning global education and training with the G20, and helping the world’s largest organisations envision and ideate the future of their products and services, industries, and countries. Matthew’s clients include three Prime Ministers and several governments, as well as the G7, Accenture, Aon, Bain & Co, BCG, Credit Suisse, Dell EMC, Dentons, Deloitte, E&Y, GEMS, Huawei, JPMorgan Chase, KPMG, Lego, McKinsey, PWC, Qualcomm, SAP, Samsung, Sopra Steria, T-Mobile, and many more.
WHY THIS MATTERS IN BRIEF
Many people forget that virtual reality is just that – virtual – and that, as a result, the rules of the real world don’t have to apply in the virtual one.
Researchers at Stanford University think that having a third arm in VR could make you a more efficient, albeit virtual, human. So they’ve set out to learn what they can about the most effective means of controlling an extra limb in virtual reality (VR).
Thanks to high quality VR motion controllers, computer users are beginning to reach into the digital world in an entirely new and tangible way. But this is VR after all, and we can do whatever we want, so why be restricted to a mere two arms? Researchers at Stanford’s Virtual Human Interaction Lab have finally said “enough is enough,” and have begun studying which control schemes are most effective for use with a virtual third arm.
Since we’ve only ever lived with two arms, a virtual third arm would need to be easy to learn to control to be of any use, so the team defined three methods of controlling a third arm that extends outward from the virtual user’s chest.
The first method controls the arm via the user’s head: turning and tilting the head causes the arm to move in a relatively intuitive way. The second method, which the researchers call ‘Bimanual’, uses the horizontal rotation of one controller combined with the vertical rotation of a second controller as inputs for the arm. And the third method, called ‘Unimanual’, uses the horizontal and vertical rotation of just a single controller to drive the third arm.
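All three schemes boil down to the same underlying mapping: two rotation angles, taken from either the head or one or two controllers, steer where the third arm points. A minimal sketch of that mapping (the function name and the chest-relative coordinate convention are assumptions for illustration, not from the paper):

```python
import math

def third_arm_direction(yaw_deg, pitch_deg, length=1.0):
    """Map two rotation inputs (in degrees) to a 3D endpoint for a
    virtual third arm extending from the avatar's chest.

    In the 'Unimanual' scheme both angles come from one controller;
    in the 'Bimanual' scheme yaw comes from one controller and pitch
    from the other; in head control they come from head tracking.
    The geometric mapping is the same in every case.
    """
    yaw = math.radians(yaw_deg)      # horizontal rotation
    pitch = math.radians(pitch_deg)  # vertical rotation
    x = length * math.cos(pitch) * math.sin(yaw)  # left/right
    y = length * math.sin(pitch)                  # up/down
    z = length * math.cos(pitch) * math.cos(yaw)  # forward from chest
    return (x, y, z)

# With zero rotation the arm points straight forward from the chest.
print(third_arm_direction(0, 0))  # (0.0, 0.0, 1.0)
```

Separating the two angles across two controllers, as in the bimanual scheme, means the user must mentally fuse two input streams into one motion, which is consistent with the study’s finding that it was the hardest scheme to learn.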
In their paper, “Evaluating Control Schemes for the Third Arm of an Avatar,” the team detail the experiments they designed to test the efficiency of each scheme – one of which was to get their test subjects to tap a randomly changing white block among a grid of blocks, with one grid for the left arm, another for the right arm, and a third set that’s further away and only reachable by the third arm.
Recent research into immersive virtual environments has shown that users can not only inhabit and identify with novel avatars with novel body extensions, but also learn to control novel appendages in ways that benefit the task at hand and ultimately improve productivity. What the team found was that both the unimanual and head-control schemes were significantly faster, elicited significantly higher body ownership, and were preferred over the bimanual control scheme, which participants felt was significantly more challenging to control.
Ultimately, the idea of a third arm in VR is something of a metaphor. When you break it down, the study was really about how humans might be able to add, and then use, different types of appendages to complete different virtual tasks – whatever those appendages might be.