Matthew Griffin, described as “The Adviser behind the Advisers” and a “Young Kurzweil,” is the founder and CEO of the World Futures Forum and the 311 Institute, a global Futures and Deep Futures consultancy working between the dates of 2020 and 2070, and is an award-winning futurist and author of the “Codex of the Future” series. Regularly featured in the global media, including AP, BBC, Bloomberg, CNBC, Discovery, RT, Viacom, and WIRED, Matthew’s ability to identify, track, and explain the impacts of hundreds of revolutionary emerging technologies on global culture, industry, and society is unparalleled. Recognised for the past six years as one of the world’s foremost futurists and innovation and strategy experts, Matthew is an international speaker who helps governments, investors, multinationals, and regulators around the world envision, build, and lead an inclusive, sustainable future. A rare talent, Matthew’s recent work includes mentoring Lunar XPrize teams, re-envisioning global education and training with the G20, and helping the world’s largest organisations envision and ideate the future of their products and services, industries, and countries. Matthew’s clients include three Prime Ministers and several governments, including the G7, as well as Accenture, Aon, Bain & Co, BCG, Credit Suisse, Dell EMC, Dentons, Deloitte, E&Y, GEMS, Huawei, JPMorgan Chase, KPMG, Lego, McKinsey, PWC, Qualcomm, SAP, Samsung, Sopra Steria, T-Mobile, and many more.
Organisations that operate hyperscale cloud data centers, such as Alibaba, AWS, Google, Facebook, Microsoft, and Tencent, have always pushed the limits of what’s technically possible with yesterday’s hardware, so it came as no surprise when, in 2011, they formed the Open Compute Project (OCP), an alliance that set about creating standards-based, open source architectures that would allow them to collaborate with each other on hardware platforms designed specifically for tomorrow’s hyperscale computing operations and tomorrow’s workloads.
There were, however, notable absences from the party, namely the incumbent Tier 1 and Tier 2 server, networking, and storage equipment manufacturers such as Cisco, Dell, EMC, HP, IBM, Juniper, and Lenovo, and while some might say their invitations got lost in the post, most would say they were never sent an invitation in the first place. The OCP bypassed them by designing, and then outsourcing, server and network manufacturing to the very same Original Design Manufacturers (ODMs), such as Delta, Hon Hai, Quanta, and Wistron, that manufacture and build equipment for Cisco, Dell, HP, and Lenovo. HP, for example, once king of the server hill, has now been displaced by its own white box manufacturers, and it’s a trend that’s increasing, not decreasing, with other hyperscale datacenter operators like AT&T and Vodafone all jumping on the OCP bandwagon.
Hyperscale datacenters operate at a dramatically different scale to even the largest Fortune 50 datacenter operations, employing millions, not thousands, of servers and operating vast storage estates that grow by petabytes a day, so it’s no surprise that Google, one of the largest storage consumers on the planet, has announced that it wants to apply the same principles followed by the OCP program to shake up storage and free it from the shackles of the 20th century. Its first initiative is to make a clean break from the 3.5 inch hard drive dimensions inherited from the floppy disks of the 1970s and 1980s, and Google is challenging disk manufacturers, including Seagate and Western Digital, to come up with a new design optimised for a cloud like Google’s.
Speaking at the USENIX Conference on File and Storage Technologies (FAST) this week, Google VP of Infrastructure Eric Brewer made the case for disk vendors to look at its wish list for disks in the cloud, which would involve significantly different designs from those used by the current generation of disks aimed at enterprise servers. Key to Brewer’s argument, also outlined in a new white paper, is that video is driving huge demand for disk, and that demand is coming from cloud datacenters operated by the likes of Google, where data is already replicated for failover purposes.
Brewer points out that YouTube users are filling one petabyte of new storage every day, and that at current growth rates they could be uploading 10 petabytes per day by 2021.
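A quick back-of-the-envelope check of those numbers (an illustration, not a figure from the talk): going from 1 petabyte a day now to 10 petabytes a day by 2021 implies a tenfold increase over roughly five years, or a constant annual growth rate of about 58%.

```python
# Implied growth rate for a 10x increase over 5 years
# (illustrative arithmetic only; the 5-year window is an assumption).
years = 5
growth_factor = 10 ** (1 / years)  # constant annual multiplier
print(f"Implied annual growth: {(growth_factor - 1) * 100:.1f}%")

volume = 1.0  # PB/day at the start
for year in range(2016, 2022):
    print(f"{year}: ~{volume:.2f} PB/day")
    volume *= growth_factor
```

Compounding at that rate, uploads pass 2.5 PB/day after two years and 6.3 PB/day after four, before hitting 10 PB/day in year five.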
“At the heart of the paper is the idea that we need to optimise the collection of disks, rather than a single disk in a server. This shift has a range of interesting consequences including the counter-intuitive goal of having disks that are actually a little more likely to lose data, as we already have to have that data somewhere else anyway,” said Brewer.
Specifically, Google appears to be willing to pay a higher gigabyte price for storage, so long as it delivers a lower total cost of ownership as well as higher capacity and higher I/O operations per second but as the paper notes, “The industry is relatively good at improving GB/$, but less so at IOPS/GB.” Also, Google isn’t interested in SSDs despite their higher IOPS because they cost too much per gigabyte and as for the alternative to the standard 3.5 inch HDDs, Google proposes taller drives than the standard one inch for 3.5 inch drives and 15mm for 2.5 inch drives.
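The IOPS/GB problem the paper flags follows from simple mechanics: a spinning disk delivers roughly the same number of random operations per second however big it gets, so as capacities grow, IOPS per gigabyte shrinks. A small illustration with round, assumed numbers (a typical 7,200 RPM drive manages on the order of 100 random IOPS; the exact figure varies by model):

```python
# IOPS per gigabyte for spinning disks of growing capacity.
# The ~100 random IOPS figure is a rough rule of thumb, not a vendor spec.
def iops_per_gb(capacity_gb: float, random_iops: float = 100.0) -> float:
    return random_iops / capacity_gb

for capacity_tb in (1, 4, 10):
    gb = capacity_tb * 1000
    print(f"{capacity_tb} TB drive: {iops_per_gb(gb):.3f} IOPS/GB")
```

A 10 TB drive offers a tenth of the IOPS/GB of a 1 TB drive, which is why the industry keeps winning on GB/$ while losing ground on IOPS/GB.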
“Taller drives allow for more platters per disk, which adds capacity, and amortises the costs of packaging, the printed circuitboard, and the drive motor and actuator. Given a fixed total capacity per disk, smaller platters can yield smaller seek distances and higher RPM, due to platter stability, and thus higher IOPS, but worse GB/$,” the paper notes.
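The amortisation argument in that quote can be sketched numerically: the case, circuit board, motor, and actuator cost roughly the same per drive regardless of platter count, so spreading that fixed cost across more platters lowers the cost per terabyte. All figures below are invented for illustration; they are not real component costs.

```python
# Sketch of the fixed-cost amortisation behind taller drives.
# Every dollar figure here is a hypothetical placeholder.
FIXED_COST = 30.0        # $ per drive: case, PCB, motor, actuator (assumed)
COST_PER_PLATTER = 8.0   # $ per additional platter (assumed)
TB_PER_PLATTER = 1.5     # capacity per platter (assumed)

def dollars_per_tb(platters: int) -> float:
    total_cost = FIXED_COST + platters * COST_PER_PLATTER
    return total_cost / (platters * TB_PER_PLATTER)

for platters in (5, 8, 12):
    print(f"{platters} platters: ${dollars_per_tb(platters):.2f}/TB")
```

Whatever the real numbers, the shape of the curve is the same: cost per terabyte falls as platter count rises, which is the economic case for breaking with the standard one inch height.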
Google notes that it does have the scale to order a custom form factor but sees the issue extending to the wider industry and therefore would like to see it standardised.
Security is another area Google wants the industry to work on. The paper points to the very real threat of governments hacking hard disk firmware, referencing Kaspersky Lab’s research into the Equation Group, which did exactly that.
“It is clear that it must be easier to assure correct firmware and restrict unauthorised changes, and in the long term we must apply the full range of hardening techniques already used in other systems,” the paper notes. “We approach this problem in the short term by restricting physical access to the disks and by isolation of untrusted code from the host OS, which has the power to reflash the disk firmware.” It also notes that modern enterprise disks support encryption at rest today, but traditionally with a single key. Google wants finer-grained control, using different keys for different areas of the disk.
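One way to picture that finer-grained control is key derivation: rather than one key for the whole disk, derive an independent key per disk zone from a single master key, so one zone can be rotated or revoked without re-keying everything. This is a hypothetical sketch of the idea, not Google’s actual design, using HMAC-SHA256 as a simple stand-in for a proper key derivation function:

```python
# Hypothetical per-zone key derivation (not Google's scheme):
# one master key, many independent zone keys.
import hmac
import hashlib
import os

MASTER_KEY = os.urandom(32)  # would live in a hardware key store in practice

def zone_key(zone_id: int) -> bytes:
    """Derive a 256-bit key for one disk zone via HMAC-SHA256."""
    return hmac.new(MASTER_KEY, f"zone-{zone_id}".encode(), hashlib.sha256).digest()

# Different zones get unrelated keys; the same zone always gets the same key.
assert zone_key(0) != zone_key(1)
assert zone_key(7) == zone_key(7)
```

With a structure like this, compromising or retiring one zone’s key leaves the rest of the disk’s data untouched, which is the kind of granularity the paper is asking for.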
If Google’s plans work, then we could see another 20% to 30% taken off cloud storage prices, and that, combined with an already accelerating move to the cloud, could spell even more trouble for yesterday’s hardware incumbents.