Why Google Wants Hard Drives To Be Less Reliable

The rise of cloud storage means that individuals and corporations are buying fewer hard drives, while the operators of cloud services are purchasing them in massive quantities. Now Google is arguing that as a consequence, hard drives really need to become less reliable — and that’s not quite as insane as it first sounds.

Google outlined its vision of the future of hard drives in a recent blog post by its VP for infrastructure, Eric Brewer, and an accompanying white paper released at the same time.

Unsurprisingly, Google is a massive consumer of hard drives in its data centres. The volumes of storage involved are staggering: as the post notes, YouTube alone requires a petabyte of new storage every single day, thanks to the 400 hours of video which are uploaded to the service every minute.
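
A quick back-of-the-envelope calculation shows how those numbers hang together. Note the ~1GB-per-hour average and the 2x overhead for transcoded copies and redundancy are our assumptions for illustration, not figures from Google's post:

```python
# Sanity check on the YouTube figure. The per-hour file size and storage
# overhead below are illustrative assumptions, not numbers from Google.
hours_per_day = 400 * 60 * 24   # 400 hours uploaded every minute = 576,000/day
raw_gb = hours_per_day * 1      # assume ~1GB per hour of stored video
stored_gb = raw_gb * 2          # assume 2x for transcodes and redundant copies
print(f"{hours_per_day:,} hours/day -> ~{stored_gb / 1e6:.1f}PB/day")
# 576,000 hours/day -> ~1.2PB/day, in line with "a petabyte every day"
```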

Most Google users don't end up paying for that storage either, though Google is obviously bringing in enough cash for that not to be a concern. (Last year, it pulled in $US74 billion in revenue.)

So while individuals are increasingly relying on SSDs and USB storage, or simply uploading everything into the cloud, Google and its rivals such as Amazon and Microsoft will soon be the biggest buyers of drives, as Brewer notes:

The rise of cloud-based storage means that most (spinning) hard disks will be deployed primarily as part of large storage services housed in data centers. Such services are already the fastest growing market for disks and will be the majority market in the near future.

That’s good news for hard drive manufacturers, since they’ll still have somebody to flog their wares to. But Google makes a good case for needing to rethink the current approach to drive manufacture and design.

The main reason is that current hard drive designs don't reflect the data centre use case. One simple and obvious example: hard drives maintain their current form factor because they needed to be roughly the same size as the 3.5-inch floppy drives which preceded them. This is undoubtedly ancient history to most Lifehacker readers, but in the late 1980s and early 1990s, a hard drive wasn't a given in a PC. Keeping the same form factor as the then-predominant form of storage meant that those who could afford the luxury of a hard drive had the option, without the overall system design having to change. That made sense in 1989, but it isn't necessarily sensible a quarter of a century later.

More crucially, data centre operators aren’t particularly concerned about the reliability of a single disk. The fact is that all hard drives will fail eventually, and any data centre system will ensure that data is stored on multiple drives so it can be restored when a failure happens. As a consequence, we don’t necessarily need to focus on improving the reliability of drives beyond current levels, Brewer argues:

We need to optimise the collection of disks, rather than a single disk in a server. This shift has a range of interesting consequences including the counter-intuitive goal of having disks that are actually a little more likely to lose data, as we already have to have that data somewhere else anyway. It’s not that we want the disk to lose data, but rather that we can better focus the cost and effort spent trying to avoid data loss for other gains such as capacity or system performance.
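
To see why fleet-level redundancy changes the maths, consider a simple model. The failure rates and replica count below are illustrative assumptions, and real systems re-replicate data within hours of a failure, which makes actual durability far better than this naive picture:

```python
# Illustrative sketch of why replication makes fleet durability insensitive
# to single-drive reliability. All figures are assumptions, not Google's.

def p_all_replicas_fail(annual_failure_rate: float, replicas: int) -> float:
    """Naive probability that every replica of a piece of data fails within
    a year, treating drive failures as independent and ignoring rebuilds."""
    return annual_failure_rate ** replicas

for afr in (0.02, 0.06):  # a 2% AFR drive vs one three times flakier
    print(f"AFR {afr:.0%}: 3 replicas -> {p_all_replicas_fail(afr, 3):.2e}")
```

The flakier drive is 27 times worse per object, but both loss probabilities are vanishingly small once the data lives on three drives, which is why capacity and performance per dollar matter more to Google than squeezing out another decimal point of single-drive reliability.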

So what might those changes look like? They could include altering the physical format of disks to make them taller, or changing the firmware in hard drives to offer a higher rate of I/O (even if it’s slightly less reliable and predictable than previous versions).

Height might not seem like a major factor in drive storage, but as the paper explains, it could substantially increase performance:

We propose increasing the allowable height (“Z height”). Current disks have a relatively small fixed height: typically 1″ for 3.5″ disks and 15mm maximum for 2.5″ drives. Taller drives allow for more platters per disk, which adds capacity, and amortises the costs of packaging, the printed-circuit board, and the drive motor/actuator. Given a fixed total capacity per disk, smaller platters can yield smaller seek distances and higher RPM (due to platter stability), and thus higher IOPS, but worse GB/$. The net result is a higher GB/$ for any specific IOPS/GB that could be achieved by altering any other single aspect, such as platter sizes or RPM alone.
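
A toy model makes the direction of that trade-off concrete. Every constant below is invented for illustration; the paper publishes no numbers like these:

```python
# Toy model of the seek/rotation trade-off the paper describes.
# All constants are made up for illustration.

def random_iops(avg_seek_ms: float, rpm: int) -> float:
    """Rough random IOPS: one average seek plus half a rotation per I/O."""
    half_rotation_ms = 0.5 * 60_000 / rpm
    return 1000 / (avg_seek_ms + half_rotation_ms)

# A conventional 3.5" 7200rpm drive vs a hypothetical taller drive with
# smaller platters: more platters recover the lost capacity, while the
# shorter actuator throw and stabler, faster-spinning platters lift IOPS.
conventional = random_iops(avg_seek_ms=8.5, rpm=7200)
small_platter = random_iops(avg_seek_ms=5.0, rpm=10000)
print(f"conventional: {conventional:.0f} IOPS, small-platter: {small_platter:.0f} IOPS")
# conventional: ~79 IOPS, small-platter: ~125 IOPS
```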

Google isn’t proposing an exact height or any other specific design parameters. As the paper notes, while Google buys enough disks that it could dictate any specification it liked and still generate enough business for a manufacturer to sign up, it would prefer a universal standard. That might require some delicate negotiations with the other major cloud providers, but it’s a sensible goal.

Those changes won’t ever be seen in consumer drives, but they would make a big difference to cloud service operators. And while they won’t come overnight, Google has sufficient clout that its vision of the future of the hard drive is likely to bear fruit.

Angus Kidman is editor-in-chief for comparison site finder.com.au, a former editor for Lifehacker Australia and a man who still has the first 5.25-inch floppy disk he ever purchased. Follow him on Twitter @gusworldau.

