The idea of an “in-memory” database has recently become very popular, and I increasingly hear it touted as a silver bullet for solving data problems. Unfortunately it isn’t – memory might be cheaper than it used to be, but so are hard drives and SSDs, and data gets bigger at least as fast as memory does. Memory is what it has always been – one component of many required to build a performant data system.

The legend of “in-memory” seems to have been born from stories out of Facebook. I worked at Facebook for a long time and helped build many of the systems that gave rise to this myth – I was the director of engineering for the infrastructure team responsible for memcached, mysql, cassandra, hive, scribe, and more – so I’d like to set the story straight.

These systems do use a lot of memory, a mind-boggling amount of it. But the data is even bigger than the memory – it always has been and it always will be. When we built a system, step one was to cram as much as we could into memory, but then the real work started – figuring out how to deal with the stuff that didn’t fit. Every time you load your home page on Facebook there are many spinning disks seeking for you, and many reads from SSDs. This is where the vast majority of the engineering work goes, and it’s the reason the site works so well.

The way I often hear people describe in-memory is “RAM has gotten so much cheaper, we don’t even need disks, we can just put everything in RAM!”. The fallacy here is in assuming RAM prices will change but the requirements of applications won’t. It’s like saying we don’t need faster computers because the ones we have already run Pong and Lotus 1-2-3 well enough. RAM is getting a lot cheaper, but not as fast as our appetite for applications is growing.

“Our appetite for applications grows even faster than Moore’s law.” – Bobby Johnson

Furthermore, reading from RAM isn’t really that fast, and it isn’t that simple. On a modern CPU there are three levels of cache in front of main memory, and memory itself is split into regions that are local or remote to each processor – not to mention that most applications today run in a distributed system, where the memory you need might be a network hop away.

The Cost of I/O

The CPU cache is measured in megabytes, and a miss can cost hundreds of CPU cycles. The cost gap between cache and RAM is widening over time, and it looks strikingly similar to the main memory vs. disk gap of a couple of decades ago. So maybe we should start a new movement called “in-cpu-cache”. But life isn’t that simple – good systems use whatever components are available to their best advantage.
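To make that cost concrete, here is a minimal Go sketch – my own illustration, not code from any of the systems mentioned above – that sums the same array twice: once walking it in order, once in a shuffled order. The arithmetic is identical; only the memory access pattern changes, so the difference in timing is essentially the price of cache misses, even though every byte is already “in memory”.

```go
package main

import (
	"fmt"
	"math/rand"
	"time"
)

func main() {
	const n = 1 << 24 // 16M entries (~128 MB of data), far larger than any CPU cache

	data := make([]int64, n)
	seq := make([]int32, n) // indices in order
	rnd := make([]int32, n) // the same indices, shuffled
	for i := range data {
		data[i] = int64(i)
		seq[i] = int32(i)
	}
	copy(rnd, seq)
	rand.Shuffle(n, func(i, j int) { rnd[i], rnd[j] = rnd[j], rnd[i] })

	// walk sums data in the order given by idx; the work is identical,
	// only the order in which memory is touched differs.
	walk := func(idx []int32) (int64, time.Duration) {
		start := time.Now()
		var sum int64
		for _, i := range idx {
			sum += data[i]
		}
		return sum, time.Since(start)
	}

	_, seqTime := walk(seq)
	_, rndTime := walk(rnd)
	fmt.Printf("sequential walk: %v\nrandom walk:     %v\n", seqTime, rndTime)
}
```

On typical hardware the random walk comes out several times slower than the sequential one – the “in-memory” data was never the whole story; how you touch it matters just as much.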

So I’m not saying you shouldn’t rely on RAM to make things fast – it’s great to get things into RAM when you can. I never met a GB of RAM I couldn’t use to make something work better. But that’s actually my point – as RAM gets cheaper, there’s an endless list of applications that will eat it up. And there will always be that next application we want that doesn’t yet feasibly fit in RAM.

The real trick is to use RAM as effectively as possible to speed things up, while still being able to take advantage of the disks and SSDs that can hold more data. Disks and RAM are good at different things, and a well-designed system will use each to its advantage. Choosing an “in-memory” solution isn’t a feature or a silver bullet – it just cuts off a lot of opportunities to make things work better.
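As a deliberately simplified illustration of that layered approach, here is a sketch in Go of a read-through cache: a small RAM tier in front of a bigger, slower disk tier. The names (DiskStore, CachedStore) and the one-file-per-key layout are assumptions I’ve made for the example, not how any real system mentioned in this post works.

```go
package main

import (
	"fmt"
	"os"
	"path/filepath"
)

// DiskStore is the hypothetical big-but-slow tier: one file per key.
type DiskStore struct{ dir string }

func (d DiskStore) Get(key string) ([]byte, error) {
	return os.ReadFile(filepath.Join(d.dir, key))
}

func (d DiskStore) Put(key string, val []byte) error {
	return os.WriteFile(filepath.Join(d.dir, key), val, 0o644)
}

// CachedStore is the small-but-fast tier: a RAM map in front of the disk.
// A real system would add an eviction policy (LRU or similar); this one
// simply stops caching once it hits a size cap, to keep the sketch short.
type CachedStore struct {
	disk    DiskStore
	cache   map[string][]byte
	maxKeys int
}

func NewCachedStore(disk DiskStore, maxKeys int) *CachedStore {
	return &CachedStore{disk: disk, cache: make(map[string][]byte), maxKeys: maxKeys}
}

func (c *CachedStore) Get(key string) ([]byte, error) {
	if val, ok := c.cache[key]; ok {
		return val, nil // hot key: served straight from RAM
	}
	val, err := c.disk.Get(key) // cold key: fall through to disk
	if err != nil {
		return nil, err
	}
	if len(c.cache) < c.maxKeys {
		c.cache[key] = val // keep it in RAM for the next read
	}
	return val, nil
}

func main() {
	dir, err := os.MkdirTemp("", "readthrough")
	if err != nil {
		panic(err)
	}
	defer os.RemoveAll(dir)

	disk := DiskStore{dir: dir}
	store := NewCachedStore(disk, 1000)

	_ = disk.Put("user_42", []byte("profile data"))

	val, _ := store.Get("user_42") // first read goes to disk
	val, _ = store.Get("user_42")  // second read is served from RAM
	fmt.Println(string(val))
}
```

The point of the sketch is the shape, not the details: the fast tier makes hot reads cheap, but the system keeps working – just more slowly – when the data outgrows the memory, instead of falling over.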

Disk vs Memory

I’m not at Facebook anymore, and I’ve had the opportunity to write some new software based on lessons learned there. I didn’t choose to write an “in-memory” system – I wrote something that uses everything from spinning disk to CPU cache to its best advantage.

In my next post I’ll talk about some of the specific techniques and principles involved in this.