Sizing cache

When I first started working on performance with BDB core, I remember learning that the cache size tuning parameter was the biggest dial you could turn to boost performance for an app.  I envisioned this huge dial, and the temptation was to turn it all the way up to eleven! But there are some subtleties involved here.  Picking that cache size is a bit like Goldilocks and the Three Bears: you don’t want it too small, you don’t want it too large, you want it just right. Fortunately, there’s usually some leeway, though like many things in BDB, it depends a lot on your own situation.

Not too large. First of all, you need to consider how much memory you can afford for the BDB application.  Perhaps your scenario is a pretty much dedicated box for your BDB work.  Even in that case, you’ll want to pick a number somewhat less than physical memory: the OS needs memory, and there are other processes (cron, shells) that may come and go.  For a dedicated box with more than a gig of memory, around 75% of physical memory is a good place to start.  Out of that 75%, subtract the data size of your application (your JVM heap size if you’re in Java) – what you end up with can be used for BDB cache.  That 75% figure gives us a little safety margin.  If you pick a number that is too large, some of your BDB cache may get paged out, and that will lead to very bad performance (and db_stat won’t clue you in to that [1]).
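Here’s a minimal sketch of that calculation using the C API’s DB_ENV->set_cachesize() call.  The memory figures are assumptions for illustration – plug in your own numbers:

    #include <db.h>

    /* A sketch: budget 75% of physical memory, minus the application's
     * own data size, for the BDB cache.  Both figures are assumed. */
    int size_cache_for_dedicated_box(DB_ENV *dbenv)
    {
        u_int64_t phys_mem = 4ULL * 1024 * 1024 * 1024; /* assume a 4GB box */
        u_int64_t app_data = 512ULL * 1024 * 1024;      /* assume 512MB app data */
        u_int64_t budget = (phys_mem * 3 / 4) - app_data;

        /* set_cachesize takes gigabytes and bytes separately, plus the
         * number of cache regions; one region is fine for this sketch. */
        return dbenv->set_cachesize(dbenv,
            (u_int32_t)(budget / (1024ULL * 1024 * 1024)),
            (u_int32_t)(budget % (1024ULL * 1024 * 1024)),
            1);
    }

Call this on your environment handle before DB_ENV->open(), as set_cachesize only takes effect when the cache is created.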

Not too small. If you don’t have a dedicated system for your BDB installation, another alternative is to work from the bottom up.  By default, BDB environments have a cache size of approximately 256K [2].  That’s tiny, but almost guaranteed to work without errors about running out of space on all but the most memory-starved systems.  For most folks, though, that default is really small.

How tiny is 256K?  Let’s suppose you have a database of a million items, each with a 10 byte key and 100 byte data, on 8K disk blocks.  On the face of it, each key/data pair occupies 110 bytes, and with a million of those, that’s 110 megabytes.  There’s some extra overhead, and some free space in each BDB database, so let’s round the total size to 150 megabytes (we can talk about that guesstimate another time).  Another quick way to tell the size of your database is to take a look at the size of your .db file.  Okay, as a first guess, we can say that only one out of every 600 pages in our 150M database fits in that 256K cache (600 * 256K = ~150M).  Yipes!  Doesn’t that mean a cache hit rate of less than 0.2%? Hold on a second, it depends greatly on the locality of your references.  If you have a pretty localized access pattern, perhaps hitting the same keys many times, or maybe hitting nearby keys in the Btree, you’ll actually do a lot better than that.  A LOT.  You can look at your cache hit rate in BDB by running db_stat -m.
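If you’d rather watch the hit rate from inside your application instead of shelling out to db_stat -m, the C API exposes the same counters through DB_ENV->memp_stat().  A sketch, assuming dbenv is an open environment handle:

    #include <stdio.h>
    #include <stdlib.h>
    #include <db.h>

    /* A sketch: compute the cache hit rate from the memory pool
     * statistics -- the same counters that db_stat -m reports. */
    void print_cache_hit_rate(DB_ENV *dbenv)
    {
        DB_MPOOL_STAT *gsp;

        if (dbenv->memp_stat(dbenv, &gsp, NULL, 0) == 0) {
            double hits = gsp->st_cache_hit;
            double misses = gsp->st_cache_miss;
            if (hits + misses > 0)
                printf("cache hit rate: %.1f%%\n",
                    100.0 * hits / (hits + misses));
            free(gsp); /* the stat structure is allocated for us */
        }
    }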

Well, let’s suppose your actual cache hit rate is much better, about 70%. Now that sounds pretty good, doesn’t it?  Not really.  Each of those misses might take 1000 times as long (or more) to process as each hit.  To quote the proverb: disk slow, memory fast.  You’re spending at least 99% of your time waiting for disk.  If you could cut your misses in half (bringing your cache hit rate to 85%), you would about double the speed of the database part of your application.
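The arithmetic behind that claim is worth a quick sketch.  Suppose a hit costs one time unit and a miss costs 1000 (that exact ratio is an assumption for illustration):

    #include <stdio.h>

    /* Back-of-the-envelope effective access cost at two hit rates,
     * assuming a miss costs 1000x a hit. */
    int main(void)
    {
        double hit_cost = 1.0, miss_cost = 1000.0;
        double rates[] = { 0.70, 0.85 };

        for (int i = 0; i < 2; i++) {
            double avg = rates[i] * hit_cost + (1.0 - rates[i]) * miss_cost;
            printf("hit rate %.0f%%: average cost %.1f units\n",
                rates[i] * 100.0, avg);
        }
        return 0; /* prints ~300.7 vs ~150.9: half the misses, twice the speed */
    }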

So how do you improve the cache hit rate?  Well, the easiest way is to pull that lever called the cache size.  As we said, 256K is very small.  How about a megabyte instead? How about 10 megabytes?  Again, it depends on how much locality your application has, but generally speaking, you’ll see steady performance increases up to some plateau. As your databases get larger and larger, that plateau may get higher and higher. Sometimes the plateau goes beyond the memory you have available – so you size as large as you can. But since you’re competing with other memory users on your system, bear in mind the Not too large guideline above.
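One handy way to try different sizes without touching code: BDB reads a DB_CONFIG file from the environment home directory, and it accepts a set_cachesize line.  The 10 megabyte figure here is just an example:

    # DB_CONFIG in the environment home directory
    #             gbytes  bytes     ncache
    set_cachesize 0       10485760  1

Change the number, re-create the environment regions, rerun your test, and compare the db_stat -m output.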

Remember, all of this is rough estimates and rules of thumb.  Your mileage will vary. The best way to know is to experiment with different cache sizes with your application, on your system, with some real data loads.  Having an adequate set of tests, with an adequate quantity and quality of data, is pretty important to finding out how far to turn that dial.

Footnotes

[1] The system utility iostat will help you diagnose this. Also, you might even hear your disk churn.
[2] For the default cache size, see the Berkeley DB source code, function __memp_env_create() in mp/mp_method.c.
