– Full Frontal Nerdity

Clint Byrum's Personal Stuff


We had a fun time this week with Tokyo Tyrant. Recently it has become apparent that MemcacheDB has been all but abandoned. As fantastic as Steve Chu's early work was, the project is in disrepair. That, coupled with the less-than-obvious failover story for its replication, made us seek alternatives.


Brian Aker had mentioned to me at one time that Tokyo Tyrant was way better than MemcacheDB and that we should run it instead. I took notice, and it turns out he’s right! It does basically the same thing, applying the memcache protocol to an on-disk key/value store. However, the code is incredibly clean, well maintained, and runs extremely fast. There’s also a lot more flexibility, with the ability to choose between in-memory or on-disk storage, hash tables, B+trees, etc.

The availability of log-based asynchronous master/master replication (somewhat similar to MySQL’s replication in concept) was probably one of the biggest wins, allowing much simpler failover (just move the IP, or DNS, or whatever) when compared to MemcacheDB’s adherence to BerkeleyDB’s replication setup, which is a single-master system implementing an election algorithm.

Somewhere during the migration, though, we missed one tiny detail. Sometimes the devil is in the details. The passage below is really the only evidence in the documentation that Tokyo Tyrant has support for the memcache protocol, and it is very clear:

Memcached Compatible Protocol

As for the memcached (ASCII) compatible protocol, the server implements the following commands; “set”, “add”, “replace”, “get”, “delete”, “incr”, “decr”, “stats”, “flush_all”, “version”, and “quit”. “noreply” options of update commands are also supported. However, “flags”, “exptime”, and “cas unique” parameters are ignored.

Now, as I said, there’s nothing ambiguous about this. That would have helped, if anyone on my team had ever read it. We installed TokyoTyrant, pointed our basic test code at it, and it worked. This is really a process problem, not so much a technical one. The process must be to assume it won’t work, and test all the different use cases to make sure it works.

Now, why is that bit of the manual important? Well, we use PHP. Specifically, we use the PECL “Memcache” module to access memcache-protocol storage. The Memcache module is mostly oriented toward caching in the original, memory-based memcached. It works great for MemcacheDB too, which simply ignores the exptime parameter. However, MemcacheDB *does not* ignore “flags”.

And therein lies the problem. Users of the PECL Memcache module may not know this, but the flags are *important*. There are two bits in that flags field that the Memcache module may set. Bit 0 is used to indicate whether or not the content has been serialized, and, therefore, on read, must be unserialized. Bit 1 is used to indicate whether or not the content has been gzipped.
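To make those two bits concrete, here is a plain-PHP sketch (no memcache extension required) of what the client does with the flags field on write and read. The helper names are mine, but the bit meanings are the ones described above: bit 0 for serialization, bit 1 for gzip compression (the value behind the module's MEMCACHE_COMPRESSED constant).

```php
<?php
// Illustrative constants matching the two flag bits described above.
const FLAG_SERIALIZED = 1; // bit 0: value was serialize()d
const FLAG_COMPRESSED = 2; // bit 1: value was gzcompress()ed

// What a client does on write: non-scalars get serialized, and the
// fact is recorded in the flags sent along with the value.
function client_encode($value, &$flags) {
    $flags = 0;
    if (!is_scalar($value)) {
        $value = serialize($value);
        $flags |= FLAG_SERIALIZED;
    }
    return $value;
}

// What a client does on read: the flags it gets back tell it how to
// reverse the transformations.
function client_decode($data, $flags) {
    if ($flags & FLAG_COMPRESSED) {
        $data = gzuncompress($data);
    }
    if ($flags & FLAG_SERIALIZED) {
        $data = unserialize($data);
    }
    return $data;
}

// A server that stores flags round-trips an array correctly...
$wire = client_encode(array('a' => 1), $flags);
var_dump(client_decode($wire, $flags)); // the original array

// ...but a server that *ignores* flags hands back flags = 0, so the
// very same bytes come back as a plain serialized string.
var_dump(client_decode($wire, 0));
```

That second var_dump is exactly the Tokyo Tyrant failure mode: the bytes are intact, but the hint about how to interpret them is gone.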

So, while all of the strings that were stored in MemcacheDB and subsequently copied to Tokyo Tyrant worked great, the serialized objects and arrays and the gzipped values were completely inoperative, coming back to the code as raw serialized strings and binary compressed data. The gzipped data was easy to fix (turn off automatic gzip compression). The serialized data took some quick tap dancing to remedy, with code something like this:

class Memcache_BrokenFlags extends Memcache
{
    // Tokyo Tyrant ignores the flags field, so the serialization bit is
    // lost. Optimistically try to unserialize everything, and fall back
    // to the raw value when it wasn't serialized data.
    public function get($key, &$flags = null)
    {
        $v = parent::get($key, $flags);
        $uv = @unserialize($v);
        return $uv === false ? $v : $uv;
    }
}

Luckily our code all uses one Factory method to spawn all “MemcacheDB” connections, so it was easy to substitute this in.

Eventually we can just change the code, segregating things that always serialize from things that don’t, and do the serialization ourselves. This should eventually allow us to use the new tokyo_tyrant module in PECL, which only reliably stores scalars. (I noticed recent versions have added a call to the internal PHP function convert_to_string(). This is, I think, a mistake, but one that still leaves it up to the programmer to explicitly serialize when serialization is desired.)
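The “do the serialization ourselves” plan could look something like the sketch below (the wrapper names are hypothetical). Because every value crosses the wire as a plain string, nothing depends on the flags field any store may or may not preserve:

```php
<?php
// Hypothetical wrappers: always serialize on the way in, always
// unserialize on the way out, so the ignored flags field is irrelevant.
function tt_pack($value) {
    return serialize($value);
}

function tt_unpack($data) {
    return unserialize($data);
}

// Against any memcache-protocol store, usage would be roughly:
//   $mc->set($key, tt_pack($anything));
//   $value = tt_unpack($mc->get($key));

$roundtrip = tt_unpack(tt_pack(array(1, 2, 3)));
var_dump($roundtrip === array(1, 2, 3)); // bool(true)
```

The cost is that even plain strings pay the (small) serialize() overhead; the win is identical behavior across memcached, MemcacheDB, and Tokyo Tyrant.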

This was a pretty big gotcha, and one that illustrates that even though we cowboy coders and sysadmins sometimes get annoyed when those pesky business people ask us for plans, schedules, expected impact, etc., and we keep assuring them we know what’s up, it’s still important to actually know what’s up, and to make sure to RTFMC .. C as in, CAREFULLY.

October 27, 2009 at 4:28 am Comments (0)

TokyoTyrant – MemcacheDB, but without the BDB?

This past April I was riding in a late-model, 2-door rental car with an interesting trio, for sure. On my right sat Patrick Galbraith, maintainer of DBD::mysql and author of the Federated storage engine. Directly in front of me, manning the steering wheel (for those of you keen on spatial description, you may have noted at this point that it’s most likely I was seated in the back-left seat of a car designed to be driven on the right side of the road. EOUF [end of useless fact]), was David Axmark, co-founder of MySQL. Immediately to his right sat Brian Aker, of (most recently) Drizzle fame.

This was one of those conversations that I felt grossly unprepared for. It was the 2009 MySQL User’s conference, and Patrick and I had been hacking on DBD::drizzle for most of the day. We had it 98% of the way there and were in need of food, so we were joining the Drizzle dev team for gourmet pizza.

As we navigated from the Santa Clara conference center to Mountain View’s quaint downtown, Brian, Patrick, and I were discussing memcached stuff. I mentioned my idea for, and subsequent implementation of, the Mogile+Memcached method for storing data more reliably in memcached. I knew in my head why we had chosen to read from all of the replica servers, not just the first one that worked, but in the moment I forgot. (The reason, by the way, is that if one of the servers had missed a write for some reason, you might get out-of-date data.) I guess I was a little overwhelmed by Brian’s mountain of experience with memcached.
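The post doesn't spell out how the freshest copy is picked once you've read from every replica; one way to do it (an assumption on my part, not necessarily what was implemented) is to store a version number or timestamp alongside each value and let the highest one win. A sketch, with plain arrays standing in for the replica servers:

```php
<?php
// Sketch: read a key from *all* replicas, not just the first that
// answers. Each stored value is assumed to carry a 'version' field;
// a replica that missed the write simply lacks the key, or holds an
// older version, and loses to the freshest copy.
function read_all_replicas(array $replicas, $key) {
    $best = null;
    foreach ($replicas as $replica) {
        if (!isset($replica[$key])) {
            continue; // this replica missed the write entirely
        }
        $candidate = $replica[$key]; // array('version' => ..., 'value' => ...)
        if ($best === null || $candidate['version'] > $best['version']) {
            $best = $candidate;
        }
    }
    return $best === null ? null : $best['value'];
}

// Simulated replicas: replica B missed the most recent write.
$a = array('k' => array('version' => 2, 'value' => 'new'));
$b = array('k' => array('version' => 1, 'value' => 'old'));
echo read_all_replicas(array($a, $b), 'k'); // new
```

Reading only the first replica that responded could have returned 'old' here, which is exactly the staleness problem reading everything avoids.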

Anyway, the next thing I mentioned was that we had also tried MemcacheDB with some success. Brian wasn’t exactly impressed with MemcacheDB, and immediately suggested that we should be using Tokyo Tyrant instead. I had heard of Tokyo Cabinet, the new hotness in local key/value storage and retrieval, but what is this Tyrant you speak of?

I’ve been playing with Tokyo Tyrant ever since, and advocating for its use at Adicio. It’s pretty impressive. In addition to speaking the memcached protocol, it apparently speaks HTTP/WebDAV too. The ability to select hash, B+tree, and a host of other storage options is nice, though I’m sure some of these are available as obscure options to BerkeleyDB as well.
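For reference, that engine selection happens right on the ttserver command line, via the extension of the database name it is given. A sketch (the port and tuning values here are illustrative, not recommendations):

```shell
# The database name's extension picks the storage engine:
ttserver -port 1978 casket.tch     # on-disk hash database
ttserver -port 1978 casket.tcb     # on-disk B+tree database
ttserver -port 1978 '*'            # in-memory hash database

# Tuning parameters can be appended after '#', e.g. a larger bucket
# array plus deflate compression on the hash database:
ttserver -port 1978 'casket.tch#bnum=1000000#opts=ld'
```

The same port serves the native binary protocol, the memcached protocol, and HTTP, which is what makes it a drop-in target for the PECL Memcache module.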

Anyway, I was curious what performance was like, so I did some tests on my little Xen instance, and came up with pretty graphs.


I used the excellent Brutis tool to run these benchmarks on the platform most interesting to me at the moment: PHP with the PECL Memcache module.

These numbers were specifically focused on usage that is typical of MemcacheDB: a wide range of keys (in this case, 10,000 is “wide” since the testing system is very small), not-small items (2k or so), and a low write:read ratio (1:50). I had the tests restart each daemon after each run, and these numbers are the averages of 3 runs per test.

I also tried these from another Xen instance on the same LAN, and things got a lot slower. I’m not really sure why, as latency is in the sub-millisecond range.. maybe Xen’s networking just isn’t very fast. Either way, the relative numbers for each combination didn’t change much.

What I find interesting is that MemcacheDB in no-sync mode actually went faster than memcached. Of course, in no-sync mode, MemcacheDB is just throwing data at the disk. It doesn’t have to maintain an LRU or slabs or anything.

Tokyo Tyrant was very consistent, and used *very* little RAM in all instances. I do recall reading that it compresses data; maybe that’s a default? Tokyo Tyrant was also the most CPU-hungry of the bunch, so I have to assume that having more cores might have produced much better results.

I’d like to get together a set of 3 or 4 machines to test multiple client threads, and replication as well. I’ll post that as part 2 when I pull it together.

In case anybody wants to repeat these tests, I’ve included the results and the scripts used to generate them in this tarball.

– Additional info, 6/4/2009
Another graph that some might find interesting is this one detailing CPU usage. During all the tests, Brutis used about 60% of the CPU available on the machine, so 40% here is really 100%:


This tells me that the CPU was the limiting factor for Tokyo Tyrant, and with a multi-core machine, we should see huge speed improvements. Stay tuned for those tests!

June 4, 2009 at 6:40 am Comments (0)