SpamapS.org – Full Frontal Nerdity

Clint Byrum's Personal Stuff

But will it scale? – Taking Limesurvey horizontal with juju…

One of the really cool things about using the cloud, and especially juju, is that it instantly enables things that often take a lot of thought to even try out in traditional environments. While I was developing some little PHP apps “back in the day”, I knew eventually they’d need to go to more than one server, but testing them for that meant, well, finding and configuring multiple servers. Even with VMs, I had to go allocate one and configure it. Oops, I’m out of time; throw it on one server, pray, move on to the next task.

This left a very serious question in my mind.. “When the time comes, will my app actually scale?”

Have I forgotten some huge piece to make sure it is stateless, or will it scale horizontally the way I intended it to? Things have changed though, and now we have the ability to start virtual machines via an API on several providers, and actually *test* whether our app will scale.

This brings us to our story. Recently, Nick Barcet created a juju charm for Limesurvey. This is a really cool little app that lets users create rich, multi-faceted surveys and invite the internet to vote on things, answer questions, etc. etc. This is your standard “LAMP” application, and it seems written in a way that will allow it to scale out.

However, when Nick submitted the charm for the official juju charms collection, I wanted to see if it actually would scale the way I knew LAMP apps should. So, I fired up juju on ec2, threw in some haproxy, and related it to my limesurvey service, and then started adding units. This is incredibly simple with juju:

juju deploy --repository charms local:mysql
juju deploy --repository charms local:limesurvey
juju deploy --repository charms local:haproxy
juju add-relation mysql limesurvey
juju add-relation limesurvey haproxy
juju add-unit limesurvey
juju expose haproxy

Lo and behold, it didn’t scale. There were a few issues with the default recommendations of limesurvey that Nick had encoded into the charm. These were simple things, like assuming that the local hostname would be the hostname people use to access the site.

Once that was solved, there were some other scaling problems immediately revealed. First on the ticket was that Limesurvey, by default, uses MyISAM for its storage engine in MySQL. This is a huge mistake, and I can’t imagine why *anybody* would use MyISAM in a modern application. MyISAM uses a “whole table” locking scheme for both reads and writes, so whenever anything writes to any part of the table, all reads and writes must wait for that to finish. InnoDB, which has shipped with MySQL since the 3.23 days and is the default storage engine in MySQL 5.5 and later, doesn’t suffer from this problem, as it implements MVCC and row-level locks to allow concurrent reads and writes.
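
The fix is just an ALTER TABLE per table. Something like this rough sketch would do it on an existing install (the credentials, database name, and the assumption that every MyISAM table in the schema should be converted are mine, not anything from the charm or the bug):

<?php
// Hedged sketch: convert every MyISAM table in a LimeSurvey database to InnoDB.
// Host, credentials, and database name below are placeholders.
$db = new mysqli('localhost', 'limesurvey', 'secret', 'limesurvey');
$result = $db->query(
    "SELECT TABLE_NAME FROM information_schema.TABLES
     WHERE TABLE_SCHEMA = 'limesurvey' AND ENGINE = 'MyISAM'");
while ($row = $result->fetch_assoc()) {
    // ALTER TABLE rewrites the table in place; each one may take a moment.
    $db->query(sprintf('ALTER TABLE `%s` ENGINE=InnoDB', $row['TABLE_NAME']));
}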

The MyISAM locks caused request timeouts when I pointed siege at the load balancer, because too many requests were stacking up waiting for updates to complete before even reading from the table. This is especially critical for something like the session storage that Limesurvey keeps in the database, as it effectively meant that only one user could do anything at a time with the database.

Scalability testing in 10 minutes or less, with a server investment of about $1 US. Who knew it could be this easy? Granted, I stopped at three app server nodes, and we didn’t even get to scaling out the database (something Limesurvey doesn’t really have native support for). But those are problems that have already been solved and encoded in charms. Now we just have to suggest small app changes so users can take advantage of all the well-known best practices sitting in those charms.

(Check the bug comments for the results; I’d be interested to see somebody repeat the test.)

So, in a situation where one needs to deploy now and scale later, I think juju will prove quite useful. It should be on the radar of anybody who wants to get off the ground quickly.

December 23, 2011 at 1:41 am Comments (0)

So what is Ensemble anyway?

Have you heard of Ensemble? Are you excited about Cloud/Service Orchestration? What? Ok you’re not alone if you are scratching your head.

Ensemble is an implementation of a new idea that has been taking shape the last couple of years. Ever since Amazon hooked up a remote API to thousands of machines to provide access to their virtual infrastructure (and called it macaroni? err.. AWS), people have been dreaming up ways to take advantage of what is basically a robotic “NOC guy”. No longer do you have to pre-rack servers or call your vendor frantically to get servers sent next-day to your colo. Right?

Naturally, the system administrators who would normally be in charge of racking servers applied their existing tools to the job, with mixed success. Config management is really good at modelling identical hosts, but with virtual hosts instantly available, it left those thinking at a higher level wanting more. Chef in particular implemented a nice set of tools and functionality to allow this kind of high-level “service” definition, with its knife tool and simple Ruby API.

But how easy are Chef’s cookbooks to share and use without modification? How easy are they to integrate with one another? Puppet has modules that are capable of similar functionality, and the recent integration of MCollective, plus Puppet Faces, has certainly added a lot of what Chef had to support this kind of application modelling. But again, the modules seem to require a lot of convention, assumption, and tweaking to be useful.

It’s my opinion that this is very much like the way tarballs+autoconf became the de facto standard for distributing free software. It was *so much* better than writing a Makefile by hand, and it achieved an enormous amount of portability, so developers adopted it rapidly. In fact, it is still the dominant way to distribute portable open source applications.

But at some point, the limitations of this became clear. There was a need for something more concise that could distribute both the source and binaries built for a platform. There was some limited early success with tarballs built by convention, but then: enter RPM and dpkg. These included ways to express facts about software, like its dependencies, architecture, and the revisions made to it to work on the target platform. This allowed distributors of software to more easily maintain their systems, and enabled users to manage the software in their environments.

At that point, some smart guy figured out that we should be able to download and automatically configure all of the software needed for one application to work properly, just from its packaging information. To my mind, apt-get was my first experience with this, though FreeBSD ports authors may disagree there. Either way, this made it very easy for admins and users to install software without spending hours in the 7 levels of dependency hell.

In many ways, Service Orchestration is a way of bringing the benefits of packaging to the cloud. It should allow us to build out our cloud in a sane way, taking advantage of the knowledge that has been gained by others. For the bits that we need to finely tune, it should step aside and allow that without compromising the system.

Ensemble is an implementation of this idea, and Principia is a collection of “Formulas” for Ensemble. They are tightly coupled to Ubuntu, as they are in many ways meant to be the dpkg and apt-get for Ubuntu in the cloud.

It’s pretty easy to try out Ensemble and Principia on Ubuntu. Right now you’ll need an EC2 account with an access key set up, though we’re working on making this work with just your local machine for rapid development.

It’s been pointed out to me that the version of principia-tools that was available at the time of this writing didn’t include /usr/share/principia-tools/tests. I’ve uploaded a fixed version to the ensemble PPA, so if you tried these instructions and failed, please try updating principia-tools. If that fails, you can get the tests with bzr branch lp:principia-tools.


sudo add-apt-repository ppa:ensemble/ppa
sudo apt-get update
sudo apt-get install principia-tools
export AWS_SECRET_ACCESS_KEY=xxxxxxxxxxxxxxxx
export AWS_ACCESS_KEY_ID=0123456789ABCDEF
ensemble bootstrap
principia getall /some/path/for/formulas
/usr/share/principia-tools/tests/mediawiki.sh /some/path/for/formulas

What does this give you? Well, it should give you a 7-node mediawiki cluster of t1.micros in the us-east-1 region of EC2. I just ran it and now I have this:

machines:
  0: {dns-name: ec2-50-19-158-109.compute-1.amazonaws.com, instance-id: i-215dd84f}
  1: {dns-name: ec2-50-17-16-228.compute-1.amazonaws.com, instance-id: i-8d58dde3}
  2: {dns-name: ec2-72-44-49-114.compute-1.amazonaws.com, instance-id: i-9558ddfb}
  3: {dns-name: ec2-50-19-47-106.compute-1.amazonaws.com, instance-id: i-6d5bde03}
  4: {dns-name: ec2-174-129-132-248.compute-1.amazonaws.com, instance-id: i-7f5bde11}
  5: {dns-name: ec2-50-19-152-136.compute-1.amazonaws.com, instance-id: i-755bde1b}
  6: {dns-name: '', instance-id: i-4b5bde25}
services:
  demo-wiki:
    formula: local:mediawiki-62
    relations: {cache: wiki-cache, db: wiki-db, website: wiki-balancer}
    units:
      demo-wiki/0:
        machine: 2
        relations: {}
        state: null
      demo-wiki/1:
        machine: 6
        relations: {}
        state: null
  wiki-balancer:
    formula: local:haproxy-13
    relations: {reverseproxy: demo-wiki}
    units:
      wiki-balancer/0:
        machine: 4
        relations: {}
        state: null
  wiki-cache:
    formula: local:memcached-10
    relations: {cache: demo-wiki}
    units:
      wiki-cache/0:
        machine: 3
        relations: {}
        state: null
      wiki-cache/1:
        machine: 5
        relations: {}
        state: null
  wiki-db:
    formula: local:mysql-93
    relations: {db: demo-wiki}
    units:
      wiki-db/0:
        machine: 1
        relations: {}
        state: null

At the top you see the machines that ensemble spun up in EC2 in the ‘machines’ section. The numbers there correspond to the ‘machine: #’ in the service/units definitions below. If you look through, you’ll see above that wiki-balancer is machine 4, which has a hostname of ec2-174-129-132-248.compute-1.amazonaws.com. If you go to that hostname, once all relations are up (I like to use ‘watch ensemble status’ to see when this happens), you should see a working mediawiki. But not just a working mediawiki, a scalable one. If you want to pour on the traffic, spin up 3 more demo-wiki units to handle the app server load:


ensemble add-unit demo-wiki
ensemble add-unit demo-wiki
ensemble add-unit demo-wiki

These will of course take a minute or two to spin up. Once they’re ready they’ll show up in the status output:

services:
  demo-wiki:
    formula: local:mediawiki-62
    relations: {cache: wiki-cache, db: wiki-db, website: wiki-balancer}
    units:
      demo-wiki/0:
        machine: 2
        relations:
          cache: {state: up}
          db: {state: up}
          website: {state: up}
        state: started
      demo-wiki/1:
        machine: 6
        relations:
          cache: {state: up}
          db: {state: up}
          website: {state: up}
        state: started
      demo-wiki/2:
        machine: 7
        relations:
          cache: {state: up}
          db: {state: up}
          website: {state: up}
        state: started
      demo-wiki/3:
        machine: 8
        relations:
          cache: {state: up}
          db: {state: up}
          website: {state: up}
        state: started
      demo-wiki/4:
        machine: 9
        relations:
          cache: {state: up}
          db: {state: up}
          website: {state: up}
        state: started

How about a little test then? After I got to this point, I logged in as WikiSysop (change the password, folks! it’s change-me) and imported the Wikipedia exports for “Ubuntu” and “EC2”. After that I used harvestman to spider the site and then saved all the urls in a file, urls.txt. Alright! Now let’s fire up *siege* from a machine outside the cluster, but in the same availability zone / security group (so at least we’re only dealing with EC2’s latency and not my net connection), and see if we can take this cluster down!


$ siege -i -c 5 -f urls.txt
...
Transactions: 563 hits
Availability: 100.00 %
Elapsed time: 95.58 secs
Data transferred: 2.64 MB
Response time: 0.35 secs
Transaction rate: 5.89 trans/sec
Throughput: 0.03 MB/sec
Concurrency: 2.04
Successful transactions: 544
Failed transactions: 0
Longest transaction: 13.54
Shortest transaction: 0.00

This is, btw, the best run I got out of t1.micros. Sometimes it would get quite ugly:


Transactions: 892 hits
Availability: 99.55 %
Elapsed time: 221.69 secs
Data transferred: 3.64 MB
Response time: 0.61 secs
Transaction rate: 4.02 trans/sec
Throughput: 0.02 MB/sec
Concurrency: 2.45
Successful transactions: 849
Failed transactions: 4
Longest transaction: 27.41
Shortest transaction: 0.00

Let’s try the whole thing over with m1.small. First I edit ~/.ensemble/environments.yaml and add an override for the default-instance-type:


ensemble: environments

environments:
  sample:
    type: ec2
    default-instance-type: m1.small
    control-bucket: ensemble-12345678901234567890
    admin-secret: xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx

Then I re-run the whole test:


Transactions: 290 hits
Availability: 98.98 %
Elapsed time: 81.79 secs
Data transferred: 0.78 MB
Response time: 0.53 secs
Transaction rate: 3.55 trans/sec
Throughput: 0.01 MB/sec
Concurrency: 1.89
Successful transactions: 277
Failed transactions: 3
Longest transaction: 1.50
Shortest transaction: 0.00

Oops! I forgot to add my 3 extra nodes. Note that these two m1.smalls are already almost keeping up. Now as I add them, I keep siege running. It’s pretty cool to watch the response times drop as nodes come online to carry some of the load.

Now with 5 m1.smalls:


Transactions: 273 hits
Availability: 100.00 %
Elapsed time: 54.27 secs
Data transferred: 0.99 MB
Response time: 0.47 secs
Transaction rate: 5.03 trans/sec
Throughput: 0.02 MB/sec
Concurrency: 2.38
Successful transactions: 260
Failed transactions: 0
Longest transaction: 19.92
Shortest transaction: 0.00

And with concurrency raised from 5 to 10:


Transactions: 327 hits
Availability: 100.00 %
Elapsed time: 42.20 secs
Data transferred: 1.30 MB
Response time: 0.66 secs
Transaction rate: 7.75 trans/sec
Throughput: 0.03 MB/sec
Concurrency: 5.12
Successful transactions: 318
Failed transactions: 0
Longest transaction: 25.51
Shortest transaction: 0.00

And now if we add 2 more, for a total of 7 nodes, concurrency of 10 gets even better:


Transactions: 531 hits
Availability: 100.00 %
Elapsed time: 53.37 secs
Data transferred: 1.75 MB
Response time: 0.44 secs
Transaction rate: 9.95 trans/sec
Throughput: 0.03 MB/sec
Concurrency: 4.35
Successful transactions: 507
Failed transactions: 0
Longest transaction: 15.49
Shortest transaction: 0.00

And with 2 more (total of 9 units in demo-wiki serving the app):


Transactions: 354 hits
Availability: 100.00 %
Elapsed time: 34.41 secs
Data transferred: 1.23 MB
Response time: 0.41 secs
Transaction rate: 10.29 trans/sec
Throughput: 0.04 MB/sec
Concurrency: 4.22
Successful transactions: 337
Failed transactions: 0
Longest transaction: 11.45
Shortest transaction: 0.00

Anyway, this isn’t a Mediawiki benchmark. This is to show you how easy it is to scale up and down in response to load with Ensemble. We all know that scaling out works; these graphs show it nicely:

[Graphs: Response Time and Transactions per Second]

Notice how the transactions per second kept going up, but the response time jumped drastically with the jump in concurrency. This is where you need the ability to scale quickly, and where, if you can live with the other limitations of EC2 or any other IaaS provider, the cloud should actually win you business, since better response time means more happy users.

Now that my siege is over, I can safely remove the unnecessary units one by one with ‘ensemble remove-unit demo-wiki/9’, etc. etc. There’s still a lot of room for sugar to be added. We could say ‘ensemble resize-service demo-wiki 5’ and it might just pick 5 to keep and remove the rest, or add 3 to fulfill the request. There are also a ton of other ideas just bubbling up that are really exciting.

Come say hi and hack on Ensemble with us in #ubuntu-ensemble on Freenode and on the mailing list.

June 3, 2011 at 6:53 pm Comments (0)

Gearman K.O.’s mysql to solr replication

Ding ding ding… in this corner, wearing black shorts and a giant schema, we have over 11 million records in MySQL with a complex set of rules governing which must be searchable and which must not be. And in that corner, we have the contender, a kid from the back streets, outweighed and outreached by all his opponents, but still victorious in the queue shootout, with just open source and 12 patch releases… written in C, it’s gearman!


I’m pretty excited today, as I’m preparing to go live with the first real, high-load application of Gearman that I’ve written. What is it, you say? Well, it is a simple trigger-based replicator from MySQL to SOLR.

I should say (because I know some of my colleagues read this blog) that I don’t actually believe in this design. Replication using triggers seems fraught with danger. It totally makes sense if you have a giant application and can’t track down everywhere that a table is changed. However, if your app is simple and properly abstracted, hopefully you know the 1 or 2 places that write to the table.

I should also say that I really can’t reveal all of the details. The general idea is pretty simple. Basically we have a trigger that dumps a primary key into gearman via the gearman MySQL UDFs. The idea is just to tell a gearman worker “look at this record in that table”.
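
Since I can’t show the real trigger, here’s roughly what the UDF call amounts to, expressed in PHP via the gearman pecl extension; the server name, function name, and primary key are made up for illustration:

<?php
// Hedged sketch: a background job whose entire payload is a primary key.
// This is the PHP equivalent of what the trigger's UDF call does.
$client = new GearmanClient();
$client->addServer('gearmanbox');               // the gearmand host
$client->doBackground('solr_sync', '1234567');  // "go look at record 1234567"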

Once the worker picks it up, it applies some logic to the record: “should this be searchable or not?” If the answer is yes, the worker pushes the record into SOLR. If not, the worker makes sure it is not in SOLR.
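
A stripped-down version of that worker, again using the gearman pecl extension, looks roughly like this. The function name and the fetchRecord()/shouldBeSearchable()/solrAdd()/solrDeleteById() helpers are placeholders standing in for the real (unrevealable) logic:

<?php
// Hedged sketch of the worker: take a primary key, decide whether the record
// should be searchable, then add it to or purge it from SOLR accordingly.
function solr_sync(GearmanJob $job)
{
    $id  = (int) $job->workload();      // the payload is just the primary key
    $row = fetchRecord($id);            // re-read the row from MySQL
    if ($row && shouldBeSearchable($row)) {
        solrAdd($row);                  // POST the document to SOLR's update handler
    } else {
        solrDeleteById($id);            // make sure it is not in the index
    }
}

$worker = new GearmanWorker();
$worker->addServer('gearmanbox');
$worker->addFunction('solr_sync', 'solr_sync');
while ($worker->work()) {
    // one job per iteration, forever
}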

This at least is pretty simple. The end result is a system where we can rebuild the search index in parallel using multiple CPUs (thank you to solr/lucene for being able to update indexes concurrently and efficiently, btw). This is done by pushing all of the records in the table into the queue at once.

Anyway, gearmand is performing like a champ, and libgearman and the gearman pecl module are doing great. I’m just really happy to see gearman rolled out in production, as I really do think it has that nice mix of simplicity and performance. I love the command-line client, which makes it easy to write scripts to inject things into queues or query workers. This allows me to access a worker like this:

$ gearman -h gearmanbox -f all_workers -s
Known Workers: 11

boxname_RealTimeUpdate_Queue_TriggerWorker_1 jobs=627366,restarts=0,memory_MB=4.27,lastcheckin=Tue, 23 Mar 2010 22:37:59 -0700
boxname_RealTimeUpdate_Queue_Subject_13311 jobs=304134,restarts=0,memory_MB=7.03,lastcheckin=Tue, 23 Mar 2010 22:37:58 -0700
boxname_RealTimeUpdate_Queue_Subject_13306 jobs=606126,restarts=0,memory_MB=7.03,lastcheckin=Tue, 23 Mar 2010 22:37:59 -0700
boxname_RealTimeUpdate_Queue_Subject_13314 jobs=576714,restarts=0,memory_MB=7.03,lastcheckin=Tue, 23 Mar 2010 22:37:59 -0700
boxname_RealTimeUpdate_Queue_Subject_13342 jobs=294846,restarts=0,memory_MB=7.03,lastcheckin=Tue, 23 Mar 2010 22:37:59 -0700
boxname_RealTimeUpdate_Queue_Subject_13347 jobs=376998,restarts=0,memory_MB=7.03,lastcheckin=Tue, 23 Mar 2010 22:37:59 -0700
boxname_RealTimeUpdate_Queue_Subject_13359 jobs=470508,restarts=0,memory_MB=7.03,lastcheckin=Tue, 23 Mar 2010 22:37:58 -0700
boxname_RealTimeUpdate_Queue_Subject_13364 jobs=403182,restarts=0,memory_MB=7.03,lastcheckin=Tue, 23 Mar 2010 22:37:58 -0700
boxname_RealTimeUpdate_Property_SolrPublish_ jobs=219630,restarts=0,memory_MB=6.19,lastcheckin=Tue, 23 Mar 2010 22:37:59 -0700
boxname_RealTimeUpdate_Queue_TriggerWorker_2 jobs=393642,restarts=0,memory_MB=4.27,lastcheckin=Tue, 23 Mar 2010 22:37:59 -0700
boxname_RealTimeUpdate_Property_SolrBatchPub jobs=6,restarts=0,memory_MB=6.23,lastcheckin=Tue, 23 Mar 2010 22:37:28 -0700

Brilliant… no need for HTML or HTTP, just a nice simple command-line interface.

I think gearman still has a ways to go. I’d really like to see some more administration added to it. Deleting empty queues and quickly flushing all queues without restarting gearmand would be nice-to-haves. We’ll see what happens going forward, but for now, thanks so much to the gearman team (especially Eric Day, who showed me gearman, and Brian Aker, for pushing hard to release v0.12).

w00t!


March 24, 2010 at 5:47 am Comments (0)

How do you do, that voodoo, that Queues Do?

Queues seem to be all over the place right now. Maybe it’s like when I wanted a VW GTi VR6 a few years back. I kept seeing them pass me on the freeway and thought “crap, everybody is getting this hot new thing and I’m missing out!”.

I think everybody at one point looked at MySQL and thought, “that would work fine as a queue system”. For low-volume stuff, it *is* fine. But then somebody grabs your little transactional, relational, reliable queue system and plugs 5 million messages per hour through it, and somewhere, a man named Heikki cries.

So then you start to look around… and for those of us who have meager budgets and tend to use open source, there aren’t a lot of choices. The guys at Second Life did some research for all of us. Once you get through that, though, you realize that the needs of Second Life, an MMORPG, are quite a bit different from your average web app.

So, without further ado, my “queue” system round up.

  • ActiveMQ – This shining star of the queueing world seems to come up quickly in conversation. At Adicio, we actually gave it a good try. The main problem was, we’re a PHP shop. The PHP accessibility comes not through the normal Java Message Service connector, but “STOMP”.

    Honestly, I’m not a big fan of these giant Apache-sponsored Java projects. SOLR has changed my mind a bit, as it seems to work well and doesn’t really crash. Then again, I’m not carrying a pager anymore, so maybe it does suck and I’m just not seeing it.

    Anyway, at first, ActiveMQ was winning me over. It was pretty quick, had a pretty simple setup curve (just start up the latest version, and you have a working persistent queue system), and despite having mountains of documentation that reads like the text spammers shove into their emails randomly to pass Bayesian filters, it made sense.

    However, its fall was pretty quick, as the first problem we hit was its producer throttling. This probably works fine when you’re using the JMS connector. However, with STOMP, when ActiveMQ decides your queue is too full and it needs you to stop, it just stops acking your packets. Your STOMP client blocks (or spins, in non-blocking mode) and you wait. This is made worse by the fairly naive PHP STOMP driver, which doesn’t really check to see why its write failed, or even try to recover.

    Things got better when that was disabled, but the STOMP driver was still haphazard. After figuring out that the Master/Slave protocol requires one to shut down the slave whenever failing back to a downed master, I had had enough. Sayonara, ActiveMQ.

  • RabbitMQ – This one seems to be a favorite of many. My experience is limited, and I really haven’t tried it that much. It’s written in Erlang, which I guess automatically makes something “telco reliable”. Cool.
  • QPID – Wow, this one is supposedly INCREDIBLE. “500,000 messages per second per LUN.” WOW. It also has Red Hat’s backing, which is a big win for me.
    In fact, as I write this, I’m doing my best to build and install the latest qpid on CentOS 5.4.

     gcc -DHAVE_CONFIG_H -I. -I. -I./src/config -I./include/ -I/usr/src/redhat/BUILD/xerces-c-src_2_8_0/src -I./src/lexer/ -D_GNU_SOURCE -D_REENTRANT -O2 -g -m64 -mtune=generic -MT mapm_add.lo -MD -MP -MF .deps/mapm_add.Tpo -c src/mapm/mapm_add.c  -fPIC -DPIC -o .libs/mapm_add.o
    ...
    

    In case you’re familiar, I’m there. Oops, that’s not qpid. That’s xerces-c, which I have to build… and I also have to build xqilla after that. Luckily, 40 other packages required to build qpid were available in the standard CentOS yum repository.

    Another unfortunate reality is that there is no qpid connectivity available for PHP, unless the php-amqp module works. It’s really not clear yet.
    Anyway, this looks like a promising messaging technology. However, this much software leaves a lot of room for things to break… so, while I will probably complete the build, as I want to find out how it stacks up to the others in terms of simplicity and performance, I think this one is dead.

  • Gearman – OK, I’m going to say it up front: I like this one. It’s really not a “queue” system per se. The name is an anagram of ‘manager’ (say that 5 times fast!). It’s one of those great things that came out of the Danga group, the same people who created MogileFS and Memcached.

    Call me stupid, but I like to be able to read things. QPID is in C++, and is so big I don’t even know where to start. Java gives me the shivers, and I don’t even know what Erlang looks like. But damn, who doesn’t like poring over well-written C? That’s pretty much what the new C port of gearmand is.

    I’m especially fond of the ease with which one can write a persistence layer. I recently submitted code to make the Tokyo Cabinet queue store better. It’s a simple B+Tree store that everybody’s going crazy about these days. It’s also written in really nice C.

    The built-in ability for gearman clients/workers (producers/consumers) to have a two-way conversation is especially appealing. It’s not like they can just freely pass messages back and forth, but clients can choose to wait for the job they submitted to complete. They can also check on the status of the job fairly easily. Workers can send back two integers (a numerator and a denominator), which is particularly useful for sending back a count of things done over the count of things to do (there’s a quick sketch of this after the list).

    Combine all this cool stuff with the dead simple ‘gearman’ command line client, and you have a happy Clint. I wrote a little PHP worker that just sits around collecting data sent to it by the other workers running. When it receives a “show_all_workers” message (function in gearman-ese), it just spits back a text report of what it knows. This can be triggered by just saying:

    $ gearman -s -f show_all_workers
    
    Known Workers: 5
    
    dev3.adicio.com_Adicio_App_Reverse_Worker_29336 jobs=26508,restarts=0,memory_MB=1.47,lastcheckin=Thu, 21 Jan 2010 15:33:32 -0800
    dev3.adicio.com_Adicio_App_Reverse_Worker_29333 jobs=19194,restarts=0,memory_MB=1.47,lastcheckin=Thu, 21 Jan 2010 15:33:32 -0800
    dev3.adicio.com_Adicio_App_Reverse_Worker_29356 jobs=29208,restarts=0,memory_MB=1.47,lastcheckin=Thu, 21 Jan 2010 15:33:32 -0800
    dev3.adicio.com_Adicio_App_Reverse_Worker_29370 jobs=27638,restarts=0,memory_MB=1.47,lastcheckin=Thu, 21 Jan 2010 15:33:32 -0800
    dev3.adicio.com_Adicio_App_Reverse_Worker_29332 jobs=10636,restarts=0,memory_MB=1.47,lastcheckin=Thu, 21 Jan 2010 15:33:32 -0800
    
    $
    

    This is pretty damn cool. Now double the fun with the MySQL UDFs, and you have a workable solution for queueing via MySQL triggers.

    So, I can’t help but give this one the nod for simplicity of design. There are no massive books written to explain what gearman does. Just a nice easy C library, and perhaps one of the most important things, a really useful PHP extension.
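
Here’s the status sketch I promised above: a worker reports progress with sendStatus(), and a client can poll jobStatus() on the handle of a background job. The server name, function name, and record count are made up for illustration; this is the pecl gearman API, not a complete worker.

<?php
// Hedged sketch of gearman's two-way conversation.
// Worker side: the job callback reports progress as numerator/denominator.
function reindex(GearmanJob $job)
{
    $total = 1000;                        // pretend there are 1000 records to do
    for ($done = 1; $done <= $total; $done++) {
        // ... process one record ...
        $job->sendStatus($done, $total);  // things done / things to do
    }
}

// Client side: submit a background job, then poll its status by handle.
$client = new GearmanClient();
$client->addServer('gearmanbox');
$handle = $client->doBackground('reindex', '');
list($known, $running, $done, $total) = $client->jobStatus($handle);
printf("%d/%d complete\n", $done, $total);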


January 22, 2010 at 8:32 am Comments (0)

Bromine and Selenium – second and third most useful elements behind Oxygen

If you’re an engineer, you hate testing. Seriously, who likes doing what those mere mortal “users” do? We’re POWER users and we don’t need to use all those silly features on all those sites. Just look at Craigslist, clearly an engineer’s dream tool.

For web apps, testing actually isn’t *that* hard. The client program (the browser) is readily available on every platform known to man, and web apps generally don’t do much more than store and retrieve data in clever ways. So, it’s not like we have to fire up a Large Hadron Collider to observe the effects of our web app.

Therein lies the problem though, as clicking around on web forms and entering the same email address, password, address, phone number, etc. etc., 100 times, is BORING.

Enter Selenium. This amazing little tool has been on the scene for a little while now, but it’s just now getting some momentum. Click through to the website and watch “the magic”, as they put it, but basically here’s how it goes:

  • open their Firefox plugin and click ‘record’
  • do something
  • click ‘record’ again.

Then just save this little test case to a file, and the next time you change anything that might relate to the series of clicks and data entries you just made, run this test again. There are all kinds of assertions you can make while you’re doing something. Like ‘Make sure the title is X’ or ‘make sure a link to Y exists’.

But wait, I could have done that with something like Test::More, PHPUnit, or lime. Where’s the real benefit?

Well, because Selenium remotely controls your browser, all those gotchas regarding JavaScript and CSS incompatibilities come into play here, because Selenium can control Internet Explorer, Firefox, *and* Safari. In fact it can also control Opera and, according to their website, any browser that properly supports JavaScript.

This is really a nice evolutionary step for web shops, as tools like this generally are OS-specific and cost a lot of money. Once again open source software appears where a need becomes somewhat ubiquitous.

You can even take it a step further. The next thing that generally happens in a web dev shop when they get bigger than 20 or 30 people is they hire people who actually like testing. Well not really, but they dislike it *less* than software engineers. These are QA engineers. And they DO like things to be orderly and efficient.

Bromine is the answer for that. It’s still pretty rough around the edges, but it gets the job done.

Again check out their website and watch the screencast, but basically it goes like this:

  • Write selenium tests as specified above
  • Upload tests to Bromine server
  • Attach tests to requirements
  • Run Selenium Remote Control on all required OS/browser version combinations (can you say VirtualBox?)
  • Run tests

Another nice thing about using Bromine is that now you are running your tests in a server-side language, not just the Selenium IDE, which is limited to the IDE’s generated “Selenese” test commands. The IDE exports your basic test into PHP or Java, and then on the Bromine server you can do interesting things, like check an IMAP box for an email, run a backend process, or send an SMS.
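
To make that concrete, here’s roughly what an exported PHP test looks like once it grows a server-side check. This is a sketch using PHPUnit’s Selenium extension, not Bromine’s actual harness, and the URLs, locators, and mail credentials are all made up:

<?php
// Hedged sketch: an IDE-exported Selenium test, plus one check that only a
// server-side language can do (peeking at an IMAP inbox).
class SignupTest extends PHPUnit_Extensions_SeleniumTestCase
{
    protected function setUp()
    {
        $this->setBrowser('*firefox');
        $this->setBrowserUrl('http://dev.example.com/');
    }

    public function testSignupSendsWelcomeEmail()
    {
        // The part the Selenium IDE records and exports for you:
        $this->open('/signup');
        $this->type('email', 'test@example.com');
        $this->click('submit');
        $this->waitForPageToLoad('30000');
        $this->assertEquals('Welcome', $this->getTitle());

        // The part Selenese alone can't do: check that the welcome email landed.
        $inbox = imap_open('{mail.example.com:143}INBOX', 'test@example.com', 'secret');
        $this->assertTrue(is_array(imap_search($inbox, 'SUBJECT "Welcome"')));
        imap_close($inbox);
    }
}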

At first it may not seem like much, but eventually you end up with a multitude of useful tests for your web app that can be run all the time against development branches before release, and catch many problems. Quality means happier users, which hopefully means loyal users that keep coming back.


November 3, 2009 at 1:48 am Comments (0)

TokyoTyrant – MemcacheDB, but without the BDB?

This past April I was riding in a late-model, 2-door rental car with an interesting trio for sure. On my right sat Patrick Galbraith, maintainer of DBD::mysql and author of the Federated storage engine. Directly in front of me, manning the steering wheel (for those of you keen on spatial description, you may have noted at this point that it’s most likely I was seated in the back, left seat of a car which is designed to be driven on the right side of the road. EOUF [end of useless fact]), David Axmark, co-founder of MySQL. Immediately to his right sat Brian Aker, of (most recently) Drizzle fame.

This was one of those conversations that I felt grossly unprepared for. It was the 2009 MySQL Users Conference, and Patrick and I had been hacking on DBD::drizzle for most of the day. We had it 98% of the way there and were in need of food, so we were joining the Drizzle dev team for gourmet pizza.

As we navigated from the Santa Clara conference center to Mountain View’s quaint downtown, Brian, Patrick, and I were discussing memcached stuff. I mentioned my idea, and subsequent implementation, of the Mogile+Memcached method for storing data more reliably in memcached. I knew in my head why we had chosen to read from all of the replica servers, not just the first one that worked, but in the moment I forgot. (The reason, btw, is that if one of the servers had missed a write for some reason, you might get out-of-date data.) I guess I was a little overwhelmed by Brian’s mountain of experience with memcached.
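
For the curious, the gist of the read-from-every-replica idea looks something like this. It’s only a sketch of the concept, not the implementation we actually ran: the host names are made up, and treating any disagreement between replicas as a miss is just the simplest possible reconciliation.

<?php
// Hedged sketch: write every value to all memcached replicas, and on read,
// only trust the answer if every replica agrees; otherwise treat it as a miss
// so the caller falls back to the authoritative store.
$replicas = array();
foreach (array('cache1.example.com', 'cache2.example.com') as $host) {
    $m = new Memcache();
    $m->connect($host, 11211);
    $replicas[] = $m;
}

function replicated_set(array $replicas, $key, $value)
{
    foreach ($replicas as $m) {
        $m->set($key, $value, 0, 300);   // no compression, 5 minute TTL
    }
}

function replicated_get(array $replicas, $key)
{
    $values = array();
    foreach ($replicas as $m) {
        $values[] = $m->get($key);       // false on a miss
    }
    // If any replica missed the write (or disagrees), report a miss.
    return (count(array_unique($values)) === 1) ? $values[0] : false;
}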

Anyway, the next thing I mentioned was that we had also tried MemcacheDB with some success. Brian wasn’t exactly impressed with MemcacheDB, and immediately suggested that we should be using Tokyo Tyrant instead. I had heard of Tokyo Cabinet, the new hotness in local key/value storage and retrieval, but what is this Tyrant you speak of?

I’ve been playing with Tokyo Tyrant ever since, and advocating for its usage at Adicio. It’s pretty impressive. In addition to speaking the memcached protocol, it apparently speaks HTTP/WebDAV too. The ability to select hash, B-tree, and a host of other storage options is nice, though I’m sure some of these are available as obscure options to BerkeleyDB as well.

Anyway, I was curious what performance was like, so I did some tests on my little Xen instance, and came up with pretty graphs.

[Graph: Tokyo Tyrant vs. MemcacheDB benchmark results]

I used the excellent Brutis tool to run these benchmarks using the most interesting platform for me at the moment, which would be PHP with the pecl Memcache module.

These numbers were specifically focused on usage that is typical of MemcacheDB: a wide range of keys (in this case, 10000 is “wide” since the testing system is very small), not-small items (2k or so), and a low write:read ratio (1:50). I had the tests restart each daemon after each run, and these numbers are the average of 3 runs per test.

I also tried these from another Xen instance on the same LAN, and things got a lot slower. I’m not really sure why, as latency is in the sub-millisecond range… but maybe Xen’s networking just isn’t very fast. Either way, the numbers for each combination didn’t change much.

What I find interesting is that memcachedb in no-sync mode actually went faster than memcached. Of course, in no-sync mode, memcachedb is just throwing data at the disk. It doesn’t have to maintain LRU or slabs or anything.

Tokyo Tyrant was very consistent, and used *very* little RAM in all instances. I do recall reading that it compresses data. Maybe that’s a default? Anyway, Tokyo Tyrant was also the most CPU-hungry of the bunch, so I have to assume that having more cores might have produced much better results.

I’d like to get together a set of 3 or 4 machines to test multiple client threads, and replication as well. I’ll post that as part 2 when I pull it together. For now, Tokyo Tyrant looks like the one to beat.

In case anybody wants to repeat these tests, I’ve included the results, and the scripts used to generate them in this tarball.

– Additional info, 6/4/2009
Another graph that some might find interesting is this one detailing CPU usage. During all the tests, Brutis used about 60% of the CPU available on the machine, so 40% is really 100%:

[Graph: CPU usage during the Tokyo Tyrant vs. MemcacheDB tests]

This tells me that the CPU was the limiting factor for Tokyo Tyrant, and with a multi-core machine, we should see huge speed improvements. Stay tuned for those tests!


June 4, 2009 at 6:40 am Comments (0)

Parallel mysql replication?

It’s always been a dream of mine. I’ve posted about parallel replication on Drizzle’s mailing list before. I think when faced with the problem of a big, highly concurrent master, and scaling out reads simply with lower-cost slaves, this is going to be the only way to go.

So today I was really glad to see that somebody is trying out the idea. Seppo Jaakola from “Codership”, who I’ve never heard of before today, posted a link to an article on his blog about his experimentation with parallel replication slaves. The findings are pretty interesting.

I hope that he’ll be able to repeat his tests with a real-world setup. The software they’ve written seems to have the right idea. The biggest issue I have with the tests is that they were run on tiny hardware. Hyperthreading? Single disks? That’s not really the point of having parallel replication slaves.

The idea is that you have maybe a gigantic real-time write server for OLTP. This beast may have lots of medium-power CPU cores, an obscene amount of RAM, and a lot of battery-backed write cache.

Now you know that there are tons of reads that shouldn’t ever be done against this server. You drop a few replication slaves in, and you realize that you need a box with as much disk storage as your central server, and probably just as much write cache. Pretty soon scaling out those reads is just not very cost-effective.

However, if you could have lots of CPU cores, and lots of cheap disks, you could dispatch these writes to be done in parallel, and you wouldn’t need expensive disk systems or lots of RAM for each slave.

So, the idea is not to make slaves faster in a 1:1 size comparison. It’s to make it easier for a cheap slave to keep up with a very busy, very expensive master.

I do see where another huge limiting factor is making sure things synchronize in commit order. I think that’s an area where a lot of time needs to be spent on optimization. The order should already be known, so the committer thread is just waiting for the next one in line, and if the next 100 are already done it can rip through them quickly rather than signalling each one that it can go. Something like this seems right:


/* committer thread: apply transactions strictly in commit order */
id = first_commit_id();
while (wait_for_commit(id)) {   /* blocks until transaction `id` is ready */
    commit(id);
    id++;                       /* the next one in line may already be done */
}

I applaud the efforts of Codership, and I hope they’ll continue the project and maybe ship something that will rock all our worlds.


June 2, 2009 at 7:08 pm Comments (0)