SpamapS.org – Full Frontal Nerdity

Clint Byrum's Personal Stuff

But will it scale? – Taking Limesurvey horizontal with juju…

One of the really cool things about using the cloud, and especially juju, is that it instantly enables things that often take a lot of work to even try out in traditional environments. While I was developing some little PHP apps “back in the day”, I knew eventually they’d need to go to more than one server, but testing them for that meant, well, finding and configuring multiple servers. Even with VMs, I had to go allocate one and configure it. Oops, I’m out of time, throw it on one server, pray, move on to the next task.

This left a very serious question in my mind: “When the time comes, will my app actually scale?”

Have I forgotten some huge piece to make sure it is stateless, or will it scale horizontally the way I intended it to? Things have changed though, and now we have the ability to start virtual machines via an API on several providers, and actually *test* whether our app will scale.

This brings us to our story. Recently, Nick Barcet created a juju charm for Limesurvey. This is a really cool little app that lets users create rich, multi-faceted surveys and invite the internet to vote on things, answer questions, etc. etc. This is your standard “LAMP” application, and it seems to be written in a way that should allow it to scale out.

However, when Nick submitted the charm for the official juju charms collection, I wanted to see if it actually would scale the way I knew LAMP apps should. So, I fired up juju on ec2, threw in some haproxy, and related it to my limesurvey service, and then started adding units. This is incredibly simple with juju:

# deploy each service from the local charm repository
juju deploy --repository charms local:mysql
juju deploy --repository charms local:limesurvey
juju deploy --repository charms local:haproxy
# wire limesurvey to its database, and put haproxy in front of it
juju add-relation mysql limesurvey
juju add-relation limesurvey haproxy
# add a second app server, and open haproxy to the world
juju add-unit limesurvey
juju expose haproxy

Lo and behold, it didn’t scale. There were a few issues with the default recommendations of limesurvey that Nick had encoded into the charm. These were simple things, like assuming that the local hostname would be the hostname people use to access the site.

Once that was solved, there were some other scaling problems immediately revealed. First on the ticket was that Limesurvey, by default, uses MyISAM for its storage engine in MySQL. This is a huge mistake, and I can’t imagine why *anybody* would use MyISAM in a modern application. MyISAM uses a “whole table” locking scheme for both reads and writes, so whenever anything writes to any part of the table, all reads and writes must wait for that to finish. InnoDB, available since MySQL 4.0, and the default storage engine for MySQL 5.5 and later, doesn’t suffer from this problem as it implements an MVCC model and row-level locks to allow concurrent reads and writes.
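The fix itself is tiny. Something like this hedged sketch (the host, credentials, and database name are placeholders) would flip every MyISAM table in the limesurvey database over to InnoDB:

<?php
// Sketch: convert limesurvey's MyISAM tables to InnoDB so reads stop
// queueing behind writes. Connection details here are made up.
$db = new mysqli('localhost', 'limesurvey', 'secret', 'limesurvey');
$tables = $db->query(
    "SELECT TABLE_NAME FROM information_schema.TABLES
      WHERE TABLE_SCHEMA = DATABASE() AND ENGINE = 'MyISAM'");
while ($t = $tables->fetch_row()) {
    $db->query("ALTER TABLE `{$t[0]}` ENGINE = InnoDB");  // row-level locks
}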

The MyISAM locks caused request timeouts when I pointed siege at the load balancer, because too many requests were stacking up waiting for updates to complete before even reading from the table. This is especially critical on something like the session storage that limesurvey does in the database, as it effectively meant that only one user could do anything at a time with the database.

Scalability testing in 10 minutes or less, with a server investment of about $1US. Who knew it could be this easy? Granted, I stopped at three app server nodes, and we didn’t even get to scaling out the database (something limesurvey doesn’t really have native support for). But these are problems that have already been solved and encoded in charms. Now we just have to suggest small app changes so users can take advantage of all those well-known best practices sitting in charms.

(Check the bug comments for the results; I’d be interested if somebody wants to repeat the test.)

So, in a situation where one needs to deploy now and scale later, I think juju will prove quite useful. It should be on the radar of anybody who wants to get off the ground quickly.

December 23, 2011 at 1:41 am Comments (0)

Time for some ghetto monitoring

If you came here between April 28 and about an hour ago, you got a “couldn’t connect to database” error. Oops! Seems my limited-memory EC2 instance got a little overwhelmed by PHP processes and decided the db server, drizzled, should die to make more room for PHP. Time to drop pm.max_children.

I don’t have any monitoring set up for the site, so I only just now figured it out. Until I get proper monitoring, I’ve installed this fancy bit of duct-tape upstart magic:

# run once for every job that emits a "stopping" event
start on stopping
task
script
    env | mail -s "$JOB is stopping!" me@myemail.com
end script

What does this do? Well, it emails me whenever upstart gives up respawning something, or I manually stop a service.

It’s not monitoring. I need monitoring. But this is a nice little hack to prevent a regression while I figure that out.

May 2, 2011 at 4:54 pm Comments (0)

Gearman K.O.’s mysql to solr replication

Ding ding ding… in this corner, wearing black shorts and a giant schema, we have over 11 million records in MySQL with a complex set of rules governing which must be searchable and which must not be. And in that corner, we have the contender, a kid from the back streets, outweighed and out-reached by all his opponents, but still victorious in the queue shootout, with just open source and 12 patch releases… written in C, it’s gearman!


I’m pretty excited today, as I’m preparing to go live with the first real, high-load application of Gearman that I’ve written. What is it, you say? Well, it is a simple trigger-based replicator from MySQL to SOLR.

I should say (because I know some of my colleagues read this blog) that I don’t actually believe in this design. Replication using triggers seems fraught with danger. It totally makes sense if you have a giant application and can’t track down everywhere that a table is changed. However, if your app is simple and properly abstracted, hopefully you know the 1 or 2 places that write to the table.

I should also say that I really can’t reveal all of the details. The general idea is pretty simple. Basically we have a trigger that dumps a primary key into gearman via the gearman MySQL UDFs. The idea is just to tell a gearman worker “look at this record in that table”.
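To give a flavor of it, the trigger side looks something like this (a sketch only, not the production code; the table, column, and function names are invented, since I can’t share the real schema):

<?php
// Sketch: install a trigger that tosses the changed row's primary key at
// gearman via the Gearman MySQL UDFs. All names here are invented.
$db = new mysqli('dbhost', 'user', 'pass', 'mydb');

// point the UDFs at the gearman server
$db->query("SELECT gman_servers_set('gearmanbox:4730')");

// "look at this record in that table"
$db->query("
    CREATE TRIGGER records_changed AFTER UPDATE ON records
    FOR EACH ROW SET @gm = gman_do_background('look_at_record', NEW.id)");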

Once the worker picks it up, it applies some logic to the record.. “should this be searchable or not”. If the answer is yes it should be searchable, the worker pushes the record into SOLR. If not, the worker will make sure it is not in solr.
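The worker side, with the gearman pecl module, is roughly this (again a sketch; the searchability rule, hostnames, and SOLR URL are stand-ins):

<?php
// Sketch of the worker loop: fetch the record, decide whether it should be
// searchable, then add it to or remove it from SOLR. Names are placeholders.
$db = new mysqli('dbhost', 'user', 'pass', 'mydb');

$worker = new GearmanWorker();
$worker->addServer('gearmanbox');
$worker->addFunction('look_at_record', function (GearmanJob $job) use ($db) {
    $id  = (int) $job->workload();   // primary key pushed by the trigger
    $res = $db->query("SELECT searchable FROM records WHERE id = $id");
    $row = $res ? $res->fetch_assoc() : null;

    if ($row && $row['searchable']) {
        // yes: push the record into SOLR
        solr_update(sprintf(
            '<add><doc><field name="id">%d</field></doc></add>', $id));
    } else {
        // no: make sure it is not in solr
        solr_update(sprintf('<delete><id>%d</id></delete>', $id));
    }
});
while ($worker->work());

function solr_update($xml)
{
    $ctx = stream_context_create(array('http' => array(
        'method'  => 'POST',
        'header'  => "Content-Type: text/xml\r\n",
        'content' => $xml,
    )));
    file_get_contents('http://solrbox:8983/solr/update', false, $ctx);
}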

This at least is pretty simple. The end result is a system where we can rebuild the search index in parallel using multiple CPUs (thank you to solr/lucene for being able to update indexes concurrently and efficiently, btw). This is done by pushing all of the records in the table into the queue at once.

Anyway, gearmand is performing like a champ, and libgearman and the gearman pecl module are doing great. I’m just really happy to see gearman rolled out in production, as I really do think it has that nice mix of simplicity and performance. I love the command-line client, which makes it easy to write scripts to inject things into queues, or query workers. This allows me to access a worker like this:

$ gearman -h gearmanbox -f all_workers -s
Known Workers: 11

boxname_RealTimeUpdate_Queue_TriggerWorker_1 jobs=627366,restarts=0,memory_MB=4.27,lastcheckin=Tue, 23 Mar 2010 22:37:59 -0700
boxname_RealTimeUpdate_Queue_Subject_13311 jobs=304134,restarts=0,memory_MB=7.03,lastcheckin=Tue, 23 Mar 2010 22:37:58 -0700
boxname_RealTimeUpdate_Queue_Subject_13306 jobs=606126,restarts=0,memory_MB=7.03,lastcheckin=Tue, 23 Mar 2010 22:37:59 -0700
boxname_RealTimeUpdate_Queue_Subject_13314 jobs=576714,restarts=0,memory_MB=7.03,lastcheckin=Tue, 23 Mar 2010 22:37:59 -0700
boxname_RealTimeUpdate_Queue_Subject_13342 jobs=294846,restarts=0,memory_MB=7.03,lastcheckin=Tue, 23 Mar 2010 22:37:59 -0700
boxname_RealTimeUpdate_Queue_Subject_13347 jobs=376998,restarts=0,memory_MB=7.03,lastcheckin=Tue, 23 Mar 2010 22:37:59 -0700
boxname_RealTimeUpdate_Queue_Subject_13359 jobs=470508,restarts=0,memory_MB=7.03,lastcheckin=Tue, 23 Mar 2010 22:37:58 -0700
boxname_RealTimeUpdate_Queue_Subject_13364 jobs=403182,restarts=0,memory_MB=7.03,lastcheckin=Tue, 23 Mar 2010 22:37:58 -0700
boxname_RealTimeUpdate_Property_SolrPublish_ jobs=219630,restarts=0,memory_MB=6.19,lastcheckin=Tue, 23 Mar 2010 22:37:59 -0700
boxname_RealTimeUpdate_Queue_TriggerWorker_2 jobs=393642,restarts=0,memory_MB=4.27,lastcheckin=Tue, 23 Mar 2010 22:37:59 -0700
boxname_RealTimeUpdate_Property_SolrBatchPub jobs=6,restarts=0,memory_MB=6.23,lastcheckin=Tue, 23 Mar 2010 22:37:28 -0700

Brilliant… no need for HTML or HTTP, just a nice simple command-line interface.

I think gearman still has a ways to go. I’d really like to see some more administration added to it. Deleting empty queues and quickly flushing all queues without restarting gearmand would be nice-to-haves. We’ll see what happens going forward, but for now, thanks so much to the gearman team (especially Eric Day, who showed me gearman, and Brian Aker, for pushing hard to release v0.12).

w00t!


March 24, 2010 at 5:47 am Comments (0)

How do you do, that voodoo, that Queues Do?

Queues seem to be all over the place right now. Maybe it’s like when I wanted a VW GTi VR6 a few years back. I kept seeing them pass me on the freeway and thought “crap, everybody is getting this hot new thing and I’m missing out!”.

I think everybody at one point looked at MySQL and thought, “that would work fine as a queue system”. For low-volume stuff, it *is* fine. But then somebody grabs your little transactional, relational, reliable queue system and plugs 5 million messages per hour through it, and somewhere, a man named Heikki cries.

So then you start to look around… and for those of us who have meager budgets and tend to use open source, there aren’t a lot of choices. The guys at Second Life did some research for all of us. Once you get through that though, you realize that the needs of Second Life, an MMORPG, are quite a bit different from your average web app.

So, without further ado, my “queue” system round up.

  • ActiveMQ – This shining star of the queueing world seems to come up quickly in conversation. At Adicio, we actually gave it a good try. The main problem was, we’re a PHP shop. The PHP accessibility comes not through the normal Java Message Service (JMS) connector, but through “STOMP”.

    Honestly, I’m not a big fan of these giant Apache sponsored java projects. SOLR has changed my mind a bit, as it seems to work well and doesn’t really crash. Then again, I’m not carrying a pager anymore, so maybe it does suck and I’m just not seeing it.

    Anyway, at first, ActiveMQ was winning me over. It was pretty quick.. had a pretty simple setup curve (just start up the latest version, and you have a working persistent queue system), and despite having mountains of documentation that reads like the text spammers shove into their emails randomly to pass bayesian filters, it made sense.

    However, its fall was pretty quick, as the first problem we hit was its Producer Throttling. This probably works fine when you’re using the JMS connector. However, with Stomp, when ActiveMQ decides your queue is too full and needs you to stop, it just stops acking your packets. Your Stomp client blocks (or spins, in non-blocking mode) and you wait. This is made worse by the fairly naive PHP Stomp driver, which doesn’t really check to see why its write failed, or retry to see whether it could succeed.

    Things got better when that was disabled, but the Stomp driver was still haphazard. After figuring out that the Master/Slave protocol requires one to shut down the slave whenever failing back to a downed master, I had had enough. Sayonara, ActiveMQ.

  • RabbitMQ – This one seems to be a favorite of many. My experience is limited, and I really haven’t tried it that much. It’s written in Erlang, which I guess automatically makes something “telco reliable”. Cool.
  • QPID – Wow, this one is supposedly INCREDIBLE. “500,000 messages per second per LUN.” WOW. It also has RedHat’s backing, which is a big win for me.
    In fact, as I write this, I’m doing my best to build and install the latest qpid on CentOS 5.4.

     gcc -DHAVE_CONFIG_H -I. -I. -I./src/config -I./include/ -I/usr/src/redhat/BUILD/xerces-c-src_2_8_0/src -I./src/lexer/ -D_GNU_SOURCE -D_REENTRANT -O2 -g -m64 -mtune=generic -MT mapm_add.lo -MD -MP -MF .deps/mapm_add.Tpo -c src/mapm/mapm_add.c  -fPIC -DPIC -o .libs/mapm_add.o
    ...
    

    In case you’re familiar, I’m there. Oops, that’s not qpid. That’s xerces-c. Which I have to build… and I also have to build xqilla after that. Luckily, 40 other packages required to build qpid were available in the standard CentOS yum repository.

    Another unfortunate reality is that there is no qpid connectivity available for PHP. Unless the php-amqp module works. It’s really not clear yet.
    Anyway, this looks like a promising messaging technology. However, this much software leaves a lot of room for things to break.. so, while I will probably complete the build, as I want to find out how it stacks up to the others in terms of simplicity and performance, I think this one is dead.

  • Gearman – Ok, I’m going to say it up front: I like this one. It’s really not a “queue” system per se. The name is an anagram of ‘manager’ (say that 5 times fast!). It’s one of those great things that came out of the Danga group, the same people who created MogileFS and Memcached.

    Call me stupid, but I like to be able to read things. QPID is in C++, and is so big I don’t even know where to start. Java gives me the shivers, and I don’t even know what Erlang looks like. But damn, who doesn’t like poring over well-written C? That’s pretty much what the new C port of gearmand is.

    I’m especially fond of the ease with which one can write a persistence layer. I recently submitted code to make the tokyocabinet queue store better. It’s a simple B+Tree store that everybody’s going crazy about these days. It’s also written in really nice C.

    The built-in ability for gearman clients/workers (producers/consumers) to have a two-way conversation is especially appealing. It’s not like they can just freely pass messages back and forth, but clients can choose to wait for the job they submitted to complete, and they can check on the status of the job fairly easily. Workers can send back two integers (numerator and denominator), which is particularly useful for reporting a count of things done over the count of things to do (see the sketch just after this list).

    Combine all this cool stuff with the dead simple ‘gearman’ command-line client, and you have a happy Clint. I wrote a little PHP worker that just sits around collecting data sent to it by the other workers running. When it receives a “show_all_workers” message (function, in gearman-ese), it just spits back a text report of what it knows. This can be triggered by just saying:

    $ gearman -s -f show_all_workers
    
    Known Workers: 5
    
    dev3.adicio.com_Adicio_App_Reverse_Worker_29336 jobs=26508,restarts=0,memory_MB=1.47,lastcheckin=Thu, 21 Jan 2010 15:33:32 -0800
    dev3.adicio.com_Adicio_App_Reverse_Worker_29333 jobs=19194,restarts=0,memory_MB=1.47,lastcheckin=Thu, 21 Jan 2010 15:33:32 -0800
    dev3.adicio.com_Adicio_App_Reverse_Worker_29356 jobs=29208,restarts=0,memory_MB=1.47,lastcheckin=Thu, 21 Jan 2010 15:33:32 -0800
    dev3.adicio.com_Adicio_App_Reverse_Worker_29370 jobs=27638,restarts=0,memory_MB=1.47,lastcheckin=Thu, 21 Jan 2010 15:33:32 -0800
    dev3.adicio.com_Adicio_App_Reverse_Worker_29332 jobs=10636,restarts=0,memory_MB=1.47,lastcheckin=Thu, 21 Jan 2010 15:33:32 -0800
    
    $
    

    This is pretty damn cool. Now double the fun with MySQL UDFs, and you have a workable solution for queueing via MySQL triggers.

    So, I can’t help but give this one the nod for simplicity of design. There are no massive books written to explain what gearman does. Just a nice easy C library, and perhaps one of the most important things, a really useful PHP extension.
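And that two-way status conversation I mentioned above looks roughly like this with the pecl extension (a sketch; ‘crunch_table’ and the counts are invented, and the worker and client would really be separate processes):

<?php
// Worker side: report progress as numerator/denominator.
$worker = new GearmanWorker();
$worker->addServer('gearmanbox');
$worker->addFunction('crunch_table', function (GearmanJob $job) {
    $total = 1000;                        // things to do
    for ($done = 1; $done <= $total; $done++) {
        // ... process one record ...
        $job->sendStatus($done, $total);  // things done / things to do
    }
});
// while ($worker->work());              // the worker's main loop

// Client side: fire a background job, then poll its status.
$client = new GearmanClient();
$client->addServer('gearmanbox');
$handle = $client->doBackground('crunch_table', '');
list($known, $running, $num, $den) = $client->jobStatus($handle);
printf("%d/%d done\n", $num, $den);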


January 22, 2010 at 8:32 am Comments (0)

Bromine and Selenium – second and third most useful elements behind Oxygen

If you’re an engineer, you hate testing. Seriously, who likes doing what those mere mortal “users” do? We’re POWER users and we don’t need to use all those silly features on all those sites. Just look at Craigslist, clearly an engineer’s dream tool.

For web apps, testing actually isn’t *that* hard. The client program (the browser) is readily available on every platform known to man, and web apps generally don’t do much more than store and retrieve data in clever ways. So, it’s not like we have to fire up a Large Hadron Collider to observe the effects of our web app.

Therein lies the problem though, as clicking around on web forms and entering the same email address, password, address, phone number, etc. etc., 100 times, is BORING.

Enter Selenium. This amazing little tool has been on the scene for a little while now, but it’s just now getting some momentum. Click through to the website and watch “the magic”, as they put it, but basically here’s how it goes:

  • open their firefox plugin and click ‘record’
  • do something
  • click ‘record’ again.

Then just save this little test case to a file, and the next time you change anything that might relate to the series of clicks and data entries you just made, run the test again. There are all kinds of assertions you can make along the way, like ‘Make sure the title is X’ or ‘make sure a link to Y exists’.

But wait, I could have done that with something like Test::More,  PHPUnit, or lime. Where’s the real benefit?

Well, because Selenium remotely controls your browser, all those gotchas regarding JavaScript and CSS incompatibilities come into play here, because Selenium can control Internet Explorer, Firefox, *and* Safari. In fact it can also control Opera and, according to their website, any browser that properly supports JavaScript.

This is really a nice evolutionary step for web shops, as tools like this generally are OS specific and cost a lot of money. Once again open source software appears where a need becomes somewhat ubiquitous.

You can even take it a step further. The next thing that generally happens in a web dev shop when it gets bigger than 20 or 30 people is that they hire people who actually like testing. Well, not really, but they dislike it *less* than software engineers do. These are QA engineers. And they DO like things to be orderly and efficient.

Bromine is the answer for that. It’s still pretty rough around the edges, but it gets the job done.

Again check out their website and watch the screencast, but basically it goes like this:

  • Write selenium tests as specified above
  • Upload tests to Bromine server
  • Attach tests to requirements
  • Run selenium remote control on all required OS/browser version combinations (can you say virtualbox?)
  • Run tests

Another nice thing about using bromine is that you are now running your tests in a server-side language, not just the Selenium IDE, which is limited to the IDE’s generated “Selenese” XML commands. The IDE exports your basic test into PHP or Java, and then on the bromine server you can do interesting things, like check an IMAP box for an email, run a backend process, or send an SMS.
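To give you the flavor, here’s roughly what such an exported-and-extended test might look like (a hedged sketch against the PHPUnit Selenium extension of that era; the URLs, locators, and mailbox are all placeholders):

<?php
// Sketch: an IDE-exported test, extended server side to check email.
// Everything specific (URLs, field names, mailbox) is made up.
require_once 'PHPUnit/Extensions/SeleniumTestCase.php';

class SignupTest extends PHPUnit_Extensions_SeleniumTestCase
{
    protected function setUp()
    {
        $this->setBrowser('*firefox');
        $this->setBrowserUrl('http://www.example.com/');
    }

    public function testSignupSendsConfirmation()
    {
        // the part the Selenium IDE recorded
        $this->open('/signup');
        $this->type('email', 'test@example.com');
        $this->click('submit');
        $this->waitForPageToLoad('30000');
        $this->assertTitle('Thanks for signing up');

        // the part you could never do in Selenese: check the inbox
        $mbox = imap_open('{mail.example.com:143}INBOX', 'test', 'secret');
        $this->assertGreaterThan(0, imap_num_msg($mbox));
        imap_close($mbox);
    }
}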

At first it may not seem like much, but eventually you end up with a multitude of useful tests for your web app that can be run all the time against development branches before release, and catch many problems. Quality means happier users, which hopefully means loyal users that keep coming back.


November 3, 2009 at 1:48 am Comments (0)

TokyoOops

We had a fun time this week with TokyoTyrant. Recently it has become apparent that MemcacheDB has been all but abandoned. As fantastic as Steve Chu’s early work was, the project is in disrepair. That, coupled with the less-than-obvious failover for its replication, made us seek alternatives.

Brian Aker had mentioned to me at one time that TokyoTyrant was way better than MemcacheDB and we should run it instead. I took notice, and it turns out he’s right! It does basically the same thing, applying the memcache protocol to an on-disk key/value store. However, the code is incredibly clean, well maintained, and runs extremely fast. There’s also a lot more flexibility, with the ability to choose between in-memory or on-disk storage, hash tables, B+Trees, etc.

The availability of log based asynchronous master/master replication (somewhat similar to MySQL’s replication in concept) was probably one of the biggest wins, allowing much simpler failover (just move the IP, or DNS, or whatever) when compared to MemcacheDB’s adherence to BerkeleyDB’s replication setup, which is a single-master system implementing an election algorithm.

Somewhere during the migration, though, we missed one tiny detail. Sometimes the devil is in the details. The following is really the only evidence in the documentation that Tokyo Tyrant has support for the memcache protocol, and it is very clear:

Memcached Compatible Protocol

As for the memcached (ASCII) compatible protocol, the server implements the following commands; “set”, “add”, “replace”, “get”, “delete”, “incr”, “decr”, “stats”, “flush_all”, “version”, and “quit”. “noreply” options of update commands are also supported. However, “flags”, “exptime”, and “cas unique” parameters are ignored.

Now, as I said, there’s nothing ambiguous about this. That would have helped, if anyone on my team had ever read it. We installed TokyoTyrant, pointed our basic test code at it, and it worked. This is really a process problem, not so much a technical one. The process must be to assume it won’t work, and test all the different use cases to make sure it works.

Now, why is that bit of the manual important? Well, we use PHP. Specifically, we use the PECL “Memcache” module to access memcache-protocol storage. The Memcache module is mostly oriented toward caching in the memory-based original memcached. It works great for MemcacheDB too, which simply ignores the exptime parameter. However, MemcacheDB *does not* ignore “flags”.

And therein lies the problem. Users of the PECL Memcache module may not know this, but the flags are *important*. There are two bits in that flags field that the Memcache module may set. Bit 0 indicates whether the content has been serialized (and therefore must be unserialized on read). Bit 1 indicates whether the content has been gzipped.
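To make that concrete, here’s a little demonstration (hedged: the hostname and port are placeholders, and this assumes the module’s internal bit assignments as described above):

<?php
// The PECL Memcache module sets flag bits that the *server* is expected
// to store and return verbatim. Host/port here are placeholders.
$mc = new Memcache();
$mc->connect('tyrantbox', 1978);

$mc->set('a_string', 'plain old string');   // flags = 0
$mc->set('an_array', array(1, 2, 3));       // module serializes; sets bit 0
$mc->set('big_blob', str_repeat('x', 100000),
         MEMCACHE_COMPRESSED);              // gzips; sets bit 1

// Tokyo Tyrant ignores flags and hands back 0, so the module skips the
// unserialize/gunzip step and you get raw strings and binary goo:
var_dump($mc->get('an_array'));             // a string, not an array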

So, while all of the strings that were stored in MemcacheDB and subsequently copied to TokyoTyrant worked great, the serialized objects, arrays, and gzipped values were completely inoperative, coming back to the code as plain strings and binary compressed data. The gzipped data was easy (turn off automatic gzip compression). The serialized data took some quick tap dancing to remedy, with code something like this:


class Memcache_BrokenFlags extends Memcache
{
    public function get($key, &$flags = null)
    {
        // Tokyo Tyrant hands back flags = 0, so the module never
        // unserializes for us. Try it ourselves; if unserialize() fails,
        // it was a plain string all along.
        $v  = parent::get($key, $flags);
        $uv = @unserialize($v);
        return $uv === false ? $v : $uv;
    }
}

Luckily our code all uses one Factory method to spawn all “MemcacheDB” connections, so it was easy to substitute this in.

Eventually we can just change the code to segregate things that always serialize from things that don’t, and do the serialization ourselves. This should eventually allow us to use the new tokyo_tyrant module in PECL, which only reliably stores scalars. (I noticed recent versions have added a call to the internal PHP function convert_to_string()… this is, I think, a mistake, but one that still leaves it up to the programmer to explicitly serialize when serialization is desired.)

This was a pretty big gotcha, and one that illustrates that even though us cowboy coders and sysadmins sometimes get annoyed when those pesky business people ask us for plans, schedules, expected impact, etc., and we keep assuring them we know what’s up, it’s still important to actually know what’s up, and to make sure to RTFMC… C as in, CAREFULLY.


October 27, 2009 at 4:28 am Comments (0)