SpamapS.org – Full Frontal Nerdity

Clint Byrum's Personal Stuff

Juju constraints unbind your machines

This week, William “I code more than you will ever be able to” Reade announced that Juju has a new feature called ‘Constraints’.

This is really, really cool, and it brings juju into a new class of capability for deploying big and little sites.

To be clear, this lets you declare the hardware a service needs rather than binding it to particular machines, which allows you to abstract things pretty effectively.

Consider this:

juju deploy mysql --constraints mem=10G
juju deploy statusnet --constraints cpu=1

This will result in your mysql service being on an extra-large instance, since that type has 15GB of RAM. Your statusnet units will be m1.smalls, since that type has just 1 ECU.

Even cooler: now, if you want a mysql slave in a different availability zone:

juju deploy mysql --constraints ec2-zone=a mysql-a
juju deploy mysql --constraints ec2-zone=b mysql-b
juju add-relation mysql-a:master mysql-b:slave
juju add-relation statusnet mysql-a

Now, if mysql-a goes down:

juju remove-relation statusnet mysql-a
juju add-relation statusnet mysql-b

Much more is possible, but this really does make juju even more compelling as a tool for simple, easy deployment. Edit: fixed ec2-zone to be the single character, per William’s feedback.

April 16, 2012 at 10:48 pm Comments (0)

But will it scale? – Taking Limesurvey horizontal with juju…

One of the really cool things about using the cloud, and especially juju, is that it instantly enables things that often take a lot of thought to even try out in traditional environments. While I was developing some little PHP apps “back in the day”, I knew eventually they’d need to go to more than one server, but testing them for that meant, well, finding and configuring multiple servers. Even with VMs, I had to go allocate one and configure it. Oops, I’m out of time, throw it on one server, pray, move on to the next task.

This left a very serious question in my mind.. “When the time comes, will my app actually scale?”

Have I forgotten some huge piece to make sure it is stateless, or will it scale horizontally the way I intended it to? Things have changed though, and now we have the ability to start virtual machines via an API on several providers, and actually *test* whether our app will scale.

This brings us to our story. Recently, Nick Barcet created a juju charm for Limesurvey. This is a really cool little app that lets users create rich, multi-faceted surveys and invite the internet to vote on things, answer questions, etc. etc. This is your standard “LAMP” application, and it seems written in a way that will allow it to scale out.

However, when Nick submitted the charm for the official juju charms collection, I wanted to see if it actually would scale the way I knew LAMP apps should. So, I fired up juju on ec2, threw in some haproxy, and related it to my limesurvey service, and then started adding units. This is incredibly simple with juju:

juju deploy --repository charms local:mysql
juju deploy --repository charms local:limesurvey
juju deploy --repository charms local:haproxy
juju add-relation mysql limesurvey
juju add-relation limesurvey haproxy
juju add-unit limesurvey
juju expose haproxy

Lo and behold, it didn’t scale. There were a few issues with the default recommendations of limesurvey that Nick had encoded into the charm. These were simple things, like assuming that the local hostname would be the hostname people use to access the site.

Once that was solved, some other scaling problems were immediately revealed. First on the ticket was that Limesurvey, by default, uses MyISAM for its storage engine in MySQL. This is a huge mistake, and I can’t imagine why *anybody* would use MyISAM in a modern application. MyISAM uses a whole-table locking scheme for both reads and writes, so whenever anything writes to any part of the table, all reads and writes must wait for that to finish. InnoDB, bundled by default since MySQL 4.0 and the default storage engine for MySQL 5.5 and later, doesn’t suffer from this problem, as it implements an MVCC model and row-level locks to allow concurrent reads and writes.
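
The fix is mercifully simple, too; converting a table is a single ALTER, and checking what each table uses is one query. A sketch, with a hypothetical table name (Limesurvey’s schema has its own prefixes):

mysql -u root limesurvey <<'SQL'
-- hypothetical table name; check SHOW TABLE STATUS for the real ones
ALTER TABLE lime_sessions ENGINE=InnoDB;
-- verify which engine each table uses now
SELECT TABLE_NAME, ENGINE FROM information_schema.TABLES
 WHERE TABLE_SCHEMA = 'limesurvey';
SQL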

The MyISAM locks caused request timeouts when I pointed siege at the load balancer, because too many requests were stacking up waiting for updates to complete before even reading from the table. This is especially critical on something like the session storage that Limesurvey does in the database, as it effectively meant that only one user could do anything at a time.
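
Reproducing that kind of load is a one-liner, by the way; the hostname and numbers here are made up, tune to taste:

siege -c 50 -t 2M http://your-haproxy-host/

Fifty concurrent simulated users for two minutes is plenty to surface this sort of lock contention.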

Scalability testing in 10 minutes or less, with a server investment of about $1US. Who knew it could be this easy? Granted, I stopped at three app server nodes, and we didn’t even get to scaling out the database (something Limesurvey doesn’t really have native support for). But these are problems that have already been solved and encoded in charms. Now we just have to suggest small app changes to allow users to take advantage of all those well-known best practices sitting in charms.

(check the bug comments for the results, I’d be interested if somebody wants to repeat the test).

So, in a situation where one needs to deploy now, and scale later, I think juju will prove quite useful. It should be on anybody’s radar who wants to get off the ground quickly.

December 23, 2011 at 1:41 am Comments (0)

The 2011 O’Reilly Open Mysql Drizzle Maria Monty Percona Xtra Galera Xeround Tungsten Cloud Database Conference and Expo

Or, for short, the “2011 O’Reilly MySQL Users Conference & Expo”. Yes, that’s the short name of the conference that, thus far, has brought me nothing but good info, good times, and insight into one of the most interesting open source communities around.

MySQL has been at the core of a real revolution in the way data driven applications have exploded on the internet. It’s so easy to just install it, fire up php’s mysql driver, and boom, you’re saving and retrieving data. The *use* of MySQL has always been incredibly simple.

The politics have, at times, been confusing. Dual licensing was sort of an odd concept when MySQL AB was doing it “back in the day”. Nobody really understood how it worked or how they could sell something that was also “free”. But it worked out great for them. InnoDB got bought by Oracle, and a lot of people thought “oh noes, MySQL will have no transactional storage, Oracle will kill it.” Well, we’ve seen that’s about 180 degrees from what actually happened (R.I.P. Falcon).

So this year, with the oddness of Oracle not being the top sponsor at an event that had driven a lot of the innovation and collaboration in the MySQL world (ironically, choosing instead to spend their time and effort on a conference called “Collaborate”), I thought “wonderful, more politics”.

But as Brian Aker says in his “State of the ecosystem” post, it was quite the opposite. The absence of the commercial entity responsible for MySQL took a lot of the purely business focused discussion down to almost a whisper, while big ideas and big thinking seemed to be extremely prominent.

Drizzle had quite a few sessions, including my own about what we’ve done with Drizzle in Ubuntu. This is particularly interesting to me because Drizzle is mostly driven by a community effort, though most of the heavy lifting up until now has been sponsored by Sun, and then Rackspace. It’s purely an idea of how a MySQL-like database should be written, and while it may be seeing limited production use now, the discussions were about how it can be used and what it does now, not where it’s going or who is going to pay for its development. It’s such a good idea, I’m pretty convinced users will drive it in much the same way Apache was driven by users wanting to do interesting things with HTTP.

I saw a lot of interesting ideas around replication put forth as well. Galera, Tungsten, and Xeround all seem to be trying to build on MySQL’s success with replication and NDB (a.k.a. MySQL Cluster). I really like that there are multiple takes on how to make a multi-master highly available / scalable system work. Getting all the people using and developing these things into one conference center is always pretty interesting to me.

The keynotes were especially interesting, as they were delivered by people who are sitting at the intersection of the old MySQL world and the new MySQL “ecosystem”. I missed Monty Widenius’s keynote, but it strikes me that he is still leading the charge for a simple, scalable, powerful database system, proving that the core of MySQL is mostly unchanged. Mårten Mickos delivered a really interesting take on how MySQL was part of the last revolution in computing (LAMP) and how it may very well be a big part of the next revolution (IaaS, aka “the cloud”). Brian Aker reinforced that MySQL as a concept, and specifically Drizzle, are just part of your Infrastructure (the I in IaaS).

Then on Thursday, Baron Schwartz blew the whole place up. Go watch the video if you weren’t there, or haven’t seen it. Baron has always been insightful in his evaluation of the MySQL ecosystem. Maatkit came around when the community needed it, and on joining Percona I think he brought his clear thinking to Peter’s bold decision making at just the right time to help fuel their rise as one of the most respected consulting firms in the “WebScale” world. So when Baron got up and said that the database is still going to scale up, that MySQL isn’t going to lose to NoSQL or SomeSQL, but rather that the infrastructure would adapt to the data requirements, it caught my attention and got me nodding. And when he plainly called Oracle out for not supporting the conference, there was a hush over the crowd followed by a big sigh. It’s likely that those in attendance were the ones who understand that, and those who weren’t there were probably the ones who need to hear it. I’d guess by now they’ve seen the video or at least heard the call. Either way, thanks Baron for your insight and powerful thoughts.

This was my second MySQL Conference, and I hope it won’t be my last. The mix of users, developers, and business professionals has always struck me as quite unique, as MySQL sits at the intersection of a number of very powerful avenues. Let’s hope that O’Reilly decides to do it again, *and* let’s hope that Oracle gets on board as well.

April 27, 2011 at 5:49 pm Comments (0)

Ubuntu and Drizzle — Run Drizzle on your Narwhal: O’Reilly MySQL Conference & Expo 2011 – O’Reilly Conferences, April 11 – 14, 2011, Santa Clara, CA

Ubuntu and Drizzle — Run Drizzle on your Narwhal: O’Reilly MySQL Conference & Expo 2011 – O’Reilly Conferences, April 11 – 14, 2011, Santa Clara, CA.

I gave a talk this week in Santa Clara at the MySQL Users Conference. I think it went pretty well, and I got a lot of feedback from Ubuntu users about the positives of having Drizzle available in Universe. The slides are available at the link above.

April 15, 2011 at 9:36 pm Comments (0)

Handlersocket — NoSQL for MySQL, now on your favorite Linux..

Handlersocket is what all the cool kids are using these days.. I think. Basically you get a couple of new ports on your mysql server that allow SQL-free reading and writing for doing many thousands of tiny transactions per second without the overhead of parsing SQL.

Thanks to my venerable Ubuntu sponsor, Chuck Short, handlersocket is now available in Ubuntu Natty in the universe repository. apt-get install handlersocket-mysql-5.1 handlersocket-doc, then follow the instructions in /usr/share/doc/handlersocket-doc/docs-en to enable it, and you have yourself a bona fide NoSQL solution for your MySQL server.
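
For the impatient, enabling it boils down to something like this; a sketch only, the ports shown are the conventional read/write defaults, and docs-en remains the authority:

sudo tee -a /etc/mysql/conf.d/handlersocket.cnf <<'EOF'
[mysqld]
# 9998 = reads, 9999 = writes, by convention
loose_handlersocket_port = 9998
loose_handlersocket_port_wr = 9999
loose_handlersocket_threads = 16
EOF
sudo service mysql restart
mysql -u root -p -e "INSTALL PLUGIN handlersocket SONAME 'handlersocket.so';"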

There are also client libraries for perl (libnet-handlersocket-perl) and C/C++ (libhsclient-dev .. static only as the API is in flux). Other languages are still not packaged, but the protocol is simple, and links to early implementations are listed in the README file, which should be at /usr/share/doc/handlersocket-mysql-5.1/README.

It should be on Debian unstable as well soon…
Update, April 3 2011: Handlersocket is now in Debian unstable as well.

Happy hacking!


February 9, 2011 at 7:42 am Comments (0)

Drizzle7 Beta Released! now with MySQL migration! « LinuxJedi’s /dev/null

Drizzle7 Beta Released! now with MySQL migration! « LinuxJedi’s /dev/null.

Drizzle is a project that is near and dear to my heart.

To sum it up, Drizzle took all that was really good in MySQL, cut out all that was mediocre, and replaced some of it with really good stuff. The end product is, I think, something that is leaner, should be more stable, and definitely more flexible.

So go check out the beta! I guess I should use Andrew’s migration tool and see if I can migrate this blog to drizzle. :)


September 29, 2010 at 10:39 pm Comments (0)

Gearman K.O.’s mysql to solr replication

Ding ding ding.. in this corner, wearing black shorts and a giant schema, we have over 11 million records in MySQL with a complex set of rules governing which must be searchable and which must not be. And in that corner, we have the contender, a kid from the back streets, outweighed and outreached by all his opponents, but still victorious in the queue shootout, with just open source and 12 patch releases.. written in C, it’s gearman!


I’m pretty excited today, as I’m preparing to go live with the first real, high load application of Gearman that I’ve written. What is it, you say? Well, it is a simple trigger-based replicator from MySQL to Solr.

I should say (because I know some of my colleagues read this blog) that I don’t actually believe in this design. Replication using triggers seems fraught with danger. It totally makes sense if you have a giant application and can’t track down everywhere that a table is changed. However, if your app is simple and properly abstracted, hopefully you know the 1 or 2 places that write to the table.

I should also say that I really can’t reveal all of the details. The general idea is pretty simple, though. Basically we have a trigger that dumps a primary key into gearman via the gearman MySQL UDFs. The idea is just to tell a gearman worker “look at this record in that table”.
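
To give a flavor of the trigger side, here’s a hypothetical sketch; the table, column, and job names are invented, it assumes the gearman MySQL UDFs are installed, and gearmand is on its default port:

mysql -u root mydb <<'SQL'
-- point the UDFs at the gearman server
SELECT gman_servers_set('gearmanbox:4730');
-- on any change, hand the primary key to a worker
CREATE TRIGGER property_solr_sync AFTER UPDATE ON property
FOR EACH ROW
  DO gman_do_background('solr_publish', CAST(NEW.id AS CHAR));
SQL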

Once the worker picks it up, it applies some logic to the record: “should this be searchable or not?” If the answer is yes, the worker pushes the record into Solr. If not, the worker makes sure it is not in Solr.

This at least is pretty simple. The end result is a system where we can rebuild the search index in parallel using multiple CPUs (thank you to Solr/Lucene for being able to update indexes concurrently and efficiently, btw). This is done by pushing all of the records in the table into the queue at once.
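
Queueing every record at once is itself a quick shell loop with the gearman commandline client (which I rave about below); again a sketch with invented names, and one client invocation per key is lazy but illustrative:

mysql -N -e 'SELECT id FROM property' mydb | while read id; do
    echo -n "$id" | gearman -h gearmanbox -f solr_publish -b
done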

Anyway, gearmand is performing like a champ, libgearman and the gearman pecl module are doing great. I’m just really happy to see gearman rolled out in production, as I really do think it has that nice mix of simplicity and performance. I love the commandline client which makes it easy to write scripts to inject things into queues, or query workers. This allows me to access a worker like this:

$ gearman -h gearmanbox -f all_workers -s
Known Workers: 11

boxname_RealTimeUpdate_Queue_TriggerWorker_1 jobs=627366,restarts=0,memory_MB=4.27,lastcheckin=Tue, 23 Mar 2010 22:37:59 -0700
boxname_RealTimeUpdate_Queue_Subject_13311 jobs=304134,restarts=0,memory_MB=7.03,lastcheckin=Tue, 23 Mar 2010 22:37:58 -0700
boxname_RealTimeUpdate_Queue_Subject_13306 jobs=606126,restarts=0,memory_MB=7.03,lastcheckin=Tue, 23 Mar 2010 22:37:59 -0700
boxname_RealTimeUpdate_Queue_Subject_13314 jobs=576714,restarts=0,memory_MB=7.03,lastcheckin=Tue, 23 Mar 2010 22:37:59 -0700
boxname_RealTimeUpdate_Queue_Subject_13342 jobs=294846,restarts=0,memory_MB=7.03,lastcheckin=Tue, 23 Mar 2010 22:37:59 -0700
boxname_RealTimeUpdate_Queue_Subject_13347 jobs=376998,restarts=0,memory_MB=7.03,lastcheckin=Tue, 23 Mar 2010 22:37:59 -0700
boxname_RealTimeUpdate_Queue_Subject_13359 jobs=470508,restarts=0,memory_MB=7.03,lastcheckin=Tue, 23 Mar 2010 22:37:58 -0700
boxname_RealTimeUpdate_Queue_Subject_13364 jobs=403182,restarts=0,memory_MB=7.03,lastcheckin=Tue, 23 Mar 2010 22:37:58 -0700
boxname_RealTimeUpdate_Property_SolrPublish_ jobs=219630,restarts=0,memory_MB=6.19,lastcheckin=Tue, 23 Mar 2010 22:37:59 -0700
boxname_RealTimeUpdate_Queue_TriggerWorker_2 jobs=393642,restarts=0,memory_MB=4.27,lastcheckin=Tue, 23 Mar 2010 22:37:59 -0700
boxname_RealTimeUpdate_Property_SolrBatchPub jobs=6,restarts=0,memory_MB=6.23,lastcheckin=Tue, 23 Mar 2010 22:37:28 -0700

Brilliant.. no need for HTML or HTTP.. just a nice simple commandline interface.

I think gearman still has a ways to go. I’d really like to see some more administration added to it. Deleting empty queues and quickly flushing all queues without restarting gearmand would be nice-to-haves. We’ll see what happens going forward, but for now, thanks so much to the gearman team (especially Eric Day who showed me gearman, and Brian Aker for pushing hard to release v0.12).

w00t!


March 24, 2010 at 5:47 am Comments (0)

Your code must suck

While attending OSCON 2009 w/ my faithful sidekick fluffy, we kept finding instances of a common theme. The leading companies and projects seem to share one attribute that might shock you.

They all have at least *some* crappy code. At some point, all of them have set aside their principles and thrown in a hack to get things working. This is reinforced by those projects that have their dignity, but no market share. FreeBSD users are famous for saying that Linux is coded by 10,000 monkeys. FreeBSD is an awesome project that has powered some huge websites. However, the primary Free OS is Linux. Even further along that line is Windows, which is pretty much a hack on a hack on a hack, but somehow, everybody ends up running it.

This isn’t to say that all of the code in popular projects sucks. Just that some of it does. I’m still waiting for the example of an organization that has produced pure, beautiful code with no compromises, and then gone on to garner a large market share and/or massive profits.

The site TheDailyWTF exists primarily because of this fact. I hit that site at least twice a week to have a good laugh. Many times it causes me to reminisce about some of the things I saw early in my career. Just as often, I’m reminded of something more recent. The trend doesn’t seem to stop; despite advances in computing and human understanding, it goes back decades. I imagine Ogg, the first guy who designed a wheel, snarked about how Thag’s wheels weren’t perfectly round. But ultimately, Thag was able to produce wheels that weren’t perfectly round but rolled pretty well. He probably got them out in half the time, and ended up trading more wheels for Mammoth pelts than Ogg by a factor of five. No doubt Thag was able to attract more mates with his Mammoth Pelt fortune, so maybe it’s just in our nature.

Really though, this flies in the face of code purity, which we all want. Code sucking == profit? Hacks == market share? This doesn’t sit well with those of us who pride ourselves on brace placement discipline, and knowing at least 5 design patterns without looking them up in a book. But there it is, that pile of dung you knocked out at 3am the day before release to QA… 3 years ago. Still powering the site despite being closer to Alpaca bile than beautiful code.

This doesn’t mean projects fail without hacks. What it means, though, is that projects that obsess over doing things “the right way” tend to languish, and rarely achieve success on a massive scale. For some that is ok; they’re happy to have produced something great that a few people like and that works right for them. In fact, this is largely the (healthy) attitude I see from the PostgreSQL project.

The PostgreSQL developers and users tend to feel strongly that their database is far superior to the likes of, say, MySQL. They’ll tell you that they have always had full ACID compliance, that their bug counts are low, and that performance continues to rise with every release.

I know a lot of people are successfully running PostgreSQL, but really, by contrast, it seems like everybody’s running MySQL. MySQL is not bad code either. It just has hacks. Ok, having dug into it a bit now, it has a lot of hacks. But why is MySQL the leader, and PostgreSQL the follower?

I think the answer is right there in that last sentence. As Cesar Millan will tell you, “choo gotta be da pack leader”. PostgreSQL probably would have continued on as a fine, but obscure, database engine had MySQL not revolutionized data storage in the same way Apache revolutionized web serving. MySQL has managed to carve out a huge market with Free software, while PostgreSQL’s market is only now beginning to grow. Really, PostgreSQL has refused to follow in MySQL’s footsteps for a long time, and because of that, they’ve avoided many of the pitfalls MySQL has fallen into as its scope creeps larger and larger, like an amoeba slowly devouring the edges of the enterprise market that used to seem so far from its original targets.

However, even the Postgres guys know that hacks may be necessary. As of May 2008, they have given in and will produce a general-purpose master/slave replication system. The message to the “pgsql-hackers” list has an air of reluctance to it..

Users who might consider
PostgreSQL are choosing other database systems because our existing
replication options are too complex to install and use for simple cases.
In practice, simple asynchronous single-master-multiple-slave
replication covers a respectable fraction of use cases, so we have
concluded that we should allow such a feature to be included in the core
project.

It’s like they’re finally saying “ok, we want more users, so we’ll include this thing that goes against our principles.” Personally I think this is great, as PostgreSQL is a nice RDBMS, and to be able to use it for small-medium scaleout just like MySQL is really quite exciting.

So, the moral of the story is, if you want your project to be successful, throw in some crap code. Otherwise your developers will be up on their high horses too long, and not down in the trenches getting things done.


July 25, 2009 at 9:19 pm Comments (0)

TokyoTyrant – MemcacheDB, but without the BDB?

This past April I was riding in a late-model, 2-door rental car with an interesting trio for sure. On my right sat Patrick Galbraith, maintainer of DBD::mysql and author of the Federated storage engine. Directly in front of me, manning the steering wheel (for those of you keen on spatial description, you may have noted at this point that it’s most likely I was seated in the back, left seat of a car which is designed to be driven on the right side of the road. EOUF [end of useless fact]), David Axmark, co-founder of MySQL. Immediately to his right sat Brian Aker, of (most recently) Drizzle fame.

This was one of those conversations that I felt grossly unprepared for. It was the 2009 MySQL Users Conference, and Patrick and I had been hacking on DBD::drizzle for most of the day. We had it 98% of the way there and were in need of food, so we were joining the Drizzle dev team for gourmet pizza.

As we navigated from the Santa Clara conference center to Mountain View’s quaint downtown, Brian, Patrick, and I were discussing memcached stuff. I mentioned my idea, and subsequent implementation of the Mogile+Memcached method for storing data more reliably in memcached. I knew in my head why we had chosen to read from all of the replica servers, not just the first one that worked, but I forgot (The reason, btw, is that if one of the servers had missed a write for some reason, you might get out-of-date data). I guess I was a little overwhelmed by Brian’s mountain of experience w/ memcached.

Anyway, the next thing I mentioned was that we had also tried MemcacheDB with some success. Brian wasn’t exactly impressed with MemcacheDB, and immediately suggested that we should be using Tokyo Tyrant instead. I had heard of Tokyo Cabinet, the new hotness in local key/value storage and retrieval, but what is this Tyrant you speak of?

I’ve been playing with Tokyo Tyrant ever since, and advocating for its usage at Adicio. It’s pretty impressive. In addition to speaking memcached protocol, it apparently speaks HTTP/WEBDAV too. The ability to select hash, btree, and a host of other options is nice, though I’m sure some of these are available as obscure options to berkeleydb as well.
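
Kicking the tires takes about a minute. A quick sketch (1978 is the default port, and the file extension picks the storage engine: .tch for hash, .tcb for btree):

ttserver -port 1978 /tmp/casket.tch &
# store and fetch a value over the HTTP interface
curl -X PUT -d 'world' http://localhost:1978/hello
curl http://localhost:1978/hello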

Anyway, I was curious what performance was like, so I did some tests on my little Xen instance, and came up with pretty graphs.

[graph: Tokyo Tyrant vs. MemcacheDB benchmark results]

I used the excellent Brutis tool to run these benchmarks using the most interesting platform for me at the moment.. which would be PHP with the pecl Memcache module.

These numbers were specifically focused on usage that is typical of MemcacheDB: a wide range of keys (in this case, 10000 is “wide” since the testing system is very small), not-small items (2k or so), and a low write:read ratio (1:50). I had the tests restart each daemon after each run, and these numbers are the averages of 3 runs of each test.

I also tried these from another Xen instance on the same LAN, and things got a lot slower. Not really sure why, as latency is in the sub-millisecond range.. but maybe Xen’s networking just isn’t very fast. Either way, the numbers for each combination didn’t change much.

What I find interesting is that memcachedb in no-sync mode actually went faster than memcached. Of course, in nosync mode, memcachedb is just throwing data at the disk. It doesn’t have to maintain LRU or slabs or anything.

Tokyo Tyrant was very consistent, and used *very* little RAM in all instances. I do recall reading that it compresses data. Maybe that’s a default? Tokyo Tyrant was also the most CPU-hungry of the bunch, though, so I have to assume having more cores might have resulted in much better results.

I’d like to get together a set of 3 or 4 machines to test multiple client threads, and replication as well. I’ll post that as part 2 when I pull it together.

In case anybody wants to repeat these tests, I’ve included the results, and the scripts used to generate them in this tarball.

– Additional info, 6/4/2009
Another graph that some might find interesting is this one detailing CPU usage. During all the tests, brutis used about 60% of the CPU available on the machine, so 40% is really 100%:

[graph: CPU usage during the tests]

This tells me that the CPU was the limiting factor for Tokyo Tyrant, and with a multi-core machine, we should see huge speed improvements. Stay tuned for those tests!


June 4, 2009 at 6:40 am Comments (0)

Parallel mysql replication?

It’s always been a dream of mine. I’ve posted about parallel replication on Drizzle’s mailing list before. I think when faced with the problem of a big, highly concurrent master, and scaling out reads simply with lower cost slaves, this is going to be the only way to go.

So today I was really glad to see that somebody is trying out the idea. Seppo Jaakola from “Codership”, whom I’d never heard of before today, posted a link to an article on his blog about his experimentation with parallel replication slaves. The findings are pretty interesting.

I hope that he’ll be able to repeat his tests with a real world setup. The software they’ve written seems to have the right idea. The biggest issue I have with the tests is that they were run on tiny hardware. Hyperthreading? Single disks? That’s not really the point of having parallel replication slaves.

The idea is that you have maybe a gigantic real time write server for OLTP. This beast may have lots of medium-power CPU cores, an obscene amount of RAM, and a lot of battery-backed write cache.

Now you know that there are tons of reads that shouldn’t ever be done against this server. You drop a few replication slaves in, and you realize that you need a box with as much disk storage as your central server, and probably just as much write cache. Pretty soon scaling out those reads is just not very cost-effective.

However, if you could have lots of CPU cores, and lots of cheap disks, you could dispatch these writes to be done in parallel, and you wouldn’t need expensive disk systems or lots of RAM for each slave.

So, the idea is not to make slaves faster in a 1:1 size comparison. It’s to make it easier for a cheap slave to keep up with a very busy, very expensive master.

I do see one other huge limiting factor: making sure things commit in their original order. I think that’s an area where a lot of time needs to be spent on optimization. The order should already be known, so the committer thread is just waiting for the next one in line, and if the next 100 are already done it can rip through them quickly rather than signaling each one that it can go. Something like this seems right:


id = first_commit_id();
while (wait_for_commit(id)) {   /* block until transaction `id` is ready */
    commit(id);                 /* commit strictly in original order */
    id++;                       /* successors already done fly through */
}

I applaud the efforts of Codership, and I hope they’ll continue the project and maybe ship something that will rock all our worlds.


June 2, 2009 at 7:08 pm Comments (0)