Google Website Optimizer

I’m testing out Google Website Optimizer on this blog … hence the funny avatar photos at the top right.
The idea of Website Optimizer is pretty simple:
1) Configure a couple of alternatives to the current page (in this case, two alternate images)
2) Specify a conversion goal (in this case, a click through to my bio page)
3) Let Google randomly replace the original HTML with the specified alternatives
So far the cartoon of me is in the lead, followed by the picture of me pointing at a giant projection of the Google Maps interface. The black-and-white photo of me in a suit is not popular at all.
This may seem like a trivial exercise (do I really care how many people read my bio?), but if you imagine doing this on a page that fulfills real business goals (like a checkout process), you start to see the possibilities.

Rails being single-threaded causes scalability problems

So anyone who works with Rails even a little bit knows that it is single-threaded. A given mongrel can only process one HTTP request at a time. The solution is to run a large number (or “pack”) of mongrels and load-balance your incoming requests across the pack.
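For the sake of concreteness, here’s a minimal sketch of what that kind of setup might look like on the nginx side. The upstream name, ports, and number of mongrels are invented for illustration; a real deployment would run many more mongrels and carry more proxy settings.

    # Hypothetical pack of three mongrels, each listening on its own port
    upstream mongrel_pack {
        server 127.0.0.1:8000;
        server 127.0.0.1:8001;
        server 127.0.0.1:8002;
    }

    server {
        listen 80;

        location / {
            # nginx hands each request to one mongrel from the pack (round-robin by default);
            # that mongrel is tied up until it finishes the whole request
            proxy_pass http://mongrel_pack;
        }
    }
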
This works fine (it’s what every even moderately big Rails site does), but it has one serious problem. If one of your pages is slow, requests will “back up” behind the mongrel that is processing the slow request. You’d think you could solve the problem with load balancing (we use the NGINX fair plugin to get the smartest load balancing possible), but there’s a point where better load balancing simply won’t solve your problems. For example, let’s say my average request takes 250ms to respond. If I’m serving a request that is going to take a whopping 5 seconds to respond, there’s no way for the load balancer to know that something is fishy for at least 250ms: until then, the slow request looks just like any other in-flight request. That guarantees that a certain number of requests are going to “back up” behind the slow mongrel.
It’s all well and good to say you should profile your code and fix pages that are slow. But for a big web app like SlideShare, there are LOTS of pages. We work hard on making the important pages fast, but we don’t necessarily always have the time to profile every page. And if we do happen to have some slow pages, those pages don’t just respond slowly: they cause OTHER pages (the ones that are “backed up” behind the slow request) to respond slowly as well. So even an OCCASIONAL slow page will cause a reasonably large number of slow responses.
The solution that we’re currently working on is to route our “important” pages to different mongrels than our less important / slower pages. This should keep a slow page from slowing down other (fast) pages as well. But it adds extra complexity to our system, and it’s a bit tricky to do for pages that don’t have an easy-to-recognize URL “signature”.
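As a rough illustration of the idea (the URL patterns, ports, and upstream names below are invented rather than our actual config, and the fair directive assumes the upstream-fair plugin is compiled into nginx), the routing might look something like this:

    # Hypothetical split: one pack for the "important" pages, one for everything else
    upstream important_pack {
        fair;                    # provided by the NGINX fair plugin
        server 127.0.0.1:8000;
        server 127.0.0.1:8001;
    }

    upstream general_pack {
        fair;
        server 127.0.0.1:9000;
        server 127.0.0.1:9001;
    }

    server {
        listen 80;

        # Pages with a recognizable URL signature get their own pack
        location ~ ^/(slideshows|search) {
            proxy_pass http://important_pack;
        }

        # Everything else (including the occasional slow page) goes elsewhere,
        # so a 5-second response here can't back up requests for the important pages
        location / {
            proxy_pass http://general_pack;
        }
    }

The catch is exactly the one mentioned above: pages whose URLs don’t match a clean pattern can’t be split out this way without more elaborate routing rules.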
I’m a bit surprised I don’t hear more chatter about this problem: it seems to me like the single biggest bottleneck to building a huge Rails site. What are other big Rails sites doing to deal with it? Is there an easy solution that I’ve missed? As always, feel free to post suggestions in the comments!