SF Ruby meetup tonight at SlideShare

Tonight there’s a Ruby meetup at the SF offices of SlideShare. There’s pizza and beer, and plenty of parking in the scary-looking but safe alley behind our office! Here’s the list of presenters:
Raul Parolari will touch on metaprogramming details – “class_inheritable_accessor: unknown heroes of Rails startup”.

Class variables and class instance variables each have pros and cons (which programmers have debated since the beginning of time). Sometimes we wish for a variable with their qualities but none of their defects; wishful thinking, right? Not in the Rails world, where such a variable exists, baptized ‘class_inheritable_accessor’ (which, by the way, plays an important role at startup). This short talk discusses the three types of variables and how the new one works (yes, another Ruby metaprogramming trick at the service of the “Rails magic”).
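To make the comparison concrete, here’s a minimal sketch of the three kinds of class-level state the talk covers. The `class_inheritable_accessor` below is a hypothetical re-implementation for illustration only, not the actual Rails source:

```ruby
class Class
  # Hypothetical minimal version: a per-class value that is copied down
  # to subclasses when they are defined, so a child can overwrite its
  # copy without clobbering the parent's.
  def class_inheritable_accessor(name)
    singleton_class.send(:attr_accessor, name)
    copier = Module.new do
      define_method(:inherited) do |subclass|
        super(subclass)
        subclass.send("#{name}=", send(name))  # copy parent value to child
      end
    end
    singleton_class.prepend(copier)
  end
end

class Base
  @@shared = "one copy"   # class variable: shared by the whole hierarchy
  @own     = "per class"  # class instance variable: not inherited

  def self.shared; @@shared; end
  def self.own; @own; end

  class_inheritable_accessor :setting
  self.setting = "default"
end

class Child < Base; end

Child.shared             # => "one copy" (same storage as Base -- writes leak)
Child.own                # => nil (class instance variables don't inherit)
Child.setting            # => "default" (copied down at inheritance time)
Child.setting = "mine"
Base.setting             # => "default" (parent untouched)
```

The copy-on-inherit trick is the interesting part: it gives subclasses the inherited default a class variable would provide, without the shared-storage problem where a child’s write leaks into the parent.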

Bala Paranj will talk about Design Patterns in Ruby

Paul Graham said “When I see patterns in my programs, I consider it a sign of trouble. The shape of a program should reflect only the problem it needs to solve. Any other regularity in the code is a sign, to me at least, that I’m using abstractions that aren’t powerful enough – often that I’m generating by hand the expansions of some macro that I need to write.”
In this presentation we will see how some of the GoF patterns can be implemented more simply using Ruby’s powerful language features.
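As a taste of the idea, here’s one possible sketch (the `Report` class and its formatters are made up for illustration): the classic GoF Strategy pattern, which in Java requires an interface plus one class per strategy, collapses in Ruby to passing a block.

```ruby
class Report
  def initialize(title, body, &formatter)
    @title, @body, @formatter = title, body, formatter
  end

  def render
    # The block *is* the strategy object -- no interface, no subclasses.
    @formatter.call(@title, @body)
  end
end

plain = Report.new("S3 outage", "hour 4") { |t, b| "#{t}\n#{b}" }
html  = Report.new("S3 outage", "hour 4") { |t, b| "<h1>#{t}</h1><p>#{b}</p>" }

plain.render  # => "S3 outage\nhour 4"
html.render   # => "<h1>S3 outage</h1><p>hour 4</p>"
```

The “pattern” dissolves into a language feature, which is exactly the point of Graham’s quote above.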

When S3 goes down, the internet goes down!

We’re now in hour 4 of an S3 outage that is affecting the entire startup ecosystem. SlideShare is down, as are MuxTape, SmugMug, and almost every other site you can think of. Fingers crossed that they resolve this soon! Here’s the current status from AWS:

9:05 AM PDT We are currently experiencing elevated error rates with S3. We are investigating.
9:26 AM PDT We’re investigating an issue affecting requests. We’ll continue to post updates here.
9:48 AM PDT Just wanted to provide an update that we are currently pursuing several paths of corrective action.
10:12 AM PDT We are continuing to pursue corrective action.
10:32 AM PDT A quick update that we believe this is an issue with the communication between several Amazon S3 internal components. We do not have an ETA at this time but will continue to keep you updated.
11:01 AM PDT We’re currently in the process of testing a potential solution.
11:22 AM PDT Testing is still in progress. We’re working very hard to restore service to our customers.
11:45 AM PDT We are still in the process of testing a series of configuration changes aimed at bringing the service back online.
12:05 PM PDT We have now restored communication between a small subset of hosts. We are working on restoring internal communication across the rest of the fleet. Once communication is fully restored, then we will work to restore request processing.
12:25 PM PDT We have restored communication between additional hosts and are continuing this work across the rest of the fleet. Thank you for your continued patience.