Episode #432 - January 14, 2014

Posted 3 months back at Ruby5

We Brag about our Backend, shed some Light on Test Driven Rails, avoid the DBeater, pout over Ruby 1.9's end of life on this HAIKU edition of Ruby5.

Listen to this episode on Ruby5

This episode is sponsored by Top Ruby Jobs
If you're looking for a top Ruby job or for top Ruby talent, then you should check out Top Ruby Jobs. Top Ruby Jobs is a website dedicated to the best jobs available in the Ruby community.

Light Table Ruby
Last week Rafe Rosen released a new plugin for the recently-open-sourced Light Table IDE that adds full-page, selection, or single-line Ruby code execution. You can use it to quickly execute or demonstrate code without leaving your Ruby files.

Test Driven Rails Part 1
Last week, Karol Galanciak posted the first article in a series on Test Driven Rails. The series intends to cover how, when, and what to test when developing a Rails application. This first part is mostly theoretical, and Part 2 will take the topics discussed and apply them to application development.

Code Reviews with Codebrag
Code reviews are sometimes hard to do, and harder to do consistently. Codebrag is a downloadable Ruby application that you can install and run on your own servers to watch your repositories and give you a simple interface for reviewing your code. Version 1 is free, and will be forever, so check it out.

XML-based DB Migrations with DBeater
DBeater is a yet-to-be-backed, crowdfunded project on Indiegogo which will become a Ruby gem that will allow you to migrate and version your database. It's backend agnostic, but uses XML instead of Ruby for its definition files. We can't all be perfect, eh?

Faster I18n Backend for Ruby Written in C
i18nema is a new I18n translation library which uses C underpinnings to ease some of the garbage collection / Ruby object generation pain in current I18n libraries. It should be faster and more memory efficient, albeit not something you likely want to talk too much about at work.

Ruby 1.9.3 End of Life
The Ruby core team announced late last week that support for Ruby 1.9 will be ending. Active development will cease in about a month, followed by a year of security fix support, and all support will end in February of 2015. Time to migrate to Ruby 2.1!

About the recent oss-binaries.phusionpassenger.com outages

Posted 3 months back at Phusion Corporate Blog

As some of you might have noticed, there were some problems recently with the server oss-binaries.phusionpassenger.com, where we host our APT repository and precompiled binaries for Phusion Passenger. Although it was originally just a simple file server for speeding up installation (by avoiding the need to compile Phusion Passenger), it has grown a lot in importance over the past few Phusion Passenger releases, so any downtime now causes major problems for many users:

  • Our APT repository has grown more popular than we thought.
  • Many Heroku users are unable to start new dynos as long as the server is down. The Heroku dyno environment does not provide the necessary compiler toolchain, nor the hardware resources, to compile Phusion Passenger. Which is why when run on Heroku, Phusion Passenger downloads binaries from our server.

The server first went down on Sunday and was fixed later that day. Unfortunately, it went down again on Tuesday morning, which we fixed soon after.

We sincerely apologize for this problem. But of course, apologies alone are not going to cut it. The first outage on Sunday made us realize just how important this originally minor server has become, and since then we've been working to solve the issue permanently. It's clear that relying on a single server was a mistake, so we're taking the following actions:

  • We’re adjusting the download timeouts in Phusion Passenger so that server problems don’t freeze it indefinitely. This allows Phusion Passenger to detect server problems more quickly and fall back to compilation, without triggering any timeouts that may abort Phusion Passenger entirely. This was implemented yesterday but requires some more testing.
  • Instead of trying to download the native_support binary, Phusion Passenger should try to compile it first, because compiling native_support takes less than 1 second. If the correct compiler toolchain is installed on the server, this avoids using the network entirely, so it’s unaffected by any outages on our side. This was also implemented yesterday. (The rest of Phusion Passenger takes longer to compile, so we can’t apply the same strategy there.)
  • For Heroku users: downloading the binaries at Heroku deploy time, not at dyno boot time, so that Heroku users are less susceptible to download problems. This was implemented yesterday.
  • Reverting any recent server changes we’ve made to oss-binaries.phusionpassenger.com, in the hope that this increases the server’s uptime. The true cause of the downtime is still under investigation, but we’re giving the other items on this list more priority because they have more potential to fix the problem permanently. This was done today.
  • Setting up an Amazon S3 mirror for high availability. If the main server is down, Phusion Passenger should automatically download from the mirror instead. We’re currently working on this.

The goal is to finish all these items this week and to release a new version that includes these fixes. We’re working around the clock on this.

Workarounds for now

Users can apply the following workaround for now in order to prevent Phusion Passenger from freezing during downloading of binaries:

Edit /etc/hosts and add an entry that points oss-binaries.phusionpassenger.com at an unreachable address (e.g. “ oss-binaries.phusionpassenger.com”)

Phusion Passenger will automatically fall back to compiling if it can’t download binaries.

Unfortunately, this workaround will not be useful for users who rely on our APT repository, or Heroku users. We’re working on a true fix as quickly as we can.

How We Test Rails Applications


I'm frequently asked what it takes to begin testing Rails applications. The hardest part of being a beginner is that you often don't know the terminology or what questions you should be asking. What follows is a high-level overview of the tools we use, why we use them, and some tips to keep in mind as you are starting out.


We use RSpec over Test::Unit because the syntax encourages human readable tests. While you could spend days arguing over what testing framework to use, and they all have their merits, the most important thing is that you are testing.

Feature specs

Feature specs, a kind of acceptance test, are high-level tests that walk through your entire application ensuring that each of the components work together. They're written from the perspective of a user clicking around the application and filling in forms. We use RSpec and Capybara, which allow you to write tests that can interact with the web page in this manner.

Here is an example RSpec feature test:

# spec/features/user_creates_a_foobar_spec.rb

feature 'User creates a foobar' do
  scenario 'they see the foobar on the page' do
    visit new_foobar_path

    fill_in 'Name', with: 'My foobar'
    click_button 'Create Foobar'

    expect(page).to have_css '.foobar-name', text: 'My foobar'
  end
end

This test emulates a user visiting the new foobar form, filling it in, and clicking "Create". The test then asserts that the page has the text of the created foobar where it expects it to be.

While these are great for testing high level functionality, keep in mind that feature specs are slow to run. Instead of testing every possible path through your application with Capybara, leave testing edge cases up to your model, view, and controller specs.

I tend to get questions about distinguishing between RSpec and Capybara methods. Capybara methods are the ones that are actually interacting with the page, i.e. clicks, form interaction, or finding elements on the page. Check out the docs for more info on Capybara's finders, matchers, and actions.

Model specs

Model specs are similar to unit tests in that they are used to test smaller parts of the system, such as classes or methods. Sometimes they interact with the database, too. They should be fast and handle edge cases for the system under test.

In RSpec, they look something like this:

# spec/models/user_spec.rb

# Prefix class methods with a '.'
describe User, '.active' do
  it 'returns only active users' do
    # setup
    active_user = create(:user, active: true)
    non_active_user = create(:user, active: false)

    # exercise
    result = User.active

    # verify
    expect(result).to eq [active_user]

    # teardown is handled for you by RSpec
  end
end

# Prefix instance methods with a '#'
describe User, '#name' do
  it 'returns the concatenated first and last name' do
    # setup
    user = build(:user, first_name: 'Josh', last_name: 'Steiner')

    # exercise and verify
    expect(user.name).to eq 'Josh Steiner'
  end
end

To maintain readability, be sure you are writing Four Phase Tests.

Controller specs

When testing multiple paths through a controller is necessary, we favor using controller specs over feature specs, as they are faster to run and often easier to write.

A good use case is for testing authentication:

# spec/controllers/sessions_controller_spec.rb

describe SessionsController do
  describe 'POST #create' do
    context 'when password is invalid' do
      it 'renders the page with error' do
        user = create(:user)

        post :create, session: { email: user.email, password: 'invalid' }

        expect(response).to render_template(:new)
        expect(flash[:notice]).to match(/^Email and password do not match/)
      end
    end

    context 'when password is valid' do
      it 'sets the user in the session and redirects them to their dashboard' do
        user = create(:user)

        post :create, session: { email: user.email, password: user.password }

        expect(response).to redirect_to '/dashboard'
        expect(controller.current_user).to eq user
      end
    end
  end
end

View specs

View specs are great for testing the conditional display of information in your templates. A lot of developers forget about these tests and use feature specs instead, then wonder why they have a long running test suite. While you can cover each view conditional with a feature spec, I prefer to use view specs like the following:

# spec/views/products/_product.html.erb_spec.rb

describe 'products/_product.html.erb' do
  context 'when the product has a url' do
    it 'displays the url' do
      assign(:product, build(:product, url: 'http://example.com'))

      render

      expect(rendered).to have_link 'Product', href: 'http://example.com'
    end
  end

  context 'when the product url is nil' do
    it "displays 'None'" do
      assign(:product, build(:product, url: nil))

      render

      expect(rendered).to have_content 'None'
    end
  end
end


While writing your tests you will need a way to set up database records so you can exercise different scenarios. You could use the built-in User.create, but that gets tedious when your model has many validations: User.create forces you to specify attributes to satisfy the validations, even if your test has nothing to do with them. On top of that, if you ever change your validations later, you have to reflect those changes across every test in your suite. The solution is to use either factories or fixtures to create models.

We prefer factories (with FactoryGirl) over Rails fixtures, because fixtures are a form of Mystery Guest. Fixtures make it hard to see cause and effect, because part of the logic is defined in a file far away from the context in which you are using it. Because fixtures are implemented so far away from your tests, they tend to be hard to control.

Factories, on the other hand, put the logic right in the test. They make it easy to see what is happening at a glance and are more flexible to different scenarios you may want to set up. While factories are slower than fixtures, we think the benefits in flexibility and readability outweigh the costs.
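For reference, the factories backing the specs in this post might be defined like this. This is a sketch assuming the factory_girl_rails gem; the attribute names simply mirror the examples above and are not from any real project:

```ruby
# spec/factories.rb -- a sketch, assuming the factory_girl_rails gem.
# Attribute values mirror the examples used throughout this post.
FactoryGirl.define do
  factory :user do
    first_name 'Josh'
    last_name 'Steiner'
    sequence(:email) { |n| "user#{n}@example.com" }
    password 'password'
    active true
  end

  factory :product do
    name 'Product'
    url 'http://example.com'
  end
end
```

With these definitions, `create(:user, active: false)` overrides just the attribute the test cares about and lets the factory fill in the rest.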

Persisting to the database slows down tests. Whenever possible, favor using FactoryGirl's build_stubbed over create. build_stubbed generates the object in memory and saves you a round trip to the database. If you are testing something that has to query for the object (like User.where(admin: true)), the record actually needs to exist in the database, meaning you must use create.

Running specs with JavaScript

You will eventually run into a scenario where you need to test some functionality that depends on a piece of JavaScript. Running your specs with the default driver will not run any JavaScript on the page.

You need two things to run a feature spec with JavaScript.

  1. Install a JavaScript driver

    There are two types of JavaScript drivers. Something like Selenium will open a GUI browser and click around your page while you watch it. This can be a useful tool to visualize while debugging. Unfortunately, booting up an entire GUI browser is slow. For this reason, we prefer using a headless browser. For Rails, you will want to use either Poltergeist or Capybara Webkit.

  2. Tell the specific test to run with the JavaScript metadata key

     feature 'User creates a foobar' do
       scenario 'they see the foobar on the page', js: true do

With the above in place, RSpec will run any JavaScript necessary.
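Registering the headless driver is typically a one-line configuration. A sketch, assuming the poltergeist gem is in your Gemfile:

```ruby
# spec/spec_helper.rb -- a sketch, assuming the poltergeist gem.
# Capybara will use this driver for any spec tagged js: true.
require 'capybara/poltergeist'

Capybara.javascript_driver = :poltergeist
```

Swap in `require 'capybara/webkit'` and `:webkit` if you prefer Capybara Webkit.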

Database Cleaner

By default, Rails wraps each test in a database transaction. This means that, at the end of each test, Rails rolls back any changes to the database that happened within that spec. This is a good thing, as we don't want any of our tests having side effects on other tests.

Unfortunately, when we use a JavaScript driver, the test runs in another thread. That thread does not share a database connection with the application, so the test must commit its transactions in order for the running application to see the data. To get around this, we allow the database to commit the data and then truncate the database after each spec. Truncation is slower than a transaction rollback, however, so we want to use it only when necessary.

This is where Database Cleaner comes in. Database Cleaner allows you to configure when each strategy is used. I recommend reading Avdi's post for all the gory details. It's a pretty painless setup, and I typically copy this file from project to project, or use Suspenders so that it's set up out of the box.
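For reference, the configuration usually looks something like this. A sketch, assuming the database_cleaner gem; details vary by project:

```ruby
# spec/support/database_cleaner.rb -- a sketch, assuming the
# database_cleaner gem. Transactions by default, truncation for
# JavaScript specs.
RSpec.configure do |config|
  config.use_transactional_fixtures = false

  config.before(:suite) do
    DatabaseCleaner.clean_with(:truncation)
  end

  config.before(:each) do
    DatabaseCleaner.strategy = :transaction
  end

  config.before(:each, js: true) do
    DatabaseCleaner.strategy = :truncation
  end

  config.before(:each) do
    DatabaseCleaner.start
  end

  config.after(:each) do
    DatabaseCleaner.clean
  end
end
```

Turning off `use_transactional_fixtures` matters: otherwise Rails' own transaction wrapping fights with Database Cleaner's strategies.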

Test doubles and stubs

Test doubles are simple objects that emulate another object in your system. Often, you will want a simpler stand-in and only need to test one attribute, so it is not worth loading an entire ActiveRecord object.

car = double(:car)

When you use stubs, you are telling an object to respond to a given method in a known way. If we stub our double from before:

allow(car).to receive(:max_speed).and_return(120)

we can now expect our car object to always return 120 when asked for its max_speed. This is a great way to get an impromptu object that responds to a method without pulling a real object, and all of its dependencies, into the test. In this example we stubbed a method on a double, but you can stub virtually any method on any object.

We can simplify this into one line:

car = double(:car, max_speed: 120)

Test spies

While testing your application, you are going to run into scenarios where you want to validate that an object receives a specific method. In order to follow Four Phase Test best practices, we use test spies so that our expectations fall into the verify stage of the test. Previously we used Bourne for this, but RSpec now includes this functionality in RSpec Mocks. Here's an example from the docs:

invitation = double('invitation', accept: true)

invitation.accept

expect(invitation).to have_received(:accept)

Stubbing external requests with WebMock

Test suites that rely on third party services are slow, fail without an internet connection, and may have trouble with the services' rate limits or lack of a sandbox environment.

Ensure that your test suite does not interact with third party services by stubbing out external HTTP requests with WebMock. This can be configured in spec/spec_helper.rb:

require 'webmock/rspec'
WebMock.disable_net_connect!(allow_localhost: true)

Instead of making third party requests, learn how to stub external services in tests.
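Once external connections are disabled, individual specs can declare canned responses. A sketch, assuming the webmock gem and a hypothetical api.example.com endpoint (not a real service):

```ruby
# In a spec -- a sketch, assuming the webmock gem.
# api.example.com and its payload are illustrative only.
stub_request(:get, 'http://api.example.com/users/1').
  to_return(
    status: 200,
    body: '{"name":"Jane"}',
    headers: { 'Content-Type' => 'application/json' }
  )
```

Any code under test that issues `GET http://api.example.com/users/1` now receives this canned response instead of touching the network.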

What's next?

This was just an overview of how to get started testing Rails. To expedite your learning, I highly encourage you to take our TDD workshop, where you cover these subjects in depth by building two Rails apps from the ground up. It covers refactoring both application and test code to ensure both are maintainable. Students of the TDD workshop also have access to office hours, where you can ask thoughtbot developers any questions you have in real time.

I took this class as an apprentice, and I can't recommend it enough.

Code Show and Tell: PolymorphicFinder



In the Learn app, we need to accept purchaseable items (books, screencasts, workshops, plans, and so on) as a parameter in several places. Because we're using Rails' polymorphic_path under the hood, these parameters come in based on the resource name, such as book_id or workshop_id. However, we need to treat them all as "purchaseables" when users are making purchases, so we need a way to find one of several possible models from one of several possible parameter names in PurchasesController and a few other places.

The logic for finding these purchaseables was previously on ApplicationController:

def requested_purchaseable
  if product_param
    Product.find(product_param)
  elsif params[:individual_plan_id]
    IndividualPlan.where(sku: params[:individual_plan_id]).first
  elsif params[:team_plan_id]
    TeamPlan.where(sku: params[:team_plan_id]).first
  elsif params[:section_id]
    Section.find(params[:section_id])
  else
    raise "Could not find a purchaseable object from given params: #{params}"
  end
end

def product_param
  params[:product_id] ||
    params[:screencast_id] ||
    params[:book_id] ||
    params[:show_id]
end

This method was problematic in a few ways:

  • ApplicationController is a typical junk drawer, and it's unwise to feed it.
  • The method grew in complexity as we added more purchaseables to the application.
  • Common problems, such as raising exceptions for bad IDs, could not be implemented in a generic fashion.
  • Testing ApplicationController methods is awkward.
  • Testing the current implementation of the method was repetitious.

While fixing a bug in this method, I decided to roll up my sleeves and use a few new objects to clean up this mess.

The Fix

The new method in ApplicationController now simply composes and delegates to a new object I created:

def requested_purchaseable
  PolymorphicFinder.
    finding(Section, :id, [:section_id]).
    finding(TeamPlan, :sku, [:team_plan_id]).
    finding(IndividualPlan, :sku, [:individual_plan_id]).
    finding(Product, :id, [:product_id, :screencast_id, :book_id, :show_id]).
    find(params)
end

The class composes and delegates to two small, private classes:

# Finds one of several possible polymorphic members from params based on a list
# of relations to look in and attributes to look for.
# Each polymorphic member will be tried in turn. If an ID is present that
# doesn't correspond to an existing row, or if none of the possible IDs are
# present in the params, an exception will be raised.
class PolymorphicFinder
  def initialize(finder)
    @finder = finder
  end

  def self.finding(*args)
    new(NullFinder.new).finding(*args)
  end

  def finding(relation, attribute, param_names)
    new_finder = param_names.inject(@finder) do |fallback, param_name|
      Finder.new(relation, attribute, param_name, fallback)
    end

    self.class.new(new_finder)
  end

  def find(params)
    @finder.find(params)
  end

  class Finder
    def initialize(relation, attribute, param_name, fallback)
      @relation = relation
      @attribute = attribute
      @param_name = param_name
      @fallback = fallback
    end

    def find(params)
      if id = params[@param_name]
        @relation.where(@attribute => id).first!
      else
        @fallback.find(params)
      end
    end
  end

  class NullFinder
    def find(params)
      raise ActiveRecord::RecordNotFound,
        "Can't find a polymorphic record without an ID: #{params.inspect}"
    end
  end

  private_constant :Finder, :NullFinder
end

The new class was much simpler to test.

It's also easy to add new purchaseable types without introducing unnecessary complexity or risking regressions.

The explanation

The solution uses a number of constructs and design patterns, and may be a little tricky for those unfamiliar with them:

It works like this:

  • The PolymorphicFinder class acts as a Builder for the Finder interface. It accepts initialize arguments for Finder, and encapsulates the logic of chaining them together.
  • The finding instance method of PolymorphicFinder uses inject to recursively build a chain of Finder instances for each of the param_names that the Finder should look for.
  • Each Finder in the chain accepts a fallback. In the event that the Finder doesn't know how to find anything from the given params, it delegates to its fallback. This forms a Chain of Responsibility.
  • The first Finder is initialized with a NullFinder, which forms the last resort of the Chain of Responsibility. In the event that every Finder instance delegates to its fallback, it will delegate to the NullFinder, which will raise a useful error of the correct Exception subclass.
  • The PolymorphicFinder class also acts as a Decorator for the Finder interface. Once the Builder interaction is complete, you can call find on the PolymorphicFinder (just as you would for a regular Finder) and it will delegate to the first Finder in its chain.
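The chain can be seen in miniature without Rails at all. The sketch below mirrors the Finder/NullFinder shape described above, but uses plain Hashes in place of ActiveRecord relations; all names and data here are illustrative, not from the Learn app:

```ruby
# A framework-free sketch of the same Chain of Responsibility.
# Each Finder handles one param name and delegates to its fallback;
# the NullFinder at the end of the chain raises, mirroring the
# "last resort" behavior described above.

class NullFinder
  def find(params)
    raise KeyError, "Can't find a record without an ID: #{params.inspect}"
  end
end

class Finder
  def initialize(records, param_name, fallback)
    @records = records       # a Hash standing in for an ActiveRecord relation
    @param_name = param_name
    @fallback = fallback
  end

  def find(params)
    if id = params[@param_name]
      @records.fetch(id)     # raises KeyError for a bad ID, like first!
    else
      @fallback.find(params) # delegate down the chain
    end
  end
end

books = { 1 => 'book-1' }
plans = { 9 => 'plan-9' }

chain = Finder.new(books, :book_id,
          Finder.new(plans, :plan_id, NullFinder.new))

chain.find(book_id: 1) # => "book-1"
chain.find(plan_id: 9) # => "plan-9" (first link falls back to the second)
```

Note that each lookup either succeeds, falls through to the next link, or ends at the NullFinder, which raises; there is no conditional dispatch on the caller's side.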


The benefits:

  • The new code replaces conditional logic and special cases with polymorphism, making it easier to change.
  • The usage (in ApplicationController) is much easier to read and modify.
  • Adding or changing finders is less likely to introduce regressions, since common issues like blank or unknown IDs are handled generically.
  • The code avoids possible state vs identity bugs by avoiding mutation.


The drawbacks:

  • The new code is larger, both in terms of lines of code and overall complexity by any measure.
  • It uses a large number of design patterns that will be confusing to those that are unfamiliar with them, or to those that fail to recognize them quickly.
  • It introduces new words into the application vocabulary. Although naming things can reveal their intent, too many names can cause vocabulary overload and make it difficult for readers to hold the problem in their head long enough to understand it.


In summary, using the new code is easier, but understanding its details may be harder. Although each piece is less complex, the big picture is more complex. This means you can understand the usage in ApplicationController without knowing how PolymorphicFinder works internally, but understanding how the whole thing fits together will take longer.

The heavy use of design patterns will make the code very easy to read at a macro level when the patterns are recognized, but will read more like a recursive puzzle when the patterns aren't clear.

Overall, the ease of use and the improved resilience to bugs made me decide to keep this refactoring despite its overall complexity.

Also available in 3D

Okay, maybe not 3D, but Ben and I also discussed this in a video on our show, the Weekly Iteration, available to Learn subscribers.

What's next?

If you found this useful, you might also enjoy:

Episode #431 - January 10th, 2014

Posted 3 months back at Ruby5

Another Ruby5! Analyze your GitHub traffic, lint accessibility with AccessLint, stash strings in hammerspace, and process background jobs into submission using Sneakers.

Listen to this episode on Ruby5

This episode is sponsored by New Relic
New Relic is _the_ all-in-one web performance analytics product. It lets you manage and monitor web application performance, from the browser down to the line of code. With Real User Monitoring, New Relic users can see browser response times by geographical location of the user, or by browser type.

Github Traffic Analytics
Traffic analytics for your Github repos! It's about time. Now you can see number of views, unique visitors, and other useful data.

AccessLint
How accessible is your site? Cameron Cundiff's AccessLint gem makes it easy to find out.

hammerspace
App response times climbing? Clearly you need persistent, concurrently-available, off-heap storage of strings! Well, Airbnb did at least. And if it worked for them, it can work for you too.

Sneakers
Fast background processing for Ruby using RabbitMQ. Like a boss!

Stop including Enumerable, return Enumerator instead
Robert Pankowecki has written a blog post asking us all to please stop including Enumerable and use an Enumerator instead. Please. Stop.

Handling Associations on Null Objects


The Null Object Pattern is a great tool for removing conditionals in your code base. Rather than checking for nil or predicates about an object existing, you instead return a "null" implementation that responds to the same interface. The most common case I've applied this to in my Rails apps is the concept of a "Guest User". For example:

class Guest
  def email
    nil
  end

  def admin
    false
  end
  alias_method :admin?, :admin

  def purchases
    []
  end
end

class ApplicationController < ActionController::Base
  include Clearance::Controller

  def current_user
    super || Guest.new
  end
end

This implementation can serve us quite well. Our code base expands, blissfully unaware of whether it's dealing with a User or a Guest. As we add logic for our store front, we end up with methods like this:

class StoreListing < ActiveRecord::Base
  def included_in?(purchases)
    # Relies only on Enumerable, so Guest's empty array works here
    purchases.include?(self)
  end
end

Things will continue to work on our empty array, as long as we are only calling methods from the Enumerable module. However, we run into problems as soon as we write some code like this:

class CategoryStoreController < ApplicationController
  def index
    # ...
  end

  private

  def subcategory_for_redirect
    last_purchase_in_category.subcategory || category.subcategory.first
  end

  def last_purchase_in_category
    @last_purchase_in_category ||= current_user.purchases.last_in_category(category)
  end

  def category
    @category ||= Category.find(params[:id])
  end
end

If the user isn't logged in, this will fail with NoMethodError: undefined method 'last_in_category' for []:Array.

Rails to the Rescue!

Luckily, Rails 4.0 introduced a new method on ActiveRecord::Relation to help with exactly this situation! Relation#none is a method that will return a new instance of NullRelation.

NullRelation responds to every method that a normal instance of Relation would. As a bonus, when you call .none on one of your model classes, the result also responds to all of the class methods (scopes included) that a Relation for that class would have. Our updated Guest class would look like this:

class Guest
  def purchases
    Purchase.none
  end
end

Once we've utilized Relation#none in our null objects, the rest of our code works as expected, and we can continue blissfully unaware of the existence of Guest in the rest of our codebase.

Backporting to Rails 3

For those of you who haven't upgraded to Rails 4 yet (you probably should…), you can achieve a similar effect with this simple back-port:

class ActiveRecord::Base
  def self.none
    where("1 = 0")
  end
end

This isn't quite the same as NullRelation. It'll still hit the database and try to load data, and the call to none could be undone with .unscope(:where). However, for most cases it should act as a suitable polyfill until you're able to upgrade to Rails 4.

What's next?

If you enjoyed this article, you might also enjoy:

Site offline for maintenance

Posted 3 months back at entp hoth blog - Home

Hi all,

Tender will be offline for 5-10 minutes tonight, Thursday 1/9 at 9pm PST, while we perform a maintenance operation on one of our database servers. Emails coming in during this period will be slightly delayed.

We apologize for the inconvenience.

If you have any question or concern, drop us a line at support@tenderapp.com.


Arduino-Based Bathroom Occupancy Detector


At the thoughtbot Boston office, when nature calls, there is an arduous trek to the bathroom area: uphill both ways, and often snow-covered. The bathrooms are also out of sight from anywhere in the office. Many times someone will turn the corner to the bathrooms and see that they are all occupied. That person must then either wait around until a bathroom opens up or go back to their desk and try again later. We wanted an indicator, visible throughout our Boston office, that shows whether a bathroom is available. We used the power of hardware to hack together a solution.



We decided to use two microcontrollers to monitor and report bathroom status. One would sit near the bathrooms and use door sensors to detect whether the doors were open or closed. The other would sit in an area visible to everyone in the office and use an LED to indicate the status of the bathroom doors. The two would have to communicate wirelessly, since they could be far apart, and the bathroom-door microcontroller would have to run from batteries, since there is no nearby outlet. We also decided that, besides using an LED to show whether a door was available, we would post the info to a remote server so other applications could digest it.


We'll be using Arduinos for the microcontrollers because there is a ton of support and information about how to use them available online. For similar reasons, we will use XBee V1 radios for the communication between the Arduinos. The door sensing Arduino will be an Arduino Fio because it comes with a LiPo battery charging circuit and an easy way to plug in an XBee radio. The reporting Arduino will be an Arduino Yún because of its WiFi capabilities. The door sensors are generic reed switches with magnets. We'll also use a couple of solderless breadboards to prototype extra circuitry needed to tie things together.

Door Sensors

These sensors are a combination of a magnet and a reed switch. The magnet is placed on the door and the reed switch is on the frame. When the door closes, the magnet closes the reed switch, closing the circuit. We connect one side of the switch to power and the other side goes into a GPIO port on the Arduino. We must also connect a pull down resistor from the GPIO port to ground so when the switch is open, the port reads a low signal. Now when the door is shut, the switch is closed and there is a high signal on the port. When the door is open, the switch is open and there is a low signal on the port.

Door Sensor

Arduino Fio (sensor module)

This module is responsible for sensing the bathroom doors' state and sending that state to the reporter. First, we wire the two door sensors to the Fio. We'll use digital pins 2 and 3 so we can use interrupts later for power savings. Then we connect a 10K pull-down resistor from each pin to ground. The only thing left is to plug the XBee into the dedicated connector on the Fio. Now that everything is connected, we need to write the code that reads the sensors and sends their state to the XBee.

Arduino Fio

void setup() {
  pinMode(2, INPUT);
  pinMode(3, INPUT);

  Serial.begin(9600);
}
Here we initialize digital pins 2 and 3 as inputs and create a serial interface for communicating with the XBee. The XBee is programmed at 9600 baud by default, so we create the serial connection to match. Now, in the main program loop, let's check the door sensors and send their state to the XBee.

void loop() {
  int leftDoor = digitalRead(2);
  int rightDoor = digitalRead(3);

  Serial.print("S");      // prefix marks the start of a transmission
  Serial.print(leftDoor);
  Serial.print(rightDoor);
  Serial.println("E");    // postfix marks the end

  delay(1000);
}

Here we read the status of the doors and send it to the XBee. We add a delay of 1000ms (1 second) so we are not constantly sending data, but still update often enough that the data isn't stale. We also add a prefix and postfix to each transmission so it is easier to parse on the receiving end.

This implementation isn't very efficient, because there can be long periods where neither door changes. Let's save the state of the doors and transmit only when one has changed.

int leftDoorState = 0;
int rightDoorState = 0;
boolean hasChanged = false;

void loop() {
  int leftDoor = digitalRead(2);
  int rightDoor = digitalRead(3);

  if (leftDoor != leftDoorState) {
    leftDoorState = leftDoor;
    hasChanged = true;
  }

  if (rightDoor != rightDoorState) {
    rightDoorState = rightDoor;
    hasChanged = true;
  }

  if (hasChanged) transmit();

  delay(1000);
}

void transmit() {
  Serial.print("S");
  Serial.print(leftDoorState);
  Serial.print(rightDoorState);
  Serial.println("E");

  hasChanged = false;
}
Here we create two global variables to hold the doors' current state, plus a flag that records whether anything changed. When we read the doors' states, we check whether either has changed since the last read; if so, we transmit the data with the XBee. This takes us from transmitting every second down to maybe 20 to 50 transmits per day, depending on bathroom use. The XBee module uses the most power when transmitting, so reducing the number of transmissions reduces power consumption.

We can go one step further and use interrupts to notify the Arduino that a door state has changed.

void setup() {
  Serial.begin(9600);

  pinMode(2, INPUT);
  attachInterrupt(0, transmitDoorState, CHANGE);

  pinMode(3, INPUT);
  attachInterrupt(1, transmitDoorState, CHANGE);
}

void transmitDoorState() {
  int leftDoor = digitalRead(2);
  int rightDoor = digitalRead(3);

  Serial.write(0xF0); // prefix
  Serial.write(leftDoor);
  Serial.write(rightDoor);
  Serial.write(0xF0); // postfix
}

void loop() {
}


We attach a change interrupt to each of our door sensors. Any time a door state changes, the interrupt will execute the function transmitDoorState. Since our interrupt is doing all the work now, we can remove the global variables and all functionality from the main program loop.

The sensor module now waits for a change on the interrupt pins before it does anything. While it's waiting, though, the CPU is active and processing no-op instructions. This wastes power because we're keeping the CPU on while doing nothing. To save power, let's sleep the CPU when we don't need it.

#include <avr/sleep.h>

void setup() {
  Serial.begin(9600);

  pinMode(2, INPUT);
  attachInterrupt(0, transmitDoorState, CHANGE);

  pinMode(3, INPUT);
  attachInterrupt(1, transmitDoorState, CHANGE);

  set_sleep_mode(SLEEP_MODE_IDLE);
}

void transmitDoorState() {
  int leftDoor = digitalRead(2);
  int rightDoor = digitalRead(3);

  Serial.write(0xF0); // prefix
  Serial.write(leftDoor);
  Serial.write(rightDoor);
  Serial.write(0xF0); // postfix
  delay(100); // let the serial data finish sending before sleeping
}

void loop() {
  sleep_mode(); // sleep until an interrupt wakes us
}

This will put the Arduino into an idle sleep while waiting for a change on the interrupts. When a change occurs, the Arduino will wake up, transmit the state, then go back to sleep. We need to add a delay after we send data to the XBee to ensure that all the data has been sent before we try to sleep the processor again.

Further Improvements

The transmitDoorState function is an Interrupt Service Routine or ISR. An ISR should be quick because you want to be out of it before another interrupt occurs. The transmitDoorState function is not very quick because it has to send data serially and delay for 100ms. We probably won't run into any issues since the time between interrupts (door opening and closing) will most likely be greater than the time it takes to transmit, but to be safe we could move this code into the program loop and execute it after the sleep function. We could also reduce the power consumption further by using the sleep mode SLEEP_MODE_PWR_DOWN. This sleep mode affords us the most power savings but doesn't allow us to use the CHANGE interrupt. Instead we would have to use level interrupts at LOW or HIGH and manage which one to interrupt on depending on the state of the door.

Arduino Yún (reporter module)

This module is responsible for receiving the door state data and reporting that state to the office. First, we need to wire up the XBee module. It would be easy to get an XBee shield for the Arduino, but we have an XBee explorer instead, so we need to wire up the XBee manually. Connect the XBee explorer's power and ground to the Arduino, then the RX and TX to digital pins 8 and 9 respectively (pins chosen at random). We also need to add a 10K pull-up resistor from the RX and TX lines to power. Now add an LED in series with a 330 ohm resistor (or similar value) to digital pin 4 (also chosen at random). The LED and resistor combo should be connected on the low side, and we'll drive the pin high from the Arduino to turn the LED on. Time to code!

Arduino Yún

#include <Bridge.h>
#include <Process.h>
#include <SoftwareSerial.h>

SoftwareSerial xbee(8, 9); // RX, TX

void setup() {
  Bridge.begin();
  pinMode(4, OUTPUT);
  digitalWrite(4, LOW);
  xbee.begin(9600);
}

Here we set up the Bridge to the Linux processor on the Yún, set the mode of our LED pin to an output, start the LED off, and initialize the software serial connection for the XBee. Next, we need to receive data from the XBee and report the door status.

int leftDoor, rightDoor;

enum state {
  waiting_for_prefix,
  get_left_door,
  get_right_door,
  waiting_for_postfix
};

state currentState = waiting_for_prefix;

void loop() {
  if (xbee.available()) {
    int data = xbee.read();

    switch (currentState) {
      case waiting_for_prefix:
        if (data == 0xF0)
          currentState = get_left_door;
        break;

      case get_left_door:
        leftDoor = data;
        currentState = get_right_door;
        break;

      case get_right_door:
        rightDoor = data;
        currentState = waiting_for_postfix;
        break;

      case waiting_for_postfix:
        if (data == 0xF0) {
          currentState = waiting_for_prefix;
          reportState();
        }
        break;
    }
  }
}

We'll use a state machine to receive the data from the XBee. There are four states: one each for the prefix and postfix, which mark the start and end of our data, and one for each door. The door states record the data from the XBee, and the prefix and postfix states are used for flow control. When we see the postfix byte, we report the door states with reportState(). This function should turn on an LED when both doors are closed.

void reportState() {
  digitalWrite(4, leftDoor & rightDoor);
}

When a door is closed, its state is HIGH (1), so when both are HIGH, the bitwise & operator evaluates to HIGH and turns on the LED.

Let's also report the door status online. Connect the Arduino Yún to your WiFi network. The Arduino processor can send shell commands to the Linux processor on the board, so we'll use the curl command to post the door states to an external API.

void reportState() {
  digitalWrite(4, leftDoor & rightDoor);

  Process curl;

  String data = "leftDoor=";
  data += leftDoor;
  data += "&rightDoor=";
  data += rightDoor;

  curl.begin("curl");
  curl.addParameter("--data");
  curl.addParameter(data);
  curl.addParameter("http://example.com/doors"); // your API endpoint here
  curl.run();
}


That's all! Now you'll be able to see whether a bathroom is available via the LED in the room, or with a cool application that consumes the API.

Areas For Improvement

During the project we realized that the power consumption of the XBee module was very high and that it had no way of entering a low power state. We measured the current consumption of the Fio board by placing a 0.5 ohm resistor in series with the power from the battery. The voltage drop across the resistor divided by the resistance gives the current, which was around 50mA to 60mA. The battery we are using is a 400mAh battery, so we would get about 8 hours of use. Charging the battery every day is not an option, so more research needs to be done into a lower power wireless communication solution. This solution was also expensive, and more research can be done to find a lower cost implementation. thoughtbot Boston has two other bathrooms also located in remote places. We eventually want door sensors on these as well, and a lower cost setup would make this easier.

Episode #430 - January 7th, 2014

Posted 3 months back at Ruby5

Test Driving a JSON API in Rails, Jubilee for Vert.x, Exception#cause, Hulse, Caching an API

Listen to this episode on Ruby5

This episode is sponsored by Top Ruby Jobs
If you're looking for a top Ruby job or for top Ruby talent, then you should check out Top Ruby Jobs. Top Ruby Jobs is a website dedicated to the best jobs available in the Ruby community.

Test Driving a JSON API in Rails
Eno Compton wrote a blog post on "Test Driving a JSON API in Rails" where he touches on some pretty relevant details that tend to be overlooked by most developers.

Jubilee for Vert.x
Jubilee is a Rack server that uses the best features of Vert.x 2.0, such as the event bus, shared data, and clustering.

Ruby 2.1 Exception#cause
The folks at Bugsnag wrote a blog post describing Ruby 2.1's Exception#cause feature which allows access to the root cause of an exception in cases where multiple exceptions are raised.

Hulse, by Derek Willis, is a Ruby wrapper for House and Senate roll call votes. It parses Congressional vote data from the official House of Representatives and Senate websites.

Free Ruby Tapas on Caching an API
Avdi Grimm has just freed up another episode from the Ruby Tapas archives on Caching an API. He talks about building a caching layer over an HTTP API.

Playbook v2


We're pleased to announce the second major, 14,000 word version of our playbook. We're giving it away for free and licensing it as Creative Commons Attribution-NonCommercial.

The playbook is the most comprehensive description that we've ever gathered of how we run our company and how we make web and mobile products. We did not invent many of the techniques we describe, but we use them regularly in the course of doing real work.


We released the first version in August of 2011.

Some of our tools and techniques are the same as then, such as Heroku for application hosting, New Relic for performance monitoring, and Airbrake for error tracking.

The developing section is strongly influenced by Extreme Programming rules, which continues to be best practice well into its second decade of popularity.


Some things are new.

Right up front are product design sprints, strongly influenced by Google Ventures' design team.

Another overhauled section is about using Trello to manage our planning (strongly influenced by UserVoice), sales, and hiring processes.

We mention for the first time our style guide and how to follow the guidelines pragmatically.

We include a production checklist to make sure we don't forget an important part of a production deployment before launch.

We mention Zapier quite a few times. It has become important to us as glue between systems like StackOverflow and Campfire, FreshBooks and Campfire, email to Trello, Trello to Campfire, Stripe to Campfire, Wufoo to Trello, and a few others.


The HTML version is free at http://playbook.thoughtbot.com. You can get the PDF, MOBI, and EPUB versions for free by subscribing to our monthly-ish newsletter, The Bot Cave.

We hope you find it useful.

The Unix Shell's Humble If


The Unix shell is often overlooked by software developers more familiar with higher level languages. This is unfortunate because the shell can be one of the most important parts of a developer's toolkit. From the one-off grep search or sed replacement in your source tree to a more formal script for deploying or backing up an entire application complete with options and error handling, the shell can be a huge time saver.

To help shed light on the power and subtlety that is the Unix shell, I'd like to take a deep dive into just one of its many features: the humble if statement.


The general syntax of an if statement in any POSIX shell is as follows:

if command; then
  expressions
elif command; then   # optionally
  expressions
else                 # optionally
  expressions
fi


The if statement executes command and determines if it exited successfully or not. If so, the "consequent" path is followed and the first set of expressions is executed. Otherwise, the "alternative" is followed. This may mean continuing similarly with an elif clause, executing the expressions under an else clause, or simply doing nothing.

if grep -Fq 'ERROR' development.log; then
  # there were problems at some point
elif grep -Fq 'WARN' development.log; then
  # there were minor problems at some point
else
  # all ok!
fi

The command can be a separate binary or shell script, a shell function or alias, or a variable referencing any of these. Success is determined by a zero exit status or return value; anything else is failure. This makes sense: there may be many ways to fail, but there should be exactly one way to succeed.

is_admin() {
  return 1
}

if is_admin; then
  # this will not run
fi

If your command is a pipeline, the exit status of the last command in the pipeline will be used:

# useless use of cat for educational purposes only!
if cat development.log | grep -Fq 'ERROR'; then
  # ...
fi

For the most part, this is intuitive and expected. In cases where it's not, some shells offer the pipefail option to change that behavior.
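For example, this sketch (bash syntax; `set -o pipefail` also works in zsh) shows what the option changes:

```shell
#!/bin/bash
# By default, a pipeline's status is that of its last command
false | true
echo "$?"   # => 0 -- the failure of `false` is hidden

# With pipefail, any failing command fails the whole pipeline
set -o pipefail
false | true
echo "$?"   # => 1
```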

Negation, True, and False

The ! operator, when preceding a command, negates its exit status. Additionally, both true and false are normal commands on your system which do nothing but exit appropriately:

true; echo $?
# => 0

false; echo $?
# => 1

! true; echo $?
# => 1

The ! operator allows us to easily form an "if-not" statement:

if ! grep -Fq 'ERROR' development.log; then
  # All OK
fi

The availability of true and false is what makes statements like the following work:

if true; then
  # ...
fi

if ! "$var"; then
  # ...
fi

However, you should avoid doing this. The idiomatic (and more efficient) way to represent booleans in shell scripts is with the values 1 (for true) and 0 (for false). This idiom is made more convenient if you have (( available, which we'll discuss later.

The test Command

The test command performs a test according to the options given, then exits successfully or not depending on the result of said test. Since this is a command like any other, it can be used with if:

if test -z "$variable"; then
  # $variable has (z)ero size
fi

if test -f ~/foo.txt; then
  # ~/foo.txt is a regular (f)ile
fi

test accepts a few symbolic options as well, to make for more readable statements:

if test "$foo" = 'bar'; then
  # $foo equals 'bar', as a string
fi

if test "$foo" != 'bar'; then
  # $foo does not equal 'bar', as a string
fi

The = and != options are only for string comparisons. To compare numerically, you must use -eq and -ne. See man 1 test for all available numeric comparisons.
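To see why the distinction matters, compare a zero-padded number against its plain spelling (values chosen arbitrarily):

```shell
# = compares strings; -eq compares integers
test 05 = 5; echo "$?"    # => 1 -- "05" and "5" are different strings
test 05 -eq 5; echo "$?"  # => 0 -- but the same number
```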

Since commands can be chained together logically with && and ||, we can combine conditions intuitively:

if test "$foo" != 'bar' && test "$foo" != 'baz'; then
  # $foo is neither bar nor baz
fi

Be aware of precedence. If you need to enforce it, group your expressions with curly braces.

if test "$foo" != 'bar' && { test -z "$bar" || test "$foo" = "$bar"; }; then
  # $foo is not bar, and ($bar is empty or $foo is equal to it)
fi

Note the final semicolon before the closing brace.

If your expression is made up entirely of test commands, you can collapse them using -a or -o. This will be faster since it's only one program invocation:

if test "$foo" != 'bar' -a "$foo" != 'baz'; then
  # $foo is neither bar nor baz
fi

The [ Command

Surprisingly, [ is just another command. It's distributed alongside test and its usage is identical with one minor difference: a trailing ] is required. This bit of cleverness leads to an intuitive and familiar form when the [ command is paired with if:

if [ "string" != "other string" ]; then
  # same as: if test "string" != "other string"; then
fi

Unfortunately, many users come across this usage first and assume the brackets are part of if itself. This can lead to some nonsensical statements.

Rule: Never use commands and brackets together

Case in point, this is incorrect:

if [ grep -q 'ERROR' log/development.log ]; then
  # ...
fi

And so is this:

if [ "$(grep -q 'ERROR' log/development.log)" ]; then
  # ...
fi

The former is passing a number of meaningless words as arguments to the [ command; the latter is passing the string output by the (quieted) grep invocation to the [ command.

There are cases where you might want to test the output of some command as a string. That would lead you to use a command and brackets together. However, there is almost always a better way.

# this does work
if [ -n "$(grep -F 'ERROR' log/development.log)" ]; then
  # there were errors
fi

# but this is better
if grep -Fq 'ERROR' development.log; then
  # there were errors
fi

# this also works
if [ -n "$(diff file1 file2)" ]; then
  # files differ
fi

# but this is better
if ! diff file1 file2 >/dev/null; then
  # files differ
fi

As with most things, quoting is extremely important. Take the following example:

var="" # an empty string

if [ -n $var ]; then
  # string is not empty
fi

You'll find if you run this code, it doesn't work: the [ command returns true even though we can clearly see that $var is in fact empty (a string of zero size).

Since a single argument is valid usage for [ (it's true whenever that argument is a non-empty string), what's actually being executed by the shell is this:

if [ -n ]; then
  # is the string "-n" non-empty? Yes.
fi

The fix is to quote correctly:

if [ -n "$var" ]; then
  # is the string "" non-empty? No.
fi

When are quotes needed? Well, to paraphrase Bryan Liles…

Rule: Quote All the Freaking Time

Examples: "$var", "$(command)", "$(nested "$(command "$var")")"

It's common for script authors to use the following to "fix" this issue as well:

if [ x$var = x ]; then
  # $var is empty
fi

Don't do this. It will still fail if $var contains any whitespace. The only way to properly handle whitespace is to properly quote; and once the expression is properly quoted, the "x trick" is no longer needed.
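A sketch of that failure mode, using a value that contains a space:

```shell
#!/bin/bash
var='a b'

# The unquoted "x trick" expands to: [ xa b = x ]
# -- three left-hand words, which is an error, not a test
if [ x$var = x ] 2>/dev/null; then
  echo "empty"
else
  echo "x trick broke"   # this branch runs
fi

# Proper quoting handles the whitespace fine
if [ "$var" = '' ]; then
  echo "empty"
else
  echo "not empty"       # correct answer
fi
```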

Non-POSIX Concerns

In most modern shells like bash and zsh, two built-ins are available: [[ and ((. These perform faster, are more intuitive, and offer many additional features compared to the test command.

Best Practice: If you have no reason to target POSIX shell, use [[


[[ comes with the following features over the normal test command:

  • Use familiar ==, >=, and <= operators
  • Check a string against a regular expression with =~
  • Check a string against a glob with ==
  • Less strict about quoting and escaping

You can read more details about the difference here.

While the operators are familiar, it's important to remember that they are string (or file) comparisons only.

Rule: Never use [[ for numeric comparisons.

For that, we'll use (( which I'll explain shortly.
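A sketch of the trap this rule avoids: inside [[, the < and > operators compare strings lexically, so numbers sort by their first character (values chosen arbitrarily):

```shell
#!/bin/bash
if [[ 10 < 9 ]]; then
  echo "lexically, 10 sorts before 9"   # this fires!
fi

if ((10 < 9)); then
  echo "never printed"                  # numerically false
fi
```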

When dealing with globs and regular expressions, we immediately come to another rule:

Rule: Never quote a glob or regular expression

I know, I just said to quote everything, but the shell is an epic troll and these are the only cases where quotes can hurt you, so take note:

for x in "~/*"; do
  # This loop will run once with $x set to "~/*" rather than once
  # for every file and directory under $HOME, as was intended
done

for x in ~/*; do
  # Correct
done

case "$var" in
  "this|that")
    # This will only hit if $var is exactly "this|that"
    ;;
  "*")
    # This will only hit if $var is exactly "*"
    ;;
esac

# Correct
case "$var" in
  this|that) ;;
  *) ;;
esac


if [[ "$foo" == '*bar*' ]]; then
  # True only if $foo is exactly "*bar*"
fi

if [[ "$foo" == *bar* ]]; then
  # Correct
fi

if [[ "$foo" =~ '^foo' ]]; then
  # In bash, the quotes make the pattern literal: true if $foo
  # contains the string "^foo" anywhere, not if it starts with "foo"
fi

if [[ "$foo" =~ ^foo ]]; then
  # Correct
fi

If the glob or regular expression becomes unwieldy, you can place it in a variable and use the (unquoted) variable in the expression:

pattern='^Home sweet'

if [[ 'Home sweet home' =~ $pattern ]]; then
  # ...
fi


The same goes for a glob stored in a variable; leaving the expansion unquoted lets the glob expand:

for file in $myfiles; do
  # ...
done

After regular expression matches, you can usually find any capture groups in a magic global. In bash, it's BASH_REMATCH.

if [[ 'foobarbaz' =~ ^foo(.*)baz$ ]]; then
  echo "${BASH_REMATCH[1]}"
  # => "bar"
fi

And in zsh, it's match.

if [[ 'foobarbaz' =~ ^foo(.*)baz$ ]]; then
  echo $match[1]
  # => "bar"
fi

Note that in zsh, you don't need curly braces for array element access.

Math and Numerical Comparisons

The built-in (( or Arithmetic Expression is concerned with anything numeric. It's an enhancement on the POSIX $(( )) expression which replaced the ancient expr program for doing integer math.


# old, don't use!
i=$(expr "$i" + 1)

# better, POSIX
i=$((i + 1))

# valid in shells like bash and ksh93
((i++))

# alternate syntax
let i++

The difference between $((expression)) and ((expression)) or let expression is whether you want the result or not. Also notice that in either form, we don't need to use the $ when referencing variables. This is true in most but not all cases ($# is one where it's still required).
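A short sketch of the distinction (variable names arbitrary):

```shell
#!/bin/bash
x=5

y=$((x * 2))             # $(( )) expands to the result
echo "$y"                # => 10

((x > 3)) && echo "big"  # (( )) yields only an exit status
                         # => big

echo "$(($# + 1))"       # $# still needs its dollar sign
                         # => 1 when run with no arguments
```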

When comparison operators are used within ((, it will perform the comparison and exit accordingly (just like test). This makes it a great companion to if:

if ((x == 42)); then
  # ...
fi

if ((x < y)); then
  # ...
fi

Here's a more extended example showing that it can be useful to perform arithmetic and comparisons in the same expression:

retry() {
  local i=1 max=5

  while ((i++ <= max)); do
    if try_something; then
      printf "Call succeeded.\n"
      return 0
    fi
  done

  printf "Maximum attempts reached!\n" >&2
  return 1
}

The (( form can also check numbers for "truthiness". Namely, the number 0 is false. This makes our boolean idiom a bit more convenient:


# POSIX
if [ "$var" -eq 1 ]; then
  # ...
fi

# bash, zsh, etc
if ((var)); then
  # ...
fi

# example use-case. $UID of the root user is 0.
if ((UID)); then
  error "You must be root"
fi

This will perform better than a fork-exec of /bin/true or /bin/false.


To recap, we've seen that if in the Unix shell is both simple and complex. It does nothing but execute a command and branch based on exit status. When combined with the test command, we can make powerful comparisons on strings, files, and numbers while upgrading to [ gives the same comparisons a more familiar syntax. Additionally, using non-POSIX enhancements like [[ and (( gives us globs, regular expressions, and better numeric comparisons.

You've also seen a number of rules and best practices to ensure your shell scripts act as you intend in as many shells as you choose to run them.

Episode #429 - January 3rd, 2014

Posted 3 months back at Ruby5

Writing a Ruby compiler, the Omega universe simulator, RubyGems 2.2.0, debugging with HTTP clients, detecting similarities in images, and the Lotus web framework all in this episode of the Ruby5!

Listen to this episode on Ruby5

This episode is sponsored by New Relic
New Relic is _the_ all-in-one web performance analytics product. It lets you manage and monitor web application performance, from the browser down to the line of code. With Real User Monitoring, New Relic users can see browser response times by geographical location of the user, or by browser type.

Writing a Compiler in Ruby from the Bottom Up
Ever wondered just what goes into writing a compiler? This long running series of blog posts from Vidar Hokstad will take you on a whirlwind tour!

The Omega Simulation Framework
The Omega universe simulation framework lets you create your very own game universe in the cloud!

RubyGems 2.2.0 Released
Happy Festivus, we got a new RubyGems!

Debugging an HTTP Client Library
Avdi gets to the bottom of an HTTP client.

Detect Similar Images
Check out part 1 of a series on detecting similar images.

Lotus is a new full-stack web application framework for Ruby.

Phusion Passenger 4.0.33 released

Posted 4 months back at Phusion Corporate Blog

Phusion Passenger is a fast and robust web server and application server for Ruby, Python, Node.js and Meteor. Passenger takes a lot of complexity out of deploying web apps, and adds powerful enterprise-grade features that are useful in production. High-profile companies such as Apple, New York Times, AirBnB, Juniper and American Express are already using it, as are over 350,000 websites.

Phusion Passenger is under constant maintenance and development. Version 4.0.33 is a bugfix release.

Phusion Passenger also has an Enterprise version which comes with a wide array of additional features. By buying Phusion Passenger Enterprise you will directly sponsor the development of the open source version.

Recent changes

4.0.31 and 4.0.32 have been skipped because an incompatibility problem with very old Ruby versions was found by our build server shortly after tagging the releases. 4.0.33 fixes all those problems. The changes in 4.0.31, 4.0.32 and 4.0.33 combined are:

  • Introduced a new tool: passenger-config restart-app. With this command you can initiate an application restart without touching restart.txt. Unlike touching restart.txt, this tool initiates the restart immediately instead of on the next request.
  • Fixed some problems in process spawning and request handling.
  • Fixed some problems with the handling of HTTP chunked transfer encoding bodies. These problems only occurred in Ruby.
  • Fixed a compatibility problem in passenger-install-apache2-module with Ruby 1.8. The language selection menu didn’t work properly.
  • Fixed the HelperAgent, upon shutdown, not correctly waiting 5 seconds until all clients have disconnected. Fixes issue #884.
  • Fixed compilation problems on FreeBSD 10.
  • Fixed some C++ strict aliasing problems.
  • Fixed some problems with spawning applications that print messages without newline during startup. Fixes issue #1039.
  • Fixed potential hangs on JRuby when Ctrl-C is used to shutdown the server. Fixes issue #1035.
  • When Phusion Passenger is installed through the Debian package, passenger-install-apache2-module now checks whether the Apache module package (libapache2-mod-passenger) is properly installed, and installs it using apt-get if it’s not installed. Fixes issue #1031.
  • The passenger-status --show=xml command no longer prints the non-XML preamble, such as the version number and the time. Fixes issue #1037.
  • The Ruby native extension now checks whether it's loaded against the right Ruby version, to prevent problems when people upgrade Ruby without recompiling their native extensions.
  • Various other minor Debian packaging improvements.

Installing or upgrading to 4.0.33

Available for: OS X, Debian, Ubuntu, Heroku, Ruby gem, and tarball.


Fork us on Github!

Phusion Passenger’s core is open source. Please fork or watch us on Github. :)


If you would like to stay up to date with Phusion news, please fill in your name and email address below and sign up for our newsletter. We won’t spam you, we promise.

Announcing Lotus

Posted 4 months back at Luca Guidi - Home

I’m pleased to announce Lotus: the Open Source project I’ve conceived, hacked and built during the last year.

Lotus is a full stack web framework for Ruby, built with lightness, performance and testability in mind. It aims to bring Object Oriented Programming back to web development, leveraging stable APIs, a minimal DSL, and plain objects.

Standalone frameworks

It’s composed by standalone frameworks (controllers, views, etc..), each one is shipped as an independent gem, in order to remark the separation of concerns. They can be used with any Rack compatible application for a specific need: for instance, Lotus::Router can be used to dispatch HTTP requests for a pool of Sinatra applications.

Full stack application

The other way to use Lotus is to build a full stack application with it, like Rails does. The Lotus gem is designed to enhance those frameworks’ features with a few specific conventions.


Lotus is based on simplicity: fewer DSLs, few conventions, more objects, zero monkey-patching of the core language and standard lib, and separation of concerns between MVC layers. It suggests patterns rather than imposing them. It leaves developers the freedom to build their own architecture and choose their inheritance structure. It simplifies testability, and encourages single, well defined responsibilities between classes.


Lotus is complex software; it needs to be completed, and to get feedback, in order to become production ready. Some of its frameworks have already reached a certain degree of maturity; others still need to be extracted into gems. A single release day would be a hard expectation to meet, so I would like to suggest an experiment: open sourcing one component on the 23rd of every month, starting in January with Lotus::Utils and Lotus::Router.

Happy new year!