Starting and Stopping Background Services with Homebrew

Posted 3 months back at GIANT ROBOTS SMASHING INTO OTHER GIANT ROBOTS - Home

I love Homebrew, but sometimes it really gets me down, you know? Especially when I have to deal with launchctl.

launchctl loads and unloads services that start at login. In OS X, these services are represented by files ending with .plist (which stands for "property list"), usually stored in either ~/Library/LaunchAgents or /Library/LaunchAgents. You load them with launchctl load $PATH_TO_PLIST and unload them with launchctl unload $PATH_TO_PLIST. Loading a plist tells the program it represents (e.g. redis) to start at login; unloading tells it not to start at login.

This post-install message from Homebrew may look familiar:

To have launchd start mysql at login:
    ln -sfv /usr/local/opt/mysql/*.plist ~/Library/LaunchAgents
Then to load mysql now:
    launchctl load ~/Library/LaunchAgents/homebrew.mxcl.mysql.plist
Or, if you don't want/need launchctl, you can just run:
    mysql.server start

Typing launchctl load and launchctl unload takes too long, and I can never remember where Homebrew plists are. Fortunately, Homebrew includes a lovely interface for managing this without using launchctl or knowing where plists are.

brew services

While it's not publicized, brew services is available on every installation of Homebrew. First, run the ln command that Homebrew tells you about in the post-installation message above:

ln -sfv /usr/local/opt/mysql/*.plist ~/Library/LaunchAgents

For Redis, you'd run:

# `brew info redis` will tell you what to run if you missed it
ln -sfv /usr/local/opt/redis/*.plist ~/Library/LaunchAgents

And so on. Now you're ready to brew a service:

$ brew services start mysql
==> Successfully started `mysql` (label: homebrew.mxcl.mysql)

That bit about "label: " means it just loaded ~/Library/LaunchAgents/homebrew.mxcl.mysql.plist with launchctl load.

Let's say MySQL's acting funky. We can easily restart it:

$ brew services restart mysql
Stopping `mysql`... (might take a while)
==> Successfully stopped `mysql` (label: homebrew.mxcl.mysql)
==> Successfully started `mysql` (label: homebrew.mxcl.mysql)

Now let's see everything we've loaded:

$ brew services list
redis      started      442 /Users/gabe/Library/LaunchAgents/homebrew.mxcl.redis.plist
postgresql started      443 /Users/gabe/Library/LaunchAgents/homebrew.mxcl.postgresql.plist
mongodb    started      444 /Users/gabe/Library/LaunchAgents/homebrew.mxcl.mongodb.plist
memcached  started      445 /Users/gabe/Library/LaunchAgents/homebrew.mxcl.memcached.plist
mysql      started    87538 /Users/gabe/Library/LaunchAgents/homebrew.mxcl.mysql.plist

Note that the list of services includes services you loaded with launchctl load, not just services you started with brew services.

Let's say we uninstalled MySQL and Homebrew didn't remove the plist for some reason (it usually removes it for you). There's a command for you:

$ brew services cleanup
Removing unused plist /Users/gabe/Library/LaunchAgents/homebrew.mxcl.mysql.plist

Kachow.

Hidden Homebrew commands

Homebrew ships with a whole bunch of commands that don't show up in brew --help. You can see a list of them in the Homebrew git repo. Each file is named like brew-COMMAND, and you run them with brew COMMAND. I recommend brew beer.

What's next?

If you liked this, I recommend reading through Homebrew's Tips and Tricks. You can also try out another Homebrew extension for installing Mac apps: homebrew-cask.

Open Data Scotland: a Linked Data pilot study for the Scottish Government

Posted 3 months back at RicRoberts :

Last month we launched Open Data Scotland - a pilot site built for the Scottish Government to showcase how Linked Open Data can make for smarter, more efficient data use. Accompanying the site is a report (download pdf) which we produced to explain what Linked Open Data is, how to publish it effectively, and its potential uses and benefits for the Scottish public sector.

The site is split into three parts:

We wanted to emphasise the potential of Linked Data to a range of users. So, we’re using datasets with topics that have proven popular in other projects, such as deprivation data, and we’ve targeted each section of the site at a slightly different audience.

One new concept that we introduced in this project is contextual tutorials, aimed at a range of users: from those working with spreadsheets to those interested in more technical Linked Data wizardry. We love this feature because it gives a whole new set of people a friendly way in to the data, introducing a whole new audience to the power of Linked Data.

Something else new to this project is data kits. These kits bridge the gap between the interactive visualisations and the more technical aspects, and help more advanced users get started working with the data in the site. This gets the right information to the people who want it, in a form that allows them to use it quickly and easily.

We’re really excited about this project which emphasises how interactive, accessible and useful Linked Data can be. Bill introduced the pilot at the Open Data Scotland conference in December. Check it out yourself and let us know what you think.

Brewfile: a Gemfile, but for Homebrew

Posted 3 months back at GIANT ROBOTS SMASHING INTO OTHER GIANT ROBOTS - Home

Bundler users define dependencies for Ruby applications in a Gemfile and install those dependencies by running bundle install.

Homebrew users can define dependencies for their OS X operating system with a Brewfile and install those dependencies by running brew bundle. Let's write a basic Brewfile:

# Brewfile
install openssl
# a comment
link --force openssl

Note that Homebrew will treat lines that start with # as comments. Every other line will be passed to brew. So this:

install openssl
# a comment
link --force openssl

is run as these commands:

brew install openssl
brew link --force openssl

Usage

I can think of a few places where a Brewfile would be welcome:

  • In dotfiles, either yours or your company's. For example, we use it in our excellent dotfiles repo.
  • A setup script for your app (bundle install && brew bundle)
  • A setup script for a new machine. I often forget to install a tool or two (like rbenv-gem-rehash).

It's a neat encapsulation for non-programming-language dependencies like phantomjs.

What's next?

If you found this useful, I recommend reading through the source of the brew bundle command. For more Homebrew tricks, read through our OSX-related posts.

Scrolling DOM elements to the top, a Zepto plugin

Posted 3 months back at mir.aculo.us - Home

There’s bunches of plugins, extensions and techniques to smoothly scroll page elements, but most of them are convoluted messes and probably do more than you need. I like “small and works well”, and it’s a good exercise for those JavaScript and DOM muscles to write a small plugin from time to time.

My goal was to have an animated “scroll to top” for the mobile version of Freckle. Normally the browser takes care of that (tap the status bar to scroll to top), but in a more complex layout the built-in mechanism quickly fails and you’ll have to implement the interactions users expect yourself. Specifically, this is for the native app wrapper (Cordova) I use for Freckle’s upcoming mobile app; it’s hooked up so that taps on the status bar invoke a JavaScript method.

During development of this I needed the same thing for arbitrary scroll positions as well, so “scrolltotop” is a bit of a misnomer now. Anyway, here’s the annotated code:

<script src="https://gist.github.com/madrobby/8507960.js"></script>

Often, writing your own specialized plug-in is faster than trying to understand and configure existing code. If you do, share it! :)

Episode #433 - January 17, 2014

Posted 3 months back at Ruby5

ActiveSupport Notifications, RailsBricks, DotEnv, Builder, Decorator, Chain of Responsibility, and null object patterns

Listen to this episode on Ruby5

NewRelic
NewRelic recently posted about what Nonlinear Dynamics Teach Us About App Stability

Instrumenting Your Code With ActiveSupport Notifications
We've been having hack lunches at CustomInk | Tech to level up our Rails knowledge. Find out what we learned about ActiveSupport Notifications

RailsBricks
RailsBricks will set up Bootstrap 3, Font Awesome, Devise, and Kaminari, and build out the basic models and views for those gems

Composable Matchers in RSpec 3.0
One of RSpec 3’s big new features is composable matchers. This feature will help make your tests more powerful with less brittle expectations

DotEnv
One of the tenets of a Twelve-Factor App is to store configuration in env vars. They are easy to change between deploys without changing any code; and unlike config files, there is little chance of them being checked into the code repo accidentally.

Code Show and Tell: PolymorphicFinder
You just need a quick refactor to use the Builder, Decorator, Chain of Responsibility, and Null Object patterns

We're NASA and We Know It (Mars Curiosity) Song
Thank you for listening to Ruby5. Be sure to tune in every Tuesday and Friday for the latest news in the Ruby and Rails community.

Rails + Angular + Jasmine: A Modern Testing Stack

Posted 3 months back at zerosum dirt(nap) - Home

When I started on my first Angular+Rails project around 12 months ago, there wasn't a lot of guidance around code organization, interop, and testing, and we got a lot of those things wrong. Since then, I've worked on several more projects using the same tech stack and have had several more chances to screw it up all over again. After a few of these, I feel like I've finally got some conventions in place that work well.

This morning the team over at Localytics (hi Raj!) wrote up a good retrospective on their use of Angular + Rails over the past year, including lessons they learned and ongoing challenges. They touch on several of the same issues that my colleagues and I have run into, and the writeup inspired me to dust off my old busted blog to document some of my own findings.

Testing Your JavaScript Has Never Been Easier

One area that I felt needed further clarity was testing: in particular, how a Rails-centric application can cleanly and easily integrate tests around Angular frontend logic. Fortunately, once you figure out how to set this up, you'll find that unit testing Angular code in Jasmine -- especially controller and factory code -- is surprisingly easy to do. It's really the first time I've been sufficiently happy with a frontend testing configuration.

To see a working example for yourself and hack around with it, go snag the sample project I pushed up to GitHub. Bundle and run it, and play around with the shockingly awesome todo list application. Because the world really needed another one of those. When you've had enough of that, take a look at the contents of the spec/javascripts directory.

We're using the jasmine-rails test runner with CoffeeScript here, because that's what works for me (sorry Karma). Pay close attention to the spec_helper.coffee, which does much of the dependency injection needed to provide clean and intuitively named interfaces in our example controller spec.

<script src="https://gist.github.com/8480021.js?file="></script>
<noscript>
<html><body>You are being <a href="https://github.com/gist/8480021">redirected</a>.</body></html>
</noscript>

This gives us nice ways to interface with the factories and controllers we're defining, as well as Angular's own ngMock library (super useful for stubbing server-side endpoints), the event loop, and even template compilation for partials and directives. A couple of these are illustrated in the sample controller spec shown here:

<script src="https://gist.github.com/8480013.js?file="></script>
<noscript>
<html><body>You are being <a href="https://github.com/gist/8480013">redirected</a>.</body></html>
</noscript>

Jasmine's syntax should be very familiar to anyone who does RSpec BDD work, and the work we've done in our spec helper really cleans up the beforeEach setup that's required in each individual controller spec. These particular tests make heavy use of ngMock, which you won't always need to use, and the calls to flush() are required to fulfill pending requests, preserving the async nature of the backend but allowing the tests to execute synchronously.

Testing Continuously With Guard

Although the Jasmine web interface is nice, I'm a big fan of using Guard to watch for filesystem events and kick off automated test runs from the command line. By including the guard-jasmine gem and updating our Guardfile, we can continuously test both our server-side RSpec logic and the Jasmine unit tests at the same time through a single interface.
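
A minimal Guardfile sketch, assuming guard-rspec running alongside guard-jasmine (the watch patterns here are illustrative and vary by project):

# Guardfile (sketch; patterns are illustrative)
guard :rspec do
  watch(%r{^spec/.+_spec\.rb$})
  watch(%r{^app/(.+)\.rb$}) { |m| "spec/#{m[1]}_spec.rb" }
end

guard :jasmine do
  watch(%r{^spec/javascripts/.+_spec\.(js|coffee)$})
  watch(%r{^app/assets/javascripts/(.+?)\.(js|coffee)$}) { |m| "spec/javascripts/#{m[1]}_spec.#{m[2]}" }
end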

One thing I haven't addressed here is directive testing, which can be a bit more difficult. I'll try to address that in a future post, or if you have your own recipes, feel free to link em up in the comments.

Special thanks to Mark Bates for working with me on early versions of this approach, and convincing me that Angular was worth looking at in the first place.

Recursive Macros in Vim

Posted 3 months back at GIANT ROBOTS SMASHING INTO OTHER GIANT ROBOTS - Home

Macros in vim can be a huge time saver, especially if they apply to a large number of lines. A trick I've been using recently is to use recursive macros to format large chunks of a file.

Let's say we have the following list of thousands of dates:

10/30/2013
11/30/2013
12/30/2013
...

And we want to change each to the following:

10/30/2013 : 10-30-2013
11/30/2013 : 11-30-2013
12/30/2013 : 12-30-2013
...

Macro Recording Time

Let's create the macro:

qqq             #clear out anything that may already be in the q register
qq              #start recording a macro and store it in the q register
y$              #yank to the end of the current line
A               #enter insert mode at the end of the line
<Space>:<Space> #type a colon surrounded by spaces
<Escape>        #return to normal mode
p               #paste the yanked date after the cursor
F/              #find the previous instance of /
r-              #replace the / with a -
;.              #repeat the find (;) and the replacement (.)
^               #go to the front of the line
j               #move down one line
@q              #make the macro recursive by having it invoke itself
q               #stop recording the macro

Now when you run @q, vim will run the macro on every line until it finishes, while you sit back and relax. I like using recursive macros because the loop exits as soon as a step fails on a line (here, for example, j fails once it reaches the last line). That makes large changes fast without risking applying the edit incorrectly throughout the file, provided you write your macros carefully.

What's next?

If you liked this post you should check out our vim screencast series The Art of Vim.

Phusion Passenger 4.0.35 released

Posted 3 months back at Phusion Corporate Blog

Version 4.0.34 has been skipped because it was a non-public release for QA purposes. The changes in 4.0.34 and 4.0.35 combined are:

  • The Node.js loader code now sets the isApplicationLoader attribute on the bootstrapping module. This provides a way for apps and frameworks that check module.parent to determine whether the current file is loaded by Phusion Passenger, or by other software that works in a similar way.

    This change has been introduced to solve a compatibility issue with CompoundJS. CompoundJS users should modify their server.js, and change the following:

    if (!module.parent) {
    

    to:

    if (!module.parent || module.parent.isApplicationLoader) {
    
  • Improved support for Meteor in development mode. Terminating Phusion Passenger now leaves less garbage Meteor processes behind.

  • It is now possible to disable the usage of the Ruby native extension by setting the environment variable PASSENGER_USE_RUBY_NATIVE_SUPPORT=0.
  • Fixed incorrect detection of the Apache MPM on Ubuntu 13.10.
  • When using RVM, if you set PassengerRuby/passenger_ruby to the raw Ruby binary instead of the wrapper script, Phusion Passenger will now print an error.
  • Added support for RVM >= 1.25 wrapper scripts.
  • Fixed loading passenger_native_support on Ruby 1.9.2.
  • The Union Station analytics code now works even without native_support.
  • Fixed passenger-install-apache2-module and passenger-install-nginx-module in Homebrew.
  • Binaries are now downloaded from an Amazon S3 mirror if the main binary server is unavailable.
  • And finally, although this isn’t really a change in 4.0.34, it should be noted. In version 4.0.33 we changed the way Phusion Passenger’s own Ruby source files are loaded, in order to fix some Debian and RPM packaging issues. The following doesn’t work anymore:

    require 'phusion_passenger/foo'
    

    Instead, it should become:

    PhusionPassenger.require_passenger_lib 'foo'
    

    However, we overlooked the fact that this change breaks Ruby apps which use our Out-of-Band GC feature, because such apps had to call require 'phusion_passenger/rack/out_of_band_gc'. Unfortunately we’re not able to maintain compatibility without reintroducing the Debian and RPM packaging issues. Users should modify the following:

    require 'phusion_passenger/rack/out_of_band_gc'
    

    to:

    if PhusionPassenger.respond_to?(:require_passenger_lib)
      # Phusion Passenger >= 4.0.33
      PhusionPassenger.require_passenger_lib 'rack/out_of_band_gc'
    else
      # Phusion Passenger < 4.0.33
      require 'phusion_passenger/rack/out_of_band_gc'
    end
    

Installing or upgrading to 4.0.35

Installation and upgrade instructions are available for OS X, Debian, Ubuntu, Heroku, the Ruby gem, and the tarball.

Compare Commits Between Git Branches

Posted 3 months back at GIANT ROBOTS SMASHING INTO OTHER GIANT ROBOTS - Home

Working with a lot of git branches can be a bit of a headache. Graph visualisations can get tangled and confusing, especially when they include more than just the branches you care about. Sound familiar? You need git show-branch.

I have a feature branch called stock-information on a project that's hosted on Heroku. I want to compare it to my master branch and to the master branch on my staging remote:

git show-branch stock-information staging/master master

The output can be a little confusing at first, but once you learn how to read it it's a huge time saver:

! [stock-information] WIP: Link to data series
 ! [staging/master] Add a description to Stock
  ! [master] Display Stocks
---
+   [stock-information] WIP: Link to data series
+   [stock-information~1] Create DataSeries for Stocks.
++  [staging/master] Add a description to Stock
++  [staging/master~1] Import external Stock information
+++ [master] Display Stocks

The first three lines are column headings. They show the commit at the tip of each of the branches I specified, with a ! to indicate which column will represent this branch in the lines that follow.

After the --- come the commits. The + characters near the start of the lines indicate which of the branches this commit is present on.

For example, the first commit only has a + in the first column. This lines up with the ! for stock-information in the heading section. So, we know that this commit is on the stock-information branch but not staging/master or master.

The third commit ("Add a description to Stock") has a + in each of the first two columns, which indicates it is present on the stock-information and staging/master branches.

The output will end with the last commit that is present on all of the specified branches, indicated by a + in each of the leading columns.

What's next?

If you found this useful, you might also enjoy:

Who's using the Internet for social good?

Posted 3 months back at RicRoberts :

Digital Social Innovation is a project we’ve been working on for Nesta which is all about tracking organisations and activities across Europe using the Internet for social good.

You can explore who’s been working on what and with whom via an interactive map, which updates in realtime as more data is added.

digital social map

Any organisation in Europe can sign up to showcase themselves and their activities and, because the projects are linked, you can see at a glance who else is working on them. Each activity has a page describing it and the areas it impacts as well as a lovely map visualisation, showing who’s joining in on it. For example, check out the CitySDK project.

The icing on the cake is that all the data entered via the site is instantly accessible in a Linked Open Data site powered by our PublishMyData platform. So, personal details excepted, anyone and everyone can access anything and everything in there. This is one of our favourite features of the project: the more that people can access the data, the more it can be used. And getting data used is what we’re all about. Full details of how to access the data programmatically via the APIs can be found here.

digital social data

The information collected in the site is being analysed by our collaborators in the project (who include FutureEverything, Esade, IRI and the Waag Society) to help identify the most important trends and influencers in this area, and so provide policy recommendations to the EU, who are funding the project.

We’re proud to have worked on this and think it’s an interesting and innovative use of Linked Data. Read more about the project on its About page and blog.

We're Hiring a Producer

Posted 3 months back at GIANT ROBOTS SMASHING INTO OTHER GIANT ROBOTS - Home

We're looking to hire a full-time producer in Boston.

This person will have the following responsibilities:

  • Recording, editing, and writing show notes for the Giant Robots podcast.
  • Recording and editing the Build Phase podcast.
  • Scheduling guests for the Giant Robots podcast.
  • Shooting and editing The Weekly Iteration (a recurring video show for Learn subscribers).
  • Shooting and editing longer, video-based workshops.
  • Managing outsourced editors for larger projects.
  • Managing our studio space and equipment.

The ideal candidate has experience recording and editing both video and audio, but we'll happily consider passionate learners with experience in just one of the two.

This position is full-time, with benefits including weekly catered lunches, health insurance, and unlimited paid time off.

It also has an extremely high degree of autonomy. You'll be given a credit card—if you think we need a piece of equipment, just order it. If you want to try a new way of shooting, or a new tool for editing, go for it. Great candidates would rather be set loose on a problem than told what to do about it. thoughtbot is an organization that embraces change, and we're looking for someone who is always looking to do things better than last time.

To apply, please email resumes@thoughtbot.com.

sed 102: Replace In-Place

Posted 3 months back at GIANT ROBOTS SMASHING INTO OTHER GIANT ROBOTS - Home

Many people know how to use basic sed:

$ sed 's/hello/bonjour/' greetings.txt
$ echo "hi there" | sed 's/hi/hello'

That'll cover 80% of your sed usage. This post is about the other 20%. Think of it as a followup course after sed 101.

So you can change streams by piping output to sed. What if you want to change the file in-place?

Replacing in-place

sed ships with the -i flag. Let's consult man sed:

-i extension
    Edit files in-place, saving backups with the specified extension.

Let's try it:

$ ls
greetings.txt
$ cat greetings.txt
hello
hi there
$ sed -i .bak 's/hello/bonjour/' greetings.txt
$ ls
greetings.txt
greetings.txt.bak
$ cat greetings.txt
bonjour
hi there
$ cat greetings.txt.bak
hello
hi there

So the original file contents are saved in a new file called [file_name].bak, and the new, changed version is in the original greetings.txt. Now all we have to do is:

$ rm greetings.txt.bak

And we've changed the file in-place. You are now the toast of the office, sung of by bards:

there walks the Unix programmer / they who know of sed -i

Let's get l33t

Wait, there's more in that man entry for sed -i:

If a zero-length extension is given, no backup will be saved.  It is not
recommended to give a zero-length extension when in-place editing files, as
you risk corruption or partial content in situations where disk space is
exhausted, etc.

Zero-length extension, eh? Let's try it on the original greetings.txt, from before we changed it:

$ sed -i '' 's/hello/bonjour/' greetings.txt
$ ls
greetings.txt
$ cat greetings.txt
bonjour
hi there
$ cat greetings.txt.bak
cat: greetings.txt.bak: No such file or directory

The -i '' tells sed to use a zero-length extension for the backup. A zero-length extension means that the backup has the same name as the new file, so no new file is created. It removes the need to run rm after doing an in-place replace.

I haven't run into any disk-space problems with -i ''. If you are worried about the man page's warning, you can use the -i .bak technique I mention in the previous section.

Find and replace in multiple files

We like sed so much that we use it in our replace script. It works like this:

$ replace foo bar **/*.rb

The first argument is the string we're finding. The second is the string with which we're replacing. The third is a pattern matching the list of files within which we want to restrict our search.

Now that you're a sed master, you'll love reading replace's source code.

What's next?

If you found this useful, you might also enjoy:

  • sed by example taught me sed. It's a great resource in an easy-to-follow format.
  • The Grymoire sed guide is also an easy-to-follow guide that starts off easy and dives deep. It's helpful when learning and as a reference.

Episode #432 - January 14, 2014

Posted 3 months back at Ruby5

We Brag about our Backend, shed some Light on Test Driven Rails, avoid the DBeater, pout over Ruby 1.9's end of life on this HAIKU edition of Ruby5.

Listen to this episode on Ruby5

This episode is sponsored by Top Ruby Jobs
If you're looking for a top Ruby job or for top Ruby talent, then you should check out Top Ruby Jobs. Top Ruby Jobs is a website dedicated to the best jobs available in the Ruby community.

Light Table Ruby
Rafe Rosen just released a new plugin for the recently-open-sourced Light Table IDE last week that adds full page, selection, or single line Ruby code execution. You can use this to quickly execute or demonstrate some code without leaving your Ruby files.

Test Driven Rails Part 1
Last week, Karol Galanciak posted the first article in a series on Test Driven Rails. The series intends to cover how, when, and what to test when developing a Rails application. This first part is mostly theoretical, and Part 2 will take the topics discussed and apply them to application development.

Code Reviews with Codebrag
Code reviews are sometimes hard to do, and harder to do consistently. Codebrag is a downloadable Ruby application that you can install and run on your own servers to watch your repositories and give you a simple interface for reviewing your code. Version 1 is free, and will be forever, so check it out.

XML-based DB Migrations with DBeater
DBeater is a yet-to-be-backed, crowdfunded project on Indiegogo for a Ruby gem that will allow you to migrate and version your database. It's backend agnostic, but uses XML instead of Ruby for its definition files. We can't all be perfect, eh?

Faster I18n Backend for Ruby Written in C
i18nema is a new I18n translation library which uses C underpinnings to ease some of the garbage collection / Ruby object generation pain in current I18n libraries. It should be faster and more memory efficient, albeit not something you likely want to talk too much about at work.

Ruby 1.9.3 End of Life
The Ruby core team announced late last week that support for Ruby 1.9 will be ending. Active development will cease in about a month, followed by a year of security fix support, and all support will end in February of 2015. Time to migrate to Ruby 2.1!

About the recent oss-binaries.phusionpassenger.com outages

Posted 3 months back at Phusion Corporate Blog

As some of you might have noticed, there were some problems recently with the server oss-binaries.phusionpassenger.com, where we host our APT repository and precompiled binaries for Phusion Passenger. Although it was originally meant to be a simple file server for speeding up installation (by avoiding the need to compile Phusion Passenger), it has grown a lot in importance over the past few Phusion Passenger releases, so any downtime causes major problems for many users:

  • Our APT repository has grown more popular than we thought.
  • Many Heroku users are unable to start new dynos as long as the server is down. The Heroku dyno environment provides neither the compiler toolchain nor the hardware resources to compile Phusion Passenger, which is why, when run on Heroku, Phusion Passenger downloads binaries from our server.

The server first went down on Sunday and was fixed later that day. Unfortunately it went down again on Tuesday morning; we fixed it soon after.

We sincerely apologize for this problem. But of course, apologies are not going to cut it. Since the first outage on Sunday we’ve realized just how important this — originally minor — server has become, and we’ve begun work to solve the issue permanently. It’s clear that relying on a single server is a mistake, so we’re taking the following actions:

  • We’re adjusting the download timeouts in Phusion Passenger so that server problems don’t freeze it indefinitely. This allows Phusion Passenger to detect server problems more quickly and fall back to compilation, without triggering any timeouts that may abort Phusion Passenger entirely. This was implemented yesterday but requires some more testing.
  • Instead of trying to download the native_support binary, Phusion Passenger should try to compile it first, because compiling native_support takes less than 1 second. If the correct compiler toolchain is installed on the server, this avoids using the network entirely, so it’s unaffected by any server outages of ours. This was also implemented yesterday. (The rest of Phusion Passenger takes longer to compile, so we can’t apply the same strategy there.)
  • For Heroku users: binaries are now downloaded at Heroku deploy time, not at dyno boot time, so Heroku users are less susceptible to download problems. This was implemented yesterday.
  • Reverting any server changes that we’ve made recently to oss-binaries.phusionpassenger.com, in the hope that this will increase the server’s uptime. The true reason for the downtime is still under investigation, but we’re giving the other items in this list more priority because they have more potential to fix the problem permanently. This was implemented today.
  • Setting up an Amazon S3 mirror for high availability. If the main server is down, Phusion Passenger should automatically download from the mirror instead. We’re currently working on this.

The goal is to finish all these items this week and to release a new version that includes these fixes. We’re working around the clock on this.

Workarounds for now

Users can apply the following workaround for now in order to prevent Phusion Passenger from freezing during downloading of binaries:

Edit /etc/hosts and add “127.0.0.1 oss-binaries.phusionpassenger.com”

Phusion Passenger will automatically fall back to compiling if it can’t download binaries.

Unfortunately, this workaround will not be useful for users who rely on our APT repository, or Heroku users. We’re working on a true fix as quickly as we can.

How We Test Rails Applications

Posted 3 months back at GIANT ROBOTS SMASHING INTO OTHER GIANT ROBOTS - Home

I'm frequently asked what it takes to begin testing Rails applications. The hardest part of being a beginner is that you often don't know the terminology or what questions you should be asking. What follows is a high-level overview of the tools we use, why we use them, and some tips to keep in mind as you are starting out.

RSpec

We use RSpec over Test::Unit because the syntax encourages human readable tests. While you could spend days arguing over what testing framework to use, and they all have their merits, the most important thing is that you are testing.

Feature specs

Feature specs, a kind of acceptance test, are high-level tests that walk through your entire application ensuring that each of the components work together. They're written from the perspective of a user clicking around the application and filling in forms. We use RSpec and Capybara, which allow you to write tests that can interact with the web page in this manner.

Here is an example RSpec feature test:

# spec/features/user_creates_a_foobar_spec.rb

feature 'User creates a foobar' do
  scenario 'they see the foobar on the page' do
    visit new_foobar_path

    fill_in 'Name', with: 'My foobar'
    click_button 'Create Foobar'

    expect(page).to have_css '.foobar-name', text: 'My foobar'
  end
end

This test emulates a user visiting the new foobar form, filling it in, and clicking "Create". The test then asserts that the page has the text of the created foobar where it expects it to be.

While these are great for testing high level functionality, keep in mind that feature specs are slow to run. Instead of testing every possible path through your application with Capybara, leave testing edge cases up to your model, view, and controller specs.

I tend to get questions about distinguishing between RSpec and Capybara methods. Capybara methods are the ones that are actually interacting with the page, i.e. clicks, form interaction, or finding elements on the page. Check out the docs for more info on Capybara's finders, matchers, and actions.
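
To make the distinction concrete, here is the feature spec from above with each method labeled (an illustrative breakdown, not new API):

# Capybara methods: interact with the page
visit new_foobar_path
fill_in 'Name', with: 'My foobar'
click_button 'Create Foobar'

# RSpec wraps the assertion; have_css is a matcher Capybara provides
expect(page).to have_css '.foobar-name', text: 'My foobar'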

Model specs

Model specs are similar to unit tests in that they are used to test smaller parts of the system, such as classes or methods. Sometimes they interact with the database, too. They should be fast and handle edge cases for the system under test.

In RSpec, they look something like this:

# spec/models/user_spec.rb

# Prefix class methods with a '.'
describe User, '.active' do
  it 'returns only active users' do
    # setup
    active_user = create(:user, active: true)
    non_active_user = create(:user, active: false)

    # exercise
    result = User.active

    # verify
    expect(result).to eq [active_user]

    # teardown is handled for you by RSpec
  end
end

# Prefix instance methods with a '#'
describe User, '#name' do
  it 'returns the concatenated first and last name' do
    # setup
    user = build(:user, first_name: 'Josh', last_name: 'Steiner')

    # exercise and verify
    expect(user.name).to eq 'Josh Steiner'
  end
end

To maintain readability, be sure you are writing Four Phase Tests.

Controller specs

When testing multiple paths through a controller is necessary, we favor using controller specs over feature specs, as they are faster to run and often easier to write.

A good use case is for testing authentication:

# spec/controllers/sessions_controller_spec.rb

describe 'POST #create' do
  context 'when password is invalid' do
    it 'renders the page with error' do
      user = create(:user)

      post :create, session: { email: user.email, password: 'invalid' }

      expect(response).to render_template(:new)
      expect(flash[:notice]).to match(/^Email and password do not match/)
    end
  end

  context 'when password is valid' do
    it 'sets the user in the session and redirects them to their dashboard' do
      user = create(:user)

      post :create, session: { email: user.email, password: user.password }

      expect(response).to redirect_to '/dashboard'
      expect(controller.current_user).to eq user
    end
  end
end

View specs

View specs are great for testing the conditional display of information in your templates. A lot of developers forget about these tests and use feature specs instead, then wonder why they have a long running test suite. While you can cover each view conditional with a feature spec, I prefer to use view specs like the following:

# spec/views/products/_product.html.erb_spec.rb

describe 'products/_product.html.erb' do
  context 'when the product has a url' do
    it 'displays the url' do
      assign(:product, build(:product, url: 'http://example.com'))

      render

      expect(rendered).to have_link 'Product', href: 'http://example.com'
    end
  end

  context 'when the product url is nil' do
    it "displays 'None'" do
      assign(:product, build(:product, url: nil))

      render

      expect(rendered).to have_content 'None'
    end
  end
end

FactoryGirl

While writing your tests you will need a way to set up database records so you can test against them in different scenarios. You could use the built-in User.create, but that gets tedious when you have many validations on your model. With User.create you have to specify attributes to fulfill the validations, even if your test has nothing to do with those validations. On top of that, if you ever change your validations later, you have to reflect those changes across every test in your suite. The solution is to use either factories or fixtures to create models.

We prefer factories (with FactoryGirl) over Rails fixtures, because fixtures are a form of Mystery Guest. Fixtures make it hard to see cause and effect, because part of the logic is defined in a file far away from the context in which you are using it. Because fixtures are implemented so far away from your tests, they tend to be hard to control.

Factories, on the other hand, put the logic right in the test. They make it easy to see what is happening at a glance and are more flexible to different scenarios you may want to set up. While factories are slower than fixtures, we think the benefits in flexibility and readability outweigh the costs.
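
For reference, a factory along the lines of those used in this post might be defined like this (a sketch; the attribute names are assumptions drawn from the examples above):

# spec/factories.rb (sketch; attribute names assumed from the examples in this post)
FactoryGirl.define do
  factory :user do
    first_name 'Josh'
    last_name 'Steiner'
    sequence(:email) { |n| "user#{n}@example.com" }
    password 'password'
    active true
  end
end

With this in place, create(:user) satisfies the model's validations, and a test that cares about a particular attribute overrides only that attribute, as in create(:user, active: false).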

Persisting to the database slows down tests. Whenever possible, favor using FactoryGirl's build_stubbed over create. build_stubbed will generate the object in memory and save you from having to write to the disk. If you are testing something that has to query for the object (like User.where(admin: true)), the record must actually be in the database, so you must use create.
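
Roughly, the difference looks like this (illustrative; it assumes the user factory sketched above and an admin column on User):

# build_stubbed: builds the object in memory, never touches the database
user = build_stubbed(:user, first_name: 'Josh', last_name: 'Steiner')
user.name # => 'Josh Steiner'

# create: persists the record, required when the code under test queries the database
admin = create(:user, admin: true) # assumes User has an admin column
User.where(admin: true) # => [admin]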

Running specs with JavaScript

You will eventually run into a scenario where you need to test some functionality that depends on a piece of JavaScript. Running your specs with the default driver will not run any JavaScript on the page.

You need two things to run a feature spec with JavaScript.

  1. Install a JavaScript driver

    There are two types of JavaScript drivers. Something like Selenium will open a GUI browser and click around your page while you watch it. This can be a useful tool to visualize while debugging. Unfortunately, booting up an entire GUI browser is slow. For this reason, we prefer using a headless browser. For Rails, you will want to use either Poltergeist or Capybara Webkit.

  2. Tell the specific test to run with the JavaScript metadata key

     feature 'User creates a foobar' do
       scenario 'they see the foobar on the page', js: true do
         ...
       end
     end
    

With the above in place, RSpec will run any JavaScript necessary.

Database Cleaner

By default, Rails wraps each test in a database transaction. This means that, at the end of each test, Rails will roll back any changes to the database that happened within that spec. This is a good thing, as we don't want any of our tests having side effects on other tests.

Unfortunately, when we use a JavaScript driver, the test is run in another thread. This means it does not share a connection to the database and your test will have to commit the transactions in order for the running application to see the data. To get around this, we can allow the database to commit the data and subsequently truncate the database after each spec. This is slower than transactions, however, so we want to use truncation only when necessary.

This is where Database Cleaner comes in. Database Cleaner allows you to configure when each strategy is used. I recommend reading Avdi's post for all the gory details. It's a pretty painless setup, and I typically copy this file from project to project, or use Suspenders so that it's set up out of the box.
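
The configuration usually looks something like this (a sketch built from Database Cleaner's documented strategy, start, clean, and clean_with calls; the exact file varies by project):

# spec/support/database_cleaner.rb (sketch; adapt per project)
RSpec.configure do |config|
  config.use_transactional_fixtures = false

  config.before(:suite) do
    # start the run from a blank database
    DatabaseCleaner.clean_with(:truncation)
  end

  config.before(:each) do
    # fast transactions by default
    DatabaseCleaner.strategy = :transaction
  end

  config.before(:each, js: true) do
    # JavaScript-driven specs run the app in another thread, so truncate instead
    DatabaseCleaner.strategy = :truncation
  end

  config.before(:each) { DatabaseCleaner.start }
  config.after(:each) { DatabaseCleaner.clean }
end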

Test doubles and stubs

Test doubles are simple objects that emulate another object in your system. Often, you will want a simpler stand-in and only need to test one attribute, so it is not worth loading an entire ActiveRecord object.

car = double(:car)

When you use stubs, you are telling an object to respond to a given method in a known way. If we stub our double from before

car.stub(:max_speed).and_return(120)

we can now expect our car object to always return 120 when prompted for its max_speed. This is a great way to get an impromptu object that responds to a method without having to use a real object in your system that brings its dependencies with it. In this example, we stubbed a method on a double, but you can stub virtually any method on any object.

We can simplify this into one line:

car = double(:car, max_speed: 120)

Test spies

While testing your application, you are going to run into scenarios where you want to validate that an object receives a specific method. In order to follow Four Phase Test best practices, we use test spies so that our expectations fall into the verify stage of the test. Previously we used Bourne for this, but RSpec now includes this functionality in RSpec Mocks. Here's an example from the docs:

invitation = double('invitation', accept: true)

user.accept_invitation(invitation)

expect(invitation).to have_received(:accept)

Stubbing external requests with WebMock

Test suites that rely on third party services are slow, fail without an internet connection, and may have trouble with the services' rate limits or lack of a sandbox environment.

Ensure that your test suite does not interact with third party services by stubbing out external HTTP requests with WebMock. This can be configured in spec/spec_helper.rb:

require 'webmock/rspec'
WebMock.disable_net_connect!(allow_localhost: true)

Instead of making third party requests, learn how to stub external services in tests.
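
Once real connections are disabled, a spec can declare the response it needs up front (a minimal illustration; the endpoint and body here are made up):

# hypothetical endpoint and payload, for illustration only
stub_request(:get, 'http://api.example.com/stocks/AAPL').
  to_return(status: 200, body: '{"price": 450.99}', headers: { 'Content-Type' => 'application/json' })

Any code under test that issues that GET request now receives the canned response instead of hitting the network.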

What's next?

This was just an overview of how to get started testing Rails. To expedite your learning, I highly encourage you to take our TDD workshop, where you cover these subjects in depth by building two Rails apps from the ground up. It covers refactoring both application and test code to ensure both are maintainable. Students of the TDD workshop also have access to office hours, where you can ask thoughtbot developers any questions you have in real time.

I took this class as an apprentice, and I can't recommend it enough.