Considering dropping support for Rails 1.0-2.2

Posted 29 days back at Phusion Corporate Blog

We work very hard to maintain backward compatibility in Phusion Passenger. Even the latest version still supports Ruby 1.8.5 and Rails 1.0. We’ve finally reached a point where we believe dropping support for Rails 1.0-2.2 will benefit the quality of our codebase. Is there anybody here who would object to us dropping support for Rails 1.0-2.2? If so, please let us know by posting a comment. Rails 2.3 will still be supported.

The post Considering dropping support for Rails 1.0-2.2 appeared first on Phusion Corporate Blog.

Acceptance Tests at a Single Level of Abstraction


Each acceptance test tells a story: a logical progression through a task within an application. As developers, it’s our responsibility to tell each story in a concise manner and to keep the reader (either other developers or our future selves) engaged, aware of what is happening, and understanding of the purpose of the story.

At the heart of understanding the story being told is a consistent, single level of abstraction; that is, each piece of behavior is roughly similar in terms of functionality extracted and its overall purpose.

An example of multiple levels of abstraction

Let’s first focus on an example of how not to write an acceptance test by writing a test at multiple levels of abstraction.

# spec/features/user_marks_todo_complete_spec.rb
feature "User marks todo complete" do
  scenario "updates todo as completed" do
    create_todo "Buy milk"

    find(".todos li", text: "Buy milk").click_on "Mark complete"

    expect(page).to have_css(".todos li.completed", text: "Buy milk")
  end

  def create_todo(name)
    click_on "Add new todo"
    fill_in "Name", with: name
    click_on "Submit"
  end
end
Let’s focus on the scenario. We’ve followed the four-phase test, separating each step:

scenario "updates todo as completed" do
  # setup
  create_todo "Buy milk"

  # exercise
  find(".todos li", text: "Buy milk").click_on "Mark complete"

  # verify
  expect(page).to have_css(".todos li.completed", text: "Buy milk")

  # teardown not needed
end
To prepare for testing that marking a todo complete works, we create a todo to mark complete. Once the todo is created, we find it on the page and click the ‘Mark complete’ anchor tag associated with it. Finally, we ensure the same <li> is present, this time with a completed class.

From a behavior standpoint, this touches on each part of the app necessary to verify marking a todo as complete works; however, there are varying levels of abstraction in this test between the setup phase and exercise/verify phases. There are Capybara methods (find and have_css) interspersed with helper methods (like create_todo), which force developers to switch from user-level needs and outcomes to page-specific interactions like checking for presence of specific elements with CSS selectors.

An example of a single level of abstraction

Let’s now look at a scenario written at a single level of abstraction:

feature "User marks todo complete" do
  scenario "updates todo as completed" do
    create_todo "Buy milk"

    mark_complete "Buy milk"

    expect(page).to have_completed_todo "Buy milk"
  end

  def create_todo(name)
    click_on "Add new todo"
    fill_in "Name", with: name
    click_on "Submit"
  end

  def mark_complete(name)
    find(".todos li", text: name).click_on "Mark complete"
  end

  def have_completed_todo(name)
    have_css(".todos li.completed", text: name)
  end
end
This spec follows the Composed Method pattern, discussed in Smalltalk Best Practice Patterns, wherein each piece of functionality is extracted to well-named methods. Each method should be written at a single level of abstraction.

While we’re still following the four-phase test, the clarity provided by reducing the number of abstractions is obvious. There’s largely no context-switching as a developer reads the test because there’s no interspersion of Capybara helper methods with our methods (create_todo, mark_complete, and have_completed_todo).

The most common ways to introduce a single level of abstraction are to extract behavior to helper methods (either within the spec or to a separate file if the behavior is used across the suite) or to extract page objects.

The cost of going down the path of high-level helpers across a suite isn’t nonexistent, however; by extracting behavior to files outside the spec (especially as the suite grows and similar patterns emerge), the page interactions are separated from the tests themselves, which reduces cohesion.

Maintaining a single level of abstraction is a tool in every developer’s arsenal to help achieve clear, understandable tests. By extracting behavior to well-named methods, the developer can better tell the story of each scenario by describing behaviors consistently and at a high enough level that others will understand the goal and outcomes of the test.

Weather Lights

Posted about 1 month back at techno weenie - Home

I spoke at the GitHub Patchwork event in Boulder last month. My son Nathan tagged along to get his first taste of the GitHub Flow. I don't necessarily want him to be a programmer, but I do push him to learn a little to augment his interest in meteorology and astronomy.

The night was a success. He made it through the tutorial with only one complaint: the Patchwork credit went to my wife, who had created a GitHub login that night.

Since then, I've been looking for a project to continue his progress. I settled on a weather light, which consists of a ruby script that changes the color of a Philips Hue bulb. If you're already an experienced coder, jump straight to the source at


Unfortunately, there's one hefty requirement: You need a Philips Hue light kit, which consists of a Hue bridge and a light. Once you have the kit, you'll have to use the Hue API to create a user and figure out the ID of your light.

Next, you need to set up an account for the Weather2 API. There are a lot of services out there, but this one is free, supports JSON responses, and also gives simple forecasts. They allow 500 requests a day. If you set this script to run every 5 minutes, you'll only use 288 requests.

After you're done, you should have five values. Write these down somewhere.

  • HUE_API - The address of your Hue bridge. Probably something like ""
  • HUE_USER - The username you set up with the Hue API.
  • HUE_LIGHT - The ID of the Hue light. Probably 1-3.
  • WEATHER2_TOKEN - Your token for the weather2 API.
  • WEATHER2_QUERY - The latitude and longitude of your house. For example, Pikes Peak is at "38.8417832,-105.0438213."

Finally, you need ruby, with the following gems: faraday, color, and dotenv. If you're on a version of ruby lower than 1.9, you'll also want the json gem.

Writing the script

I'm going to describe the process I used to write the weatherhue.rb script. Due to the way ruby runs, it's not necessarily in the order that the code is written. If you look at the file, you'll see 4 sections:

  1. Lines requiring ruby gems.
  2. A few defined helper functions.
  3. A list of temperatures and their HSL values.
  4. Running code that gets the temperature and sets the light.

You'll likely find yourself bouncing around as you write the various sections.

Step 1: Get the temperature

The first thing the script needs is the temperature. There are two ways to get it: through an argument in the script (useful for testing), or a Weather API. This is a simple script that pulls the current temperature from the API forecast results.

if temp = ARGV[0]
  # Get the temperature from the first argument.
  temp = temp.to_i
else
  # Get the temperature from the weather2 api
  # (the endpoint base URL was elided in the original post)
  url = "#{ENV["WEATHER2_TOKEN"]}&temp_unit=f&output=json&query=#{ENV["WEATHER2_QUERY"]}"
  res = Faraday.get(url)
  if res.status != 200
    puts res.status
    puts res.body
    exit 1 # bail out on API errors
  end

  data = JSON.parse(res.body)
  temp = data["weather"]["curren_weather"][0]["temp"].to_i
end

Step 2: Choose a color based on the temperature

I wanted the color to match color ranges on local news forecasts.

I actually went through the tedious process of making a list of HSL values at 5 degree increments. The eye dropper app I used gave me RGB values from 0-255, which had to be converted to the HSL values that the Hue lights take. Here's how I did it in ruby with the color gem:

rgb = [250, 179, 255]

# convert the values 0-255 to a decimal between 0 and 1.
rgb_color = Color::RGB.from_fraction 250/255.0, 179/255.0, 255/255.0
hsl_color = rgb_color.to_hsl

# convert hsl decimals to the Philips Hue values
hsl = [
  # hue
  (hsl_color.h * 65535).to_i,
  # saturation
  (hsl_color.s * 255).to_i,
  # light (brightness)
  (hsl_color.l * 255).to_i,
]

I simply wrote the result in an HSL hash:

HSL = {
  -20=>[53884, 255, 217],
  -15=>[53988, 198, 187],
  -10=>[53726, 161, 167],
  # ...
}
After I had converted everything, I noticed a couple things. First, the saturation and brightness values don't change that much, especially for the hotter temperatures. Second, the hue values range from 53884 to 1492. I probably didn't need to convert all those RGB values by hand :)

We can use this list of HSL values to convert any temperature to a color.

def color_for_temp(temp)
  remainder = temp % 5
  if remainder == 0
    return HSL[temp]
  end

  # get the lower and upper bound around a temp
  lower = temp - remainder
  upper = lower + 5

  # convert the HSL values to Color::HSL objects
  lower_color = hsl_to_color(HSL[lower])
  upper_color = hsl_to_color(HSL[upper])

  # use Color::HSL#mix_with to get a color between two colors
  color = lower_color.mix_with(upper_color, remainder / 5.0)

  color_to_hsl color
end

Step 3: Set the light color

Now that we have the HSL values for the temperature, it's time to set the Philips Hue light. First, create a state object for the light:

# temp_color is an array of HSL colors: [53884, 255, 217]
state = {
  :on => true,
  :hue => temp_color[0],
  :sat => temp_color[1],
  :bri => temp_color[2],
  # performs a smooth transition to the new color for 1 second
  :transitiontime => 10,
}

A simple HTTP PUT call will change the color.

hueapi = Faraday.new(url: ENV["HUE_API"])
hueapi.put "/api/#{ENV["HUE_USER"]}/lights/#{ENV["HUE_LIGHT"]}/state", state.to_json

Step 4: Schedule the script

If you don't want to set the environment variables each time, you can create a .env file in the root of the application.
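The post's sample .env didn't survive extraction; here is a sketch using the five variable names listed earlier (every value below is a placeholder, substitute your own):

```shell
# .env — all values are placeholders
WEATHER2_QUERY=38.8417832,-105.0438213
```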


You can then run the script with dotenv:

$ dotenv ruby weatherhue.rb 75

A crontab can be used to run this every 5 minutes. Run crontab -e to add a new entry:

# note: put tabs between the `*` values
*/5 * * * * cd /path/to/script; dotenv ruby weatherhue.rb

Confirm the crontab with crontab -l.

Bonus Round

  1. Can we simplify the function to get the HSL values for a temperature? Instead of looking up by temperature, use a percentage to get a hue range from 55,000 to 1500.
  2. Can we do something interesting with the saturation and brightness values?
    Maybe tweak them based on the time of day.
  3. Update the script to use the forecast for the day, and not the current temperature.
  4. Set a schedule that automatically only keeps the light on in the mornings when you actually care what the temperature will be.
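Bonus idea 1 can be sketched as a straight linear map. The hue endpoints come from the post; the temperature bounds and the formula itself are my assumptions:

```ruby
# Map a temperature onto a hue from 55,000 (coldest) down to 1,500 (hottest),
# instead of looking values up in a hand-built table.
# min_temp/max_temp are assumed bounds, not from the original script.
def hue_for_temp(temp, min_temp: -20, max_temp: 110)
  fraction = (temp - min_temp).fdiv(max_temp - min_temp).clamp(0.0, 1.0)
  (55_000 - fraction * (55_000 - 1_500)).round
end
```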

I hope you enjoyed this little tutorial. I'd love to hear any experiences from working with it! Send me pictures or emails either to the GitHub issue for this post, or my email address.

Git pairing aliases, prompts and avatars

Posted about 1 month back at The Pug Automatic

When we pair program at Barsoom, we’ve started making commits in both users’ names.

This emphasizes shared ownership and makes commands like git blame and git shortlog -sn (commit counts) more accurate.

There are tools like hitch to help you commit as a pair, but I found it complex and buggy, and I’ve been happy with something simpler.


I just added some simple aliases to my Bash shell:

<figure class="code"><figcaption>~/.bash_profile</figcaption>
alias pair='echo "Committing as: `git config user.name` <`git config user.email`>"'
alias unpair="git config --remove-section user 2> /dev/null; echo Unpaired.; pair"

alias pairf="git config user.pair 'FF+HN' && git config user.name 'Foo Fooson and Henrik Nyh' && git config user.email ''; pair"
alias pairb="git config user.pair 'BB+HN' && git config user.name 'Bar Barson and Henrik Nyh' && git config user.email ''; pair"

pair tells me who I’m committing as. pairf will pair me up with Foo Fooson; pairb will pair me up with Bar Barson. unpair will unpair me.

All this is done via Git’s own persistent per-repository configuration.

The emails use plus addressing, supported by Gmail and some others: mail to a plus-addressed alias ends up at the base address.

I recommend consistently putting the names in alphabetical order so the same pair is always represented the same way.

If you’re quite promiscuous in your pairing, perhaps in a large team, the aliases will add up, and you may prefer something like hitch. But in a small team like ours, it’s not an issue.


A killer feature of my solution, that doesn’t seem built into hitch or other tools, is that it’s easy to show in your prompt:

<figure class="code"><figcaption>~/.bash_profile</figcaption>
function __git_prompt {
  [ `git config user.pair` ] && echo " (pair: `git config user.pair`)"
}

PS1="\W\$(__git_prompt)$ "

This will give you a prompt like ~/myproject (pair: FF+HN)$ when paired, or ~/myproject$ otherwise.


GitHub looks better if pairs have a user picture.

You just need to add a Gravatar for the pair’s email address.

When we started committing as pairs, I toyed a little with generating pair images automatically. When Thoughtbot wrote about pair avatars yesterday, I was inspired to ship something.

So I released Pairicon, a tiny open source web app that uses the GitHub API and a free Cloudinary plan to generate pair avatars.

Try it out!

The risks of feature branches and pre-merge code review

Posted about 1 month back at The Pug Automatic

Our team has been doing only spontaneous code review for a good while – on commits in the master branch that happen to pique one’s interest. This week, we started systematically reviewing all non-trivial features, as an experiment, but this is still after it’s pushed to master and very likely after it has been deployed to production.

This is because we feel strongly about continuous delivery. Many teams adopt the practice of feature branches, pull requests and pre-merge (sometimes called “pre-commit”) code review – often without, I think, realizing the downsides.

Continuous delivery

Continuous delivery is about constantly deploying your code, facilitated by a pipeline: if some series of tests pass, the code is good to go. Ideally it deploys to production automatically at the end of this pipeline.

Cycles are short and features are often released incrementally, perhaps using feature toggles.

This has major benefits. But if something clogs that pipeline, the benefits are reduced. And pre-merge code review clogs that pipeline.

The downsides of pre-merge review

Many of these downsides can be mitigated by keeping feature branches small and relatively short-lived, and reviewing continuously instead of just before merging – but even then, most apply to some extent.

  • Bugs are discovered later. With long-lived feature branches, sometimes much later.

    With continuous delivery, you may have to look for a bug in that one small commit you wrote 30 minutes ago. You may have to revert or fix that one commit.

    With a merged feature branch, you may have several commits or one big squashed commit. The bug might be in code you wrote a week ago. You may need to roll back an entire, complex feature.

    There is an obvious risk to deploying a large change all at once vs. deploying small changes iteratively.

  • Feedback comes later.

    Stakeholders and end-users are more likely to use the production site than to review your feature branch. By incrementally getting it out there, you can start getting real feedback quickly – in the form of user comments, support requests, adoption rates, performance impact and so on. Would you rather get this feedback on day 1 or after a week or more of work?

  • Merge conflicts or other integration conflicts are more likely.

  • It is harder for multiple pairs to work on related features since integration happens less often.

    If they share a feature branch, all features have to be reviewed and merged together.

  • The value added by your feature or bug fix takes longer to reach the end user.

    Reviews can take a while to do and to get to.

  • It is frustrating to the code author not to see their code shipped.

    It may steal focus from the next task, or even block them or someone else from starting on it.

The downsides of post-merge review

Post-merge review isn’t without its downsides.

  • Higher risk of releasing bugs and other defects.

    Anything a pre-merge review would catch may instead go into production.

    Then again, since releases are small and iterative, usually these are smaller bugs and easier to track down.

  • Renaming database tables and columns without downtime in production is a lot of work.

    Assuming you want to deploy without downtime, database renames are a lot of work. When you release iteratively, you will add tables and columns to production sooner, perhaps before discovering better names. Then you have to rename them in production.

    This can be annoying, but it’s not a dealbreaker.

    We try to mitigate this by always getting a second opinion on non-obvious table or column names.

  • New hires may be insecure about pushing straight to master.

    Pair programming could help that to some extent. You can also do pre-merge code review temporarily for some person or feature.

I fully acknowledge these downsides. This is a trade-off. It’s not that post-merge review is flawless; I just feel it has more upsides and fewer downsides all in all.

Technical solutions for post-merge review

I think GitHub’s excellent tools around pull requests are a major reason for the popularity of pre-merge review.

We only started with systematic post-merge reviews this week, and we’re doing it the simplest way we could think of: a “Review” column for tickets (“cards”) in Trello, digging through git log and writing comments on the GitHub commits.

This is certainly less smooth than pull requests.

We have some ideas. Maybe use pull requests anyway with immediately-merged feature branches. Or some commit-based review tool like Barkeep or Codebrag.

But we don’t know yet. Suggestions are welcome.

Our context

To our team, the benefits are clear.

We are a small team of 6 developers, mostly working from the same room in the same office. We often discuss complex code in person before it’s committed.

We’re not in day trading or life-support systems, so we can accept a little risk for other benefits. Though I’m not sure pre-release review actually reduces risks overall, as discussed above.

If you’re in another situation than ours, your trade-off may be different. It would be interesting to hear about that in the comments.

Don't mix in your privates

Posted about 1 month back at The Pug Automatic

Say we have this module:

<figure class="code">
module Greeter
  def greet(name)
    "HELLO, #{normalize(name)}!"
  end

  private

  def normalize(name)
    name.to_s.upcase
  end
end
We can include it to make instances of a class correspond to a “greeter” interface:

<figure class="code">
class Person
  include Greeter
end

person = Person.new
person.greet("Joe")  # => "HELLO, JOE!"

Is greet the whole interface?

It is the only public method the module gives us, but the module also has a private normalize method, part of its internal API.

The risk of collision

The private method has a pretty generic name, so there’s some risk of collision:

<figure class="code">
class Person
  include Greeter

  def initialize(age)
    @age = normalize(age)
  end

  private

  def normalize(age)
    [age.to_i, 25].min
  end
end

person = Person.new(30)
person.greet("Joe")  # => "HELLO, 0!"

The module’s greet method will call Person’s normalize method instead of the module’s – modules are much like superclasses in this respect.

You could reduce the risk by making the method names unique enough, but it’s easy to forget and reads poorly.

Extract a helper

Instead, you can move the module’s internals into a separate module or class that is not mixed in:

<figure class="code">
module Greeter
  module Mixin
    def greet(name)
      "HELLO, #{Name.normalize(name)}!"

  module Name
    def self.normalize(name)

class Person
  include Greeter::Mixin

  # …

Since the helper class is outside the mixin, collisions are highly unlikely.

This is for example how my Traco gem does it.

Introducing additional objects also makes it easier to refactor the code further.

Note that if the helper object is defined inside the mixin itself, there is a collision risk as Gregory Brown pointed out in a comment.

Intentionally mixing in privates

Sometimes, it does make sense to mix in private methods. Namely when they’re part of the interface that you want to mix in, and not just internal details of the module.

You often see this with the Template Method pattern:

<figure class="code">
module Greeter
  def greet(name)
    "#{greeting_phrase}, #{name}!#{post_greeting}"
  end

  private

  def greeting_phrase
    raise "You must implement this method!"
  end

  def post_greeting
    # Defaults to empty.
  end
end

class Person
  include Greeter

  private

  def greeting_phrase
    "Hello"
  end

  def post_greeting
    " Nice to meet you."
  end
end

Mind the private methods of your modules, since they are mixed in along with the public methods. If they’re not part of the interface you intend to mix in, they should probably be extracted to some helper object.

What I dislike about Draper

Posted about 1 month back at The Pug Automatic

My team used Draper a few years back, and its design inspired us to make poor decisions that we’ve come to regret.

If you want presenters in Rails (or “view models”, or “decorators” as Draper calls them), just use a plain old Ruby object. Pass in the view context when you need it. Don’t decorate. Don’t delegate all the things.

The problem with Draper

Draper encourages you to do things like

<figure class="code"><figcaption>app/decorators/article_decorator.rb</figcaption>
class ArticleDecorator < Draper::Decorator
  def published_at
    h.content_tag(:time, object.published_at.strftime("%A, %B %e"))
  end
end
</figure> <figure class="code"><figcaption>app/controllers/articles_controller.rb</figcaption>
class ArticlesController < ApplicationController
  def show
    @article = Article.first.decorate
  end
end
</figure> <figure class="code"><figcaption>app/views/articles/show.html.erb</figcaption>
<h1><%= @article.title %></h1>
<p><%= @article.published_at %></p>

I think this is problematic.

Using the name @article for what is actually an ArticleDecorator instance muddies the object responsibilities.

If you want to grep for where a certain presenter (“decorator”) method is called, that’s a bit difficult.

If you look at a view or partial and want to know what object you’re actually dealing with, that’s not clear.

Draper jumps through hoops so that Rails and its ecosystem will accept a non-model in forms, links, pagination etc. This is complexity that can break with new gems and new versions of Rails. I can’t speak to the current state of things, but this abstraction had a number of leaks back when we used it.

For you to be able to use helpers and routes with Draper, there’s some fairly dark magic going on. That’s also complexity that might break with new versions of Rails, and no fun debugging if there are issues. This magic made it difficult for us to test in isolation, but perhaps that has improved since.

On a side note, I really disagree with the “decorator” naming choice. The decorator pattern is applicable to any component in a MVC system: you can decorate models, controllers, mailers and pretty much anything else. Using app/decorators for this is a bit like using app/mixins for mailer mixins. It’s a poor name for a class of component.

Draper also makes some poor choices in the small: accessing the decorated model as model or object instead of article hurts readability.

PORO presenters

What should you do instead? Just use plain old Ruby objects.

<figure class="code"><figcaption>app/presenters/article_presenter.rb</figcaption>
class ArticlePresenter
  def initialize(article)
    @article = article
  end

  def published_at(view)
    view.content_tag(:time, article.published_at.strftime("%A, %B %e"))
  end

  private

  attr_reader :article
end
</figure> <figure class="code"><figcaption>app/controllers/articles_controller.rb</figcaption>
class ArticlesController < ApplicationController
  def show
    @article = Article.first
    @article_presenter =
  end
end
</figure> <figure class="code"><figcaption>app/views/articles/show.html.erb</figcaption>
<h1><%= @article.title %></h1>
<p><%= @article_presenter.published_at(self) %></p>

This is not much more code than a Draper decorator.

With our attr_extras library you could get rid of the initializer and reader boilerplate and just do pattr_initialize :article. And you can of course add whatever convenience methods you like, by inheritance or otherwise.

The model is available to the presenter as article rather than by a generic name like model or object.

You no longer have that complex, black magic dependency.

Forms and plugins just use regular ActiveRecord objects.

When you use a presenter, that’s made perfectly clear. If it makes sense to do some limited delegation, that’s just a delegate :title, to: :article away.
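That delegate macro is Rails'; in plain Ruby, the stdlib Forwardable module gives you the same limited delegation. A sketch, with a Struct standing in for the real model:

```ruby
require "forwardable"

class ArticlePresenter
  extend Forwardable
  def_delegators :@article, :title  # delegate only what the view needs

  def initialize(article)
    @article = article
  end
end

Article = Struct.new(:title)  # hypothetical stand-in model
presenter = ArticlePresenter.new(Article.new("My title"))
presenter.title  # => "My title"
```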

It’s obvious where the view context comes from. If you use it a lot, feel free to pass it into the initializer instead (e.g. pass view_context in the controller).

I can’t see why anyone would prefer Draper to this, but I’m looking forward to discussion in the comments.

SimpleDelegator autoloading issues with Rails

Posted about 1 month back at The Pug Automatic

Be aware that if you use SimpleDelegator with Rails, you may see autoloading issues.

The problem

Specifically, if you have something like

<figure class="code"><figcaption>app/models/my_thing.rb</figcaption>
class MyThing < SimpleDelegator
end
</figure> <figure class="code"><figcaption>app/models/my_thing/subthing.rb</figcaption>
class MyThing::Subthing
end

then that class won’t be autoloaded. In a Rails console:

>> MyThing
=> MyThing
>> MyThing::Subthing
NameError: uninitialized constant Subthing
>> require "my_thing/subthing"
=> true
>> MyThing::Subthing
=> MyThing::Subthing

Both Rails (code) and SimpleDelegator (code) hook into const_missing. Rails does it for autoloading. Since the SimpleDelegator superclass is earlier in the inheritance chain than Module (where Rails mixes it in), this breaks Rails autoloading.
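You can see Delegator's const_missing winning in plain Ruby, no Rails required (a small demonstration, not from the post):

```ruby
require "delegate"

class MyThing < SimpleDelegator
end

# Constant lookup misses in MyThing and its ancestors (SimpleDelegator,
# Delegator, BasicObject), so const_missing fires -- and it's Delegator's
# version, which forwards to ::Object. A const_missing hook mixed into
# Module, as Rails does for autoloading, never gets a chance to run.
MyThing::String  # => String (resolved by Delegator's const_missing)
```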


How do you get around this?

You could stop using SimpleDelegator.

An explicit require won’t work well – I think what happens is that the Rails development reloading magic undefines the constant when the file is changed. An explicit require_dependency does appear to work:

<figure class="code">
class MyThing < SimpleDelegator
  require_dependency "my_thing/subthing"
end

Or you could override the const_missing you get from SimpleDelegator to do both what it used to do, and what Rails does:

<figure class="code">
class RailsySimpleDelegator < SimpleDelegator
  # Fix Rails autoloading.
  def self.const_missing(const_name)
    if ::Object.const_defined?(const_name)
      # Load top-level constants even though SimpleDelegator inherits from BasicObject.
      ::Object.const_get(const_name)
    else
      # Rails autoloading.
      ::ActiveSupport::Dependencies.load_missing_constant(self, const_name)
    end
  end
end

class MyThing < RailsySimpleDelegator
end

But keep in mind that this may vary with Rails versions and may break on Rails updates.

The ::Object.const_get(const_name) thing is explained in the BasicObject docs.

This is a tricky problem. Perhaps the best solution would be for Rails itself to monkeypatch SimpleDelegator. That fixes the autoloading gotcha but may cause others – I once had a long debugging session when I refused to believe that Rails would monkeypatch the standard lib ERB (but it does, for html_safe).

This blog post is mainly intended to describe the problem – I’m afraid I don’t know of a great solution. If you have any insight, please share in a comment.

Vim's life-changing c%

Posted about 1 month back at The Pug Automatic

When my pair programming partner saw how I use Vim’s c% operator-motion combo, he described it as “life-changing”. While that might be overstating things, it is quite useful.

Say you want to change link_to("text", my_path("one", "two")) into link_to("text", one_two_path).

Assume the caret is on the m in my_path:

link_to("text", my_path("one", "two"))

You could hit cf) to change up-to-and-including the next “)”.

Or you could hit c% to do the same thing.

This saves you one character. Nice, but not a big deal.

Now let’s say the input text was link_to("text", my_path(singularize("one"), pluralize(double("two")))).

You could count the brackets carefully and hit c4f).

Or you could just hit c%.

How does this work?

The % motion finds the next parenthesis on the current line and then jumps to its matching parenthesis.

link_to("text", my_path(singularize("one"), pluralize(double("two"))))
                ^      A                                            B

So the % motion finds A, then jumps to its matching parenthesis B. Everything between ^ and B (inclusive) will be changed.

That’s not quite all % does. It also handles [] square brackets, {} curly braces and some other things. It can be used as a standalone motion or with other operators than c.

For example, you could use %d% to change remove_my_argument(BigDecimal(123)) into remove_my_argument.

Or if you’re at the beginning of the line hash.merge(one: BigDecimal(1), two: BigDecimal(2)).invert and want to add a key just before the ending parenthesis, just hit % to go there.

See :help % and :help matchit for more.

Stacked Vim searches down :cold

Posted about 1 month back at The Pug Automatic

When you do a project-wide search in Vim, you probably use something like ack.vim or git-grep-vim. Those plugins make use of the Vim quickfix list – a split window that shows each matching line.

There are some useful commands to navigate the quickfix list.

You probably already know that you can use :cn (or :cnext) to jump to the next result and :cp (or :cprevious) to jump to the previous result.

Maybe you even know about :cnf (:cnfile) and :cpf (:cpfile) to jump to the next or previous file with a result.

But my favorite quickfix list command is :cold (:colder).

Say you project-search for “foo” to look into an issue. You hit :cn a few times.

But then you realize the rabbit hole runs deeper. What’s that “bar” doing there?

So you project-search for “bar” and navigate that quickfix list for a while.

Now you want to get back to “foo”.

You could search for “foo” again… or you could run :cold.

:cold jumps back to the last (older) list. It even remembers which item in the list you were on.

This effectively gives you a stack of project searches. You can make searches within searches and then jump back to previous ones.

To go forward again, there’s :cnew (:cnewer). Vim remembers the ten last used lists for you.

Exceptions with documentation URLs

Posted about 1 month back at The Pug Automatic

In a discussion today about GitHub wikis for internal documentation, I mentioned that we include a wiki URL in some exceptions.

More than one person thought it was a great idea that they hadn’t thought of, so I’m writing it up, though it’s a simple thing.

For example, we send events from our auction system to an accounting system by way of a serial (non-parallel) queue. It’s important that event A is sent successfully before we attempt to send event B. So if sending fails unrecoverably, we stop the queue and raise an exception.

This happens maybe once a month or so, and requires some investigation and manual action each time.

We raise something like this:

if broken?
  raise SerialJobError,
    "A serial job failed, so the queue is paused! " \
    "Job arguments: #{arguments.inspect} " \
    "What to do: https://github.com/myorg/myapp/wiki/Serial-jobs"  # example URL
end
That wiki page documents what the exception is about and what to do about it.

This saves time: when the error happens, the next step is a click away. You don’t have to chase down that vaguely recalled piece of documentation.

And for new developers that have never seen this exception nor the docs, it’s pretty clear where they can go to read up.

Admittedly, it’s better if errors can be recovered from automatically. But for those few cases where a page of documentation is the best solution, be sure to provide a URL in the exception message.
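One way to avoid repeating the URL at every raise site is to bake it into the error class itself. This is a minimal sketch of that idea; the class name and wiki URL here are hypothetical examples, not the post’s actual code:

```ruby
# Sketch: an error class that appends its documentation URL to every message.
class SerialJobError < StandardError
  WIKI_URL = "https://example.com/wiki/Serial-jobs"  # hypothetical URL

  def initialize(message = nil)
    super("#{message} See #{WIKI_URL} for what to do.")
  end
end

begin
  raise SerialJobError, "A serial job failed, so the queue is paused!"
rescue SerialJobError => error
  puts error.message  # message now always carries the wiki link
end
```

Now every raise of this exception carries the link, even ones added later by someone who never read the original raise site.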

The emoji consensus model

Posted about 1 month back at The Pug Automatic

Our team was inspired by Thoughtbot’s Code Review guide to start making changes to our styleguide through pull requests rather than discussions during the retro.

Since we already make styleguide decisions by consensus, I thought it would be silly, fun and possibly useful to translate the Crisp consensus model

[Image: Crisp's consensus and meeting signs]

…into GitHub emoji:

:thumbsup: Yes please! I actively support this.

:point_right: Let’s discuss further. We don’t have all the facts yet.

:thumbsdown: Veto! I feel that I have all the facts needed to say no.

:punch: I stand aside. I neither support nor oppose this.

There’s no thumb-to-the-side emoji, and the pointing hand isn’t a great substitute. Any ideas?

I don’t know yet how this will turn out, or if we’ll do it at all, but I do like the idea.

Photo credit: From Peter Antman’s post in the Crisp blog, drawn by Jimmy Janlén.

Use Active Record's last! in tests

Posted about 1 month back at The Pug Automatic

I thought I’d start a series of short blog posts on things I remark on during code review that could be of wider interest.

Do you see anything to improve in this extract from a feature spec, assuming we’re fine with doing assertions against the DB?

fill_in "Title", with: "My title"
click_link "Create item"

item = Item.last
expect(item.title).to eq "My title"

What happens if item is nil? The test will explode on the last line with NoMethodError: undefined method 'title' for nil:NilClass.

If we would instead do

item = Item.last!

then it would explode on that line, with ActiveRecord::RecordNotFound.

That’s a less cryptic error that triggers earlier, at the actual point where your assumption is wrong.

Active Record’s FinderMethods also include first!, second!, third!, fourth!, fifth! and forty_two!.
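The difference is easy to see outside Active Record too. Here is a plain-Ruby sketch of the same pattern: `last` returns nil on an empty collection, while a bang variant fails fast. `RecordNotFound` and `BangFinder` are stand-ins invented for this example, not Active Record code:

```ruby
# Stand-in for ActiveRecord::RecordNotFound.
RecordNotFound = Class.new(StandardError)

module BangFinder
  def last!
    # `last` returns nil for an empty collection; the bang variant raises
    # immediately, at the point where the assumption is wrong.
    last or raise RecordNotFound, "no records found"
  end
end

items = [].extend(BangFinder)

begin
  items.last!
rescue RecordNotFound => error
  puts "failed fast: #{error.message}"
end
```

The failure happens on the finder line itself, with a named error, instead of a `NoMethodError` on nil several lines later.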

Midnight fragile tests

Posted about 1 month back at The Pug Automatic

This is a post in my series on things I’ve remarked on in code review.

Do you see a problem with this test?

describe EventFinder, ".find_all_on_date" do
  it "includes events that happen on that date" do
    event = create(Event, happens_on: Date.today)

    found_events = EventFinder.find_all_on_date(Date.today)

    expect(found_events).to include(event)
  end
end

What happens if this test runs right around midnight, or around a daylight saving transition?

The event might be created just as Monday ends, and EventFinder.find_all_on_date may run just as Tuesday starts.

So we’d create a Monday event but search for Tuesday events, and the test will fail.

In this case the fix is trivial:

describe EventFinder, ".find_all_on_date" do
  it "includes events that happen on that date" do
    today = Date.today
    event = create(Event, happens_on: today)

    found_events = EventFinder.find_all_on_date(today)

    expect(found_events).to include(event)
  end
end

In more complex cases, you may need to call a time cop and ask them to freeze time for the duration of your test.
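The gem does this properly, but the underlying idea is simple: read the clock through one seam, and let tests pin it. This is a minimal sketch under that assumption; `Clock` is a hypothetical helper invented for this example:

```ruby
require "date"

# Clock is a hypothetical seam: code reads dates through it, so a test can
# pin "today" once instead of calling Date.today in several places.
class Clock
  class << self
    attr_accessor :frozen

    # Returns the pinned date when one is set, otherwise the real date.
    def today
      frozen || Date.today
    end
  end
end

Clock.frozen = Date.new(2014, 6, 2)  # freeze at an arbitrary date
event_date  = Clock.today
search_date = Clock.today
puts event_date == search_date  # the two reads can no longer straddle midnight
```

With the clock frozen, every read during the test sees the same date, however long the test takes and whenever it runs.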

Admittedly, this isn’t a major issue. I see these types of test failures a few times a year – they don’t happen often unless you often run your tests around midnight.

But it’s also easy to avoid once you’re aware of it, so I recommend writing tests not to be midnight fragile from now on.

Using a headless Rails for tasks and tests

Posted about 1 month back at The Pug Automatic

You might know that the Rails console gives you an app object to interact with:

>> app.items_path
# => "/items"
>> app.get "/items"
# => 200
>> app.response.body
# => "<!DOCTYPE html><html>My items!</html>"
>> app.response.success?
# => true

You might also know that this is the same thing you’re using in Rails integration tests:

get "/items"
expect(response).to be_success

In both cases you’re interacting with an ActionDispatch::Integration::Session.

Here are two more uses for it.

Rake tasks

If you have an app that receives non-GET webhooks, it’s a bit of a bother to curl those when you want to trigger them in development.

Instead, you can do it from a Rake task:

namespace :dev do
  desc "Fake webhook"
  task :webhook => :environment do
    session = ActionDispatch::Integration::Session.new(Rails.application)
    response = session.post "/my_webhook", { my: "data" }, { "X-Some-Header" => "some value" }
    puts "Done with response code #{response}"
  end
end

I used this in a current project to fake incoming GitHub webhooks.

You could of course make your controller a thin wrapper around some object that does most of the work, and just call that object from tasks. But the HTTP part isn’t negligible with things like webhooks, and it can be useful to go through the whole stack sometimes during development.

Non-interactive sessions in feature tests

Your Capybara tests can alternate between multiple interactive sessions very easily, which is super handy for testing real time interactions, e.g. a chat.

But Capybara only wants you to test through the fingers of a user. If the user doesn’t click to submit a form, you can’t easily trigger a POST request.

So if you want to test something like an incoming POST webhook during an interactive user session, you can again use our friend ActionDispatch::Integration::Session:

it "reloads when a webhook comes in" do
  visit "/"

  expect_page_to_reload do
    the_heroku_webhook_is_triggered
  end
end

def expect_page_to_reload
  page.evaluate_script "window.notReloaded = true"
  yield
  sleep 0.01  # Sadly, we need this.
  expect(page.evaluate_script("window.notReloaded")).to be_falsy
end

def the_heroku_webhook_is_triggered
  session = ActionDispatch::Integration::Session.new(Rails.application)
  session.post("/heroku_webhook")
end

This too is extracted from a current project.