Git pairing aliases, prompts and avatars

Posted 3 days back at The Pug Automatic

When we pair program at Barsoom, we’ve started making commits in both users’ names.

This emphasizes shared ownership and makes commands like git blame and git shortlog -sn (commit counts) more accurate.

There are tools like hitch to help you commit as a pair, but I found it complex and buggy, and I’ve been happy with something simpler.

Aliases

I just added some simple aliases to my Bash shell:

<figure class="code"><figcaption>~/.bash_profile</figcaption>
alias pair='echo "Committing as: `git config user.name` <`git config user.email`>"'
alias unpair="git config --remove-section user 2> /dev/null; echo Unpaired.; pair"

alias pairf="git config user.pair 'FF+HN' && git config user.name 'Foo Fooson and Henrik Nyh' && git config user.email 'all+foo+henrik@barsoom.se'; pair"
alias pairb="git config user.pair 'BB+HN' && git config user.name 'Bar Barson and Henrik Nyh' && git config user.email 'all+bar+henrik@barsoom.se'; pair"
</figure>

pair tells me who I’m committing as. pairf will pair me up with Foo Fooson; pairb will pair me up with Bar Barson. unpair will unpair me.

All this is done via Git’s own persistent per-repository configuration.

The emails use plus addressing, supported by Gmail and some others: all+whatever@barsoom.se ends up at all@barsoom.se.

I recommend consistently putting the names in alphabetical order so the same pair is always represented the same way.

If you’re quite promiscuous in your pairing, perhaps in a large team, the aliases will add up, and you may prefer something like hitch. But in a small team like ours, it’s not an issue.

Prompt

A killer feature of my solution, which doesn’t seem to be built into hitch or other tools, is that it’s easy to show in your prompt:

<figure class="code"><figcaption>~/.bash_profile</figcaption>
function __git_prompt {
  [ `git config user.pair` ] && echo " (pair: `git config user.pair`)"
}

PS1="\W\$(__git_prompt)$ "
</figure>

This will give you a prompt like ~/myproject (pair: FF+HN)$ when paired, or ~/myproject$ otherwise.

Avatars

GitHub looks better if pairs have a user picture.

You just need to add a Gravatar for the pair’s email address.

When we started committing as pairs, I toyed a little with generating pair images automatically. When Thoughtbot wrote about pair avatars yesterday, I was inspired to ship something.

So I released Pairicon, a tiny open source web app that uses the GitHub API and a free Cloudinary plan to generate pair avatars.

Try it out!

The risks of feature branches and pre-merge code review

Our team has been doing only spontaneous code review for a good while – on commits in the master branch that happen to pique one’s interest. This week, we started systematically reviewing all non-trivial features, as an experiment, but this is still after it’s pushed to master and very likely after it has been deployed to production.

This is because we feel strongly about continuous delivery. Many teams adopt the practice of feature branches, pull requests and pre-merge (sometimes called “pre-commit”) code review – often without, I think, realizing the downsides.

Continuous delivery

Continuous delivery is about constantly deploying your code, facilitated by a pipeline: if some series of tests pass, the code is good to go. Ideally, the code is deployed to production automatically at the end of this pipeline.

Cycles are short and features are often released incrementally, perhaps using feature toggles.
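
Feature toggles need very little machinery. A bare-bones sketch (all names here are made up for illustration): half-finished work can be merged and deployed, and its code path only runs once the toggle is flipped on.

```ruby
# A minimal feature toggle: half-finished features can live in master,
# deployed but dormant, until their toggle is switched on.
module Features
  TOGGLES = {
    new_checkout: false, # still being built behind the toggle
    fast_search: true,   # fully rolled out
  }

  def self.enabled?(name)
    TOGGLES.fetch(name, false)
  end
end

Features.enabled?(:fast_search)  # => true
Features.enabled?(:new_checkout) # => false
```

Real-world versions usually read the toggles from configuration or a database so they can be flipped without a deploy.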

This has major benefits. But if something clogs that pipeline, the benefits are reduced. And pre-merge code review clogs that pipeline.

The downsides of pre-merge review

Many of these downsides can be mitigated by keeping feature branches small and relatively short-lived, and reviewing continuously instead of just before merging – but even then, most apply to some extent.

  • Bugs are discovered later. With long-lived feature branches, sometimes much later.

    With continuous delivery, you may have to look for a bug in that one small commit you wrote 30 minutes ago. You may have to revert or fix that one commit.

    With a merged feature branch, you may have several commits or one big squashed commit. The bug might be in code you wrote a week ago. You may need to roll back an entire, complex feature.

    There is an obvious risk in deploying a large change all at once vs. deploying small changes iteratively.

  • Feedback comes later.

    Stakeholders and end-users are more likely to use the production site than to review your feature branch. By incrementally getting it out there, you can start getting real feedback quickly – in the form of user comments, support requests, adoption rates, performance impact and so on. Would you rather get this feedback on day 1 or after a week or more of work?

  • Merge conflicts or other integration conflicts are more likely.

  • It is harder for multiple pairs to work on related features since integration happens less often.

    If they share a feature branch, all features have to be reviewed and merged together.

  • The value added by your feature or bug fix takes longer to reach the end user.

    Reviews can take a while to do and to get to.

  • It is frustrating to the code author not to see their code shipped.

    It may steal focus from the next task, or even block them or someone else from starting on it.

The downsides of post-merge review

Post-merge review isn’t without its downsides.

  • Higher risk of releasing bugs and other defects.

    Anything a pre-merge review would catch may instead go into production.

    Then again, since releases are small and iterative, usually these are smaller bugs and easier to track down.

  • Renaming database tables and columns without downtime in production is a lot of work.

    When you release iteratively, you add tables and columns to production sooner, perhaps before discovering better names. Then you have to rename them in production, and doing that without downtime is a lot of work.

    This can be annoying, but it’s not a dealbreaker.

    We try to mitigate this by always getting a second opinion on non-obvious table or column names.

  • New hires may be insecure about pushing straight to master.

    Pair programming could help that to some extent. You can also do pre-merge code review temporarily for some person or feature.

I fully acknowledge these downsides. This is a trade-off. It’s not that post-merge review is flawless; I just feel it has more upsides and fewer downsides all in all.

Technical solutions for post-merge review

I think GitHub’s excellent tools around pull requests are a major reason for the popularity of pre-merge review.

We only started with systematic post-merge reviews this week, and we’re doing it the simplest way we could think of: a “Review” column for tickets (“cards”) in Trello, digging through git log and writing comments on the GitHub commits.

This is certainly less smooth than pull requests.

We have some ideas. Maybe use pull requests anyway with immediately-merged feature branches. Or some commit-based review tool like Barkeep or Codebrag.

But we don’t know yet. Suggestions are welcome.

Our context

To our team, the benefits are clear.

We are a small team of 6 developers, mostly working from the same room in the same office. We often discuss complex code in person before it’s committed.

We’re not in day trading or life-support systems, so we can accept a little risk for other benefits. Though I’m not sure pre-merge review actually reduces risks overall, as discussed above.

If you’re in another situation than ours, your trade-off may be different. It would be interesting to hear about that in the comments.

Don't mix in your privates

Say we have this module:

<figure class="code">
module Greeter
  def greet(name)
    "HELLO, #{normalize(name)}!"
  end

  private

  def normalize(name)
    name.strip.upcase
  end
end
</figure>

We can include it to make instances of a class correspond to a “greeter” interface:

<figure class="code">
class Person
  include Greeter
end

person = Person.new
person.greet("Joe")  # => "HELLO, JOE!"
</figure>

Is greet the whole interface?

It is the only public method the module gives us, but the module also has a private normalize method as part of its internal API.

The risk of collision

The private method has a pretty generic name, so there’s some risk of collision:

<figure class="code">
class Person
  include Greeter

  def initialize(age)
    @age = normalize(age)
  end

  private

  def normalize(age)
    [age.to_i, 25].min
  end
end

person = Person.new(12)
person.greet("Joe")  # => "HELLO, 0!"
</figure>

The module’s greet method will call Person’s normalize method instead of the module’s – modules are much like superclasses in this respect.
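
Ruby’s reflection methods make this lookup order easy to verify with a minimal example (the names here are made up): the including class comes before its mixins in the ancestor chain, so its definition wins.

```ruby
module Shouter
  def shout(text)
    "#{normalize(text)}!"
  end

  private

  # Meant as an internal detail of the module...
  def normalize(text)
    text.upcase
  end
end

class Whisperer
  include Shouter

  private

  # ...but this definition shadows the module's, because the class
  # comes before its mixins in the ancestor chain.
  def normalize(text)
    text.downcase
  end
end

Whisperer.ancestors.first(2)                # => [Whisperer, Shouter]
Whisperer.instance_method(:normalize).owner # => Whisperer
Whisperer.new.shout("Hey")                  # => "hey!", not "HEY!"
```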

You could reduce the risk by making the method names unique enough, but it’s easy to forget and reads poorly.

Extract a helper

Instead, you can move the module’s internals into a separate module or class that is not mixed in:

<figure class="code">
module Greeter
  module Mixin
    def greet(name)
      "HELLO, #{Name.normalize(name)}!"
    end
  end

  module Name
    def self.normalize(name)
      name.strip.upcase
    end
  end
end

class Person
  include Greeter::Mixin

  # …
end
</figure>

Since the helper class is outside the mixin, collisions are highly unlikely.

This is for example how my Traco gem does it.

Introducing additional objects also makes it easier to refactor the code further.

Note that if the helper object is defined inside the mixin itself, there is a collision risk as Gregory Brown pointed out in a comment.
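
That leak is easy to demonstrate in a console (illustrative names): constants defined inside a mixed-in module become reachable through the including class, and a constant defined in the class silently shadows them.

```ruby
module Mixin
  HELPER = "from mixin"
end

class Thing
  include Mixin
end

class Other
  include Mixin
  HELPER = "from class" # silently shadows the mixin's constant
end

Thing::HELPER # => "from mixin" -- the internal constant leaks through
Other::HELPER # => "from class"
```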

Intentionally mixing in privates

Sometimes, it does make sense to mix in private methods. Namely when they’re part of the interface that you want to mix in, and not just internal details of the module.

You often see this with the Template Method pattern:

<figure class="code">
module Greeter
  def greet(name)
    "#{greeting_phrase}, #{name}!#{post_greeting}"
  end

  private

  def greeting_phrase
    raise "You must implement this method!"
  end

  def post_greeting
    # Defaults to empty.
  end
end

class Person
  include Greeter

  private

  def greeting_phrase
    "Hello"
  end

  def post_greeting
    "!!1"
  end
end
</figure>

Summary

Mind the private methods of your modules, since they are mixed in along with the public methods. If they’re not part of the interface you intend to mix in, they should probably be extracted to some helper object.

What I dislike about Draper

My team used Draper a few years ago and its design inspired us to make poor decisions that we’ve come to regret.

If you want presenters in Rails (or “view models”, or “decorators” as Draper calls them), just use a plain old Ruby object. Pass in the view context when you need it. Don’t decorate. Don’t delegate all the things.

The problem with Draper

Draper encourages you to do things like

<figure class="code"><figcaption>app/decorators/article_decorator.rb</figcaption>
class ArticleDecorator < Draper::Decorator
  delegate_all

  def published_at
    h.content_tag(:time, object.published_at.strftime("%A, %B %e"))
  end
end
</figure> <figure class="code"><figcaption>app/controllers/articles_controller.rb</figcaption>
class ArticlesController < ApplicationController
  def show
    @article = Article.first.decorate
  end
end
</figure> <figure class="code"><figcaption>app/views/articles/show.html.erb</figcaption>
<h1><%= @article.title %></h1>
<p><%= @article.published_at %></p>
</figure>

I think this is problematic.

Using the name @article for what is actually an ArticleDecorator instance muddies the object responsibilities.

If you want to grep for where a certain presenter (“decorator”) method is called, that’s a bit difficult.

If you look at a view or partial and want to know what object you’re actually dealing with, that’s not clear.

Draper jumps through hoops so that Rails and its ecosystem will accept a non-model in forms, links, pagination etc. This is complexity that can break with new gems and new versions of Rails. I can’t speak to the current state of things, but this abstraction had a number of leaks back when we used it.

For you to be able to use helpers and routes with Draper, there’s some fairly dark magic going on. That’s also complexity that might break with new versions of Rails, and no fun debugging if there are issues. This magic made it difficult for us to test in isolation, but perhaps that has improved since.

On a side note, I really disagree with the “decorator” naming choice. The decorator pattern is applicable to any component in an MVC system: you can decorate models, controllers, mailers and pretty much anything else. Using app/decorators for this is a bit like using app/mixins for mailer mixins. It’s a poor name for a class of component.

Draper also makes some poor choices in the small: accessing the decorated model as model or object instead of article hurts readability.

PORO presenters

What should you do instead? Just use plain old Ruby objects.

<figure class="code"><figcaption>app/presenters/article_presenter.rb</figcaption>
class ArticlePresenter
  def initialize(article)
    @article = article
  end

  def published_at(view)
    view.content_tag(:time, article.published_at.strftime("%A, %B %e"))
  end

  private

  attr_reader :article
end
</figure> <figure class="code"><figcaption>app/controllers/articles_controller.rb</figcaption>
class ArticlesController < ApplicationController
  def show
    @article = Article.first
    @article_presenter = ArticlePresenter.new(@article)
  end
end
</figure> <figure class="code"><figcaption>app/views/articles/show.html.erb</figcaption>
<h1><%= @article.title %></h1>
<p><%= @article_presenter.published_at(self) %></p>
</figure>

This is not much more code than a Draper decorator.

With our attr_extras library you could get rid of the initializer and reader boilerplate and just do pattr_initialize :article. And you can of course add whatever convenience methods you like, by inheritance or otherwise.

The model is available to the presenter as article rather than by a generic name like model or object.

You no longer have that complex, black magic dependency.

Forms and plugins just use regular ActiveRecord objects.

When you use a presenter, that’s made perfectly clear. If it makes sense to do some limited delegation, that’s just a delegate :title, to: :article away.

It’s obvious where the view context comes from. If you use it a lot, feel free to pass it into the initializer instead (e.g. ArticlePresenter.new(@article, view_context) in the controller).
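
For illustration, a hypothetical variant of the presenter above with the view context passed to the initializer:

```ruby
class ArticlePresenter
  def initialize(article, view)
    @article = article
    @view = view
  end

  # No per-call view argument needed now.
  def published_at
    view.content_tag(:time, article.published_at.strftime("%A, %B %e"))
  end

  private

  attr_reader :article, :view
end

# In the controller: ArticlePresenter.new(@article, view_context)
# In the view:       <%= @article_presenter.published_at %>
```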

I can’t see why anyone would prefer Draper to this, but I’m looking forward to discussion in the comments.

SimpleDelegator autoloading issues with Rails

Be aware that if you use SimpleDelegator with Rails, you may see autoloading issues.

The problem

Specifically, if you have something like

<figure class="code"><figcaption>app/models/my_thing.rb</figcaption>
class MyThing < SimpleDelegator
end
</figure> <figure class="code"><figcaption>app/models/my_thing/subthing.rb</figcaption>
class MyThing::Subthing
end
</figure>

then that class won’t be autoloaded. In a Rails console:

<figure class="code">
>> MyThing
=> MyThing
>> MyThing::Subthing
NameError: uninitialized constant Subthing
>> require "my_thing/subthing"
=> true
>> MyThing::Subthing
=> MyThing::Subthing
</figure>

Both Rails (code) and SimpleDelegator (code) hook into const_missing. Rails does it for autoloading. Since the SimpleDelegator superclass is earlier in the inheritance chain than Module (where Rails mixes it in), this breaks Rails autoloading.

Workarounds

How do you get around this?

You could stop using SimpleDelegator.

An explicit require won’t work well – I think what happens is that the Rails development reloading magic undefines the constant when the file is changed. An explicit require_dependency does appear to work:

<figure class="code">
class MyThing < SimpleDelegator
  require_dependency "my_thing/subthing"
end
</figure>

Or you could override the const_missing you get from SimpleDelegator to do both what it used to do, and what Rails does:

<figure class="code">
class RailsySimpleDelegator < SimpleDelegator
  # Fix Rails autoloading.
  def self.const_missing(const_name)
    if ::Object.const_defined?(const_name)
      # Load top-level constants even though SimpleDelegator inherits from BasicObject.
      ::Object.const_get(const_name)
    else
      # Rails autoloading.
      ::ActiveSupport::Dependencies.load_missing_constant(self, const_name)
    end
  end
end

class MyThing < RailsySimpleDelegator
end
</figure>

But keep in mind that this may vary with Rails versions and may break on Rails updates.

The ::Object.const_get(const_name) thing is explained in the BasicObject docs.

This is a tricky problem. Perhaps the best solution would be for Rails itself to monkeypatch SimpleDelegator. That fixes the autoloading gotcha but may cause others – I once had a long debugging session when I refused to believe that Rails would monkeypatch the standard lib ERB (but it does, for html_safe).

This blog post is mainly intended to describe the problem – I’m afraid I don’t know of a great solution. If you have any insight, please share in a comment.

Vim's life-changing c%

When my pair programming partner saw how I use Vim’s c% operator-motion combo, he described it as “life-changing”. While that might be overstating things, it is quite useful.

Say you want to change link_to("text", my_path("one", "two")) into link_to("text", one_two_path).

Assume the caret is on the m in my_path:

link_to("text", my_path("one", "two"))
                ^

You could hit cf) to change up-to-and-including the next “)”.

Or you could hit c% to do the same thing.

This saves you one character. Nice, but not a big deal.

Now let’s say the input text was link_to("text", my_path(singularize("one"), pluralize(double("two")))).

You could count the brackets carefully and hit c4f).

Or you could just hit c%.

How does this work?

The % motion finds the next parenthesis on the current line and then jumps to its matching parenthesis.

link_to("text", my_path(singularize("one"), pluralize(double("two"))))
                ^      A                                            B

So the % motion finds A, then jumps to its matching parenthesis B. Everything between ^ and B (inclusive) will be changed.

That’s not quite all % does. It also handles [] square brackets, {} curly braces and some other things. It can be used as a standalone motion or with other operators than c.

For example, you could use %d% to change remove_my_argument(BigDecimal(123)) into remove_my_argument.

Or if you’re at the beginning of the line hash.merge(one: BigDecimal(1), two: BigDecimal(2)).invert and want to add a key just before the ending parenthesis, just hit % to go there.

See :help % and :help matchit for more.

Stacked Vim searches down :cold

When you do a project-wide search in Vim, you probably use something like ack.vim or git-grep-vim. Those plugins make use of the Vim quickfix list – a split window that shows each matching line.

There are some useful commands to navigate the quickfix list.

You probably already know that you can use :cn (or :cnext) to jump to the next result and :cp (or :cprevious) to jump to the previous result.

Maybe you even know about :cnf (:cnfile) and :cpf (:cpfile) to jump to the next or previous file with a result.

But my favorite quickfix list command is :cold (:colder).

Say you project-search for “foo” to look into an issue. You hit :cn a few times.

But then you realize the rabbit hole runs deeper. What’s that “bar” doing there?

So you project-search for “bar” and navigate that quickfix list for a while.

Now you want to get back to “foo”.

You could search for “foo” again… or you could run :cold.

:cold jumps back to the last (older) list. It even remembers which item in the list you were on.

This effectively gives you a stack of project searches. You can make searches within searches and then jump back to previous ones.

To go forward again, there’s :cnew (:cnewer). Vim remembers the ten last used lists for you.

Exceptions with documentation URLs

In a discussion today about GitHub wikis for internal documentation, I mentioned that we include a wiki URL in some exceptions.

More than one person thought it was a great idea that they hadn’t thought of, so I’m writing it up, though it’s a simple thing.

For example, we send events from our auction system to an accounting system by way of a serial (non-parallel) queue. It’s important that event A is sent successfully before we attempt to send event B. So if sending fails unrecoverably, we stop the queue and raise an exception.

This happens maybe once a month or so, and requires some investigation and manual action each time.

We raise something like this:

<figure class="code">
if broken?
  stop_the_queue
  raise SerialJobError, "A serial job failed, so the queue is paused! https://github.com/our_team/our_project/wiki/Serial-queue Job arguments: #{arguments.inspect}"
end
</figure>

That wiki page documents what the exception is about and what to do about it.

This saves time: when the error happens, the next step is a click away. You don’t have to chase down that vaguely recalled piece of documentation.

And for new developers that have never seen this exception nor the docs, it’s pretty clear where they can go to read up.

Admittedly, it’s better if errors can be recovered from automatically. But for those few cases where a page of documentation is the best solution, be sure to provide a URL in the exception message.
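
If such errors are raised in several places, the URL could live in the error class itself, so no raise site can forget it. A hypothetical sketch:

```ruby
# An error class that bakes its documentation URL into every message.
class SerialJobError < StandardError
  WIKI_URL = "https://github.com/our_team/our_project/wiki/Serial-queue"

  def initialize(message = "A serial job failed, so the queue is paused!")
    super("#{message} See: #{WIKI_URL}")
  end
end

begin
  raise SerialJobError
rescue SerialJobError => error
  error.message
  # => "A serial job failed, so the queue is paused! See: https://github.com/our_team/our_project/wiki/Serial-queue"
end
```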

The emoji consensus model

Our team was inspired by Thoughtbot’s Code Review guide to start making changes to our styleguide through pull requests rather than discussions during the retro.

Since we already make styleguide decisions by consensus, I thought it would be silly, fun and possibly useful to translate the Crisp consensus model

Crisp's consensus and meeting signs

…into GitHub emoji:

:thumbsup: Yes please! I actively support this.

:point_right: Let’s discuss further. We don’t have all the facts yet.

:thumbsdown: Veto! I feel that I have all the facts needed to say no.

:punch: I stand aside. I neither support nor oppose this.

There’s no thumb-to-the-side emoji, and the pointing hand isn’t a great substitute. Any ideas?

I don’t know yet how this will turn out, or if we’ll do it at all, but I do like the idea.

Photo credit: From Peter Antman’s post in the Crisp blog, drawn by Jimmy Janlén.

Use Active Record's last! in tests

I thought I’d start a series of short blog posts on things I remark on during code review that could be of wider interest.

Do you see anything to improve in this extract from a feature spec, assuming we’re fine with doing assertions against the DB?

<figure class="code">
fill_in "Title", with: "My title"
click_link "Create item"

item = Item.last
expect(item.title).to eq "My title"
</figure>

What happens if item is nil? The test will explode on the last line with NoMethodError: undefined method 'title' for nil:NilClass.

If we would instead do

<figure class="code">
item = Item.last!
</figure>

then it would explode on that line, with ActiveRecord::RecordNotFound.

That’s a less cryptic error that triggers earlier, at the actual point where your assumption is wrong.

Active Record’s FinderMethods also include first!, second!, third!, fourth!, fifth! and forty_two!.
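
Conceptually, a bang finder is just the plain finder plus a raise when nothing is found. A rough plain-Ruby sketch (not Rails’ actual implementation; the collection class is made up):

```ruby
module FinderMethods
  class RecordNotFound < StandardError; end

  # Like #last, but raises instead of returning nil.
  def last!
    last or raise RecordNotFound, "Couldn't find a last record"
  end
end

class ItemCollection < Array
  include FinderMethods
end

ItemCollection[1, 2, 3].last! # => 3

begin
  ItemCollection.new.last!
rescue FinderMethods::RecordNotFound => error
  error.message # => "Couldn't find a last record"
end
```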

Midnight fragile tests

This is a post in my series on things I’ve remarked on in code review.

Do you see a problem with this test?

<figure class="code">
describe EventFinder, ".find_all_on_date" do
  it "includes events that happen on that date" do
    event = create(:event, happens_on: Time.zone.today)
    found_events = EventFinder.find_all_on_date(Time.zone.today)
    expect(found_events).to include(event)
  end
end
</figure>

What happens if this test runs right around midnight, or around a daylight saving transition?

The event might be created just as Monday ends, and EventFinder.find_all_on_date may run just as Tuesday starts.

So we’d create a Monday event but search for Tuesday events, and the test will fail.

In this case the fix is trivial:

<figure class="code">
describe EventFinder, ".find_all_on_date" do
  it "includes events that happen on that date" do
    today = Time.zone.today
    event = create(:event, happens_on: today)
    found_events = EventFinder.find_all_on_date(today)
    expect(found_events).to include(event)
  end
end
</figure>

In more complex cases, you may need to call a time cop and ask them to freeze time for the duration of your test.
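
In practice you would reach for the Timecop gem, but the idea fits in a few lines of plain Ruby. A hand-rolled sketch (the Clock class is made up for illustration; code under test would ask Clock.today instead of Date.today):

```ruby
require "date"

# Freeze "today" for the duration of a block, in the spirit of Timecop.
class Clock
  class << self
    def today
      @frozen_date || Date.today
    end

    def freeze(date)
      @frozen_date = date
      yield
    ensure
      @frozen_date = nil
    end
  end
end

# Even if this runs at 23:59:59, both sides of a test see the same date:
Clock.freeze(Date.new(2014, 1, 6)) do
  Clock.today # => #<Date: 2014-01-06>
end
```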

Admittedly, this isn’t a major issue. I see these types of test failures a few times a year – it doesn’t happen often if you don’t often run your tests around midnight.

But it’s also easy to avoid once you’re aware of it, so I recommend writing tests not to be midnight fragile from now on.

Using a headless Rails for tasks and tests

You might know that the Rails console gives you an app object to interact with:

<figure class="code">
>> app.items_path
# => "/items"
>> app.get "/items"
# => 200
>> app.response.body
# => "<!DOCTYPE html><html>My items!</html>"
>> app.response.success?
# => true
</figure>

You might also know that this is the same thing you’re using in Rails integration tests:

<figure class="code">
get "/items"
expect(response).to be_success
</figure>

In both cases you’re interacting with an ActionDispatch::Integration::Session.

Here are two more uses for it.

Rake tasks

If you have an app that receives non-GET webhooks, it’s a bit of a bother to curl those when you want to trigger them in development.

Instead, you can do it from a Rake task:

<figure class="code">
namespace :dev do
  desc "Fake webhook"
  task :webhook => :environment do
    session = ActionDispatch::Integration::Session.new(Rails.application)
    response = session.post "/my_webhook", { my: "data" }, { "X-Some-Header" => "some value" }
    puts "Done with response code #{response}"
  end
end
</figure>

I used this in a current project to fake incoming GitHub webhooks.

You could of course make your controller a thin wrapper around some object that does most of the work, and just call that object from tasks, but the HTTP part isn’t negligible with things like webhooks, and it can be useful to go through the whole thing sometimes during development.

Non-interactive sessions in feature tests

Your Capybara tests can alternate between multiple interactive sessions very easily, which is super handy for testing real time interactions, e.g. a chat.

But Capybara only wants you to test through the fingers of a user. If the user doesn’t click to submit a form, you can’t easily trigger a POST request.

So if you want to test something like an incoming POST webhook during an interactive user session, you can again use our friend ActionDispatch::Integration::Session:

<figure class="code">
it "reloads when a webhook comes in" do
  visit "/"

  expect_page_to_reload do
    the_heroku_webhook_is_triggered
  end
end

def expect_page_to_reload
  page.evaluate_script "window.notReloaded = true"
  yield
  sleep 0.01  # Sadly, we need this.
  expect(page.evaluate_script("window.notReloaded")).to be_falsy
end

def the_heroku_webhook_is_triggered
  session = ActionDispatch::Integration::Session.new(Rails.application)
  session.post("/heroku_webhook")
end
</figure>

This too is extracted from a current project.

Explain the trade-off

This is a post in my series on things I’ve remarked on in code review.

Say there’s a commit with a message like

Bump number of workers to fix image upload problem

and a change like

<figure class="code">
- "number_of_workers": 4,
+ "number_of_workers": 8,
</figure>

Then I’m left wondering why it wasn’t at 8 to start with. Or why we don’t bump it all the way to 16, or 32. Surely more workers are better?

Usually there’s a trade-off at play. More workers use more memory, perhaps.

If you tweak a value I want to know that you’ve considered that trade-off.

Ideally, a code comment (rather than the commit message) will explain what the trade-off is. Then the next tweaker sees it, and the next code reviewer too.

Make your assumptions explode

This is a post in my series on things I’ve remarked on in code review.

You constantly make simplifying assumptions in code. Like this one:

<figure class="code">
class Order < ActiveRecord::Base
  has_many :line_items

  def currency
    # All line items have the same currency.
    line_items.first.currency
  end
end
</figure>

The assumption in itself is a good thing: we shouldn’t design for multiple currencies if we don’t need to.

But it’s dangerous that the assumption is left unchecked. It may be true today, but will it be true tomorrow? Mixing up currencies can have pretty dire consequences.

There’s a simple solution. Make your assumptions explode!

<figure class="code">
def currency
  currencies = line_items.map(&:currency).uniq
  raise "Assumed all line items have the same currency" if currencies.length > 1
  currencies.first
end
</figure>

You still reap the benefits of the simplifying assumption but can also trust that it remains valid.

If you make the same type of assertion in multiple places, you can of course extract it, or find some gem. We do CurrencyVerification.assert_single_currency(records) in one project, for example.
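
The post doesn’t show CurrencyVerification, but a minimal sketch of such an extracted assertion might look like this (the error class and message are my invention):

```ruby
module CurrencyVerification
  class MixedCurrencies < StandardError; end

  # Raise if the records don't all share one currency; otherwise pass them through.
  def self.assert_single_currency(records)
    currencies = records.map(&:currency).uniq
    if currencies.length > 1
      raise MixedCurrencies, "Assumed a single currency, got: #{currencies.join(", ")}"
    end
    records
  end
end

LineItem = Struct.new(:currency)

CurrencyVerification.assert_single_currency([LineItem.new("SEK"), LineItem.new("SEK")])
# => passes the records through

begin
  CurrencyVerification.assert_single_currency([LineItem.new("SEK"), LineItem.new("EUR")])
rescue CurrencyVerification::MixedCurrencies => error
  error.message # => "Assumed a single currency, got: SEK, EUR"
end
```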

Assertive programming is a thing, but I haven’t seen it much in Ruby outside tests.

The Golden Rule for programmers

The Golden Rule is the idea that one should treat others as one would like to be treated.

I try to follow this rule when it comes to solving problems in programming, and I wish all of us would do so.

We all search for solutions to issues we encounter. When we do, we’re obviously expecting someone else to have provided the answer. So help the next person out in the same way. I would feel dirty and selfish if I didn’t.

If you solve (or partially solve, or work around) a tricky issue that others may also run into, try to do one or more of these:

  • answer a Stack Overflow question you found
  • add a new Stack Overflow question and answer it yourself
  • comment on a blog post or GitHub issue you found
  • report a bug
  • contribute a code or documentation fix
  • create a code snippet (e.g. on Gist) or a public repository
  • write a blog post

In my experience Twitter isn’t great as the only place to put a solution – search engine indexing can be iffy.

When you write it up, consider how you tried to find the answer. Where did you look? What terms did you search for? Make sure there’s something to find.

You’ll be surprised at the number of people you end up helping.