Episode #494 - September 9th, 2014

Posted 12 days back at Ruby5

This episode covers RSpec 3.1, unifying multiple analytics services with Rack::Tracker, new features in Rails 4.2, the Fearless Rails Deployment book, the_metal (a spike for thoughts about Rack 2.0), and RubyConf Portugal.

Listen to this episode on Ruby5

Sponsored by Top Ruby Jobs

If you're looking for a top Ruby job or for top Ruby talent, then you should check out Top Ruby Jobs. Top Ruby Jobs is a website dedicated to the best jobs available in the Ruby community.
Top Ruby Jobs

RSpec 3.1

RSpec 3.1 has been released. Myron Marston wrote up another thorough blog post with the notable changes in this release.
RSpec 3.1

Rack::Tracker

Rack::Tracker is a Rack middleware that can be hooked up to multiple analytics services and exposes them in a unified fashion. It comes with built-in support for Google Analytics and Facebook, but you can also add your own.
Rack::Tracker

New in Rails 4.2

Rashmi Yadav wrote about some smaller and lesser known features coming up in Rails 4.2.
New in Rails 4.2

Fearless Rails Deployment

Zach Campbell has released the final version of his "Fearless Rails Deployment" book, with everything you need to know about deploying Rails applications. The book is available for $39.99 with a money-back guarantee if you're not 100% satisfied.
Fearless Rails Deployment

the_metal

the_metal is a project from Aaron "Tenderlove" Patterson which is a spike for thoughts about Rack 2.0. Some of its design goals are listed on the README. So if you care about Rack, you should probably look into the_metal.
the_metal

RubyConf Portugal

RubyConf Portugal will be taking place October 13th and 14th in Braga. Speakers include Terrence Lee, Gautam Rege, Steve Klabnik, Katrina Owen and Erik Michaels-Ober, just to name a few. Use discount code Ruby5<3PT for a 10% discount.
RubyConf Portugal

Pluralizing I18n Translations in Your Rails Application

Posted 13 days back at GIANT ROBOTS SMASHING INTO OTHER GIANT ROBOTS - Home

Say we have some I18n text that tells users how many notifications they have. One option for dealing with a singular vs plural situation would look like this:

# config/locales/en.yml

en:
  single_notification: You have 1 notification
  other_notification_count: You have %{count} notifications
<%# app/views/notifications/index.html.erb %>

<% if current_user.notifications.count == 1 %>
  <%= t("single_notification") %>
<% else %>
  <%= t("other_notification_count", count: current_user.notifications.count) %>
<% end %>

Kind of ugly, right? Luckily, Rails provides a simple way to deal with pluralization in translations. Let’s try it out:

# config/locales/en.yml

en:
  notification:
    one: You have 1 notification
    other: You have %{count} notifications
<%# app/views/notifications/index.html.erb %>

<%= t("notification", count: current_user.notifications.count) %>

Same result, and no conditionals in our views. Awesome.
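
Under the hood, I18n picks the :one or :other key based on the count you pass in. As a quick console sanity check (a minimal sketch, assuming the en.yml above is loaded):

I18n.t("notification", count: 1) # => "You have 1 notification"
I18n.t("notification", count: 3) # => "You have 3 notifications"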

Make sure you are not using methods like “1.day.ago” when dealing with a Date column

Posted 13 days back at Ruby Fleebie

TL;DR: Read the title ;)

I spent too many hours debugging a feature spec in an application. Turns out I was simply not paying attention to what I was doing!

In a fixture file, I was setting a field the following way:

some_date: <%=1.day.ago.to_s :db%>

I didn’t pay attention to the fact that “some_date” was a Date and not a DateTime. This single line was responsible for some intermittent failures in my test suite.

The technical explanation
The problem is that methods like "1.day.ago" return ActiveSupport::TimeWithZone objects which contain, well, the time portion along with the current timezone info. Then there is the to_s(:db) part, which converts the resulting time to UTC for storage in the DB. It means that if I call "1.day.ago.to_s :db" on September 7th at 8:00 PM (local timezone, which is UTC-4 for me), the result will be "2014-09-07 00:00:00" UTC time. And since I was storing this in a Date column, the time portion would simply get discarded… so I ended up with a date of September 7th instead of the expected September 6th in the DB.
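
Here is a console sketch of that sequence, assuming a local zone of UTC-4 and a current time of 8:00 PM on September 7th (the exact output depends on your zone and clock):

1.day.ago           # => Sat, 06 Sep 2014 20:00:00 EDT -04:00
1.day.ago.to_s(:db) # => "2014-09-07 00:00:00" (converted to UTC, so the date part is now the 7th)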

Solution
Of course this problem was very easy to fix once I understood what the problem was. I simply changed the fixture file so that it looks like this:

some_date: <%=1.day.ago.to_date.to_s :db%>

This works as well:

some_date: <%=Date.yesterday.to_s :db%>

Hoping this will save others some painful debugging!

Aspen Ghost

Posted 15 days back at Mike Clark

Aspen Ghost

The great gray owl haunts the aspen forest, ever watchful of careless prey.

Episode #493 - September 5th, 2014

Posted 16 days back at Ruby5

Reading Rails talks TimeWithZone, descriptive_statistics, new gems in Rails 4.2, Paperdragon, and using Ruby's English operators all in this episode of the Ruby5 podcast!

Listen to this episode on Ruby5

Sponsored by New Relic

New Relic is _the_ all-in-one web performance analytics product. It lets you manage and monitor web application performance, from the browser down to the line of code. With Real User Monitoring, New Relic users can see browser response times by geographical location of the user, or by browser type.
This episode is sponsored by New Relic

Reading Rails - TimeWithZone

Baffled by time zones in Rails? This blog post will dive deep into the implementation and give you insight into what's going on in ActiveSupport.
Reading Rails - TimeWithZone

descriptive_statistics

Want cool statistical methods on enumerables? The descriptive_statistics gem has just what you've been looking for.
descriptive_statistics

ActiveJob and GlobalID

Learn about some of the new goodies in Rails 4.2 with this blog post from Mikamai.
ActiveJob and GlobalID

Paperdragon

The Paperdragon gem puts a layer on top of Dragonfly to integrate more nicely with Ruby objects.
Paperdragon

Ruby's English Operators

Avdi explains Ruby's `and` and `or` operators in this free episode of RubyTapas.
Ruby's English Operators

An Introduction to WebGL

Posted 17 days back at GIANT ROBOTS SMASHING INTO OTHER GIANT ROBOTS - Home

On a recent project we had to do a lot of work with WebGL. The most difficult and frustrating thing about the project was the lack of good resources on working with WebGL, especially for non-trivial projects. What I did find was usually focused on just having code that puts pixels on the screen, but not how it worked or why. If you’re interested in learning WebGL, these posts will take you from zero to a working 3D application, with an emphasis on how the various concepts work and why.

What is WebGL?

WebGL is a thin wrapper around OpenGL ES 2.0 that is exposed through a JavaScript API. OpenGL is a low level library for drawing 2D graphics. This was a major misconception for me. I always thought that it was used to produce 3D. Instead, our job is to do the math to convert 3D coordinates into a 2D image. What OpenGL provides for us is the ability to push some data to the GPU, and execute specialized code that we write on the GPU rather than the CPU. This code is called a shader, and we write it in a language called GLSL.

To get started, we need to understand a few core concepts.

  • Clip Space: This will be the coordinate system we use in our final output. It is represented as a number between -1 and 1, regardless of the size of the canvas. This is how the GPU sees things.
  • Pixel Space: This is how we commonly think about graphics, where X is a number between 0 and the width of the canvas, and Y is a number between 0 and the height of the canvas.
  • Vertex Shader: This is the function which is responsible for converting our inputs into coordinates in clip space to draw on the screen.
  • Fragment Shader: This is the function which is responsible for determining the color of each pixel we told the GPU to draw in the vertex shader.

Boilerplate

We need to write a bit of boilerplate to get everything wired up to start drawing on the screen. The first thing we’ll need is a canvas tag.

<canvas width="600" height="600">
</canvas>

In our JavaScript code, we need to find the canvas, and use it to get an instance of WebGLRenderingContext. This is the object that contains all of the OpenGL methods we are going to use. The documentation for WebGL is generally quite lacking, but every method and constant maps to an equivalent method in the C API. The function glVertexAttrib1f in C would be gl.vertexAttrib1f in WebGL, assuming the variable gl is your WebGLRenderingContext. The constant GL_STATIC_DRAW in C would be gl.STATIC_DRAW in WebGL.

main = ->
  canvas = document.getElementsByTagName("canvas")[0]
  gl = canvas.getContext("webgl") || canvas.getContext("experimental-webgl")

The next thing we’re going to need is an instance of WebGLProgram. This is an object that will hold information about which shaders we’re using, and what data we’ve passed into it. As we initialize the program, we are going to want to compile and link our shaders. We’re going to need the source code of the shaders as a string. I prefer to write them in separate files, in order to get syntax highlighting and other file type specific helpers from my editor. In a Rails app, we can just spit out the files into the page server side.

module ApplicationHelper
  def shaders
    shaders = {}
    Dir.chdir(Rails.root.join("app", "assets", "shaders")) do
      Dir["**"].each do |path|
        shaders[path] = open(path).read
      end
    end
    shaders
  end
end
<%# app/views/layouts/application.html.erb %>
<script>
  window.shaders = <%= shaders.to_json.html_safe %>;
</script>

Compiling the Shaders

Now in our JavaScript, we can create a few helper functions to create a program, compile our shaders, and link them together. To compile the shaders, we need to do three things:

  1. Get the source of the shader
  2. Determine if it’s a vertex or fragment shader
  3. Call the appropriate methods on our WebGLRenderingContext

Once we’ve got both of our shaders, we can create the program, and link up the shaders. Let’s create an object that wraps up this process for us. You can find a gist here.

class WebGLCompiler
  constructor: (@gl, @shaders) ->

  createProgramWithShaders: (vertexShaderName, fragmentShaderName) ->
    vertexShader = @_createShader(vertexShaderName)
    fragmentShader = @_createShader(fragmentShaderName)
    @_createProgram(vertexShader, fragmentShader)

  _createShader: (shaderName) ->
    shaderSource = @shaders["#{shaderName}.glsl"]
    unless shaderSource
      throw "Unknown shader: #{shaderName}"

    @_compileShader(shaderSource, @_typeForShader(shaderName))

  _typeForShader: (name) ->
    if name.indexOf("vertex") != -1
      @gl.VERTEX_SHADER
    else if name.indexOf("fragment") != -1
      @gl.FRAGMENT_SHADER
    else
      throw "Unknown shader type for #{name}"

  _compileShader: (shaderSource, shaderType) ->
    shader = @gl.createShader(shaderType)
    @gl.shaderSource(shader, shaderSource)
    @gl.compileShader(shader)

    unless @gl.getShaderParameter(shader, @gl.COMPILE_STATUS)
      error = @gl.getShaderInfoLog(shader)
      console.error(error)
      throw "Could not compile shader. Error: #{error}"

    shader

  _createProgram: (vertexShader, fragmentShader) ->
    program = @gl.createProgram()
    @gl.attachShader(program, vertexShader)
    @gl.attachShader(program, fragmentShader)
    @gl.linkProgram(program)

    unless @gl.getProgramParameter(program, @gl.LINK_STATUS)
      error = @gl.getProgramInfoLog(program)
      console.error(error)
      throw "Program failed to link. Error: #{error}"

    program

We’re going to call createProgramWithShaders, giving it the name of the files to use for the vertex and fragment shaders. We assume that all vertex shaders are going to have the word “vertex” in the name, and that fragment shaders will have the word “fragment”. After creating each shader, we compile it and check for errors. Finally, we attach the shaders to our program and try to link them. If all of this succeeds, the result will be an instance of WebGLProgram.

main = ->
  canvas = document.getElementsByTagName("canvas")[0]
  gl = canvas.getContext("webgl") || canvas.getContext("experimental-webgl")
  compiler = new WebGLCompiler(gl, window.shaders)
  program = compiler.createProgramWithShaders("main_vertex", "main_fragment")

Now we can start writing actual code! We’ll start by writing the simplest possible vertex shader. It will do nothing but return the input unchanged.

attribute vec2 vertexCoord;

void main() {
  gl_Position = vec4(vertexCoord, 0.0, 1.0);
}

An attribute is the primary input to the vertex shader. We’re going to give it an array of values. OpenGL will loop over them, and call this function once per element. The function doesn’t actually return anything. Instead, we set a local variable called gl_Position. That variable expects a vec4, which means it has an x, y, z, and w, rather than a vec2, which just has x and y. z works like the z-index property in CSS. w is a value that every other axis will be divided by. We’ll set it to 1.0 for now, so nothing is affected.

Once the vertex shader has set enough points to draw a triangle, the fragment shader will be called once per pixel in that triangle. For now, we’ll just always return blue.

void main() {
  gl_FragColor = vec4(0.0, 0.0, 1.0, 1.0);
}

Sending Data to the GPU

The last step is to wire up our program to our rendering context, pass in the data, and draw a triangle. First we’ll make sure our screen is in a consistent state.

gl.clearColor(1.0, 1.0, 1.0, 1.0)
gl.clear(gl.COLOR_BUFFER_BIT)

clearColor tells the GPU what color to use for pixels where we don’t draw anything. We’ve set it to white. Then, we tell it to reset the canvas so nothing has been drawn. The next thing we need to do is give our program some data. In order to do this, we’ll need to create a buffer. A buffer is essentially an address in memory where we can shove an arbitrary number of bits.

gl.useProgram(program)
buffer = gl.createBuffer()
gl.bindBuffer(gl.ARRAY_BUFFER, buffer)
gl.bufferData(
  gl.ARRAY_BUFFER
  new Float32Array([
    0.0, 0.8
    -0.8, -0.8
    0.8, -0.8
  ])
  gl.STATIC_DRAW
)

OpenGL is highly stateful. When we call bufferData, we never specify which buffer is being used. Instead, it works with the last buffer we passed to bindBuffer. gl.ARRAY_BUFFER tells OpenGL that the contents of this buffer are going to be used for an attribute. gl.STATIC_DRAW is a performance hint that says this data is going to be used often, but won’t change much.

Now that we’ve put the data in memory, we need to tell OpenGL which attribute to use it for, and how it should interpret that data. Right now it just sees it as a bunch of bits.

vertexCoord = gl.getAttribLocation(program, "vertexCoord")

gl.enableVertexAttribArray(vertexCoord)
gl.vertexAttribPointer(vertexCoord, 2, gl.FLOAT, false, 0, 0)

The first thing we need to do is get the location of the attribute in our program. This is going to be a numeric index, based on the order that we use it in our program. In this case, it’ll be 0. Next we call enableVertexAttribArray, which takes the location of an attribute, and tells OpenGL that we want to use the data that we’re going to populate it with. I’ll admit, I don’t know why you would have an attribute present in your application, but not enable it. Finally, vertexAttribPointer will populate the attribute with the currently bound buffer, and tell it how to interpret the data. This is what each of the arguments means:

gl.vertexAttribPointer(
  # Which attribute to use
  vertexCoord

  # The number of floats to use for each element. Since it's a vec2, every
  # 2 floats is a single vector.
  2

  # The type to read the data as
  gl.FLOAT

  # Whether the data should be normalized, or used as is
  false

  # The number of floats to skip in between loops
  0

  # The index to start from
  0
)

Finally, we need to tell it that we’ve finished giving it all of the data it needs, and we’re ready to draw something to the screen.

gl.drawArrays(gl.TRIANGLES, 0, 3)

drawArrays means that we want to loop through the attribute data, in the order that it was given. The first argument is the method we should use for drawing. TRIANGLES means that it should use every three points as a surface. It would take 6 points to draw two triangles. There are other options, such as TRIANGLE_STRIP, which would only take 4 points to draw 2 triangles. There’s also POINTS or LINES, which completely change how a single triangle is drawn. The second argument is which element in the array we should start from. The final argument is the number of points we’re going to draw. The end result is a simple triangle. All of the code used for this sample is available here.

blue triangle

Go interfaces communicate intent

Posted 18 days back at techno weenie - Home

Interfaces are one of my favorite features of Go. When used properly in arguments, they tell you what a function is going to do with your object.

// from io
func Copy(dst Writer, src Reader) (written int64, err error)

Right away, you know Copy() is going to call dst.Write() and src.Read().

Interfaces in return types tell you what you can and should do with the object.

// from os/exec
func (c *Cmd) StdoutPipe() (io.ReadCloser, error) {

It's unclear what type of object StdoutPipe() is returning, but I do know that I can read it. Since it also implements io.Closer, I know that I should probably close it somewhere.

This brings up a good rule of thumb when designing Go APIs. Prefer an io.Reader over an io.ReadCloser for arguments. Let the calling code handle its own resource cleanup. Simple enough. So what breaks this rule? Oh, my dumb passthrough package.

Here's the intended way to use it:

func main() {
  fakeResWriter := pseudoCodeForExample()
  res, _ := http.Get("SOMETHING")
  passthrough.Pass(res, fakeResWriter, 200)
}

However, on a first glance without any knowledge of how the passthrough package works, you may be inclined to close the body manually.

func main() {
  fakeResWriter := pseudoCodeForExample()
  res, _ := http.Get("SOMETHING")
  // hopefully you're not ignoring this possible error :)

  // close body manually
  defer res.Body.Close()

  // passthrough also closes it???
  passthrough.Pass(res, fakeResWriter, 200)
}

Now, you're closing the Body twice. That's not great.

Resource management is very important, so we commonly review code to ensure everything is handled properly. Helper functions that try to do too much like passthrough have caused us enough issues that I've rethought how I design Go packages. Don't get in the way of idiomatic Go code.

Back to Basics: SOLID

Posted 19 days back at GIANT ROBOTS SMASHING INTO OTHER GIANT ROBOTS - Home

SOLID is an acronym created by Bob Martin and Michael Feathers that refers to five fundamental principles that help engineers write maintainable code. We like to think of these principles as the foundational elements we use when evaluating the health of our codebase and architectural approach. The principles that make up the acronym are as follows:

  • Single Responsibility Principle
  • Open/Closed Principle
  • Liskov Substitution Principle
  • Interface Segregation Principle
  • Dependency Inversion Principle

Let’s take a closer look at each of these principles with some examples.

Single Responsibility Principle

The Single Responsibility Principle is the most abstract of the bunch. It helps keep classes and methods small and maintainable. In addition to keeping classes small and focused, it makes them easier to understand.

While we all agree that focusing on a single responsibility is important, it’s difficult to determine what a class’s responsibility is. Generally it is said that anything that gives a class a reason to change can be viewed as a responsibility. By change I am talking about structural changes to the class itself (as in modifying the text in the class’s file) not the object’s in-memory state. Let’s look at an example of some code that isn’t following the principle:

class DealProcessor
  def initialize(deals)
    @deals = deals
  end

  def process
    @deals.each do |deal|
      Commission.create(deal: deal, amount: calculate_commission(deal))
      mark_deal_processed(deal)
    end
  end

  private

  def mark_deal_processed(deal)
    deal.processed = true
    deal.save!
  end

  def calculate_commission(deal)
    deal.dollar_amount * 0.05
  end
end

In the above class we have a single command interface that processes commission payments for deals. At first glance the class seems simple enough, but let’s look at reasons we might want to change this class. Any change in how we calculate commissions would require a change to this class. We could introduce new commission rules or strategies that would cause our calculate_commission method to change. For instance, we might want to vary the percentage based on deal amount. Any change in the steps required to mark a deal as processed in the mark_deal_processed method would result in a change in the file as well. An example of this might be adding support for sending an email summary of a specific person’s commissions after marking a deal processed. The fact that we can identify multiple reasons to change signals a violation of the Single Responsibility Principle.

We can do a quick refactor and get our code in compliance with the Single Responsibility Principle. Let’s take a look:

class DealProcessor
  def initialize(deals)
    @deals = deals
  end

  def process
    @deals.each do |deal|
      mark_deal_processed(deal)
      CommissionCalculator.new.create_commission(deal)
    end
  end

  private

  def mark_deal_processed(deal)
    deal.processed = true
    deal.save!
  end
end

class CommissionCalculator
  def create_commission(deal)
    Commission.create(deal: deal, amount: deal.dollar_amount * 0.05)
  end
end

We now have two smaller classes that handle the two specific tasks. We have our processor that is responsible for processing and our calculator that computes the amount and creates any data associated with our new commission.
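
For completeness, a caller might kick this off with something like the line below (Deal.unprocessed is a hypothetical scope, not part of the example):

DealProcessor.new(Deal.unprocessed).process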

Open/Closed Principle

The Open/Closed Principle states that classes or methods should be open for extension, but closed for modification. This tells us we should strive for modular designs that make it possible for us to change the behavior of the system without making modifications to the classes themselves. This is generally achieved through the use of patterns such as the strategy pattern. Let’s look at an example of some code that is violating the Open/Closed Principle:

class UsageFileParser
  def initialize(client, usage_file)
    @client = client
    @usage_file = usage_file
  end

  def parse
    case @client.usage_file_format
      when :xml
        parse_xml
      when :csv
        parse_csv
    end

    @client.last_parse = Time.now
    @client.save!
  end

  private

  def parse_xml
    # parse xml
  end

  def parse_csv
    # parse csv
  end
end

In the above example we can see that we’ll have to modify our file parser anytime we add a client that reports usage information to us in a different file format. This violates the Open/Closed Principle. Let’s take a look at how we might modify this code to make it open to extension:

class UsageFileParser
  def initialize(client, parser)
    @client = client
    @parser = parser
  end

  def parse(usage_file)
    @parser.parse(usage_file)
    @client.last_parse = Time.now
    @client.save!
  end
end

class XmlParser
  def parse(usage_file)
    # parse xml
  end
end

class CsvParser
  def parse(usage_file)
    # parse csv
  end
end

With this refactor we’ve made it possible to add new parsers without changing any code. Any additional behavior will only require the addition of a new handler. This makes our UsageFileParser reusable and in many cases will keep us in compliance with the Single Responsibility Principle as well by encouraging us to create smaller more focused classes.
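
To illustrate, the calling code now picks the strategy and hands it in; something like this sketch (client and usage_file are assumed to be available from the surrounding code):

parser = client.usage_file_format == :xml ? XmlParser.new : CsvParser.new
UsageFileParser.new(client, parser).parse(usage_file)

Adding a new format now means adding a parser class and choosing it where parsers are constructed, rather than editing UsageFileParser itself.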

Liskov Substitution Principle

Liskov’s principle tends to be the most difficult to understand. The principle states that you should be able to replace any instances of a parent class with an instance of one of its children without creating any unexpected or incorrect behaviors.

Let’s look at a simple example of a Liskov violation. We’ll start with the classic example of the relationship between a rectangle and a square. Let’s take a look:

class Rectangle
  def set_height(height)
    @height = height
  end

  def set_width(width)
    @width = width
  end
end

class Square < Rectangle
  def set_height(height)
    super(height)
    @width = height
  end

  def set_width(width)
    super(width)
    @height = width
  end
end

For our Square class to make sense we need to modify both height and width when we call either set_height or set_width. This is the classic example of a Liskov violation. The modification of the other instance method is an unexpected consequence. If we were taking advantage of polymorphism and iterating over a collection of Rectangle objects one of which happened to be a Square, calling either method will result in a surprise. An engineer writing code with an instance of the Rectangle class in mind would never expect calling set_height to modify the width of the object.
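
A small sketch of that surprise (stretch is a hypothetical helper written just for illustration, and it assumes we add attr_reader :width, :height to Rectangle):

def stretch(shape)
  shape.set_width(4)
  shape.set_height(3)
  [shape.width, shape.height]
end

stretch(Rectangle.new) # => [4, 3]
stretch(Square.new)    # => [3, 3], because setting the height also changed the width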

Another common instance of a Liskov violation is raising an exception for an overridden method in a child class. It’s also not uncommon to see methods overridden with modified method signatures causing branching on type in classes that depend on objects of the parent’s type. All of these either lead to unstable code or unnecessary and ugly branching.

Interface Segregation Principle

This principle is less relevant in dynamic languages. Since duck-typed languages don’t require that types be specified in our code, this principle can’t be violated.

That doesn’t mean we shouldn’t take a look at a potential violation in case we’re working with another language. The principle states that a client should not be forced to depend on methods that it does not use.

Let’s take a closer look at what this means in Swift. In Swift we can use protocols to define types that will require concrete classes to conform to the structures they outline. This makes it possible to create classes and methods that require only the minimum API.

In this example we’ll create two classes, Test and User. We’ll also reference a Question class, which I’ll omit since it isn’t necessary for this example. Our user will take tests, and tests can be taken and graded. Let’s take a look:

class Test {
  let questions: [Question]
  init(testQuestions: [Question]) {
    questions = testQuestions
  }

  func answerQuestion(questionNumber: Int, choice: Int) {
    questions[questionNumber].answer(choice)
  }

  func gradeQuestion(questionNumber: Int, correct: Bool) {
    questions[questionNumber].grade(correct)
  }
}

class User {
  func takeTest(test: Test) {
    for question in test.questions {
      test.answerQuestion(question.number, choice: Int(arc4random_uniform(4)))
    }
  }
}

Our User would not get a very good grade because they’re randomly choosing test answers, but we also have a violation of the Interface Segregation Principle. Our takeTest method requires an argument of type Test. The Test type has two methods, only one of which takeTest depends on; takeTest does not care about the gradeQuestion method. Let’s take advantage of Swift’s protocols to fix this and get back on the right side of the Interface Segregation Principle.

protocol QuestionAnswering {
  var questions: [Question] { get }
  func answerQuestion(questionNumber: Int, choice: Int)
}

class Test: QuestionAnswering {
  let questions: [Question]
  init(testQuestions: [Question]) {
    self.questions = testQuestions
  }

  func answerQuestion(questionNumber: Int, choice: Int) {
    questions[questionNumber].answer(choice)
  }

  func gradeQuestion(questionNumber: Int, correct: Bool) {
    questions[questionNumber].grade(correct)
  }
}

class User {
  func takeTest(test: QuestionAnswering) {
    for question in test.questions {
      test.answerQuestion(question.number, choice: Int(arc4random_uniform(4)))
    }
  }
}

Our takeTest method now requires a QuestionAnswering type. This is an improvement because we can now use this same logic for any type that conforms to this protocol. Perhaps down the road we would like to add a Quiz type, or even different types of tests. With our new implementation we could easily reuse this code.

Dependency Inversion Principle

The Dependency Inversion Principle has to do with high-level (think business logic) objects not depending on low-level (think database querying and IO) implementation details. This can be achieved with duck typing and dependency injection. Often this pattern is used to achieve the Open/Closed Principle we discussed above. In fact, we can even reuse that same example as a demonstration of this principle. Let’s take a look:

class UsageFileParser
  def initialize(client, parser)
    @client = client
    @parser = parser
  end

  def parse(usage_file)
    @parser.parse(usage_file)
    @client.last_parse = Time.now
    @client.save!
  end
end

class XmlParser
  def parse(usage_file)
    # parse xml
  end
end

class CsvParser
  def parse(usage_file)
    # parse csv
  end
end

As you can see, our high-level object, the file parser, does not depend directly on the implementation of the lower-level objects, the XML and CSV parsers. The only thing that is required for an object to be used by our high-level class is that it responds to the parse message. This decouples our high-level functionality from low-level implementation details and allows us to easily modify what those low-level implementation details are. Having to write a separate usage file parser per file type would require lots of unnecessary duplication.
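
One practical payoff is that the high-level class can be exercised with any stand-in that responds to parse. For example, a hypothetical fake parser in a test (client and usage_file are assumed to be set up elsewhere, e.g. a test double and a fixture file):

class FakeParser
  attr_reader :last_file

  def parse(usage_file)
    @last_file = usage_file
  end
end

fake = FakeParser.new
UsageFileParser.new(client, fake).parse(usage_file)
fake.last_file # => usage_file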

What’s next?

If you found this useful, learn more by watching the SOLID videos on The Weekly Iteration.

Considering dropping support for Rails 1.0-2.2

Posted 19 days back at Phusion Corporate Blog

We work very hard to maintain backward compatibility in Phusion Passenger. Even the latest version still supports Ruby 1.8.5 and Rails 1.0. We’ve finally reached a point where we believe dropping support for Rails 1.0-2.2 will benefit the quality of our codebase. Is there anybody here who would object to us dropping support for Rails 1.0-2.2? If so, please let us know by posting a comment. Rails 2.3 will still be supported.

The post Considering dropping support for Rails 1.0-2.2 appeared first on Phusion Corporate Blog.

Acceptance Tests at a Single Level of Abstraction

Posted 20 days back at GIANT ROBOTS SMASHING INTO OTHER GIANT ROBOTS - Home

Each acceptance test tells a story: a logical progression through a task within an application. As developers, it’s our responsibility to tell each story in a concise manner and to keep the reader (either other developers or our future selves) engaged, aware of what is happening, and understanding of the purpose of the story.

At the heart of understanding the story being told is a consistent, single level of abstraction; that is, each piece of behavior is roughly similar in terms of functionality extracted and its overall purpose.

An example of multiple levels of abstraction

Let’s first focus on an example of how not to write an acceptance test by writing a test at multiple levels of abstraction.

# spec/features/user_marks_todo_complete_spec.rb
feature "User marks todo complete" do
  scenario "updates todo as completed" do
    sign_in
    create_todo "Buy milk"

    find(".todos li", text: "Buy milk").click_on "Mark complete"

    expect(page).to have_css(".todos li.completed", text: "Buy milk")
  end

  def create_todo(name)
    click_on "Add new todo"
    fill_in "Name", with: name
    click_on "Submit"
  end
end

Let’s focus on the scenario. We’ve followed the four-phase test, separating each step:

scenario "updates todo as completed" do
  # setup
  sign_in
  create_todo "Buy milk"

  # exercise
  find(".todos li", text: "Buy milk").click_on "Mark complete"

  # verify
  expect(page).to have_css(".todos li.completed", text: "Buy milk")

  # teardown not needed
end

To prepare for testing that marking a todo complete works, we sign in and create a todo to mark complete. Once the todo is created, we find it on the page and click the ‘Mark complete’ anchor tag associated with it. Finally, we ensure the same <li> is present, this time with a completed class.

From a behavior standpoint, this touches on each part of the app necessary to verify marking a todo as complete works; however, there are varying levels of abstraction in this test between the setup phase and exercise/verify phases. There are Capybara methods (find and have_css) interspersed with helper methods (sign_in and create_todo) which force developers to switch from user-level needs and outcomes to page-specific interactions like checking for presence of specific elements with CSS selectors.

An example of a single level of abstraction

Let’s now look at a scenario written at a single level of abstraction:

feature "User marks todo complete" do
  scenario "updates todo as completed" do
    sign_in
    create_todo "Buy milk"

    mark_complete "Buy milk"

    expect(page).to have_completed_todo "Buy milk"
  end

  def create_todo(name)
    click_on "Add new todo"
    fill_in "Name", with: name
    click_on "Submit"
  end

  def mark_complete(name)
    find(".todos li", text: name).click_on "Mark complete"
  end

  def have_completed_todo(name)
    have_css(".todos li.completed", text: name)
  end
end

This spec follows the Composed Method pattern, discussed in Smalltalk Best Practice Patterns, wherein each piece of functionality is extracted to well-named methods. Each method should be written at a single level of abstraction.

While we’re still following the four-phase test, the clarity provided by reducing the number of abstractions is obvious. There’s largely no context-switching as a developer reads the test because there’s no interspersion of Capybara helper methods with our methods (sign_in, create_todo, mark_complete, and have_completed_todo).

The most common ways to introduce a single level of abstraction are to extract behavior to helper methods (either within the spec or to a separate file if the behavior is used across the suite) or to extract page objects.
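
As a sketch of the page object option, the todo interactions above could move into a small class like this (the name and structure are hypothetical, not from the example):

class TodoListPage
  include Capybara::DSL

  def create_todo(name)
    click_on "Add new todo"
    fill_in "Name", with: name
    click_on "Submit"
  end

  def mark_complete(name)
    find(".todos li", text: name).click_on "Mark complete"
  end

  def has_completed_todo?(name)
    has_css?(".todos li.completed", text: name)
  end
end

A scenario could then instantiate TodoListPage and call these methods, and expect(todos).to have_completed_todo "Buy milk" still reads naturally thanks to RSpec's predicate matchers.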

The cost of going down the path of high-level helpers across a suite isn’t nonexistent, however; by extracting behavior to files outside the spec (especially as the suite grows and similar patterns emerge), the page interactions are separated from the tests themselves, which reduces cohesion.

Maintaining a single level of abstraction is a tool in every developer’s arsenal to help achieve clear, understandable tests. By extracting behavior to well-named methods, the developer can better tell the story of each scenario by describing behaviors consistently and at a high enough level that others will understand the goal and outcomes of the test.

Weather Lights

Posted 21 days back at techno weenie - Home

I spoke at the GitHub Patchwork event in Boulder last month. My son Nathan tagged along to get his first taste of the GitHub Flow. I don't necessarily want him to be a programmer, but I do push him to learn a little to augment his interest in meteorology and astronomy.

The night was a success. He made it through the tutorial with only one complaint: the Patchwork credit went to my wife, who had created a GitHub login that night.

Since then, I've been looking for a project to continue his progress. I settled on a weather light, which consists of a ruby script that changes the color of a Philips Hue bulb. If you're already an experienced coder, jump straight to the source at github.com/technoweenie/weatherhue.

Requirements

Unfortunately, there's one hefty requirement: You need a Philips Hue light kit, which consists of a Hue bridge and a light. Once you have the kit, you'll have to use the Hue API to create a user and figure out the ID of your light.

Next, you need to set up an account for the Weather2 API. There are a lot of services out there, but this one is free, supports JSON responses, and also gives simple forecasts. They allow 500 requests a day. If you set this script to run every 5 minutes, you'll only use 288 requests.

After you're done, you should have five values. Write these down somewhere.

  • HUE_API - The address of your Hue bridge. Probably something like "http://10.0.0.1"
  • HUE_USER - The username you set up with the Hue API.
  • HUE_LIGHT - The ID of the Hue light. Probably 1-3.
  • WEATHER2_TOKEN - Your token for the weather2 API.
  • WEATHER2_QUERY - The latitude and longitude of your house. For example, Pikes Peak is at "38.8417832,-105.0438213."

Finally, you need ruby, with the following gems: faraday, color, and dotenv. If you're on a version of ruby lower than 1.9, you'll also want the json gem.
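
If you manage those gems with Bundler, a minimal Gemfile could look like this (just a sketch; installing the gems directly works too):

source "https://rubygems.org"

gem "faraday"
gem "color"
gem "dotenv"
# gem "json" # only needed on Rubies older than 1.9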

Writing the script

I'm going to describe the process I used to write the weatherhue.rb script. Due to the way ruby runs, it's not necessarily in the order that the code is written. If you look at the file, you'll see 4 sections:

  1. Lines requiring ruby gems.
  2. A few defined helper functions.
  3. A list of temperatures and their HSL values.
  4. Running code that gets the temperature and sets the light.

You'll likely find yourself bouncing around as you write the various sections.

Step 1: Get the temperature

The first thing the script needs is the temperature. There are two ways to get it: through an argument in the script (useful for testing), or a Weather API. This is a simple script that pulls the current temperature from the API forecast results.

if temp = ARGV[0]
  # Get the temperature from the first argument.
  temp = temp.to_i
else
  # Get the temperature from the weather2 api
  url = "http://www.myweather2.com/developer/forecast.ashx?uac=#{ENV["WEATHER2_TOKEN"]}&temp_unit=f&output=json&query=#{ENV["WEATHER2_QUERY"]}"
  res = Faraday.get(url)
  if res.status != 200
    puts res.status
    puts res.body
    exit
  end

  data = JSON.parse(res.body)
  temp = data["weather"]["curren_weather"][0]["temp"].to_i
end

Step 2: Choose a color based on the temperature

I wanted the color to match color ranges on local news forecasts.

I actually went through the tedious process of making a list of HSL values at 5 degree increments. The eye dropper app I used gave me RGB values from 0-255, which had to be converted to the HSL values that the Hue lights take. Here's how I did it in ruby with the color gem:

rgb = [250, 179, 250]

# convert the values 0-255 to a decimal between 0 and 1.
rgb_color = Color::RGB.from_fraction rgb[0] / 255.0, rgb[1] / 255.0, rgb[2] / 255.0
hsl_color = rgb_color.to_hsl

# convert hsl decimals to the Philips Hue values
hsl = [
  # hue
  (hsl_color.h * 65535).to_i,
  # saturation
  (hsl_color.s * 255).to_i,
  # light (brightness)
  (hsl_color.l * 255).to_i,
]

I simply wrote the result in an HSL hash:

HSL = {
  -20=>[53884, 255, 217],
  -15=>[53988, 198, 187],
  -10=>[53726, 161, 167],
  # ...
}

After I had converted everything, I noticed a couple things. First, the saturation and brightness values don't change that much, especially for the hotter temperatures. Second, the hue values range from 53884 to 1492. I probably didn't need to convert all those RGB values by hand :)

We can use this list of HSL values to convert any temperature to a color.

def color_for_temp(temp)
  remainder = temp % 5
  if remainder == 0
    return HSL[temp]
  end

  # get the lower and upper bound around a temp
  lower = temp - remainder
  upper = lower + 5

  # convert the HSL values to Color::HSL objects
  lower_color = hsl_to_color(HSL[lower])
  upper_color = hsl_to_color(HSL[upper])

  # use Color::HSL#mix_with to get a color between two colors
  color = lower_color.mix_with(upper_color, remainder / 5.0)

  color_to_hsl color
end
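
The hsl_to_color and color_to_hsl helpers are among the helper functions defined in the script; they are roughly the inverse of the conversion from step 2. A sketch, assuming the color gem's Color::HSL.from_fraction and fractional h/s/l accessors:

def hsl_to_color(hsl)
  hue, sat, light = hsl
  Color::HSL.from_fraction(hue / 65535.0, sat / 255.0, light / 255.0)
end

def color_to_hsl(color)
  [(color.h * 65535).to_i, (color.s * 255).to_i, (color.l * 255).to_i]
end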

Step 3: Set the light color

Now that we have the HSL values for the temperature, it's time to set the Philips Hue light. First, create a state object for the light:

# temp_color is an array of HSL colors: [53884, 255, 217]
state = {
  :on => true,
  :hue => temp_color[0],
  :sat => temp_color[1],
  :bri => temp_color[2],
  # performs a smooth transition to the new color for 1 second
  :transitiontime => 10,
}

A simple HTTP PUT call will change the color.

hueapi = Faraday.new ENV["HUE_API"]
hueapi.put "/api/#{ENV["HUE_USER"]}/lights/#{ENV["HUE_LIGHT"]}/state", state.to_json

Step 4: Schedule the script

If you don't want to set the environment variables each time, you can create a .env file in the root of the application.

WEATHER2_TOKEN=MONKEY
WEATHER2_QUERY=38.8417832,-105.0438213
HUE_API=http://192.168.1.50
HUE_USER=technoweenie
HUE_LIGHT=1

You can then run the script with dotenv:

$ dotenv ruby weatherhue.rb 75

A crontab can be used to run this every 5 minutes. Run crontab -e to add a new entry:

# note: put tabs between the `*` values
*/5 * * * * cd /path/to/script; dotenv ruby weatherhue.rb

Confirm the crontab with crontab -l.

Bonus Round

  1. Can we simplify the function to get the HSL values for a temperature? Instead of looking up by temperature, use a percentage to get a hue range from 55,000 to 1500.
  2. Can we do something interesting with the saturation and brightness values?
    Maybe tweak them based on the time of day.
  3. Update the script to use the forecast for the day, and not the current temperature.
  4. Set a schedule that automatically only keeps the light on in the mornings when you actually care what the temperature will be.

I hope you enjoyed this little tutorial. I'd love to hear any experiences from working with it! Send me pictures or emails either to the GitHub issue for this post, or my email address.

Git pairing aliases, prompts and avatars

Posted 23 days back at The Pug Automatic

When we pair program at Barsoom, we’ve started making commits in both users’ names.

This emphasizes shared ownership and makes commands like git blame and git shortlog -sn (commit counts) more accurate.

There are tools like hitch to help you commit as a pair, but I found it complex and buggy, and I’ve been happy with something simpler.

Aliases

I just added some simple aliases to my Bash shell:

# ~/.bash_profile
alias pair='echo "Committing as: `git config user.name` <`git config user.email`>"'
alias unpair="git config --remove-section user 2> /dev/null; echo Unpaired.; pair"

alias pairf="git config user.pair 'FF+HN' && git config user.name 'Foo Fooson and Henrik Nyh' && git config user.email 'all+foo+henrik@barsoom.se'; pair"
alias pairb="git config user.pair 'BB+HN' && git config user.name 'Bar Barson and Henrik Nyh' && git config user.email 'all+bar+henrik@barsoom.se'; pair"

pair tells me who I’m committing as. pairf will pair me up with Foo Fooson; pairb will pair me up with Bar Barson. unpair will unpair me.

All this is done via Git’s own persistent per-repository configuration.

The emails use plus addressing, supported by Gmail and some others: all+whatever@barsoom.se ends up at all@barsoom.se.

I recommend consistently putting the names in alphabetical order so the same pair is always represented the same way.

If you’re quite promiscuous in your pairing, perhaps in a large team, the aliases will add up, and you may prefer something like hitch. But in a small team like ours, it’s not an issue.

Prompt

A killer feature of my solution, one that doesn’t seem to be built into hitch or other tools, is that it’s easy to show in your prompt:

# ~/.bash_profile
function __git_prompt {
  [ `git config user.pair` ] && echo " (pair: `git config user.pair`)"
}

PS1="\W\$(__git_prompt)$ "

This will give you a prompt like ~/myproject (pair: FF+HN)$ when paired, or ~/myproject$ otherwise.

Avatars

GitHub looks better if pairs have a user picture.

You just need to add a Gravatar for the pair’s email address.

When we started committing as pairs, I toyed a little with generating pair images automatically. When Thoughtbot wrote about pair avatars yesterday, I was inspired to ship something.

So I released Pairicon, a tiny open source web app that uses the GitHub API and a free Cloudinary plan to generate pair avatars.

Try it out!

The risks of feature branches and pre-merge code review

Posted 23 days back at The Pug Automatic

Our team has been doing only spontaneous code review for a good while – on commits in the master branch that happen to pique one’s interest. This week, we started systematically reviewing all non-trivial features, as an experiment, but this is still after it’s pushed to master and very likely after it has been deployed to production.

This is because we feel strongly about continuous delivery. Many teams adopt the practice of feature branches, pull requests and pre-merge (sometimes called “pre-commit”) code review – often without, I think, realizing the downsides.

Continuous delivery

Continuous delivery is about constantly deploying your code, facilitated by a pipeline: if some series of tests pass, the code is good to go. Ideally, it deploys to production automatically at the end of this pipeline.

Cycles are short and features are often released incrementally, perhaps using feature toggles.

This has major benefits. But if something clogs that pipeline, the benefits are reduced. And pre-merge code review clogs that pipeline.

The downsides of pre-merge review

Many of these downsides can be mitigated by keeping feature branches small and relatively short-lived, and reviewing continuously instead of just before merging – but even then, most apply to some extent.

  • Bugs are discovered later. With long-lived feature branches, sometimes much later.

    With continuous delivery, you may have to look for a bug in that one small commit you wrote 30 minutes ago. You may have to revert or fix that one commit.

    With a merged feature branch, you may have several commits or one big squashed commit. The bug might be in code you wrote a week ago. You may need to roll back an entire, complex feature.

    There is an obvious risk to deploying a large change all at once vs. deploying small changes iteratively.

  • Feedback comes later.

    Stakeholders and end-users are more likely to use the production site than to review your feature branch. By incrementally getting it out there, you can start getting real feedback quickly – in the form of user comments, support requests, adoption rates, performance impact and so on. Would you rather get this feedback on day 1 or after a week or more of work?

  • Merge conflicts or other integration conflicts are more likely.

  • It is harder for multiple pairs to work on related features since integration happens less often.

    If they share a feature branch, all features have to be reviewed and merged together.

  • The value added by your feature or bug fix takes longer to reach the end user.

    Reviews can take a while to do and to get to.

  • It is frustrating to the code author not to see their code shipped.

    It may steal focus from the next task, or even block them or someone else from starting on it.

The downsides of post-merge review

Post-merge review isn’t without its downsides.

  • Higher risk of releasing bugs and other defects.

    Anything a pre-merge review would catch may instead go into production.

    Then again, since releases are small and iterative, usually these are smaller bugs and easier to track down.

  • Renaming database tables and columns without downtime in production is a lot of work.

    Assuming you want to deploy without downtime, database renames are a lot of work. When you release iteratively, you will add tables and columns to production sooner, perhaps before discovering better names. Then you have to rename them in production.

    This can be annoying, but it’s not a dealbreaker.

    We try to mitigate this by always getting a second opinion on non-obvious table or column names.

  • New hires may be insecure about pushing straight to master.

    Pair programming could help that to some extent. You can also do pre-merge code review temporarily for some person or feature.

I fully acknowledge these downsides. This is a trade-off. It’s not that post-merge review is flawless; I just feel it has more upsides and fewer downsides all in all.

Technical solutions for post-merge review

I think GitHub’s excellent tools around pull requests are a major reason for the popularity of pre-merge review.

We only started with systematic post-merge reviews this week, and we’re doing it the simplest way we could think of: a “Review” column for tickets (“cards”) in Trello, digging through git log and writing comments on the GitHub commits.

This is certainly less smooth than pull requests.

We have some ideas. Maybe use pull requests anyway with immediately-merged feature branches. Or some commit-based review tool like Barkeep or Codebrag.

But we don’t know yet. Suggestions are welcome.

Our context

To our team, the benefits are clear.

We are a small team of 6 developers, mostly working from the same room in the same office. We often discuss complex code in person before it’s committed.

We’re not in day trading or life-support systems, so we can accept a little risk for other benefits. Though I’m not sure pre-release review actually reduces risks overall, as discussed above.

If you’re in another situation than ours, your trade-off may be different. It would be interesting to hear about that in the comments.

Don't mix in your privates

Posted 23 days back at The Pug Automatic

Say we have this module:

module Greeter
  def greet(name)
    "HELLO, #{normalize(name)}!"
  end

  private

  def normalize(name)
    name.strip.upcase
  end
end

We can include it to make instances of a class correspond to a “greeter” interface:

class Person
  include Greeter
end

person = Person.new
person.greet("Joe")  # => "HELLO, JOE!"

Is greet the whole interface?

It is the only public method the module gives us, but the module also has a private normalize method as part of its internal API.

The risk of collision

The private method has a pretty generic name, so there’s some risk of collision:

class Person
  include Greeter

  def initialize(age)
    @age = normalize(age)
  end

  private

  def normalize(age)
    [age.to_i, 25].min
  end
end

person = Person.new(12)
person.greet("Joe")  # => "HELLO, 0!"

The module’s greet method will call Person’s normalize method instead of the module’s – modules are much like superclasses in this respect.

You could reduce the risk by making the method names unique enough, but it’s easy to forget and reads poorly.

Extract a helper

Instead, you can move the module’s internals into a separate module or class that is not mixed in:

module Greeter
  module Mixin
    def greet(name)
      "HELLO, #{Name.normalize(name)}!"
    end
  end

  module Name
    def self.normalize(name)
      name.strip.upcase
    end
  end
end

class Person
  include Greeter::Mixin

  # …
end

Since the helper class is outside the mixin, collisions are highly unlikely.

This is for example how my Traco gem does it.

Introducing additional objects also makes it easier to refactor the code further.

Note that if the helper object is defined inside the mixin itself, there is a collision risk as Gregory Brown pointed out in a comment.

Intentionally mixing in privates

Sometimes, it does make sense to mix in private methods. Namely when they’re part of the interface that you want to mix in, and not just internal details of the module.

You often see this with the Template Method pattern:

module Greeter
  def greet(name)
    "#{greeting_phrase}, #{name}!#{post_greeting}"
  end

  private

  def greeting_phrase
    raise "You must implement this method!"
  end

  def post_greeting
    # Defaults to empty.
  end
end

class Person
  include Greeter

  private

  def greeting_phrase
    "Hello"
  end

  def post_greeting
    "!!1"
  end
end

Summary

Mind the private methods of your modules, since they are mixed in along with the public methods. If they’re not part of the interface you intend to mix in, they should probably be extracted to some helper object.