Buttons with Hold Events in Angular.js

Posted 16 days back at GIANT ROBOTS SMASHING INTO OTHER GIANT ROBOTS - Home

Creating an interaction with a simple button in Angular only requires adding the ngClick directive. However, sometimes an on-click style interaction isn’t sufficient. Let’s take a look at how we can build a button which performs an action for as long as it’s pressed.

For the example, we’ll use two buttons which can be used to zoom a camera in and out. We want the camera to continue zooming until the button is released. The final effect will work like this:

Zooming in Martial Codex

Our template might look something like this:

<a href while-pressed="zoomOut()">
  <i class="fa fa-minus"></i>
</a>
<a href while-pressed="zoomIn()">
  <i class="fa fa-plus"></i>
</a>

We’re making a subtle assumption with this interface. By adding the parentheses, we imply that whilePressed will behave similarly to ngClick. The given value is an expression that will be evaluated continuously while the button is pressed, rather than a function object we hand it to call. In practice, we can use the '&' style of arguments in our directive to capture the expression. You can find more information about the different styles of scopes here.

whilePressed = ->
  restrict: "A"

  scope:
    whilePressed: '&'

Binding the Events

When defining more complex interactions such as this one, Angular’s built-in directives won’t give us the control we need. Instead, we’ll fall back to manual event binding on the element. For clarity, I tend to prefer to separate the callback function from the event bindings. Since we’re manipulating the DOM, our code will go into a link function. Our initial link function will look like this:

link: (scope, elem, attrs) ->
  action = scope.whilePressed

  bindWhilePressed = ->
    elem.on("mousedown", beginAction)

  beginAction = (e) ->
    e.preventDefault()
    # Do stuff

  bindWhilePressed()

Inside of beginAction we’ll need to do two things:

  1. Start running the action
  2. Bind to mouseup to stop running the action.

For running the action, we’ll use Angular’s $interval service. $interval is a wrapper around JavaScript’s setInterval, but gives us a promise interface, better testability, and hooks into Angular’s digest cycle.

In addition to running the action continuously, we’ll also want to run it immediately to avoid a delay. We’ll run the action every 15 milliseconds, as this will roughly translate to once per browser frame.

+TICK_LENGTH = 15
+
-whilePressed = ->
+whilePressed = ($interval) ->
   restrict: "A"

   link: (scope, elem, attrs) ->
     action = scope.whilePressed

     # ...

     beginAction = (e) ->
       e.preventDefault()
+      action()
+      $interval(action, TICK_LENGTH)
+      bindEndAction()

In our beginAction function, we call bindEndAction to set up the events that stop the action from running. We know that we’ll at least want to bind to mouseup on our button, but we have to decide how to handle users who move the mouse off of the button before releasing it. We can handle this by listening for mouseleave on the element, in addition to mouseup.

bindEndAction = ->
  elem.on('mouseup', endAction)
  elem.on('mouseleave', endAction)

In our endAction function, we’ll want to cancel the $interval for our action, and unbind the event listeners for mouseup and mouseleave.

unbindEndAction = ->
  elem.off('mouseup', endAction)
  elem.off('mouseleave', endAction)

endAction = ->
  $interval.cancel(intervalPromise)
  unbindEndAction()

We’ll also need to store the promise that $interval returned so that we can cancel it when the mouse is released.

 whilePressed = ($interval) ->
   link: (scope, elem, attrs) ->
     action = scope.whilePressed
+    intervalPromise = null

     bindWhilePressed = ->
       elem.on('mousedown', beginAction)

     # ...

     beginAction = (e) ->
       e.preventDefault()
       action()
-      $interval(action, TICK_LENGTH)
+      intervalPromise = $interval(action, TICK_LENGTH)
       bindEndAction()

Cleaning Up

Generally I consider it a smell to have an isolated scope on any directive that isn’t an element. Each DOM element can only have one isolated scope, and attribute directives are generally meant to be composed. So let’s replace our scope with a manual use of $parse instead.

$parse takes in an expression, and will return a function that can be called with a scope and an optional hash of local variables. This means we can’t call action directly anymore, and instead need a wrapper function which will pass in the scope for us.

-whilePressed = ($interval) ->
-  scope:
-    whilePressed: "&"
-
+whilePressed = ($parse, $interval) ->
   link: (scope, elem, attrs) ->
-    action = scope.whilePressed
+    action = $parse(attrs.whilePressed)
     intervalPromise = null

     bindWhilePressed = ->
       elem.on('mousedown', beginAction)

     # ...

     beginAction = (e) ->
       e.preventDefault()
-      action()
-      intervalPromise = $interval(action, TICK_LENGTH)
+      tickAction()
+      intervalPromise = $interval(tickAction, TICK_LENGTH)
       bindEndAction()

     endAction = ->
       $interval.cancel(intervalPromise)
       unbindEndAction()

+    tickAction = ->
+      action(scope)
And that’s it. Our end result is a nicely decoupled Angular UI component that can easily be reused across applications. The final code looks like this.

TICK_LENGTH = 15

whilePressed = ($parse, $interval) ->
  restrict: "A"

  link: (scope, elem, attrs) ->
    action = $parse(attrs.whilePressed)
    intervalPromise = null

    bindWhilePressed = ->
      elem.on('mousedown', beginAction)

    bindEndAction = ->
      elem.on('mouseup', endAction)
      elem.on('mouseleave', endAction)

    unbindEndAction = ->
      elem.off('mouseup', endAction)
      elem.off('mouseleave', endAction)

    beginAction = (e) ->
      e.preventDefault()
      tickAction()
      intervalPromise = $interval(tickAction, TICK_LENGTH)
      bindEndAction()

    endAction = ->
      $interval.cancel(intervalPromise)
      unbindEndAction()

    tickAction = ->
      action(scope)

    bindWhilePressed()
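
To use the directive from the template above, it still needs to be registered on a module. A minimal sketch in the same CoffeeScript style (the module name "app" and the explicit dependency annotation are assumptions, not part of the original code):

whilePressed.$inject = ["$parse", "$interval"]

angular.module("app").directive("whilePressed", whilePressed)

The $inject annotation keeps the directive working after minification, since minifiers may rename the $parse and $interval parameters.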

Silver Searcher Tab Completion with Exuberant Ctags

Posted 17 days back at GIANT ROBOTS SMASHING INTO OTHER GIANT ROBOTS - Home

I’m a heavy Vim user and demand speedy navigation between files. I rely on Exuberant Ctags and tag navigation (usually Ctrl-]) to move quickly around the codebase.

There were times, however, when I wasn’t in Vim but wanted to use tags to access information quickly; the most noticeable was time spent in my shell, searching the codebase with ag.

As a zsh user, I was already aware of introducing tab completion by way of compdef and compadd:

_fn_completion() {
  if (( CURRENT == 2 )); then
    compadd foo bar baz
  fi
}

compdef _fn_completion fn

In this example, fn is the binary we want to add tab completion to, and we only attempt to complete after typing fn and then TAB. By checking CURRENT == 2, we’re verifying the position of the cursor as the second field in the command. This will complete with options foo, bar, and baz, and filter the options accordingly as you start typing and hit TAB again.

Now that we understand how to configure tab completion for commands, next up is determining how to extract useful information from the tags file. Here are the first few lines of the file from a project I worked on recently:

==      ../app/models/week.rb   /^  def ==(other)$/;"   f       class:Week
AccessToken     ../app/models/access_token.rb   /^class AccessToken < ActiveRecord::Base$/;"    c
AccessTokensController  ../app/controllers/access_tokens_controller.rb  /^class AccessTokensController < ApplicationController$/;"      c

The tokens we want to use for tab completion are the first set of characters per line, so we can use cut -f 1 path/to/tags to grab the first field. We then use grep -v to ignore autogenerated ctags metadata we don’t care about. With a bit of extra work (like writing stderr to /dev/null in the instance where the tags file doesn’t exist yet), the end result looks like this:

_ag() {
  if (( CURRENT == 2 )); then
    compadd $(cut -f 1 .git/tags tmp/tags 2>/dev/null | grep -v '!_TAG')
  fi
}

compdef _ag ag
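
This assumes a tags file already exists at one of the paths the function reads. If you don’t have one yet, Exuberant Ctags can generate it; for example, to match the paths in the snippet above (adjust to your own layout):

ctags -R -f .git/tags .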

With this in place, we can now ag a project and tab complete from the generated tags file. With ag AccTAB:

$ ag AccessToken
AccessToken             AccessTokensController

And the result:

[ ~/dev/thoughtbot/project master ] ✔ ag AccessToken
app/controllers/access_tokens_controller.rb
1:class AccessTokensController < ApplicationController
15:    @project = AccessToken.find_project(params[:id])

app/models/access_token.rb
1:class AccessToken < ActiveRecord::Base

db/migrate/20140416195446_create_access_tokens.rb
1:class CreateAccessTokens < ActiveRecord::Migration

db/migrate/20140718175701_add_index_on_access_tokens_project_id.rb
1:class AddIndexOnAccessTokensProjectId < ActiveRecord::Migration

spec/models/access_token_spec.rb
3:describe AccessToken, 'Associations' do
7:describe AccessToken, '.find_project' do
12:      result = AccessToken.find_project(access_token.to_param)
20:      expect { AccessToken.find_project('unknown') }.
26:describe AccessToken, '.generate' do
40:describe AccessToken, '#to_param' do
50:    expect(AccessToken.find(access_token.to_param)).to eq(access_token)

Voila! Tab completion with ag based on the tags file.

If you’re using thoughtbot’s dotfiles, you already have this behavior.

Automatic versioning in Xcode with git-describe

Posted 18 days back at zargony.com

Do you manually set a new version number in Xcode every time you release a new version of your app? Or do you use some tool that updates the Info.plist in your project like agvtool or PlistBuddy? Either way, you probably know that it's a pain to keep track of the version number in the project.

I recently spent some time trying out various methods to automatically get the version number from git and put it into the app that Xcode builds. I found that most of them have drawbacks, but in the end I found an approach that I like best. Here's how.

Getting a version number from git

Why should we choose a version number manually if we're using git to manage all source files anyway? Git is a source code management tool that keeps track of every change and is able to uniquely identify every snapshot of the source by its commit id. The most obvious idea would be to simply use the commit ids as the version number of your software, but unfortunately (because of the distributed nature of git) commit ids are not very useful to the human reader: you can't tell at once which one is earlier and which is later.

But git has a very useful command called git describe that extracts a human readable version number from the repository. If you check out a specific tagged revision of your code, git describe will print the tag's name. If you check out any other commit, it will go back through the commit history to find the latest tag and print its name followed by the number of commits since it and the current commit id. This is incredibly useful for exactly describing the currently checked out version (hence the name of this command).

If you additionally use the --dirty option, git describe will append the string '-dirty' if your working directory isn't clean (i.e. you have uncommitted changes). Perfect!

So if you tag all releases of your app (which you should be doing already anyway), it's easy to automatically create a version number with git describe --dirty for any commit, even between releases (e.g. for betas).

Here are some examples of version numbers:

v1.0                   // the release version tagged 'v1.0'
v1.0-8-g1234567        // 8 commits after release v1.0, at commit id 1234567
v1.0-8-g1234567-dirty  // same as above but with unspecified local changes (dirty workdir)
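
To produce version numbers like these, tag a release and ask git to describe the working tree (the tag name here is just an example):

git tag -a v1.0 -m "Release 1.0"
git describe --dirty    # prints "v1.0" when the tagged commit is checked out and clean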

Automatically set the version number in Xcode

You'll find several ideas on how to use automatically generated version numbers in Xcode projects if you search the net. However, most of them have drawbacks that I'd like to avoid. Some use a custom build phase to create a header file containing the version number. Besides the fact that this approach gets more complicated with Swift, it only allows you to display the version number in your app, but doesn't set it in the app's Info.plist. Most libraries like crash reporters or analytics take the version number from Info.plist, so it's useful to have the correct version number in there.

So let's modify the Info.plist using PlistBuddy in a custom build phase. But we don't want to modify the source Info.plist, because that would change a checked-in file and lead to a dirty workdir. We need to modify the Info.plist inside the target build directory (after the ProcessInfoPlistFile build rule has run).

Instructions

  • Add a new run script build phase with the below script.
  • Make sure it runs late during building by moving it to the bottom.
  • Make sure that the list of input files and output files is empty and that "run script only when installing" is turned off.

# This script sets CFBundleVersion in the Info.plist of a target to the version
# as returned by 'git describe'.
# Info: http://zargony.com/2014/08/10/automatic-versioning-in-xcode-with-git-describe
set -e
VERSION=`git describe --dirty |sed -e "s/^[^0-9]*//"`
echo "Updating Info.plist version to: ${VERSION}"
/usr/libexec/PlistBuddy -c "Set :CFBundleVersion ${VERSION}" "${TARGET_BUILD_DIR}/${INFOPLIST_PATH}"
/usr/bin/plutil -convert ${INFOPLIST_OUTPUT_FORMAT}1 "${TARGET_BUILD_DIR}/${INFOPLIST_PATH}"

Thoughts

  • By keeping the list of output files empty, Xcode runs the script every time (otherwise it would detect an existing output file and skip running the script, even though the version number may have changed)
  • Some sed magic strips any leading non-numbers from the version string so that you can use tags like release-1.0 or v1.5.
  • PlistBuddy converts the plist to XML, so we're running plutil at the end to convert it back to the desired output format (binary by default)
  • If you need more information than just the output of git describe, try the excellent "autorevision" script.

Episode #487 - August 8th, 2014

Posted 18 days back at Ruby5

Beautiful API documentation, deprecating paths in Rails mailers, taking RubySteps, meeting Starboard, and the new Heroku Button

Listen to this episode on Ruby5

Sponsored by New Relic

New Relic is _the_ all-in-one web performance analytics product. It lets you manage and monitor web application performance, from the browser down to the line of code. With Real User Monitoring, New Relic users can see browser response times by geographical location of the user, or by browser type.
This episode is sponsored by New Relic

tripit/slate

Beautiful static documentation for your API
tripit/slate

Deprecating *_path in Mailers

"Email does not support relative links since there is no implicit host. Therefore all links inside of emails must be fully qualified URLs. All path helpers are now deprecated."
Deprecating *_path in Mailers

RubySteps

Daily coding practice via email and interactive lessons, every weekday.
RubySteps

Starboard

Starboard is a tool which creates Trello boards for tracking the various tasks necessary when onboarding, offboarding, or crossboarding employees.
Starboard

Heroku Button

One-click deployment of publicly-available applications on GitHub
Heroku Button

Thank You for Listening to Ruby5

Ruby5 is released Tuesday and Friday mornings. To stay informed about and active with this podcast, we encourage you to do one of the following:

Thank You for Listening to Ruby5

Intent to Add

Posted 19 days back at GIANT ROBOTS SMASHING INTO OTHER GIANT ROBOTS - Home

The git add command runs blind, but can be controlled with fine-grained precision using the --patch option. This works great for modified and deleted files, but untracked files do not show up.

$ echo "Hello, World!" > untracked
$ git status --short
?? untracked
$ git add --patch
No changes.

To remedy this, the --intent-to-add option can be used. According to git-add(1), --intent-to-add changes git add’s behavior to:

Record only the fact that the path will be added later. An entry for the path is placed in the index with no content. This is useful for, among other things, showing the unstaged content of such files with git diff and committing them with git commit -a.

What this means is that after running git add --intent-to-add, the specified untracked files will be added to the index, but without content. Now, when git add --patch is run it will show a diff for each previously staged untracked file with every line as an addition. This gives you a chance to look through the file, line by line, before staging it. You can even decide not to stage specific lines by deleting them from the patch using the edit command.

$ echo "Hello, World!" > untracked
$ git status --short
?? untracked
$ git add --intent-to-add untracked
$ git status --short
AM untracked
$ git add --patch
diff --git a/untracked b/untracked
index e69de29..8ab686e 100644
--- a/untracked
+++ b/untracked
@@ -0,0 +1 @@
+Hello, World!
Stage this hunk [y,n,q,a,d,/,e,?]?

In my .gitconfig I alias add --all --intent-to-add to aa and add --patch to ap, which means that for most commits, I type:

$ git aa
$ git ap

Or in gitsh:

& aa
& ap
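
The corresponding .gitconfig entries might look something like this (a sketch; adapt the alias names to taste):

[alias]
  aa = add --all --intent-to-add
  ap = add --patch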

Linked Development: Linked Data from CABI and DFID

Posted 20 days back at RicRoberts :

In March this year we launched the beta version of Linked Development. It’s a linked open data site for CABI and the DFID, which provides data all about international development projects and research.

Linked Development site screenshot

This is a slightly unusual one for us: we’re taking it on after others have worked on it in the past. It’s currently in beta release and we’re hosting it on our PublishMyData service so users can easily get hold of the data they want in both human- and machine-readable formats. So it comes with most of the usual benefits we offer: thematic data browsing, a SPARQL endpoint and Linked Data APIs. And, because the data’s linked, each data point has a unique identifier so users can select and combine data from different data sources to get the exact information they’re after.

We’ve also rebuilt the site’s custom Research Documents API, which was offered by the alpha version of the site, to make it faster and more robust (it’s backward-compatible with the previous version).

Linked Development custom API screenshot

This site illustrates what’s possible for government organisations using linked data: it allows for collective ownership of data and data integration whilst aiming to improve audience reach and data availability. It’s great to see linked data being embraced by an increasing number of public bodies.

DNS to CDN to Origin

Posted 20 days back at GIANT ROBOTS SMASHING INTO OTHER GIANT ROBOTS - Home

Content Distribution Networks (CDNs) such as Amazon CloudFront and Fastly have the ability to “pull” content from their origin server during HTTP requests in order to cache it. They can also proxy POST, PUT, PATCH, DELETE, and OPTIONS HTTP requests, which means they can “front” our web application’s origin like this:

DNS -> CDN -> Origin

Swapping out the concepts for actual services we use, the architecture can look like this:

DNSimple -> CloudFront -> Heroku

Or like this:

DNSimple -> Fastly -> Heroku

Or many other combinations.

Without Origin Pull or an Asset Host

Let’s first examine what it looks like to serve static assets (CSS, JavaScript, font, and image files) from a Rails app without a CDN.

We could point our domain name to our Rails app running on Heroku using a CNAME record (apex domains in cloud environments have their own set of eccentricities):

www.thoughtbot.com -> thoughtbot-production.herokuapp.com

We’ll also need to set the following configuration:

# config/environments/{staging,production}.rb
config.serve_static_assets = true

In this setup, we’ll then see something like the following in our logs:

no asset host

That screenshot is from development mode but the same effect will occur in production:

  • all the application’s requests to static assets will go through the Heroku routing mesh,
  • get picked up by one of our web dynos,
  • passed to one of the Unicorn workers on the dyno,
  • then routed by Rails to the asset

This isn’t the best use of our Ruby processes. They should be reserved for handling real logic. Each process should have the fastest possible response time. Overall response time is affected by waiting for other processes to finish their work.

How Can We Solve This?

AssetSync is a popular approach that we have used in the past with success. We no longer use it because there’s no need to copy all files to S3 during deploy (rake assets:precompile). Copying files across the network is wasteful and slow, and gets slower as the codebase grows. S3 is also not a CDN, does not have edge servers, and therefore is slower than CDN options.

Asset Hosts that Support “Origin Pull”

A better alternative is to use services that “pull” the assets from the origin (Heroku) “Just In Time” the first time they are needed. Services we’ve used include CloudFront and Fastly. Fastly is our usual default due to its amazingly quick cache invalidation. Both have “origin pull” features that work well with Rails' asset pipeline.

Because of the asset pipeline, in production every asset has a hash added to its name. Whenever the file changes, the hash (and therefore the whole filename) changes, so the browser requests the latest version.

The first time a user requests an asset, it will look like this:

GET 123abc.cloudfront.net/application-ql4h2308y.css

A CloudFront cache miss “pulls from the origin” by making another GET request:

GET your-app-production.herokuapp.com/application-ql4h2308y.css

All future GET and HEAD requests to the CloudFront URL within the cache duration will be cached, with no second HTTP request to the origin:

GET 123abc.cloudfront.net/application-ql4h2308y.css

All HTTP requests using verbs other than GET and HEAD proxy through to the origin, which follows the Write-Through Mandatory portion of the HTTP specification.

Making it Work with Rails

We have standard configuration in our Rails apps that makes this work:

# Gemfile
gem "coffee-rails"
gem "sass-rails"
gem "uglifier"

group :staging, :production do
  gem "rails_12factor"
end

# config/environments/{staging,production}.rb:
config.action_controller.asset_host = ENV["ASSET_HOST"] # will look like //123abc.cloudfront.net
config.assets.compile = false
config.assets.digest = true
config.assets.js_compressor = :uglifier
config.assets.version = ENV["ASSETS_VERSION"]
config.static_cache_control = "public, max-age=#{1.year.to_i}"

We don’t have to manually set config.serve_static_assets = true because the rails_12factor gem does it for us, in addition to handling any other current or future Heroku-related settings.

Fastly and other reverse proxy caches respect the Surrogate-Control standard. To get entire HTML pages cached in Fastly, we only need to include the Surrogate-Control header in the response. Fastly will cache the page for the duration we specify, protecting the origin from unnecessary requests and serving the HTML from Fastly’s edge servers.

Caching Entire HTML Pages (Why Use Memcache?)

While setting the asset host is a great start, a DNS to CDN to Origin architecture also lets us cache entire HTML pages. Here’s an example of caching entire HTML pages in Rails with High Voltage:

class PagesController < HighVoltage::PagesController
  before_filter :set_cache_headers

  private

  def set_cache_headers
    response.headers["Surrogate-Control"] = "max-age=#{1.day.to_i}"
  end
end

This will allow us to cache entire HTML pages in the CDN without using a Memcache add-on, which still goes through the Heroku router, then our app’s web processes, then Memcache. This architecture entirely protects the Rails app from HTTP requests that don’t require Ruby logic specific to our domain.

Rack Middleware

If we want to cache entire HTML pages site-wide, we might want to use Rack middleware. Here’s our typical config.ru for a Middleman app:

$:.unshift File.dirname(__FILE__)

require "rack/contrib/try_static"
require "lib/rack_surrogate_control"

ONE_WEEK = 604_800
FIVE_MINUTES = 300

use Rack::Deflater
use Rack::SurrogateControl
use Rack::TryStatic,
  root: "tmp",
  urls: %w[/],
  try: %w[.html index.html /index.html],
  header_rules: [
    [
      %w(css js png jpg woff),
      { "Cache-Control" => "public, max-age=#{ONE_WEEK}" }
    ],
    [
      %w(html), { "Cache-Control" => "public, max-age=#{FIVE_MINUTES}" }
    ]
  ]

run lambda { |env|
  [
    404,
    {
      "Content-Type"  => "text/html",
      "Cache-Control" => "public, max-age=#{FIVE_MINUTES}"
    },
    File.open("tmp/404.html", File::RDONLY)
  ]
}

We build the Middleman app at rake assets:precompile time during deploy to Heroku, as described in Styling a Middleman Blog with Bourbon, Neat, and Bitters. In production, we serve the app using Rack, so we are able to insert middleware to handle the Surrogate-Control header:

module Rack
  class SurrogateControl
    # Cache content in a reverse proxy cache (such as Fastly) for a year.
    # Use Surrogate-Control in response header so cache can be busted after
    # each deploy.
    ONE_YEAR = 31557600

    def initialize(app)
      @app = app
    end

    def call(env)
      status, headers, body = @app.call(env)
      headers["Surrogate-Control"] = "max-age=#{ONE_YEAR}"
      [status, headers, body]
    end
  end
end

CloudFront Setup

If we want to use CloudFront, we use the following settings:

  • “Download” CloudFront distribution
  • “Origin Domain Name” as www.thoughtbot.com (our app’s URL)
  • “Origin Protocol Policy” to “Match Viewer”
  • “Object Caching” to “Use Origin Cache Headers”
  • “Forward Query Strings” to “No (Improves Caching)”
  • “Distribution State” to “Enabled”

As a side benefit, in combination with CloudFront logging, we could replay HTTP requests on the Rails app if we had downtime at the origin for any reason, such as a Heroku platform issue.

Fastly Setup

If we use Fastly instead of CloudFront, there’s no “Origin Pull” configuration we need to do. It will work “out of the box” with our Rails configuration settings.

We often have a rake task in our Ruby apps fronted by Fastly like this:

# Rakefile
task :purge do
  api_key = ENV["FASTLY_KEY"]
  site_key = ENV["FASTLY_SITE_KEY"]
  `curl -X POST -H 'Fastly-Key: #{api_key}' https://api.fastly.com/service/#{site_key}/purge_all`
  puts 'Cache purged'
end

That turns our deployment process into:

git push production
heroku run rake purge --remote production

For more advanced caching and cache invalidation at an object level, see the fastly-rails gem.

Back to the Future

Fastly is really “Varnish as a Service”. Early in its history, Heroku used to include Varnish as a standard part of its “Bamboo” stack. When they decoupled the reverse proxy in their “Cedar” stack, we gained the flexibility of using different reverse proxy caches and CDNs fronting Heroku.

Love is Real

We have been using this stack in production for thoughtbot.com, robots.thoughtbot.com, playbook.thoughtbot.com, and many other apps for almost a year. It’s a stack in real use and is strong enough to consider as a good default architecture.

Give it a try on your next app!

Avoid AngularJS Dependency Annotation with Rails

Posted 21 days back at GIANT ROBOTS SMASHING INTO OTHER GIANT ROBOTS - Home

In AngularJS, it is a common practice to annotate injected dependencies for controllers, services, directives, etc.

For example:

angular.module('exampleApp', [])
  .controller('ItemsCtrl', ['$scope', '$http', function ($scope, $http) {
    $http.get('items/index.json').success(function(data) {
      $scope.items = data;
    });
  }]);

Notice how the annotation of the injected parameters causes duplication ($scope and $http each appear twice). It becomes the responsibility of the developer to ensure that the actual parameters and the annotation are always in sync. If they are not, problems will occur that can cause lost time and head-scratching. As the list of parameters grows, it gets even harder to maintain.

The reason for needing to annotate the injected dependencies is documented in the AngularJS docs under “Dependency Annotation”:

To allow the minifiers to rename the function parameters and still be able to inject right services, the function needs to be annotated…

So, JavaScript minifiers will rename function parameters to something short and ambiguous (usually using just one letter). In that case, AngularJS does not know what service to inject since it tries to match dependencies to the parameter names.

Variable mangling and Rails

When using AngularJS with Rails, developers typically rely on the asset pipeline to handle the minification of JavaScript. The uglifier gem is the default and the most commonly used. With uglifier we are given an option to disable the mangling of variable names. JavaScript will still be minified – whitespace stripped out and code nicely compacted – but the variable and parameter names will remain the same.

To do this, in your Rails project disable the mangle setting for uglifier in the production (and staging) environment config, like so:

# config/environments/production.rb
ExampleApp::Application.configure do
  ...
  config.assets.js_compressor = Uglifier.new(mangle: false)
  ...
end

With that in place, you can write your AngularJS code without needing to annotate the injected dependencies. The previous code can now be written as:

angular.module('exampleApp', [])
  .controller('ItemsCtrl', function ($scope, $http) {
    $http.get('items/index.json').success(function(data) {
      $scope.items = data;
    });
  });

With this trick, there is no duplication of parameter names and no strange array notation.

The catch

Here are a couple of screenshots that show the difference in HTTP response size between mangled and unmangled variable names, on a project of about 500 lines of production AngularJS code:

With full uglifier minification (variables are mangled): [screenshot]

With variable mangling disabled: [screenshot]

Disabling variable name mangling comes at the cost of about 200KB more. The size difference between the two settings can be more significant on larger projects with a lot more JavaScript code. The dilemma is to decide whether the convenience gained during development outweighs the size cost. Keep in mind that HTTP compression of web requests can help reduce the size difference. Benchmarking and comparison are advised on a per-project basis.
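
If compression isn’t already handled upstream by a CDN or web server, one way to gzip responses from a Rails app is Rack middleware; this is a sketch, not part of the comparison above:

# config/application.rb
module ExampleApp
  class Application < Rails::Application
    # Compress responses so the unmangled variable names cost less over the wire
    config.middleware.use Rack::Deflater
  end
end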

What’s next?

If you found this useful, you might also enjoy:

KB Ratings

Posted 21 days back at entp hoth blog - Home

Howdy!

What is this new section that just appeared in the sidebar for KB articles?

Screenshot of the sidebar showing a 'Is this article helpful, thumbs up/down' section

Well, starting today, users can now rate your KB articles! If you are logged in as a regular user, you will see the rating widget, and if you are logged in as staff, you will see the actual rating:

Same section showing the actual rating

Click through and you will be able to see all ratings and comments for the article, as well as the version they are associated with, so that you can keep track of your progress when improving articles:

Or head over to Knowledge Base > Ratings to see all ratings for all articles.

I hope you enjoy the change, and let us know if you have any feedback ;)

Cheers!

Efficient JSON in Swift with Functional Concepts and Generics

Posted 22 days back at GIANT ROBOTS SMASHING INTO OTHER GIANT ROBOTS - Home

A few months ago Apple introduced a new programming language, Swift, that left us excited about the future of iOS and OS X development. People were jumping into Swift with Xcode Beta 1 immediately, and it didn’t take long to realize that parsing JSON, something almost every app does, was not going to be as easy as in Objective-C. Swift being a statically typed language meant we could no longer haphazardly throw objects into typed variables and have the compiler trust us that it was actually the type we claimed it would be. Now, in Swift, the compiler is doing the checking, making sure we don’t accidentally cause runtime errors. This allows us to lean on the compiler to create bug-free code, but means we have to do a bit more work to make it happy. In this post, I discuss a method of parsing JSON APIs that uses functional concepts and Generics to make readable and efficient code.

Request the User Model

The first thing we need is a way to parse the data we receive from a network request into JSON. In the past, we’ve used NSJSONSerialization.JSONObjectWithData(NSData, Int, &NSError) which gives us an optional JSON data type and a possible error if there were problems with the parsing. The JSON object data type in Objective-C is NSDictionary which can hold any object in its values. With Swift, we have a new dictionary type that requires us to specify the types held within. JSON objects now map to Dictionary<String, AnyObject>. AnyObject is used because a JSON value could be a String, Double, Bool, Array, Dictionary or null. When we try to use the JSON to populate a model we’ve created, we’ll have to test that each key we get from the JSON dictionary is of that model’s property type. As an example, let’s look at a user model:

struct User {
  let id: Int
  let name: String
  let email: String
}

Now let’s take a look at what a request and response for the current user might look like:

func getUser(request: NSURLRequest, callback: (User) -> ()) {
  let task = NSURLSession.sharedSession().dataTaskWithRequest(request) { data, urlResponse, error in
    var jsonErrorOptional: NSError?
    let jsonOptional: AnyObject! = NSJSONSerialization.JSONObjectWithData(data, options: NSJSONReadingOptions(0), error: &jsonErrorOptional)

    if let json = jsonOptional as? Dictionary<String, AnyObject> {
      if let id = json["id"] as AnyObject? as? Int { // Currently in beta 5 there is a bug that forces us to cast to AnyObject? first
        if let name = json["name"] as AnyObject? as? String {
          if let email = json["email"] as AnyObject? as? String {
            let user = User(id: id, name: name, email: email)
            callback(user)
          }
        }
      }
    }
  }
  task.resume()
}

After a lot of if-let statements, we finally have our User object. You can imagine that a model with more properties will just get uglier and uglier. Also, we are not handling any errors so if any of the steps don’t succeed, we have nothing. Finally, we would have to write this code for every model we want from the API, which would be a lot of code duplication.

Before we start to refactor, let’s define some typealiases to simplify the JSON types.

typealias JSON = AnyObject
typealias JSONDictionary = Dictionary<String, JSON>
typealias JSONArray = Array<JSON>

Refactoring: Add Error Handling

First, we will refactor our function to handle errors by introducing the first functional programming concept, the Either<A, B> type. This will let us return the user object when everything runs smoothly or an error when it doesn’t. We can implement an Either<A, B> type in Swift like this:

enum Either<A, B> {
  case Left(A)
  case Right(B)
}

We can use Either<NSError, User> as the type we’ll pass to our callback so the caller can handle the successfully parsed User or the error.

func getUser(request: NSURLRequest, callback: (Either<NSError, User>) -> ()) {
  let task = NSURLSession.sharedSession().dataTaskWithRequest(request) { data, urlResponse, error in
    // if the response returned an error send it to the callback
    if let err = error {
      callback(.Left(err))
      return
    }

    var jsonErrorOptional: NSError?
    let jsonOptional: JSON! = NSJSONSerialization.JSONObjectWithData(data, options: NSJSONReadingOptions(0), error: &jsonErrorOptional)

    // if there was an error parsing the JSON send it back
    if let err = jsonErrorOptional {
      callback(.Left(err))
      return
    }

    if let json = jsonOptional as? JSONDictionary {
      if let id = json["id"] as AnyObject? as? Int {
        if let name = json["name"] as AnyObject? as? String {
          if let email = json["email"] as AnyObject? as? String {
            let user = User(id: id, name: name, email: email)
            callback(.Right(user))
            return
          }
        }
      }
    }

    // if we couldn't parse all the properties then send back an error
    callback(.Left(NSError()))
  }
  task.resume()
}

Now the function calling our getUser can switch on the Either and do something with the user or display the error.

getUser(request) { either in
  switch either {
  case let .Left(error):
    // display error message

  case let .Right(user):
    // do something with user
  }
}

We will simplify this a bit by assuming that the Left will always be an NSError. Instead let’s use a different type, Result<A>, which will either hold the value we are looking for or an error. Its implementation might look like this:

enum Result<A> {
  case Error(NSError)
  case Value(A)
}

Replacing Either with Result will look like this:

func getUser(request: NSURLRequest, callback: (Result<User>) -> ()) {
  let task = NSURLSession.sharedSession().dataTaskWithRequest(request) { data, urlResponse, error in
    // if the response returned an error send it to the callback
    if let err = error {
      callback(.Error(err))
      return
    }

    var jsonErrorOptional: NSError?
    let jsonOptional: JSON! = NSJSONSerialization.JSONObjectWithData(data, options: NSJSONReadingOptions(0), error: &jsonErrorOptional)

    // if there was an error parsing the JSON send it back
    if let err = jsonErrorOptional {
      callback(.Error(err))
      return
    }

    if let json = jsonOptional as? JSONDictionary {
      if let id = json["id"] as AnyObject? as? Int {
        if let name = json["name"] as AnyObject? as? String {
          if let email = json["email"] as AnyObject? as? String {
            let user = User(id: id, name: name, email: email)
            callback(.Value(user))
            return
          }
        }
      }
    }

    // if we couldn't parse all the properties then send back an error
    callback(.Error(NSError()))
  }
  task.resume()
}
getUser(request) { result in
  switch result {
  case let .Error(error):
    // display error message

  case let .Value(user):
    // do something with user
  }
}

Not a big change but let’s keep going.

Refactoring: Eliminate Type Checking Tree

Next, we will get rid of the ugly JSON parsing by creating separate JSON parsers for each type. We only have a String, Int, and Dictionary in our object so we need three functions to parse those types.

func JSONString(object: JSON?) -> String? {
  return object as? String
}

func JSONInt(object: JSON?) -> Int? {
  return object as? Int
}

func JSONObject(object: JSON?) -> JSONDictionary? {
  return object as? JSONDictionary
}

Now the JSON parsing will look like this:

if let json = JSONObject(jsonOptional) {
  if let id = JSONInt(json["id"]) {
    if let name = JSONString(json["name"]) {
      if let email = JSONString(json["email"]) {
        let user = User(id: id, name: name, email: email)
      }
    }
  }
}

Using these functions we’ll still need a bunch of if-let syntax. The functional programming concepts Monads, Applicative Functors, and Currying will help to condense this parsing. First, let’s look at the Maybe Monad which is similar to Swift optionals. Monads have a bind operator which, when used with optionals, allows us to bind an optional with a function that takes a non-optional and returns an optional. If the first optional is .None then it returns .None, otherwise it unwraps the first optional and applies the function to it.

infix operator >>> { associativity left precedence 150 }

func >>><A, B>(a: A?, f: A -> B?) -> B? {
  if let x = a {
    return f(x)
  } else {
    return .None
  }
}

In other functional languages, >>= is used for bind; however, in Swift that operator is used for bitshifting so we will use >>> instead. Applying this to the JSON parsing we get:

if let json = jsonOptional >>> JSONObject {
  if let id = json["id"] >>> JSONInt {
    if let name = json["name"] >>> JSONString {
      if let email = json["email"] >>> JSONString {
        let user = User(id: id, name: name, email: email)
      }
    }
  }
}

Then we can remove the optional parameters from our parsers:

func JSONString(object: JSON) -> String? {
  return object as? String
}

func JSONInt(object: JSON) -> Int? {
  return object as? Int
}

func JSONObject(object: JSON) -> JSONDictionary? {
  return object as? JSONDictionary
}

Functors have an fmap operator for applying functions to values wrapped in some context. Applicative Functors also have an apply operator for applying wrapped functions to values wrapped in some context. The context here is an Optional which wraps our value. This means that we can combine multiple optional values with a function that takes multiple non-optional values. If all values are present, .Some, then we get a result wrapped in an optional. If any of the values are .None, we get .None. We can define these operators in Swift like this:

infix operator <^> { associativity left } // Functor's fmap (usually <$>)
infix operator <*> { associativity left } // Applicative's apply

func <^><A, B>(f: A -> B?, a: A?) -> B? {
  if let x = a {
    return f(x)
  } else {
    return .None
  }
}

func <*><A, B>(f: (A -> B)?, a: A?) -> B? {
  if let x = a {
    if let fx = f {
      return fx(x)
    }
  }
  return .None
}

Before we put it all together, we will need to manually curry our User’s init since Swift doesn’t support auto-currying. Currying means that if we give a function fewer parameters than it takes, it will return a function that takes the remaining parameters. Our User model will now look like this:

struct User {
  let id: Int
  let name: String
  let email: String

  static func create(id: Int)(name: String)(email: String) -> User {
    return User(id: id, name: name, email: email)
  }
}
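
To see what the curried create gives us, here is a fully applied call; each parameter list can also be supplied one at a time, which is exactly what the applicative operators below will do (the values are made up for illustration):

let user = User.create(1)(name: "Ada")(email: "ada@example.com")
// equivalent to User(id: 1, name: "Ada", email: "ada@example.com")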

Putting it all together, our JSON parsing now looks like this:

if let json = jsonOptional >>> JSONObject {
  let user = User.create <^>
              json["id"]    >>> JSONInt    <*>
              json["name"]  >>> JSONString <*>
              json["email"] >>> JSONString
}

If any of our parsers returns .None then user will be .None. This looks much better, but we’re not done yet.

Now, our getUser function looks like this:

func getUser(request: NSURLRequest, callback: (Result<User>) -> ()) {
  let task = NSURLSession.sharedSession().dataTaskWithRequest(request) { data, urlResponse, error in
    // if the response returned an error send it to the callback
    if let err = error {
      callback(.Error(err))
      return
    }

    var jsonErrorOptional: NSError?
    let jsonOptional: JSON! = NSJSONSerialization.JSONObjectWithData(data, options: NSJSONReadingOptions(0), error: &jsonErrorOptional)

    // if there was an error parsing the JSON send it back
    if let err = jsonErrorOptional {
      callback(.Error(err))
      return
    }

    if let json = jsonOptional >>> JSONObject {
      let user = User.create <^>
                  json["id"]    >>> JSONInt    <*>
                  json["name"]  >>> JSONString <*>
                  json["email"] >>> JSONString
      if let u = user {
        callback(.Value(u))
        return
      }
    }

    // if we couldn't parse all the properties then send back an error
    callback(.Error(NSError()))
  }
  task.resume()
}

Refactoring: Remove Multiple Returns with Bind

Notice that we’re calling callback four times in the previous function. If we were to forget one of the return statements, we could introduce a bug. We can eliminate this potential bug and clean up this function further by first breaking up this function into 3 distinct parts: parse the response, parse the data into JSON, and parse the JSON into our User object. Each of these steps takes one input and returns the next step’s input or an error. This sounds like a perfect case for using bind with our Result type.

The parseResponse function will need the data and the status code of the response. The iOS API only gives us NSURLResponse and keeps the data separate, so we will make a small struct to help out here:

struct Response {
  let data: NSData
  let statusCode: Int = 500

  init(data: NSData, urlResponse: NSURLResponse) {
    self.data = data
    if let httpResponse = urlResponse as? NSHTTPURLResponse {
      statusCode = httpResponse.statusCode
    }
  }
}

Now we can pass our parseResponse function a Response and check the response for errors before handing back the data.

func parseResponse(response: Response) -> Result<NSData> {
  let successRange = 200..<300
  if !contains(successRange, response.statusCode) {
    return .Error(NSError()) // customize the error message to your liking
  }
  return .Value(response.data)
}

The next functions will require us to transform an optional to a Result type so let’s make one quick abstraction before we move on.

func resultFromOptional<A>(optional: A?, error: NSError) -> Result<A> {
  if let a = optional {
    return .Value(a)
  } else {
    return .Error(error)
  }
}

Next up is our data to JSON function:

func decodeJSON(data: NSData) -> Result<JSON> {
  var jsonErrorOptional: NSError?
  let jsonOptional: JSON! = NSJSONSerialization.JSONObjectWithData(data, options: NSJSONReadingOptions(0), error: &jsonErrorOptional)
  return resultFromOptional(jsonOptional, NSError()) // use the error from NSJSONSerialization or a custom error message
}

Then, we add our JSON to model decoding on the model itself:

struct User {
  let id: Int
  let name: String
  let email: String

  static func create(id: Int)(name: String)(email: String) -> User {
    return User(id: id, name: name, email: email)
  }

  static func decode(json: JSON) -> Result<User> {
    let user = JSONObject(json) >>> { dict in
      User.create <^>
          dict["id"]    >>> JSONInt    <*>
          dict["name"]  >>> JSONString <*>
          dict["email"] >>> JSONString
    }
    return resultFromOptional(user, NSError()) // custom error message
  }
}

Before we combine it all, let’s extend bind, >>>, to also work with the Result type:

func >>><A, B>(a: Result<A>, f: A -> Result<B>) -> Result<B> {
  switch a {
  case let .Value(x):     return f(x)
  case let .Error(error): return .Error(error)
  }
}

And add a custom initializer to Result:

enum Result<A> {
  case Error(NSError)
  case Value(A)

  init(_ error: NSError?, _ value: A) {
    if let err = error {
      self = .Error(err)
    } else {
      self = .Value(value)
    }
  }
}

Now, we combine all these functions with the bind operator.

func getUser(request: NSURLRequest, callback: (Result<User>) -> ()) {
  let task = NSURLSession.sharedSession().dataTaskWithRequest(request) { data, urlResponse, error in
    let responseResult = Result(error, Response(data: data, urlResponse: urlResponse))
    let result = responseResult >>> parseResponse
                                >>> decodeJSON
                                >>> User.decode
    callback(result)
  }
  task.resume()
}

Wow, even writing this again, I’m excited by this result. You might think, “This is really cool. Can’t wait to use it!”, but we’re not done yet!

Refactoring: Type Agnostic using Generics

This is great but we still have to write this for every model we want to get. We can use Generics to make this completely abstracted.

We introduce a Decodable protocol and tell our function that the type we want back must conform to that protocol. The protocol looks like this:

protocol Decodable {
  class func decode(json: JSON) -> Result<Self>
}

Now make User conform:

struct User: Decodable {
  let id: Int
  let name: String
  let email: String

  static func create(id: Int)(name: String)(email: String) -> User {
    return User(id: id, name: name, email: email)
  }

  static func decode(json: JSON) -> Result<User> {
    let user = User.create <^>
                json["id"]    >>> JSONInt    <*>
                json["name"]  >>> JSONString <*>
                json["email"] >>> JSONString
    return resultFromOptional(user, NSError()) // custom error message
  }
}

Our final performRequest function now looks like this:

func performRequest<A: Decodable>(request: NSURLRequest, callback: (Result<A>) -> ()) {
  let task = NSURLSession.sharedSession().dataTaskWithRequest(request) { data, urlResponse, error in
    let responseResult = Result(error, Response(data: data, urlResponse: urlResponse))
    let result = responseResult >>> parseResponse
                                >>> decodeJSON
                                >>> A.decode
    callback(result)
  }
  task.resume()
}
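
Calling it for any Decodable model then looks something like this; the callback body here is an illustrative assumption, not from the original code:

// `request` is an NSURLRequest pointing at the current-user endpoint
performRequest(request) { (result: Result<User>) in
  switch result {
  case let .Error(error):
    println("Request failed: \(error)")   // display error message
  case let .Value(user):
    println("Hello, \(user.name)")        // do something with user
  }
}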

Further Learning

If you are curious about functional programming or any of the concepts discussed in this post, check out Haskell and specifically this post from the Learn You a Haskell book. Also, check out Pat Brisbin’s post about options parsing using the Applicative.

Episode #486 - August 5th, 2014

Posted 22 days back at Ruby5

We React to some RubyGems Legal Stuff, Cover Unicorns and GitHub, and Serialize Matz's thoughts on the GIL in this episode of Ruby5.

Listen to this episode on Ruby5

Sponsored by CodeShip.io

Codeship is a hosted Continuous Delivery Service that just works.

Set up Continuous Integration in a few steps and automatically deploy when all your tests have passed. Integrate with GitHub and BitBucket and deploy to cloud services like Heroku and AWS, or your own servers.

Visit http://codeship.io/ruby5 and sign up for free. Use discount code RUBY5 for a 20% discount on any plan for 3 months.

Also check out the Codeship Blog!

CodeShip.io

ReactJS and Rails

If you want to take a look at React, the new JavaScript library released by Facebook, and how to use it with Rails, Richard Nystrom wrote up a great tutorial which walks through the basics of writing your first app.
ReactJS and Rails

Rubygems.org - The Legal Stuff

Nick Quaranto started a thread over on the rubygems.org Google Group, highlighting the recent Privacy Policies and Code of Conduct posted by NPM (that’s Node’s package manager). And, overall, just putting a call out for help. Can you help him?
Rubygems.org - The Legal Stuff

Coverband

Coverband is a gem released at the end of last year by Dan Mayer, who works at Living Social. It helps you generate “Production Ruby Code Coverage”, and it's basically a middleware which you put on your production server to discover code which isn't being run, so that you can delete it.
Coverband

Unicorn and GitHub

On the Unicorn mailing list (that’s the Rack HTTP server), maintainer Eric Wong didn't like the suggestion that Unicorn development be moved to GitHub.
Unicorn and GitHub

Oat - Another way to do JSON Serialization

Ismael Celis dropped me a line today about his API serialization library for Ruby called Oat. Oat uses serializer classes that kind of use the best of all the techniques for defining how JSON is serialized.
Oat - Another way to do JSON Serialization

Matz on the Ruby GIL

Matz put together his plans on the future of the Ruby GIL last week in 140 characters or less. Matz’s plan is to add actors, add a warning when developers use Threads directly, and then… finally… Remove the interpreter lock!
Matz on the Ruby GIL

Sponsored by Top Ruby Jobs

Keplar Agency is looking for a Rails developer in Amsterdam. Optoro is looking for a Ruby developer in Washington DC. SocialChorus is looking for a Chief Software Architect in San Francisco, CA. Smashing Boxes is looking for a Rails developer in Durham, NC or remote.
Top Ruby Jobs

Thank You for Listening to Ruby5

Ruby5 is released Tuesday and Friday mornings. To stay informed about and active with this podcast, we encourage you to do one of the following:

Thank You for Listening to Ruby5

Let's Talk About Dials

Posted 23 days back at GIANT ROBOTS SMASHING INTO OTHER GIANT ROBOTS - Home

There are often arguments about dials in dashboard design. Dials, occasionally called gauges, are a way to represent a single data point in a range. Think of a speedometer in a car.

It’s easy to glance down and get the current speed on a car dashboard while driving and navigating traffic. Dials work well in this context: we only care about the current speed and the direction the speed is going. When we need to slow down, we press down on the brake and can see our needle moving downwards. The speedometer tells us if we’re slowing down fast enough, and if we’re not, we adjust our braking so the needle moves at the right pace.

When we’re monitoring our applications, our dashboards tracking application metrics should work the same way. We should be able to know what’s wrong with our app at a glance and respond quickly. Because people often want to recreate a familiar analog interface on the computer screen, dials are popular in dashboards.

However, dials don’t work the same way on your computer screen. Let’s look at Google Chart’s “gauge” visualization.

These gauges are mostly grey with some red and orange to represent critical and warning ranges. If our metric goes into these ranges, we should start reacting.

But our eyes are attracted to the colors of greatest contrast: the red and orange blocks. Every time we look at a dashboard with these dials, we see the areas of bright colors, even when nothing is critical. This may not be a big deal with one gauge, but dashboards display many other metrics. Things get messy when we have several visualizations on a page.

At one glance, do these dials:

Look much different than these?:

The second set of dials shows some values that are just barely in the critical and warning zones. What if you had 10 other graphs on your dashboard? Would you be able to notice a critical metric in one glance?

Alternatives

A popular solution is to eliminate the extra “ink” around the dial and fill the current value up to its appropriate pixel value.

We get a sleeker dial that doesn’t need labels and dashes. If the designer chooses, they can change the color of the dial to reflect the state of the value. For example, if the current value is critical, the dial turns red.

This display is an improvement on the analog dial design for our web dashboards. However, if we need to display several single values, we still lose a lot of screen real estate to the circular shape. For this, we can use a bullet chart, a dial alternative designed by Stephen Few.

We can display three metrics in a lot less space, and we can see where those current values land in their ranges.

Source: Bullet Graph Design Spec (original image edited)

Few avoids common “status” colors like red, green, and yellow to reduce the number of contrasting, attention-grabbing colors on the page and to account for color blind users. When a value hits the critical range, he suggests an external indicator.

The downside is that this type of visualization requires a lot of ink to be effective. In my experience, they often require explanations for the people who are viewing them. If you want to display a current value with context, sometimes a simple line graph with some annotations does the trick.

While this metric isn’t in a critical range yet, we can see that it’s rapidly approaching the critical value, and we should probably take action to resolve that.

Dials are a great way to display analog information, but they don’t work the same way in your dashboard application. Depending on the situation, some of these examples make better alternatives.

If you’re interested in learning more about maximizing the effectiveness of your dashboards, I highly recommend the writings of Stephen Few. I will also be teaching a workshop on D3.js for eurucamp in Berlin, Germany at the beginning of August.

Running WeeChat on a Server for IRC Backlogs

Posted 24 days back at GIANT ROBOTS SMASHING INTO OTHER GIANT ROBOTS - Home

In a previous blog post we walked you through setting up and configuring the text based chat client WeeChat for use with the Slack IRC gateway.

The blog post also explained how to set up Slacklog - a WeeChat script for retrieving backlog from the Slack API. This works great, but if you want more backlog, or want backlog for channels on other IRC servers, a good solution is to run WeeChat on a server. By doing this you won’t miss anything that happens in the channels you are connected to, even when you’re not at your computer or connected to the Internet. When you reconnect you can go through messages that were meant for you or that you’re interested in.

This tutorial assumes that you have a server running Debian 7.0 (Wheezy) that you can access over SSH using an account with sudo privileges. If you’re using a different setup you may have to alter the instructions. If you don’t have a server but would like to set one up, I recently wrote a blog post explaining how to set up a basic Debian server.

Setting up WeeChat

Connect to your server using ssh:

ssh YOURIP

To get the latest version of WeeChat you have to install it from the Wheezy backports. If you haven’t already, add the Wheezy backports to your /etc/apt/sources.list file:

sudo sh -c 'echo deb http://http.debian.net/debian wheezy-backports main >> /etc/apt/sources.list'

Update your package index and install WeeChat using apt-get:

sudo apt-get update
sudo apt-get -t wheezy-backports install weechat-curses

You can now start WeeChat using the weechat-curses command:

weechat-curses

To exit type /quit and press Enter.

If you disconnect from your server by typing Ctrl+D, you can now reconnect to the server and run weechat-curses by running this command locally:

ssh YOURIP -t weechat-curses

When you pass a command to ssh it will run the command on the remote machine instead of running your normal shell. When it does so, it also skips the allocation of a pseudo-tty. This works fine for one-off, non-interactive commands, but since WeeChat is an interactive program it needs a tty to connect to. The -t flag in the above command forces the allocation of a pseudo-tty.

When you exit out of WeeChat, ssh will automatically disconnect from the server.

Great! We can now connect to and run WeeChat on our server in one command. One problem with this approach is that WeeChat won’t continue to run after you disconnect. This means that you will miss anything that happens in the channels you were connected to since you’re simply no longer connected. Always being connected means that you will have access to the full backlog.

To fix this we will have to run WeeChat using a terminal multiplexer (like GNU Screen or tmux) that has support for running programs in background sessions that can be detached from and reattached to.

Setting up GNU Screen

We will use GNU Screen in this tutorial mainly because a lot of us here at thoughtbot use tmux locally and nesting tmux sessions can cause problems. If you’re not using tmux locally, using tmux on your server is perfectly fine.

Reconnect to your server and install GNU Screen using apt-get:

ssh YOURIP
sudo apt-get install screen

If you disconnect from your server you can now reconnect and run weechat-curses in a screen session using this command:

ssh YOURIP -t screen -D -RR weechat weechat-curses

This command will log in to your server and look for a screen session named “weechat”. If a session with that name exists, it will reattach to it. If not it will create a new session and run the weechat-curses command inside of it.

To disconnect from the server without quitting WeeChat press Ctrl+A and then D.

Awesome! We can now enjoy the upside of always being online even when we’re not.

There is still one problem though. When you’re done for the day, close your laptop and head home, your SSH connection will eventually time out. When you get home and open up your laptop, the terminal running the ssh command will be frozen, and the only way to quit it is to press ~ followed by . (the SSH escape sequence). To continue chatting you’ll then have to reconnect. The same thing happens if you suddenly lose your Internet connection for a few minutes, and it can be really annoying.

Wouldn’t it be simpler if you were automatically reconnected as soon as your Internet connection came back?

Setting up Mosh

Mosh stands for “mobile shell” and is a remote terminal application that supports roaming and intermittent connectivity. In our case Mosh will replace ssh. Under the hood, Mosh still uses ssh to log in to the server, where it starts a mosh-server process that your client then communicates with over UDP.

Mosh connections don’t freeze when you lose your Internet connection; instead they show a timer telling you how long ago the client last got a response from the server. As soon as you come back online, it reconnects and you can continue chatting.

Install Mosh locally using your favorite package manager.

On Debian:

sudo apt-get install mosh

On Arch Linux:

sudo pacman -S mosh

On Mac OS X (using Homebrew):

brew install mobile-shell

Installation instructions for other systems can be found on the Mosh website.

Reconnect to your server and install Mosh using apt-get:

ssh YOURIP
sudo apt-get install mosh

By default, mosh-server binds to a UDP port between 60000 and 61000. If you’re running a firewall, you have to open these ports to be able to connect. If you are using UFW to manage your firewall you can run this command to open the ports:

sudo ufw allow 60000:61000/udp

If you disconnect from your server, you can now reconnect to and attach to your WeeChat screen session using mosh:

mosh YOURIP -- screen -D -RR weechat weechat-curses

There we go! All set up.
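
To save some keystrokes, you could wrap that command in a shell alias. This is just a convenience sketch, assuming bash or zsh; the alias name irc is made up:

alias irc='mosh YOURIP -- screen -D -RR weechat weechat-curses'

Add it to your ~/.bashrc or ~/.zshrc, and a single irc will drop you straight back into your session.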

Connecting to servers and joining channels

You can now connect to an IRC server using WeeChat:

/connect chat.freenode.net

Once connected you can join your favorite channel:

/join #ruby
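
Since this WeeChat instance now runs around the clock, you may also want it to reconnect and rejoin on its own after a restart. As a rough sketch using WeeChat’s standard irc options (the server name freenode here is just a label we chose):

/server add freenode chat.freenode.net -autoconnect
/set irc.server.freenode.autojoin "#ruby"
/save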

For more information on how to use WeeChat check out WeeChat for Slack’s IRC Gateway.

Ship You a Haskell

Posted 26 days back at GIANT ROBOTS SMASHING INTO OTHER GIANT ROBOTS - Home

A few weeks ago, we quietly shipped a new feature here on the Giant Robots blog: comments. If you hover over a paragraph or code block, a small icon should appear to the right, allowing you to comment on that section of any article.

comments

With the release of this feature, we can now say something we’ve hoped to say for some time: we shipped Haskell to production! In this post, I’ll outline what we shipped, how it’s working out for us, and provide some solutions to the various hurdles we encountered.

Architecture

Comments are handled by a service “installed” on the blog by including a small snippet of JavaScript on article pages. The separate service is called Carnival and can be found on GitHub. It consists of a RESTful API for adding, retrieving, and editing comments and a JavaScript front-end to handle the UX. The back-end portion is written in Haskell using the Yesod web framework.

Why Haskell?

The answer to this question depends on who you ask. A number of us really like Haskell as a language and look for any excuse to use it. This may stem from its safety, its quality of abstraction, the joy of development, or any number of other positives we feel the language brings. Others of us have only recently been exposed to Haskell and would love an actively developed project we could pair on from time to time, to get more exposure to a language so unlike what we’re used to.

Ultimately, we want to know if Haskell is something we can build and scale for client projects. If a client comes along where Haskell may be a good fit, we need to be confident that, beyond writing the code, we can do everything else that’s needed to deploy it to production.

Development Process

During the development of this service, much of what is said about the benefits of type safety when it comes to rapidly producing correct code proved true. The bulk of the API was written in about a day, and subsequent iterations and refactorings went smoothly using a combination of Type Driven Development (TyDD) and acceptance tests. For programmers used to interpreted languages, the long compiles were frustrating, and we did have some small battles with Cabal Hell. That said, the introduction of sandboxes and freezing is a definite improvement over my own previous experiences with dependency management in Haskell.
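
For reference, the sandbox-and-freeze workflow looks roughly like this with a recent cabal-install (1.20 or later); these are cabal’s standard commands, not anything specific to Carnival:

cabal sandbox init                  # keep dependencies local to this project
cabal install --only-dependencies   # build them into the sandbox
cabal freeze                        # pin exact versions in cabal.config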

Writing an API-only service meant working with a lot of JSON. Doing this via the aeson library was concise and provided safe (de)serialization with very little validation logic required on our part. A large number of validations that we would typically write by hand in a Rails API service are handled by virtue of the type system.
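
To give a feel for this, here’s a hedged sketch (not Carnival’s actual model; the Comment fields are made up) of how generically derived aeson instances give you parsing and a layer of validation nearly for free:

{-# LANGUAGE DeriveGeneric     #-}
{-# LANGUAGE OverloadedStrings #-}

import Data.Aeson (FromJSON, ToJSON, decode, encode)
import Data.Text (Text)
import GHC.Generics (Generic)

-- Hypothetical model, for illustration only.
data Comment = Comment
    { commentAuthor :: Text
    , commentBody   :: Text
    } deriving (Show, Generic)

instance ToJSON Comment
instance FromJSON Comment

main :: IO ()
main = do
    -- decode returns Maybe Comment: malformed or ill-typed JSON yields
    -- Nothing, so invalid input never reaches our handler code.
    print (decode "{\"commentAuthor\":\"pat\",\"commentBody\":\"nice post\"}" :: Maybe Comment)
    print (encode (Comment "pat" "nice post"))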

Libraries exist for most of the things we needed, like markdown, gravatar, and Heroku support. One notable exception was authentication via OAuth 2.0, which we needed because we wanted to use our own Upcase as the provider. While Yesod has great support for authentication in general, and there is a plugin for OAuth 1.0, the only thing we could find for OAuth 2.0 was an out-of-date gist. Luckily, it wasn’t much trouble to move that gist into a proper package, make sure it worked, and publish it ourselves. Even though Yesod didn’t ship with this feature out of the box, the modular way in which authentication logic is handled allowed us to add it as a separate, isolated package.

Deployment

Part of this experiment was to develop in Haskell using as much of our normal process as possible. That meant deploying to Heroku. Because a clean compilation of a Haskell application (especially one using libraries like Yesod or Pandoc) can take some time, Heroku’s 15-minute build limit became an issue.

Before you mention it: yes, this pain point could’ve been avoided with a binary deployment strategy. We could have compiled locally in a VM (to match Heroku’s architecture) and then copied the resulting binary to the Heroku instance. But that’s not our normal process. Developers should be able to git push heroku master and have it Just Work.

And in theory, it could just work. Builds are largely cached so it’s only the first one that’s likely to go beyond 15 minutes. To mitigate this, the most popular Haskell buildpack supports a service called Anvil for running that first build in an environment with no time limit. After many attempts and vague error messages, we had to give up on these Anvil-based deployments. We were on our own.

In the end, we were never able to come in under 15 minutes, even after upgrading to a PX dyno. Our Heroku representative was able to increase our app’s time limit to 30 minutes, and so far we’ve been able to stay under that. I wouldn’t consider this typical, though: I suspect our dependency on pandoc (and its highlighting engine) makes compilation take longer than it would for most Yesod applications. I recommend trying the standard buildpack and hoping to come in under 15 minutes before attempting to subvert it.

Once we were successfully on staging, we noticed another issue: users were getting logged out randomly. It turns out the default session backend in Yesod stores the key for its cookie-based sessions in a file. This has a number of downsides in a Heroku deployment. First, the file system is ephemeral, so any time a dyno restarts, all sessions are invalidated. Second, we had two dynos running, which meant that if you logged in on one dyno but a subsequent request was routed to the other, you’d be logged out. To support this scenario, we defined an alternative backend that reads the key from an environment variable, which we can set to the same value on every dyno.
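
A rough sketch of that idea is below. This isn’t Carnival’s exact code: the environment variable name is made up, and the helpers used here (Web.ClientSession.initKey, plus clientSessionDateCacher and clientSessionBackend from yesod-core) should be checked against the library versions you’re actually using.

import qualified Data.ByteString.Char8 as C8
import           System.Environment (getEnv)
import qualified Web.ClientSession as CS
import           Yesod.Core (SessionBackend, clientSessionBackend, clientSessionDateCacher)

-- Build a client-session backend whose encryption key comes from an
-- environment variable instead of client_session_key.aes, so every dyno
-- (and every restart) shares the same key.
envSessionBackend :: IO SessionBackend
envSessionBackend = do
    raw <- getEnv "SESSION_KEY"  -- hypothetical variable name; holds the raw key bytes
    key <- either error return (CS.initKey (C8.pack raw))
    (getCachedDate, _stopCache) <- clientSessionDateCacher (120 * 60)  -- session lifetime in seconds
    return (clientSessionBackend key getCachedDate)

In the site’s Yesod instance, makeSessionBackend would then return this backend (wrapped in Just) instead of the default file-based one.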

More Haskell?

We definitely consider this experiment a success. We solved a number of deployment problems which should make our next Haskell project (which is already in the works) go that much more smoothly. All in all, we found the language well-suited to solving the kinds of problems we solve, thanks in no small part to Yesod and the great ecosystem of available libraries.

Episode #485 - August 1st, 2014

Posted 26 days back at Ruby5

Learning to deploy with Capistrano, memoization patterns, better APIs with Mocaroni, middleman-presentation, and RubyConf 2014 all in this episode of the Ruby5!

Listen to this episode on Ruby5

Sponsored by New Relic

New Relic is _the_ all-in-one web performance analytics product. It lets you manage and monitor web application performance, from the browser down to the line of code. With Real User Monitoring, New Relic users can see browser response times by geographical location of the user, or by browser type.
This episode is sponsored by New Relic

Capistrano Tutorial

Need to get started with capistrano? This step-by-step tutorial will demystify the process and get you off on the right foot.
Capistrano Tutorial

Memoization Patterns

Curious about all the wonderful ways to memoize? This blog post from Justin Weiss will walk you through some patterns and a gem!
Memoization Patterns

Mocaroni

Mocaroni is a new service that lets you stub out and collaborate on an API!
Mocaroni

middleman-presentation

Middleman is awesome for building static websites, and now middleman-presentation lets you easily make HTML-based presentations, too!
middleman-presentation

RubyConf 2014

RubyConf is headed to San Diego this year and tickets are now on sale!
RubyConf 2014