Episode #439 - February 11th, 2014

Posted 2 months back at Ruby5

In this episode we cover structuring Sinatra apps with Trevi, REST clients with ActiveRestClient, supporting 12-Factor Apps with ENV! (ENV_BANG), using Foreman to manage services, and a new DSL for creating objects with MooseX.

Listen to this episode on Ruby5

Sponsored by TopRubyJobs

If you're looking for a top Ruby job or for top Ruby talent, then you should check out Top Ruby Jobs. Top Ruby Jobs is a website dedicated to the best jobs available in the Ruby community.
This episode is sponsored by Top Ruby Jobs

Structuring Sinatra Apps with Trevi

Last week, Alex MacCaw posted an article on the Sourcing.io blog which focused on a very opinionated way to develop and structure Sinatra applications. He’s even released a companion gem called Trevi that bundles all of this knowledge up and helps you follow along.
Structuring Sinatra Apps with Trevi

ActiveRestClient

ActiveRestClient is a gem for accessing REST services in an ActiveRecord style. It aims to be a more flexible alternative to ActiveResource: it allows things like setting different endpoints for different REST actions, and has additional features like built-in caching.
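Here's a sketch of the style the gem's README encourages; the service URL and attribute below are hypothetical:

require 'active_rest_client'

class Person < ActiveRestClient::Base
  base_url "https://api.example.com/v1"  # hypothetical endpoint

  get :all, "/people"
  get :find, "/people/:id"
  post :create, "/people"
end

# Person.find(1) issues GET /people/1 and returns an object
# whose attributes mirror the JSON response.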
ActiveRestClient

ENV!

ENV! is an alternative to dotenv for supporting 12-Factor Apps, one that provides a friendlier onboarding experience for a new application. Where dotenv just loads whatever is in your .env file into ENV, ENV! will fail loudly if required variables are undefined or missing, and gives you the opportunity to provide helpful messages in that case.
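A minimal sketch of the idea, assuming the env_bang gem's config-block style (the variable names are hypothetical):

# config/env.rb
require 'env_bang'

ENV!.config do
  use :APP_HOST
  use :STRIPE_SECRET_KEY,
      "Get this from your Stripe dashboard"  # message shown when missing
end

# Boot fails loudly if either variable is undefined,
# instead of limping along with a nil:
ENV!['APP_HOST']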
ENV!

Using Foreman to Manage services

Maurício Linhares published an article last week detailing how to use Foreman to isolate and manage application development on OS X machines. He points out that while installing Postgres, for example, is a good thing, you don’t necessarily need it running all the time. The same is true for other application dependencies, like Redis.
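The approach boils down to a Procfile listing your background services so they only run while Foreman does. A sketch, with paths assumed from a typical Homebrew install:

# Procfile.dev -- paths are assumptions; check your own install
postgres: postgres -D /usr/local/var/postgres
redis: redis-server /usr/local/etc/redis.conf

Running foreman start -f Procfile.dev brings both up for a work session, and a single Ctrl-C shuts them all down together.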
Using Foreman to Manage services

MooseX

MooseX is a DSL that helps to make Object Oriented programming in Ruby easier, more consistent, and less tedious. The gem is maintained by Tiago Peczenyj and it's based on Perl's Moose and Moo, two very popular modules in the Perl community. With MooseX you can think more about what you want to do and less about the mechanics of OOP.
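A small sketch in the style of the project's README (the class and attributes are made up here):

require 'moosex'

class Point
  include MooseX

  has x: { is: :rw, isa: Integer, default: 0 }
  has y: { is: :rw, isa: Integer, default: 0 }
end

# Point.new(x: 5).y  # => 0

Each has declaration generates typed accessors with defaults, saving you the usual attr_accessor-plus-initialize boilerplate.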
MooseX

RubyHeroes

The nominations are open for Ruby Heroes 2014. Head on over to rubyheroes.com, armed with the GitHub usernames of the people who made the Ruby community a pleasure to be part of this past year.
RubyHeroes

Thank You for Listening to Ruby5

Ruby5 is released Tuesday and Friday mornings. To stay informed about and active with this podcast, we encourage you to do one of the following:

Thank You for Listening to Ruby5

brew leaves

Posted 2 months back at GIANT ROBOTS SMASHING INTO OTHER GIANT ROBOTS - Home

No, it's not about tea. We're continuing our rundown of lesser-known Homebrew features with brew leaves. Let's check the brew man page:

leaves   Show installed formulae that are not dependencies of another installed formula.

Or, in more computer science-y terms, it shows you the leaves of the Homebrew dependency graph.

When to use it

brew leaves shows you programs that you can safely uninstall. If you want to clean house, just run brew leaves and happily uninstall:

$ brew leaves | wc -l
45
$ brew leaves
...
leiningen
...
pngcrush
...

We have 45 leaves. We haven't used leiningen in a while, and forgot pngcrush was even installed. Let's uninstall:

$ brew uninstall pngcrush leiningen
$ brew leaves | wc -l
43

We now have 2 fewer leaves. If pngcrush or leiningen were the only things that depended on a third package foo, then uninstalling those two packages would make foo a new leaf, since now nothing depends on foo.

Easily create a Brewfile

Brewfiles are an easy way to install frequently-used Homebrew packages on a new machine. We can easily create a Brewfile using brew leaves:

$ brew leaves | sed 's/^/install /' > Brewfile
$ wc -l Brewfile
42
$ head -3 Brewfile
install aspell
install bison
install colordiff

Now all 42 packages we depend on are neatly listed. One possible concern is that a package will be left out - for example, we use rbenv but it's not in the Brewfile. This is because we also have rbenv-gem-rehash installed, which depends on rbenv, making rbenv not a leaf. Since rbenv-gem-rehash depends on rbenv, installing it will also install rbenv. We're safe.

What's next?

You can learn how to start and stop background services in Homebrew. You can also take a deep dive into graph theory.

Announcing gitsh

Posted 2 months back at GIANT ROBOTS SMASHING INTO OTHER GIANT ROBOTS - Home

gitsh is a new way to use Git: instead of running Git commands in a general purpose shell like zsh or bash, gitsh provides you with a dedicated shell just for your Git commands.

<iframe src="//fast.wistia.net/embed/iframe/wkl3njtmz0" allowtransparency="true" frameborder="0" scrolling="no" class="wistia_embed" name="wistia_embed" allowfullscreen="" mozallowfullscreen="" webkitallowfullscreen="" oallowfullscreen="" msallowfullscreen="" width="640" height="480"></iframe>

Many of the early Unix utilities, like dc, didn't take sub-commands the way Git and other modern programs do; instead, they launched a shell. For a program like Git, which has so many commands and options, interacting via a shell still makes a lot of sense, and so gitsh follows in this long Unix tradition.

Save yourself some typing

At its simplest, gitsh saves you from typing the word git over and over.

Git commands are very moreish: you almost never want just one. If you work with Git, this flurry of commands is probably very familiar to you:

$ git status
$ git add -p
$ git commit
$ git push

With gitsh this gets easier:

$ gitsh
gitsh@ status
gitsh@ add -p
gitsh@ commit
gitsh@ push
gitsh@ :exit
$

All of your Git aliases will work in gitsh too, so you can save yourself even more typing.

Deep integration

Now that we're in a dedicated Git shell, it can do a lot more than save us a few keystrokes. gitsh is only concerned with Git, so it has all kinds of little ways to make using Git easier.

What's my status?

Of all the Git commands, I find myself using git status most often. If I'm about to commit, or push, or pull, it's a great way of quickly checking where things are up to.

In gitsh, if you hit return without entering a command, we assume you wanted a status, saving you even more typing and making it really easy to check the status after any command.

If you prefer the more taciturn output of git status -s, or find yourself using a completely different command with annoying regularity, you can always change gitsh's default command by setting the gitsh.defaultCommand variable using git config:

gitsh@ config --global gitsh.defaultCommand "status -s"

Tab completion and Git prompts

In gitsh you automatically get tab completion for commands, branch names, and paths, and the name and status of the current branch in your prompt. For example, if everything is committed and your working directory is clean the prompt is blue and ends with @, but if you have untracked files the prompt is red and ends with !.

It is possible to set up parts of this in bash or zsh, but it can be fiddly to get working, easily broken, and can interact strangely with aliases and third-party Git commands.

Git environment variables

Like most general purpose Unix shells, gitsh also provides environment variables. You can set a variable using the :set command, and read them using a $ prefix:

gitsh@ :set message "A commit message"
gitsh@ commit -m $message

If the variable name contains a dot, it will temporarily override one of your git config settings, until the end of your gitsh session. This is useful when pair programming:

gitsh@ :set user.name "George Brocklehurst & Mike Burns"
gitsh@ :set user.email support+george+mburns@thoughtbot.com
gitsh@ commit -m "We are pair programming!"

Convinced?

If you're on Mac OS X, you can install gitsh via Homebrew:

brew tap thoughtbot/formulae
brew install gitsh

If you're on Linux, there are install instructions in the gitsh README.

Don't forget to check out the man page:

man gitsh

And if you do find a bug, please report it on the gitsh GitHub repo.

How to Evaluate Your Rails JSON API for Performance Improvements

Posted 2 months back at GIANT ROBOTS SMASHING INTO OTHER GIANT ROBOTS - Home

Let's say your company's product is a mobile app that gets its data from an internal JSON API. The API, built using Rails, is a few years old. Response objects are large, request latency is high, and your data indicates mobile users aren’t converting because of it.

It can be tempting to immediately dig into your code and look for N+1 queries to refactor. But if you have the time and bandwidth, try to view this as a great opportunity to take a step back and rethink the high-level requirements for your JSON API. Starting with a conversation about the desired functionality of each endpoint will help keep your team's efforts focused on delivering no more than is required by the client, as efficiently as possible.

Grab your team for a whiteboarding session and review your assumptions about the behavior of each API endpoint:

  • How is this endpoint currently being used by the client?
  • What information does the client require for display to the user?
  • What needs to be done on the server side before sending a response to the client?
  • How frequently does the response content change?
  • Why does the response content change?

With the big picture in mind, review your Rails code to identify opportunities for improving performance. In addition to those N+1 queries, keep an eye out for the following patterns:

The response object has properties the client doesn’t use

If you're using #as_json to serialize your ActiveRecord models, it's possible your application is returning more than the client needs. To address this, consider using ActiveModel Serializers instead of #as_json.
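For instance, a serializer can whitelist exactly what the client displays. A minimal sketch, with a hypothetical User model:

# app/serializers/user_serializer.rb
class UserSerializer < ActiveModel::Serializer
  # Only these attributes appear in the response, rather than
  # every column #as_json would expose by default.
  attributes :id, :name, :avatar_url
end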

The delivery of the response has unnecessary dependencies

Let's say your API has an endpoint the client uses for reporting analytics events. Your controller might look something like this:

class AnalyticsEventsController < ApiController
  def create
    job = AnalyticsEventJob.new(params[:analytics_event])

    if job.enqueue
      head 201
    else
      head 422
    end
  end
end

Something to consider here is whether the client really needs to know if enqueueing the job is successful. If not, a simple improvement which preserves the existing interface might look something like this:

class AnalyticsEventsController < ApiController
  before_filter :ensure_valid_params, only: [:create]

  def create
    job = AnalyticsEventJob.new(analytics_event_params)
    job.enqueue
    head 201
  end

  private

  def ensure_valid_params
    unless analytics_event_params.valid?
      head 422
    end
  end

  def analytics_event_params
    @analytics_event_params ||= AnalyticsParametersObject.new(
      params[:analytics_event]
    )
  end
end

With these changes, the server will respond with a 422 only when the request parameters are invalid.

Static responses aren't being cached effectively

It's possible your Rails application is handling more requests than necessary. Data which is requested frequently by the client but changes infrequently – the current user, for example – presents an opportunity for HTTP caching. Think about using a CDN like Fastly to provide a caching layer.
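On the Rails side, that can be as simple as setting Cache-Control headers the CDN will respect. A sketch, where the five-minute window is an arbitrary choice:

class UsersController < ApiController
  def show
    # Sends "Cache-Control: public, max-age=300", letting the CDN
    # serve repeat requests without hitting the Rails app.
    expires_in 5.minutes, public: true
    render json: User.find(params[:id])
  end
end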

What's next?

The next step after implementing optimizations for performance is to measure performance gains. You can use tools like JMeter or services like BlazeMeter and Blitz.io to perform load tests in your staging environment.

It's good to keep in mind that through the process of evaluating and improving your Rails application, your team may discover your API is out of date with the needs of the client. You may also see opportunities to move processes currently handled by your Rails application (e.g. persisting and reporting on analytics events) into separate services.

If an API redesign is in order and the idea of non-RESTful routing doesn't make you too uncomfortable, you can explore the possibility of adding an orchestration layer to your API.

Episode #438 - February 7th, 2014

Posted 2 months back at Ruby5

We learn about recursion, a list of deprecated stuff in Ruby, and the value of Rails worst practices.

Listen to this episode on Ruby5

Recursion

Dave Bock was recently on the Ruby Hangout and gave a great presentation on recursion for Ruby developers.
Recursion

7 Lines Every Gem's Rakefile Should Have

Ernie Miller published a post showing how to create a rake console task that loads IRB and requires your gem, giving you a console to play around with it.
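The task is reportedly along these lines (substitute your own gem for my_gem):

task :console do
  require 'irb'
  require 'irb/completion'
  require 'my_gem'  # your gem here
  ARGV.clear        # otherwise IRB tries to parse rake's arguments
  IRB.start
end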
7 Lines Every Gem's Rakefile Should Have

Token Based Authentication in Rails

Using authenticate_or_request_with_http_token for token-based API authentication.
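A minimal sketch of that approach; the ApiKey model here is hypothetical:

class ApiController < ActionController::Base
  before_filter :authenticate

  private

  def authenticate
    authenticate_or_request_with_http_token do |token, options|
      # Look the token up however your app stores API keys
      ApiKey.exists?(access_token: token)
    end
  end
end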
Token Based Authentication in Rails

A List of Deprecated Stuff in Ruby

Bozhidar Batsov went through the code and built a list of deprecated stuff in Ruby.
A List of Deprecated Stuff in Ruby

The value of Rails worst practices

When interviewing potential Rails developers, Devin found that the quickest way to gauge the experience of a potential hire is to show them some shockingly bad Rails code and ask them what they see.
The value of Rails worst practices

Sponsored by NewRelic

Using their Real User Monitoring feature, they've once again culled the average browser speeds experienced by end users of nearly 3 million application instances and the data doesn’t lie.
NewRelic

Every line of code is always documented

Posted 3 months back at No Strings Attached

Every line of code comes with a hidden piece of documentation. It’s just not immediately visible.

Whoever wrote line 4 of the following code snippet decided to access the clientLeft property of a DOM node for some reason, but do nothing with the result. It’s pretty mysterious. Can you tell why they did it, or is it safe to change or remove that call in the future?

1 // ...
2 if (duration > 0) this.bind(endEvent, wrappedCallback)
3 
4 this.get(0).clientLeft
5 
6 this.css(cssValues)

If someone pasted you this code, like I did here, you probably won't be able to tell who wrote this line, what their reasoning was, and whether it's necessary to keep it. However, most of the time when working on a project you'll have access to its history via a version control system.

A project’s history is its most valuable documentation.

The mystery ends when we view the commit message which introduced this line:

$ git show $(git blame example.js -L 4,4 | awk '{print $1}')

Fix animate() for elements just added to DOM

Activating CSS transitions for an element just added to the DOM won’t work in either Webkit or Mozilla. To work around this, we used to defer setting CSS properties with setTimeout (see 272513b).

This solved the problem for Webkit, but not for latest versions of Firefox. Mozilla seems to need at least 15ms timeout, and even this value varies.

A better solution for both engines is to trigger “layout”. This is done here by reading clientLeft from an element. There are other properties and methods that trigger layout; see gent.ilcore.com/2011/03/how-not-to-trigger-layout-in-webkit

As it turns out, this line—more specifically, the change which introduced this line—is heavily documented with information about why it's necessary, why the previous approach (referred to by a commit SHA) didn't work, which browsers are affected, and a link for further reading.

Every other line in the project has documentation like this, going back to the day the project was created. The quality of this documentation, however, relies heavily on the diligence of the people involved in writing good commit messages.

Effective spelunking of a project's history

git blame

I’ve already demonstrated how to use git blame from the command line above. When you don’t have access to the local git repository, you can also open the “Blame” view for any file on GitHub.

A very effective way of exploring a file’s history is with Vim and Fugitive:

  1. Use :Gblame in a buffer to open the blame view;
  2. Press P on a line of the blame pane to re-blame at the parent of that commit, if you need to go deeper;
  3. Press o to open a split showing the commit currently selected in the blame pane;
  4. Use :Gbrowse in the commit split to open the commit in the GitHub web interface;
  5. Press C-o in the main buffer to close all other splits when you're done exploring. Optionally, use :Gedit to reset the buffer to the most recent version in case you did any spelunking with P earlier.

git blame view in vim Fugitive

Find the pull request where a commit originated

With git blame you might have obtained a commit sha that introduced a change, but commit messages don’t always carry enough information or context to explain the rationale behind the change. However, if the team behind a project practices GitHub Flow, the context might be found in the pull request discussion:

$ hub log --merges --ancestry-path --oneline <SHA>..origin | tail
# ...
bc4712d Merge pull request #42 from sticky-sidebar
3f883f0 Merge branch 'master' into sticky-sidebar

Here, a single commit SHA was enough to discover that it originated in pull request #42.

The git pickaxe

Sometimes you’ll be trying to find something that is missing: for instance, a past call to a function that is no longer invoked from anywhere. The best way to find which commits have introduced or removed a certain keyword is with the ‘pickaxe’ argument to git log:

$ git log -S<string>

This way you can dig up commits that have, for example, removed calls to a specific function, or added a certain CSS selector.
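For example, to dig up every commit that added or removed the clientLeft workaround from earlier:

$ git log -S'clientLeft' --oneline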

Being on the right side of history

Keep in mind that everything that you’re making today is going to enter the project’s history and stay there forever. To be nicer to other people who work with you (even if it’s a solo project, that includes yourself in 3 months), follow these ground rules when making commits:

  • Always write commit messages as if you are explaining the change to a colleague sitting next to you who has no idea of what’s going on. Per Thoughtbot’s tips for better commit messages:

    Answer the following questions:

    • Why is this change necessary?
    • How does it address the issue?
    • What side effects does this change have?
    • Consider including a link [to the discussion.]
  • Avoid unrelated changes in a single commit. You might have spotted a typo or did tiny code refactoring in the same file where you made some other changes, but resist the temptation to record them together with the main change unless they’re directly related.

  • Always be cleaning up your history before pushing. If the commits haven’t been shared yet, it’s safe to rebase the heck out of them. The following could have become permanent history of the Faraday project, but I squashed it down to only 2 commits and edited their messages to hide the fact I had troubles setting this up in the first place:

    messy git history before rebase

  • Corollary of avoiding unrelated changes: stick to a line-based coding style that allows you to append, edit or remove values from lists without changing adjacent lines. Some examples:

      var one = "foo"
        , two = "bar"
        , three = "baz"   // Comma-first style allows us to add or remove a
                          // new variable without touching other lines
    
      # Ruby:
      result = make_http_request(
        :method => 'POST',
        :url => api_url,
        :body => '...',   # Ruby allows us to leave a trailing comma, making it
                          # possible to add/remove params while not touching others
      )
    

    Why would you want to use such coding styles? Well, always think about the person who's going to git blame this. In the JavaScript example, if you were the one who added and committed the value "baz", you don't want your name to show up when somebody blames the line that added "bar", since the two variables might be unrelated to the change.

rcm, an rc file manager

Posted 3 months back at GIANT ROBOTS SMASHING INTO OTHER GIANT ROBOTS - Home

We have built a suite of tools for managing your rc files.

The rcm suite of tools is for managing dotfiles directories. This is a directory containing all the .*rc files in your home directory (.zshrc, .vimrc, and so on). These files have gone by many names in history, such as "rc files" because they typically end in rc or "dotfiles" because they begin with a period. Creative, I know.

It's a unification of the existing shell scripts, make targets, rake tasks, GNU Bash constructions, and Python hacks that people copy and paste into their dotfiles repo, with a classical unix flair.

Here's a very quick example:

% lsrc
/home/mike/.zshrc:/home/mike/.dotfiles/zshrc
% rcup
linking /home/mike/.zshrc

This blog post demonstrates the features, but you may want to install rcm and run through the tutorial too.

Build on it

Once unified, we extended the suite with support for sharing rc files via host-specific files, tags, multiple dotfile directories, and hooks.

A little something for the sysadmins out there: host-specific files automate the configuration you need to do on each host. Maybe the computer jupiter needs a .gitconfig but the computer mars needs a .mailrc. You'd put the .gitconfig in host-jupiter/gitconfig, and the .mailrc in host-mars/mailrc, and our suite takes care of the rest.

The next step up from host-specific is tagging: tag the .mailrc file as mailx (tag-mailx/mailrc) and .gitconfig as git (tag-git/gitconfig), and install and uninstall the tags as needed:

% rcup -t mailx
% rcdn -t mailx

Tagging is great for teams sharing the same dotfiles repo, but we can do better. While some of us come to computers with a blank slate, others come with well over a decade of fine-tuned rc files. Let them combine dotfiles repos, preferring theirs:

% rcup -d personal-dotfiles -d thoughtbot-dotfiles

While automating things we noticed that some things require setup. For example, after linking the .vimrc you need to run :BundleInstall. This is why we added hooks, such as the one in the thoughtbot dotfiles as hooks/post-up:

#!/bin/sh

if [ ! -e $HOME/.vim/bundle/vundle ]; then
  git clone https://github.com/gmarik/vundle.git $HOME/.vim/bundle/vundle
fi
vim -u $HOME/.vimrc.bundles +BundleInstall +qa

Automate it

We make it easier to add something to your dotfiles, too. This is great for getting started, but it's also great for experimentation. For example, add your .cshrc to the openbsd tag:

% mkrc -t openbsd .cshrc

Or get fancy by adding a host-specific file to the dotfiles repo you share with your brunch friends:

% mkrc -o -d the-brunch-dotfiles .rcrc

Configure it

Given the power, we had to make an rc file for our rc files. Enter .rcrc.

The simplest things to configure are your tags and source directories:

TAGS="openbsd mailx gnupg"
DOTFILES_DIRS="~/.dotfiles /usr/local/share/global-dotfiles"

Some files should never be symlinks:

COPY_ALWAYS=weechat/*

And some files should be excluded:

EXCLUDES=global-dotfiles:python*

This means a normal rcup will do the right thing, without thinking hard about what you have configured, which machine you're on, or what has changed in your shared repos.

The .rcrc file is perfect as a host-specific file in your personal dotfiles repo:

mkrc -o .rcrc

Read about it

Since this is a unix tool, we treat it like a unix tool. Read the full tutorial in the rcm(7) manpage, read about each individual tool (with examples) in the respective lsrc(1), rcup(1), rcdn(1), and mkrc(1) manpages, and find the configuration file format in the rcrc(5) manpage.

The traditional whatis command will jog your memory:

% whatis rcm
rcup (1)             - update and install dotfiles
rcdn (1)             - remove dotfiles
lsrc (1)             - show configuration files
mkrc (1)             - bless files into a dotfile
rcrc (5)             - configuration for rcm
rcm (7)              - dotfile management

Install anywhere

The rcm suite is written in POSIX sh, available out of the box on BSD, GNU, OS X, and many other systems. We do our best to keep it portable.

The source package can be installed using GNU autotools, as is typical for many projects:

% ./configure
% gmake
% gmake install

But it gets easier on Arch and Debian, which are supported by their native package managers. Check our installation instructions for the details on those.

We also support OS X using Homebrew from our new thoughtbot tap:

% brew tap thoughtbot/formulae
% brew install rcm

Watch that tap for other tools for command line champions.

Get started quickly

Instead of inventing something new, we decided to codify existing practices. If you have a dotfiles repo much like ours—one where all the normal files should be symlinked as dotted files in your home directory—you can get started immediately:

% lsrc -d ~/dotfiles
% rcup -v -d ~/dotfiles

If you have no dotfiles repo yet, you can get started instantly:

% mkrc .zshrc .vimrc

We also cover special cases in our tutorial.

Let's build this

Please share your feedback on GitHub. Together we can build the greatest rc file management suite.

What's next

Back to Basics: HTTP Requests in Rails Apps

Posted 3 months back at GIANT ROBOTS SMASHING INTO OTHER GIANT ROBOTS - Home

The Hypertext Transfer Protocol specifies how a client machine requests information or actions from a server. Each of these exchanges is called a request, and requests are composed of several parts which I'll outline below.

The first line of an HTTP request is called the Request-Line. It contains the method being used, the URI being acted on, and the protocol version.

Together with the request header fields and the message body, that gives us four elements to take a closer look at: the URI, the header fields, the message body, and the method.

URI

A URI or Uniform Resource Identifier is how objects are identified. Clients use URIs to tell the server what object to act on for a given request. In more general terms a URI is nothing more than a web address.

Request Header Fields

A client can communicate additional information to the server via a request's headers. In addition to communicating information about the client, i.e. what type of browser the request originated from, these values can modify how the server responds to the request. For example, the Content-Type header value is used by the Rails framework to decode a request's message body.

Message Body

The message body is used to send information from the client to the server about the entity the client wants to modify or create. The message can be communicated using several different encoding mechanisms, some of which I'll discuss below. It is important to note that not all of the requests I discuss below are allowed to have message bodies.

Method

The method, sometimes called "verb" or "action", tells the server what the client wants it to do. There are many different methods available, but we're going to limit this post to the four that are most relevant to Rails developers.

  • GET - how a client machine tells a server that it wants information about the item identified by the URI. Because GET requests are all about asking for information, they are not permitted to have request bodies. You still have the URI query string available to you if you need to send data from the client to the server on a GET request.
  • POST - how a client tells a server to add an entity as a child of the object identified by the URI. The entity that the client expects the server to add is transmitted in the request body.
  • PATCH - how a client tells a server it wants to modify an object identified by the URI the request is sent to.
  • DELETE - as you might guess, how a client tells a server to remove an object identified by the URI the request is sent to.

Let's make some requests

cURL

cURL is a utility that makes it possible to send requests from the command line. We'll use cURL to make some requests to a test Rails app which responds with strings.

Routes:

BackToBasics::Application.routes.draw do
  match '/curl_example' => 'request_example#curl_get_example', via: :get
  match '/curl_example' => 'request_example#curl_post_example', via: :post
end

Controller:

class RequestExampleController < ActionController::Base
  def curl_get_example
    render text: 'Thanks for sending a GET request with cURL!'
  end

  def curl_post_example
    render text: "Thanks for sending a POST request with cURL! Payload: #{request.body.read}"
  end
end

First we'll make a GET request. We tell cURL to do a GET request (which is actually the default) with the -X option.

% curl -X GET http://localhost:3000/curl_example
Thanks for sending a GET request with cURL!

Rails server log:

Started GET "/curl_example" for 127.0.0.1 at 2013-06-21 14:38:22 -0700
Processing by RequestExampleController#curl_get_example as */*
  Rendered text template (0.0ms)
Completed 200 OK in 1ms (Views: 0.3ms | ActiveRecord: 0.0ms)

As you can see, our Rails app receives the GET request we sent from the terminal and then responds with the string we provided in the controller. The app uses the request's URI and Method to figure out which controller and action to call.

Next we'll make a POST request with a data payload. Again, we use -X to specify the method. We also use -d to specify the data to send in the payload.

% curl -X POST -d "backToBasics=for the win" http://localhost:3000/curl_example
Thanks for sending a POST request with cURL! Payload: backToBasics=for the win

Rails server log:

Started POST "/curl_example" for 127.0.0.1 at 2013-06-21 14:47:37 -0700
Processing by RequestExampleController#curl_post_example as */*
  Parameters: {"backToBasics"=>"for the win"}
  Rendered text template (0.0ms)
Completed 200 OK in 0ms (Views: 0.3ms | ActiveRecord: 0.0ms)

The Rails app receives our request. This time, in addition to the URL, it logs the data payload as a hash of parameters.

Web Browsers

We're all familiar with surfing the web. I'm sure it comes as no surprise that this experience is made up of a series of requests and responses. Let's take a look at what's happening when we type a URL into our browser's address bar and hit enter. We'll also look at what happens when we enter data into a form and submit it.

Below is the demo controller we'll be sending requests to for this example.

Routes:

BackToBasics::Application.routes.draw do
  root to: 'request_example#index'
  match '/request' => 'request_example#create', via: :post
  match '/request' => 'request_example#create', via: :get
end

Controller:

class RequestExampleController < ActionController::Base
  def index
  end

  def create
    render json: params
  end
end

Address Bar

Typing a URL into the address bar of a web browser sends a GET request to the URL specified. Sending a GET request in this fashion looks identical to sending a GET request through the terminal with cURL.

If we set the root path of our demo app to the index action of our dummy controller and navigate to localhost:3000, the browser will send the GET request. If we take a look at the Rails log, we'll notice the output is almost identical to what we saw with cURL.

Started GET "/" for 127.0.0.1 at 2014-01-31 15:09:53 -0800
Processing by RequestExampleController#index as HTML
  Rendered request_example/index.html.erb (1.2ms)
Completed 200 OK in 26ms (Views: 25.5ms | ActiveRecord: 0.0ms)

Forms

A form is made up of several key parts (we'll look at several simple examples a bit later). In the opening form tag we have the action attribute. This attribute tells the form where to send the request. In addition to the action attribute we have the method attribute. This tells the form what type of request to send to the URI specified in the action attribute.

Request bodies are defined by a form's markup. The form tag has an attribute called enctype, which tells the browser how to encode the form data. There are several different values this attribute can have. The default is application/x-www-form-urlencoded, which tells the browser to encode all of the values. If a form includes a file upload, an enctype of multipart/form-data should be used; this encodes none of the values. Finally, you can set the enctype to text/plain, which converts spaces but leaves all other characters unencoded.

Inside the form element we have input elements, which render as assorted input types on our website. Each input element in the form should have a name attribute; it tells the browser what to name the data from that input in the message body. The type attribute tells the browser in what format to communicate the data in the message body. There is more to this as outlined here.

GET

Let's take a look at how we could send a GET request with a form.

The simple form below is made up of one text field, named my_data, and a submit input, which tells the form we actually want to send the request to the URI specified in the action attribute.

Let's send a request. Assume a user has navigated to a page where the following form has been rendered, entered the string "back to basics" in the text box named my_data, and clicked the submit button.

<form action="/request" method="GET">
  <input type="text" name="my_data">
  <input type=submit>
</form>

Rails server log:

Started GET "/request?my_data=back+to+basics" for 127.0.0.1 at 2013-06-21 14:44:25 -0700
Processing by RequestExampleController#create as HTML
  Parameters: {"my_data"=>"back to basics"}
Completed 200 OK in 0ms (Views: 0.2ms | ActiveRecord: 0.0ms)

There are several interesting things about this request. You'll notice that our Rails app received the request, but the URL includes a query parameter called my_data. This is the result of our decision to use the GET method for this request. Because GET requests have no payloads, the data we collected with our form is added to the URI. In addition, you'll notice that our text input ends up with the name my_data in the query string, which is the value we specified with the name attribute of our text input.

We can open the Network tab in our developer tools (Firefox or Chrome) and see that the data was added to the query string. This is because the method on the form is GET.

POST

A POST request works almost identically to the GET request, with the exception of the payload. Let's submit our form with the same text input and see what happens this time.

<form action="/request" method="POST">
  <input type="text" name="my_data">
  <input type=submit>
</form>

Rails server log:

Started POST "/request" for 127.0.0.1 at 2013-06-21 14:49:11 -0700
Processing by RequestExampleController#create as HTML
  Parameters: {"my_data"=>"back to basics"}
Completed 200 OK in 0ms (Views: 0.2ms | ActiveRecord: 0.0ms)

The first thing to notice is that our URL no longer contains the query parameter. It's also important to note that our parameters hash looks identical. Let's take a look at our network tab and see if we can learn anything about the request.

Examining the request we see that our payload includes what's called form data. Our input elements are converted to a request payload and sent to the server. Again the name of our text input element is used as the name associated with the user's text input.

XMLHttpRequest

It's also possible to send requests via JavaScript. There is nothing special about these requests from a mechanical perspective. They're just requests like the ones we've sent above.

Because these requests are sent with JavaScript, in order to see what happens we'll have to provide a function that deals with the response. We'll use a simple function that will write whatever response we get to the JavaScript console.

function callback () {
  console.log(this.responseText);
};

Ajax Form Data

When sending our Ajax requests we have several different options as to how we want to send the data. As we saw above there is a concept of form data. We can easily create a form data payload using only JavaScript.

var request = new XMLHttpRequest();
request.onload = callback;
request.open("post", "http://localhost:3000/request");
var formData = new FormData();
formData.append('my_data', 'back to basics')
request.send(formData);

Rails server log:

Started POST "/request" for 127.0.0.1 at 2013-06-21 14:53:03 -0700
Processing by RequestExampleController#create as */*
  Parameters: {"my_data"=>"back to basics"}
Completed 200 OK in 0ms (Views: 0.1ms | ActiveRecord: 0.0ms)

As is obvious from our log, this request was handled no differently than a "normal" form submission by our Rails application. There is no special magic about an Ajax request as far as our server is concerned.

Ajax JSON Data

Another option we have is to use JSON. In order to do this we need to modify our request headers, telling our server to do something slightly different to parse our payload.

var request = new XMLHttpRequest();
request.onload = callback;
request.open("post", "http://localhost:3000/request");
request.setRequestHeader("Content-Type", "application/json");
request.send('{"my_data":"back to basics"}');

Rails server log:

Started POST "/request" for 127.0.0.1 at 2013-06-21 14:55:55 -0700
Processing by RequestExampleController#create as */*
  Parameters: {"my_data"=>"back to basics", "request_example"=>{"my_data"=>"back to basics"}}
Completed 200 OK in 0ms (Views: 0.2ms | ActiveRecord: 0.0ms)

As you can see, by simply modifying our request headers, our Rails app is able to appropriately parse our payload, and we end up with our my_data value available to our application.

Requests are one of the foundational elements of the internet as we know it. Understanding the individual elements of a request can make it much easier to debug issues with our Rails apps.

Episode #437 - February 4th, 2014

Posted 3 months back at Ruby5

Token Based Authentication, Recommundle, git_pretty_accept, PStore, Practicing Ruby, and RailsBricks 2 all in this episode of the Ruby5!

Listen to this episode on Ruby5

Sponsored by Top Ruby Jobs

If you're looking for a top Ruby job or for top Ruby talent, then you should check out Top Ruby Jobs. Top Ruby Jobs is a website dedicated to the best jobs available in the Ruby community.
This episode is sponsored by Top Ruby Jobs

Token Based Authentication in Rails

This week our very own Carlos Souza wrote up a blog post about how to use Token Based Authentication in your Rails app.
Token Based Authentication in Rails

Recommundle

Chris Tonkinson released Recommundle, a recommendation engine for Gemfiles. You upload your project's Gemfile and it recommends gems it thinks you might be interested in checking out.
Recommundle

git_pretty_accept

George Mendoza released the git_pretty_accept gem this week, which automates his team's preferred method of accepting GitHub pull requests to keep their project history readable.
git_pretty_accept

Persisting data in Ruby with PStore

Rob Miller wrote up a blog post about how to persist data in Ruby in situations where using a database might seem like overkill.
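PStore ships with Ruby's standard library; a minimal sketch:

require 'pstore'

store = PStore.new("counters.pstore")

# Reads and writes happen inside a transaction.
store.transaction do
  store[:visits] ||= 0
  store[:visits] += 1
end

# Pass true for a read-only transaction.
store.transaction(true) { puts store[:visits] }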
Persisting data in Ruby with PStore

Practicing Ruby journal moves to open-access

This week Gregory Brown of Prawn fame announced that he's giving open access to 68 articles from the Practicing Ruby journal.
Practicing Ruby journal moves to open-access

RailsBricks 2

Nico Schuele dropped us an email to let us know about RailsBricks 2. This new version is written 100% in Ruby, no longer relies on Bash commands, and includes a test framework.
RailsBricks 2

Automatically wait for AJAX with Capybara

Posted 3 months back at GIANT ROBOTS SMASHING INTO OTHER GIANT ROBOTS - Home

Capybara's very good about waiting for AJAX. For example, this code will keep checking the page for the element for Capybara.default_wait_time seconds, allowing AJAX calls to finish:

expect(page).to have_css('.username', text: 'Gabe B-W')

But there are times when that's not enough. For example, in this code:

visit users_path
click_link 'Add Gabe as friend via AJAX'
reload_page
expect(page).to have_css('.favorite', text: 'Gabe')

We have a race condition between click_link and reload_page. Sometimes the AJAX call will go through before Capybara reloads the page, and sometimes it won't. This kind of nondeterministic test can be very difficult to debug, so I added a little helper.

Capybara's Little Helper

Here's the helper, via Coderwall:

# spec/support/wait_for_ajax.rb
module WaitForAjax
  def wait_for_ajax
    Timeout.timeout(Capybara.default_wait_time) do
      loop until finished_all_ajax_requests?
    end
  end

  def finished_all_ajax_requests?
    page.evaluate_script('jQuery.active').zero?
  end
end

RSpec.configure do |config|
  config.include WaitForAjax, type: :feature
end

We automatically include every file in spec/support/**/*.rb in our spec_helper.rb, so this file is automatically required. Since only feature specs can interact with the page via JavaScript, I've scoped the wait_for_ajax method to feature specs using the type: :feature option.

The helper uses the jQuery.active variable, which tracks the number of active AJAX requests. When it's 0, there are no active AJAX requests, meaning all of the requests have completed.

Usage

Here's how I use it:

visit users_path
click_link 'Add Gabe as friend via AJAX'
wait_for_ajax # This is new!
reload_page
expect(page).to have_css('.favorite', text: 'Gabe')

Now there's no race condition: Capybara will wait for the AJAX friend request to complete before reloading the page.

Change we can believe in (and see)

This solution can hide a bad user experience. We're not making any DOM changes on AJAX success, meaning Capybara can't automatically detect when the AJAX completes. If Capybara can't see it, neither can our users. Depending on your application, this might be OK.

One solution might be to have an AJAX spinner in a standard location that gets shown when AJAX requests start and hidden when AJAX requests complete. To do this globally in jQuery:

jQuery.ajaxSetup({
  beforeSend: function(xhr) {
    $('#spinner').show();
  },
  // runs after AJAX requests complete, successfully or not
  complete: function(xhr, status){
    $('#spinner').hide();
  }
});

What's next?

There is no official documentation on jQuery.active, since it's an internal variable, but this Stack Overflow answer is helpful. To see how we require all files in spec/support, read through our spec_helper template.

Credits

Thanks to Jorge Dias and Ancor Cruz on Coderwall for the original and refactored helper implementations.

Opening an Austin Office

Posted 3 months back at GIANT ROBOTS SMASHING INTO OTHER GIANT ROBOTS - Home

We're pleased to announce that we're opening an office in Austin, Texas!


Starting in early March we'll have a team in town consisting of myself and Alex (at least temporarily). Caleb will join shortly thereafter.

This new office will do the same work we're currently doing at all of our existing offices. We'll build high quality mobile and web apps for our clients and we'll do it face-to-face with clients in Austin.

Get in touch if you're interested in hiring or joining our Austin team.

We're looking forward to many years of Ruby meetups, iOS meetups, design meetups, 512 Pecan Porters, BBQ, afternoons at the Comal and Zilker Park, SXSW, Austin City Limits and nights at Stubb's.

See y'all there.

Episode #436 - January 31st, 2014

Posted 3 months back at Ruby5

Weekly Elixir news, control your AR Drone with Argus, use STI with an hstore, learning about Rails validators, sparklines in Ruby, and readme searching with HandCooler all in this episode of the Ruby5!

Listen to this episode on Ruby5

This episode is sponsored by New Relic

New Relic is _the_ all-in-one web performance analytics product. It lets you manage and monitor web application performance, from the browser down to the line of code. With Real User Monitoring, New Relic users can see browser response times by geographical location of the user, or by browser type.
This episode is sponsored by New Relic

Elixir Fountain

Keeping up with what's going on in the Elixir community has never been easier. The Elixir Fountain weekly mailing list has you covered.
Elixir Fountain

Argus

Have a Parrot AR Drone and a command line? The Argus gem lets you control your quadcopter in Ruby!
Argus

STI + Hstore

Have a better STI experience in Rails by leveraging the Postgres Hstore with hstore_accessor.
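A sketch of the idea, assuming hstore_accessor's typed-field declarations (the column and field names here are made up):

class Vehicle < ActiveRecord::Base
  # "data" is an hstore column; each key gets a typed accessor.
  hstore_accessor :data,
    make: :string,
    year: :integer
end

Each STI subclass can declare its own fields inside the shared hstore column, so variants don't need their own database columns.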
STI + Hstore

Rails Errors and Validators

Learn the ins and outs of how Rails validators work with this detailed blog post.
Rails Errors and Validators

Sparkr

All the goodness of Spark now in your Ruby CLI!
Sparkr

HandCooler

Finding that gem readme has never been easier!
HandCooler

Optimizing Web Font Rendering Performance

Posted 3 months back at igvita.com

Web font adoption continues to accelerate across the web: according to HTTP Archive, ~37% of top 300K sites are using web fonts as of early 2014, which translates to a 2x+ increase over the past twelve months. Of course, this should not be all that surprising to most of us. Typography has always been an important part of good design, branding, and readability and web fonts offer many additional benefits: the text is selectable, searchable, zoomable, and high-DPI friendly. What's not to like?

Ah, but what about the rendering speed, don't web fonts come with a performance penalty? Fonts are an additional critical resource on the page, so yes, they can impact rendering speed of our pages. That said, just because the page is using web fonts doesn't mean it will (or has to) render slower.

There are four primary levers that determine the performance impact of web fonts on the page:

  1. The total number of fonts and font-weights used on the page.
  2. The total byte size of fonts used on the page.
  3. The transfer latency of the font resource.
  4. The time when the font downloads are initiated.

The first two levers are directly within the control of the designer of the page. The more fonts are used, the more requests will be made and more bytes will be incurred. The general UX best practice is to keep the number of used fonts at a minimum, which also aligns with our performance goals. Step one: use web fonts, but audit your font usage periodically and try to keep it lean.

Measuring web font latencies

The transfer latency of each font file is dependent on its bytesize, which in turn is determined by the number of glyphs, font metadata (e.g. hinting for Windows platforms), and the compression method used. Techniques such as font subsetting, UA-specific optimization, and more efficient compression (e.g. Google Fonts recently switched to Zopfli for WOFF resources) are all key to optimizing the transfer size. Plus, since we're talking about latency, where the font is served from makes a difference also – i.e. a CDN, and ideally the user's cache!

That said, instead of talking in the abstract, how long does it actually take the visitor to download the web font resource on your site? The best way to answer this question is to instrument your site via the Resource Timing API, which allows us to get the DNS, TCP, and transfer time data for each font - as a bonus, Google Fonts recently enabled Resource Timing support! Here is an example snippet to report font latencies to Google Analytics:

// check if visitor's browser supports Resource Timing
if (typeof window.performance == 'object') {
  if (typeof window.performance.getEntriesByName == 'function') {

    function logData(name, r) {
      var dns = Math.round(r.domainLookupEnd - r.domainLookupStart),
          tcp = Math.round(r.connectEnd - r.connectStart),
          total = Math.round(r.responseEnd - r.startTime);
      _gaq.push(
        ['_trackTiming', name, 'dns', dns],
        ['_trackTiming', name, 'tcp', tcp],
        ['_trackTiming', name, 'total', total]
      );
    }

    var _gaq = _gaq || [];
    var resources = window.performance.getEntriesByType("resource");
    for (var i in resources) {
      if (resources[i].name.indexOf("themes.googleusercontent.com") != -1) {
        logData("webfont-font", resources[i]);
      }
      if (resources[i].name.indexOf("fonts.googleapis.com") != -1) {
        logData("webfont-css", resources[i]);
      }
    }
  }
}

The above example captures the key latency metrics both for the UA-optimized CSS file and the font files specified in that file: the CSS lives on fonts.googleapis.com and is cached for 24 hours, and font files live on themes.googleusercontent.com and have a long-lived expiry. With that in place, let's take a look at the total (responseEnd - startTime) timing data in Google Analytics for my site:

For privacy reasons, the Resource Timing API intentionally does not provide a "fetched from cache" indicator, but we can nonetheless use a reasonable timing threshold - say, 20ms - to get an approximation. Why 20ms? Fetching a file from spinning rust, and even flash, is not free. The actual cache-fetch timing will vary based on hardware, but for our purposes we'll go with a relatively aggressive 20ms threshold.

With that in mind and based on above data for visitors coming to my site, the median time to get the CSS file is ~100ms, and ~26% of visitors get it from their local cache. Following that, we need to fetch the required font file(s), which take <20ms at the median – a significant portion of the visitors has them in their browser cache! This is great news, and a confirmation that the Google Fonts strategy of long-lived and shared font resources is working.

Your results will vary based on the fonts used, amount and type of traffic, plus other variables. The point is that we don't have to argue in the abstract about the latency and performance costs of web fonts: we have the tools and APIs to measure the incurred latencies precisely. And what we can measure, we can optimize.

Timing out slow font downloads

Despite our best attempts to optimize delivery of font resources, sometimes the user may simply have a poor connection due to a congested link, poor reception, or a variety of other factors. In this instance, the critical resources – including font downloads – may block rendering of the page, which only makes the matter worse. To deal with this, and specifically for web fonts, different browsers have taken different routes:

  • IE immediately renders text with the fallback font and re-renders it once the font download is complete.
  • Firefox holds font rendering for up to 3 seconds, after which it uses a fallback font, and once the font download has finished it re-renders the text once more with the downloaded font.
  • Chrome and Safari hold font rendering until the font download is complete.

There are many good arguments for and against each strategy and we won't go into that discussion here. That said, I think most will agree that the lack of any timeout in Chrome and Safari is not a great approach, and this is something that the Chrome team has been investigating for a while. What should the timeout value be? To answer this, we've instrumented Chrome to gather font-size and fetch times, which yielded the following results:

Webfont size range Percent 50th 70th 90th 95th 99th
0KB - 10KB 5.47% 136 ms 264 ms 785 ms 1.44 s 5.05 s
10KB - 50KB 77.55% 111 ms 259 ms 892 ms 1.69 s 6.43 s
50KB - 100KB 14.00% 167 ms 882 ms 1.31 s 2.54 s 9.74 s
100KB - 1MB 2.96% 198 ms 534 ms 2.06 s 4.19 s 10+ s
1MB+ 0.02% 370 ms 969 ms 4.22 s 9.21 s 10+ s

First, the good news is that the majority of web fonts are relatively small (<50KB). Second, most font downloads complete within several hundred milliseconds: picking a 10 second timeout would impact ~0.3% of font requests, and a 3 second timeout would raise that to ~1.1%. Based on this data, the conclusion was to make Chrome mirror the Firefox behavior: timeout after 3 seconds and use a fallback font, and re-render text once the font download has completed. This behavior will ship in Chrome M35, and I hope Safari will follow.

Hands-on: initiating font resource requests

We've covered how to measure the fetch latency of each resource, but there is one more variable that is often omitted and forgotten: we also need to optimize when the fetch is initiated. This may seem obvious on the surface, except that it can be a tricky challenge for web fonts in particular. Let's take a look at a hands-on example:

@font-face {
  font-family: 'FontB';
  src: local('FontB'), url('http://mysite.com/fonts/fontB.woff') format('woff');
}
p { font-family: FontA }
<!DOCTYPE html>
<html>
<head>
  <link href='stylesheet.css' rel='stylesheet'> <!-- see content above -->
  <style>
    @font-face {
     font-family: 'FontA';
     src: local('FontA'), url('http://mysite.com/fonts/fontA.woff') format('woff');
   }
  </style>
  <script src='application.js'></script>
</head>
<body>
<p>Hello world!</p>
</body>
</html>

There is a lot going on above: we have an external CSS and JavaScript file, and inline CSS block, and two font declarations. Question: when will the font requests be triggered by the browser? Let's take it step by step:

  1. Document parser discovers external stylesheet.css and a request is dispatched.
  2. Document parser processes the inline CSS block which declares FontA - we're being clever here, we want the font request to go out as early as possible. Except, it doesn't. More on that in a second.
  3. Document parser blocks on external script: we can't proceed until that's fetched and executed.
  4. Once the script is fetched and executed we finish constructing the DOM, style calculation and layout is performed, and we finally dispatch the request for fontA. At this point, we can also perform the first paint, but we can't render the text with our intended font since the font request is in flight... doh.

The key observation in the above sequence is that font requests are not initiated until the browser knows that the font is actually required to render some content on the page - e.g. we never request FontB since no content uses it in the above example! On one hand, this is great since it minimizes the number of downloads. On the other, it also means that the browser can't initiate the font request until it has both the DOM and the CSSOM and is able to resolve which fonts are required for the current page.

In the above example, our external JavaScript blocks DOM construction until it is fetched and executed, which also delays the font download. To fix this, we have a few options at our disposal: (a) eliminate the JavaScript, (b) add an async attribute (if possible), or (c) move it to the bottom of the page. However, the more general takeaway is that font downloads won't start until the browser can compute the render tree. To make fonts render faster, we need to optimize the critical rendering path of the page.

Tip: in addition to measuring the relative request latencies for each resource, we can also measure and analyze the request start time with Resource Timing! Tracking this timestamp will allow us to determine when the font request is initiated.

Optimizing font fetching in Chrome M33

Chrome M33 landed an important optimization that will significantly improve font rendering performance. The easiest way to explain the optimization is to look at a pre-M33 example timeline that illustrates the problem:

  1. Style calculation completed at ~840ms into the lifecycle of the page.
  2. Layout is triggered at ~1040ms, and font request is dispatched immediately after.

Except, why did we wait for layout if we already resolved the styles two hundred milliseconds earlier? Once we know the styles we can figure out which fonts we'll need and immediately initiate the appropriate requests – that's the new behavior in Chrome M33! On the surface, this optimization may not seem like much, but based on our Chrome instrumentation the gap between style and layout is actually much larger than one would think:

Percentile 50th 60th 70th 80th 90th
Time from Style → Layout 132 ms 182 ms 259 ms 410 ms 820 ms

By dispatching the font requests immediately after first style calculation the font download will be initiated ~130ms earlier at the median and ~800ms earlier at 90th percentile! Cross-referencing these savings with the font fetch latencies we saw earlier shows that in many cases this will allow us to fetch the font before the layout is done, which means that we won't have to block text rendering at all – this is a huge performance win.

Of course, one also should ask the obvious question... Why is the gap between style calculation and layout so large? The first place to start is in Chrome DevTools: capture a timeline trace and check for slow operations (e.g. long-running JavaScript, etc). Then, if you're feeling adventurous, head to chrome://tracing to take a peek under the hood – it may well be that the browser is simply busy processing and laying out the page.

Optimizing web fonts with Font Load Events API

Finally, we come to the most exciting part of this entire story: Font Load Events API. In a nutshell, this API will allow us to manage and define how and when the fonts are loaded – we can schedule font downloads at will, we can specify how and when the font will be rendered, and more. If you're familiar with the Web Font Loader JS library, then think of this API as that and more but implemented natively in the browser:

var font = new FontFace("FontA", "url(http://mysite.com/fonts/fontA.woff)", {});
font.ready().then(function() {
  // font loaded.. swap in the text / define own behavior.
});

font.load(); // initiate immediate fetch / don't block on render tree!

Font Load Events API gives us complete control over which fonts are used, when they are swapped in (i.e. should they block rendering), and when they're downloaded. In the example above we construct a FontFace object directly in JavaScript and trigger an immediate fetch – we can inline this snippet at the top of our page and avoid blocking on CSSOM and DOM entirely! Best of all, you can already play with this API in Canary builds of Chrome, and if all goes well it should find its way into stable release by M35.

Web font performance checklist

Web fonts offer a lot of benefits: improved readability, accessibility (searchable, selectable, zoomable), branding, and when done well, beautiful results. It's not a question of if web fonts should be used, but how to optimize their use. To that end, a quick performance checklist:

  1. Audit your font usage and keep it lean.
  2. Make sure font resources are optimized - see Google Web Fonts tricks.
  3. Instrument your font resources with Resource Timing: measure → optimize.
  4. Optimize the transfer latency and time of initial fetch for each font.
  5. Optimize your critical rendering path, eliminate unnecessary JS, etc.
  6. Spend some time playing with the Font Load Events API.

Just because the page is using a web font, or several, doesn't mean it will (or has to) render slower. A well optimized site can deliver a better and faster experience by using web fonts.

How To Edit An Existing Vim Macro

Posted 3 months back at GIANT ROBOTS SMASHING INTO OTHER GIANT ROBOTS - Home

Here's the situation:

You've just written an awesome Vim macro and stopped recording. However, when you try to run the macro you realize that you forgot to add a ^ to the beginning of it, and now it only works if you go back to the beginning of the line before running it. You might be thinking that it's time to re-record, but there are two simple ways to edit the existing macro instead.

Yanking into a register:

  • "qp paste the contents of the register to the current cursor position
  • I enter insert mode at the begging of the pasted line
  • ^ add the missing motion to return to the front of the line
  • <Escape> return to visual mode
  • "qyy yank this new modified macro back into the q register
  • dd delete the pasted register from the file your editing

Editing the register visually:

  • :let @q=' begin editing the q register on the command line
  • <Ctrl-r><Ctrl-r>q paste the contents of the q register into the command line
  • ^ add the missing motion to return to the front of the line
  • ' add a closing quote
  • <Enter> finish editing the macro

What's next?

If you found this useful, you might also enjoy:

Episode #435 - January 28th, 2014

Posted 3 months back at Ruby5

We destroy Rake with Thor, sit back for a Mina to go over Lite Config, hit some Rubygem Development Tips, and share a Weekly dose of Vim on this episode of Ruby5.

Listen to this episode on Ruby5

Sponsored by Top Ruby Jobs

If you're looking for a top Ruby job or for top Ruby talent, then you should check out Top Ruby Jobs. Top Ruby Jobs is a website dedicated to the best jobs available in the Ruby community.
This episode is sponsored by Top Ruby Jobs

Configure Rails with YAML with Lite Config

Last week, Gabe da Silveira released lite_config, a small, environment-aware, YAML configuration manager for Rails applications. It provides conveniences like lazy loading your config/ YAML files, indifferent access to keys, automatic scoping to your currently-running Rails environment, and the ability to locally override these settings.
Configure Rails with YAML with Lite Config

Replace Rake with Thor

Thor is incredibly useful and gives you an easy way to create Ruby-based command line applications. Did you know that Thor has extensions available? And that your Thor tasks can be testable? Check out Ryan Sonnek's recent post for details.
Replace Rake with Thor

6 Tips for Full Stack Open Source RubyGems Development

Last week, Giovanni Intini posted an article on the Mikamai blog covering 6 tips for open source Rubygem development. They cover considerations you should make when creating your gems, as well as services available to help you track and maintain them.
6 Tips for Full Stack Open Source RubyGems Development

Mina Deployment for Rails

Sakchai Siripanyawuth wrote to us this week about a two-part video on Rails deployment with Mina, part of a series called DevOps for Developers. Mina is a deployment manager, like Capistrano or Vlad, and works over SSH. Check out the videos for more info.
Mina Deployment for Rails

Vim Weekly

Vim Weekly is a new mailing list (old school, right? Like Vim!) that sends out just five new Vim tips per week. If you're already somewhat familiar with Vim and are looking to hone your skills, these bite-size tips may be just what you need.
Vim Weekly