I believe...

Posted about 15 hours back at Saaien Tist


Ryo Sakai reminded me a couple of weeks ago about Simon Sinek's excellent TED talk "Start With Why - How Great Leaders Inspire Action", which inspired this post... Why do I do what I do?

The way we analyse data has become more and more automated over the last few decades. Advances in machine learning and statistics make it possible to extract a lot of information from large datasets. But are we starting to rely too much on those algorithms? Several issues seem to pop up more and more often.

For one thing, research in algorithm design has enabled many more applications, but at the same time it makes these algorithms so complex that they start to operate as black boxes; not only for the end-user who provides the data, but even for the algorithm developer. Another issue with pre-defined algorithms is that having them around prevents us from identifying unexpected patterns: if the algorithm or statistical test is not specifically written to find a certain type of pattern, it will not find it. A third issue is (arbitrary) cutoffs. Many algorithms rely heavily on the user (or, even worse, the developer) defining a set of cutoff values. This is true in machine learning as well as in statistics. A statistical test returning a p-value of 4.99% is considered "statistically significant", but you'd throw away your data if that p-value were 5.01%. What is so intrinsic about 5% that forces you to choose between "yes, this is good" and "let's throw our hypothesis out the window"?

All in all, much of this comes back to the fragility of using computers (hat tip to Toni for the book by Nassim Taleb): you have to tell them what to do and what to expect. They're not resilient to changes in setting, data, prior knowledge, etc.; at least not as much as we are.

So where does this bring us? It's my firm belief that we need to put the human back in the loop of data analysis. Yes, we need statistics. Yes, we need machine learning. But also: yes, we need a human individual to actually make sense of the data and drive the analysis. To make this possible, I focus on visual design, interaction design, and scalability. Visual design because the representation of data in many cases needs improvement to be able to cope with high-dimensional data; interaction design because it's often by "playing" with the data that the user can gain insights; and scalability because it's not trivial to process big data fast enough that we can get interactivity.

Parsing Embedded JSON and Arrays in Swift

Posted 1 day back at GIANT ROBOTS SMASHING INTO OTHER GIANT ROBOTS - Home

In the previous posts (first post, second post) about parsing JSON in Swift, we saw how to use functional programming concepts and generics to make JSON decoding concise and readable. We left off last time having created a custom operator that allowed us to decode JSON into model objects using infix notation. That implementation looked like this:

struct User: JSONDecodable {
  let id: Int
  let name: String
  let email: String?

  static func create(id: Int)(name: String)(email: String?) -> User {
    return User(id: id, name: name, email: email)
  }

  static func decode(json: JSON) -> User? {
    return _JSONParse(json) >>> { d in
      User.create
        <^> d <|  "id"
        <*> d <|  "name"
        <*> d <|? "email"
    }
  }
}

We can now parse JSON into our model objects using the <| and <|? operators. The final piece we’re missing here is the ability to get keys from nested JSON objects and the ability to parse arrays of types.

Note: I’m using <|? to stay consistent with the previous blog post but ?s are not allowed in operators until Swift 1.1. You can use <|* for now.
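For reference, here is roughly what the pieces from the earlier posts look like. This is a sketch reconstructed from the series rather than the canonical code, so treat the exact declarations (type aliases, operator precedences, the use of .Some) as assumptions:

typealias JSON = AnyObject
typealias JSONObject = [String: AnyObject]
typealias JSONArray = [AnyObject]

infix operator <|  { associativity left precedence 150 }
infix operator <|? { associativity left precedence 150 }  // spelled <|* before Swift 1.1

// Pull a value of the inferred type A out of the object, or .None if the
// key is missing or the value has the wrong type.
func <|<A>(object: JSONObject, key: String) -> A? {
  return object[key] >>> _JSONParse
}

// Like <|, but wraps the result in another optional so that a missing key
// still lets decoding succeed for optional model properties.
func <|?<A>(object: JSONObject, key: String) -> A?? {
  let value: A? = object <| key
  return .Some(value)
}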

Getting into the Nest

First, let’s look at getting to the data within nested objects. A use case for this could be a Post to a social network. A Post has a text component and a user who authored the Post. The model might look like this:

struct Post {
  let id: Int
  let text: String
  let authorName: String
}

Let’s assume that the JSON we receive from the server will look like this:

{
  "id": 5,
  "text": "This is a post.",
  "author": {
    "id": 1,
    "name": "Cool User"
  }
}

You can see that the author key is referencing a User object. We only want the user’s name from that object so we need to get the name out of the embedded JSON. Our Post decoder starts like this:

extension Post: JSONDecodable {
  static func create(id: Int)(text: String)(authorName: String) -> Post {
    return Post(id: id, text: text, authorName: authorName)
  }

  static func decode(json: JSON) -> Post? {
    return _JSONParse(json) >>> { d in
      Post.create
        <^> d <| "id"
        <*> d <| "text"
        <*> d <| "author"
    }
  }
}

This won’t work because our create function tells the <| operator to try to turn the value associated with the "author" key into a String. However, it is a JSONObject, so this will fail. We know that d <| "author" by itself will return a JSONObject?, so we can use the bind operator to get at the JSONObject inside the optional.
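As a reminder, >>> is the bind operator from the earlier posts: it applies a function to an optional's wrapped value if one exists and short-circuits to .None otherwise. A minimal sketch, with the exact declaration being an assumption on my part:

infix operator >>> { associativity left precedence 150 }

// Apply f to the wrapped value if it exists, otherwise propagate .None.
func >>><A, B>(a: A?, f: A -> B?) -> B? {
  if let x = a {
    return f(x)
  }
  return .None
}

With that, d <| "author" >>> { $0 <| "name" } reads as: decode "author" to a JSONObject, and if that succeeded, look up "name" inside it. The decoder becomes: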

extension Post: JSONDecodable {
  static func create(id: Int)(text: String)(authorName: String) -> Post {
    return Post(id: id, text: text, authorName: authorName)
  }

  public static func decode(json: JSON) -> Post? {
    return _JSONParse(json) >>> { d in
      Post.create
        <^> d <| "id"
        <*> d <| "text"
        <*> d <| "author" >>> { $0 <| "name" }
    }
  }
}

This works, but there are two other issues at play. First, you can see that reaching further into embedded JSON can result in a lot of syntax on one line. More importantly, Swift’s type inference starts to hit its limit: I experienced long build times because the Swift compiler had to work very hard to figure out the types. A quick fix would be to give the closure a parameter type, { (o: JSONObject) in o <| "name" }, but that adds even more syntax. Let’s try to overload our custom operator <| to handle this for us.

A logical next step would be to make the <| operator explicitly accept an optional JSONObject? value instead of the non-optional one, allowing us to eliminate the bind (>>>) operator.

func <|<A>(object: JSONObject?, key: String) -> A? {
  return object >>> { $0 <| key }
}

Then we use it in our Post decoder like so:

extension Post: JSONDecodable {
  static func create(id: Int)(text: String)(authorName: String) -> Post {
    return Post(id: id, text: text, authorName: authorName)
  }

  public static func decode(json: JSON) -> Post? {
    return _JSONParse(json) >>> { d in
      Post.create
        <^> d <| "id"
        <*> d <| "text"
        <*> d <| "author" <| "name"
    }
  }
}

That syntax looks much better; however, Swift has a bug / feature that allows a non-optional value to be passed to a function that takes an optional parameter, automatically wrapping the value in an optional type. This means that our overloaded implementation of <| that takes an optional JSONObject gets confused with its non-optional counterpart, since both can be used in the same situations.

Instead, let’s specify an overloaded version of <| that removes the generic return value and explicitly sets it to JSONObject.

func <|(object: JSONObject, key: String) -> JSONObject {
  return object[key] >>> _JSONParse ?? JSONObject()
} 

We try to parse the value inside the object to a JSONObject and if that fails we return an empty JSONObject to the next part of the decoder. Now the d <| "author" <| "name" syntax works and the compiler isn’t slowed down.
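To make the chaining concrete, here is how d <| "author" <| "name" resolves with these overloads in place. The bindings are purely illustrative (they would live inside the decode closure, where d is the parsed JSONObject):

// d <| "author" picks the overload that returns a plain JSONObject,
// falling back to an empty object if "author" is missing or malformed.
let author: JSONObject = d <| "author"

// author <| "name" then uses the generic overload, inferring A = String.
let name: String? = author <| "name"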

Arrays and Arrays of Models

Now let’s look at how we can parse JSON arrays into a model. We’ll use our Post model and add an array of Strings as the comments on the Post.

struct Post {
  let id: Int
  let text: String
  let authorName: String
  let comments: [String]
}

Our decoding function will then look like this:

extension Post: JSONDecodable {
  static func create(id: Int)(text: String)(authorName: String)(comments: [String]) -> Post {
    return Post(id: id, text: text, authorName: authorName, comments: comments)
  }

  public static func decode(json: JSON) -> Post? {
    return _JSONParse(json) >>> { d in
      Post.create
        <^> d <| "id"
        <*> d <| "text"
        <*> d <| "author" <| "name"
        <*> d <| "comments"
    }
  }
}

This works with no extra coding. Our _JSONParse function is already good enough to cast a JSONArray or [AnyObject] into a [String].
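That works because _JSONParse, as introduced in the earlier posts, is essentially a conditional cast to the inferred type. A sketch of it, assuming a definition along these lines:

// The generic parser from the earlier posts: a typed conditional cast.
func _JSONParse<A>(json: JSON) -> A? {
  return json as? A
}

Casting an [AnyObject] pulled out of the JSON to [String] succeeds as long as every element really is a string, which is why d <| "comments" can infer [String] with no extra work.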

What if our Comment model was more complex than just a String? Let’s create that.

struct Comment {
  let id: Int
  let text: String
  let authorName: String
}

This is very similar to our original Post model so we know the decoder will look like this:

extension Comment: JSONDecodable {
  static func create(id: Int)(text: String)(authorName: String) -> Comment {
    return Comment(id: id, text: text, authorName: authorName)
  }

  static func decode(json: JSON) -> Comment? {
    return _JSONParse(json) >>> { d in
      Comment.create
        <^> d <| "id"
        <*> d <| "text"
        <*> d <| "author" <| "name"
    }
  }
}

Now our Post model needs to use the Comment model.

struct Post {
  let id: Int
  let text: String
  let authorName: String
  let comments: [Comment]
}

extension Post: JSONDecodable {
  static func create(id: Int)(text: String)(authorName: String)(comments: [Comment]) -> Post {
    return Post(id: id, text: text, authorName: authorName, comments: comments)
  }

  public static func decode(json: JSON) -> Post? {
    return _JSONParse(json) >>> { d in
      Post.create
        <^> d <| "id"
        <*> d <| "text"
        <*> d <| "author" <| "name"
        <*> d <| "comments"
    }
  }
}

Unfortunately, _JSONParse isn’t good enough to take care of this automatically so we need to write another overload for <| to handle the array of models.

func <|<A>(object: JSONObject, key: String) -> [A?]? {
  return object <| key >>> { (array: JSONArray) in array.map { $0 >>> _JSONParse } }
}

First, we extract the JSONArray using the <| operator. Then we map over the array trying to parse the JSON using _JSONParse. Using map, we will get an array of optional types. What we really want is an array of only the types that successfully parsed. We can use the concept of flattening to remove the optional values that are nil.

func flatten<A>(array: [A?]) -> [A] {
  var list: [A] = []
  for item in array {
    if let i = item {
      list.append(i)
    }
  }
  return list
}

Then we add the flatten function to our <| overload:

func <|<A>(object: JSONObject, key: String) -> [A]? {
  return object <| key >>> { (array: JSONArray) in
    array.map { $0 >>> _JSONParse }
  } >>> flatten
}

Now, our array parsing will eliminate values that fail _JSONParse and return .None if the key was not found within the object.
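To make the flattening behavior concrete, here is a small standalone illustration with hypothetical values (not from the post):

// Given a mix of successfully and unsuccessfully parsed values...
let parsed: [String?] = ["First!", nil, "Nice post."]

// ...flatten keeps only the values that parsed successfully.
let comments = flatten(parsed) // ["First!", "Nice post."]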

The final step is to be able to decode a model object. For this, we need to define an overloaded function for _JSONParse that knows how to handle models. We can use our JSONDecodable protocol to know that there will be a decode function on the model that knows how to decode the JSON into a model object. Using this we can write a _JSONParse implementation like this:

func _JSONParse<A: JSONDecodable>(json: JSON) -> A? {
  return A.decode(json)
}

Now we can decode a Post that contains an array of Comment objects. However, we’ve introduced a new problem: there are two implementations of the <| operator that are ambiguous. One returns A? and the other returns [A]?, but an array of a type could itself be A, so the compiler doesn’t know which implementation of <| to use. We can fix this by making every type that we want to use with the A? version conform to JSONDecodable. This means we will have to make the native Swift types conform as well.

extension String: JSONDecodable {
  static func decode(json: JSON) -> String? {
    return json as? String
  }
}

extension Int: JSONDecodable {
  static func decode(json: JSON) -> Int? {
    return json as? Int
  }
}

Then make the <| implementation that returns A? work only where A conforms to JSONDecodable.

func <|<A: JSONDecodable>(object: JSONObject, key: String) -> A?
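Presumably the body stays the same as the unconstrained version and only the constraint is added; a sketch of the full overload under that assumption:

// Restrict the single-value overload to types that can decode themselves.
func <|<A: JSONDecodable>(object: JSONObject, key: String) -> A? {
  return object[key] >>> _JSONParse
}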

Conclusion

Through this series of blog posts, we’ve seen how functional programming and generics can be a powerful tool in Swift for dealing with optionals and unknown types. We’ve also explored using custom operators to make JSON parsing more readable and concise. As a final look at what we can do, let’s see the Post decoder one last time.

extension Post: JSONDecodable {
  static func create(id: Int)(text: String)(authorName: String)(comments: [Comment]) -> Post {
    return Post(id: id, text: text, authorName: authorName, comments: comments)
  }

  public static func decode(json: JSON) -> Post? {
    return _JSONParse(json) >>> { d in
      Post.create
        <^> d <| "id"
        <*> d <| "text"
        <*> d <| "author" <| "name"
        <*> d <| "comments"
    }
  }
}

We’re excited to announce that we’re releasing an open source library for JSON parsing based on what we’ve learned writing this series. We’re calling it Argo, after the Greek word for swift and the ship sailed by Jason and the Argonauts. Jason’s father was Aeson, which is also the name of the Haskell JSON parsing library that inspired Argo. You can find it on GitHub. We hope you enjoy it as much as we do.

The Bad News

While writing this part of the JSON parser, I quickly ran up against the limits of the Swift compiler. The larger your model object, the longer the build takes, because the compiler has trouble working out all of the nested type inference. While Argo works, it can be impractical for large objects. There is work being done on a separate branch to reduce this build time.

Episode #499 - September 26, 2014

Posted 4 days back at Ruby5

Shell Shocked, Factory Girl for frontend tests with Hangar, and upgrading from Rails 3.2 to 4.2

Listen to this episode on Ruby5

Sponsored by NewRelic

New Relic APM identifies many transactions that serve your end users and other systems
NewRelic

Shell Shock

Stephane Chazelas has discovered a vulnerability in Bash affecting almost every version up to and including version 4.3.
Shell Shock

Prefer Objects as Method Parameters, Not Class Names

Posted 4 days back at GIANT ROBOTS SMASHING INTO OTHER GIANT ROBOTS - Home

In an application we worked on, we presented users with multiple choice questions and then displayed summaries of the answers. Users could see one of several summary types: the percentage of users who selected the correct answer, or a breakdown of the percentage of users who selected each answer.

Some of these summary classes were simple:

class MostRecentAnswer
  def summary_for(question)
    question.most_recent_answer_text
  end
end

We allowed the user to select which summary to view, so we accepted a summary_type as a parameter. We needed to pass the summarizer around, so we accepted a class name in the parameters and passed that name directly to our model.

class SummariesController < ApplicationController
  def index
    @survey = Survey.find(params[:survey_id])
    @summaries = @survey.summaries_using(params[:summary_type])
  end
end

class Survey < ActiveRecord::Base
  has_many :questions

  def summaries_using(summarizer_type)
    summarizer = summarizer_type.constantize.new
    questions.map do |question|
      summarizer.summary_for(question)
    end
  end
end

This works, but it set us up for trouble later.

The Survey#summaries_using method accepts a class name, which means it can only reference constants instead of objects.

I’ve come to call this “class-oriented programming,” because it results in an over-emphasis on classes. Because code like this can only reference constants, it results in classes which use inheritance instead of composition.

Runtime vs Static State

Some Rails applications live with much of their data trapped in static state. Anything that isn’t a local or instance variable is static state. Here are some examples:

VERSION = 2

cattr_accessor :version
self.version = 2

@@version = 2

We don’t usually talk about “static” methods and attributes in Ruby, but all of the information contained in the above example is static state, because only one reference can exist at one time for the entire program.

This becomes a problem when you want to mix static state and runtime state, because static state is viral: it can only compose other static state.

Runtime State in Rails Applications

In our original example, you would be able to get away with using a class-based solution, because the MostRecentAnswer summarizer doesn’t need any information besides the question to summarize.

Here’s a new challenge: after the summary of each answer, also include the current user’s answer. Such a summarizer could be implemented as a decorator:

class WithUserAnswer
  def initialize(base_summarizer, user)
    @base_summarizer = base_summarizer
    @user = user
  end

  def summary_for(question)
    user_answer = question.answer_text_for(@user)
    base_summary = @base_summarizer.summary_for(question)
    "#{base_summary} (Your answer: #{user_answer})"
  end
end

This won’t work with a class-based solution, though, because the parameters to the initialize method vary for different summarizers. These parameters may have little in common and may be initialized far away from where they’re used, so it doesn’t make sense to pass all of them all of the time.

We can rewrite our example to pass an object instead of a class name:

class SummariesController < ApplicationController
  def index
    @survey = Survey.find(params[:survey_id])
    @summaries = @survey.summaries_using(summarizer)
  end

  private

  def summarizer
    if params[:include_user_answer]
      WithUserAnswer.new(base_summarizer, current_user)
    else
      base_summarizer
    end
  end

  def base_summarizer
    params[:summary_type].constantize.new
  end
end

class Survey < ActiveRecord::Base
  has_many :questions

  def summaries_using(summarizer)
    questions.map do |question|
      summarizer.summary_for(question)
    end
  end
end

Now that Survey accepts a summarizer object instead of a class name, we can pass objects which combine static and runtime state, like the current user.

The controller still uses constantize, because it’s not possible to pass an object as an HTTP parameter. However, by avoiding class names as much as possible, this example has become more flexible.

What’s Next?

You can learn more about factories, composition, decorators and more in Ruby Science.

Security advisory: Phusion Passenger and the CVE-2014-6271 Bash vulnerability

Posted 5 days back at Phusion Corporate Blog

On 24 September 2014, an important security vulnerability for Bash was published. This vulnerability, dubbed “Shellshock” and with identifiers CVE-2014-6271 and CVE-2014-7169, allows remote code execution.

This vulnerability is not caused by Phusion Passenger, but it does affect Phusion Passenger. We strongly advise users to upgrade their systems as soon as possible. Please note that while CVE-2014-6271 has been patched, CVE-2014-7169 has not yet been; a fix is still pending.

Please refer to your operating system vendor’s upgrade instructions, for example:

The post Security advisory: Phusion Passenger and the CVE-2014-6271 Bash vulnerability appeared first on Phusion Corporate Blog.

Maintenance Thursday 25th at 8pm EST

Posted 6 days back at entp hoth blog - Home

Lighthouse will be in maintenance mode tomorrow night at 8pm EST, for about 1h, hopefully less.

This is a bit short notice, but we have to perform some important hardware updates.

As usual, you can contact us at support@lighthouseapp.com if you have any questions or concerns.

Phusion Passenger 4.0.52 released

Posted 6 days back at Phusion Corporate Blog


Phusion Passenger is a fast and robust web server and application server for Ruby, Python, Node.js and Meteor. Passenger takes a lot of complexity out of deploying web apps, and adds powerful enterprise-grade features that are useful in production. High-profile companies such as Apple, the New York Times, AirBnB, Juniper and American Express are already using it, as are over 350,000 websites.

Phusion Passenger is under constant maintenance and development. Version 4.0.52 is a bugfix release.

Phusion Passenger also has an Enterprise version which comes with a wide array of additional features. By buying Phusion Passenger Enterprise you will directly sponsor the development of the open source version.

Recent changes

Version 4.0.50 and 4.0.51 have been skipped because they were hotfixes for Enterprise customers. The changes in 4.0.50, 4.0.51 and 4.0.52 combined are as follows:

  • Fixed a null termination bug when autodetecting application types.
  • Node.js apps can now also trigger the inverse port binding mechanism by passing '/passenger' as argument. This was introduced in order to be able to support the Hapi.js framework. Please read this StackOverflow answer for more information regarding Hapi.js support.
  • It is now possible to abort Node.js WebSocket connections upon application restart. Please refer to this page for more information. Closes GH-1200.
  • Passenger Standalone no longer automatically resolves symlinks in its paths.
  • passenger-config system-metrics no longer crashes when the system clock is set to a time in the past. Closes GH-1276.
  • passenger-status, passenger-memory-stats, passenger-install-apache2-module and passenger-install-nginx-module no longer output ANSI color codes by default when STDOUT is not a TTY. Closes GH-487.
  • passenger-install-nginx-module --auto is now all that’s necessary to make it fully non-interactive. It is no longer necessary to provide all the answers through command line parameters. Closes GH-852.
  • Minor contribution by Alessandro Lenzen.
  • Fixed a potential heap corruption bug.
  • Added Union Station support for Rails 4.1.

Installing or upgrading to 4.0.52

Installation and upgrade instructions are available for OS X, Debian, Ubuntu, Heroku, the Ruby gem and the tarball.

Final

Fork us on GitHub! Phusion Passenger’s core is open source. Please fork or watch us on GitHub. :)


If you would like to stay up to date with Phusion news, please fill in your name and email address below and sign up for our newsletter. We won’t spam you, we promise.



Our iOS, Rails, and Backbone.js Books Are Now Available for Purchase

Posted 6 days back at GIANT ROBOTS SMASHING INTO OTHER GIANT ROBOTS - Home

Starting today, you can buy any of our books through these links.

Our book offerings currently include:

Each of these books comes in MOBI, EPUB, PDF, and HTML and includes access to the GitHub repository with the source Markdown / LaTeX file and an example application.

For the interested: for the past few months, these books were included in and exclusively available through our subscription learning product, Upcase.

We determined that the books were not a good fit for Upcase, and so now we have split them out.

Episode #498 - September 23, 2014

Posted 7 days back at Ruby5

We go Airborne for Ruby 2.1.3 while Eagerly Decorating the skies and Swiftly avoiding the Daemons on this episode of Ruby5.

Listen to this episode on Ruby5

Sponsored by Top Ruby Jobs

If you're looking for a top Ruby job or for top Ruby talent, then you should check out Top Ruby Jobs. Top Ruby Jobs is a website dedicated to the best jobs available in the Ruby community.
Top Ruby Jobs

Ruby 2.1.3 is released

Last week, MRI Ruby 2.1.3 was released. It’s primarily a bug fix release, but it does contain new garbage collection tuning which appears to reduce memory consumption drastically.
Ruby 2.1.3 is released

Automatic Eager Loading in Rails with Goldiloader

The team from Salsify recently released a gem called Goldiloader. This gem attempts to automatically eager load associated records and avoid n+1 queries.
Automatic Eager Loading in Rails with Goldiloader

Airborne - RSpec-driven API testing

A few days ago a new, RSpec-driven API testing framework called Airborne took off on GitHub. It works with Rack applications and provides useful response header and JSON contents matchers for RSpec.
Airborne - RSpec-driven API testing

Active Record Eager Loading with Query Objects and Decorators

On the Thoughtbot blog this week, Joe Ferris wrote about using query objects and decorators to easily store the data returned in ActiveRecord models and use it in your views. Query objects can help you wrap up complex SQL without polluting your models.
Active Record Eager Loading with Query Objects and Decorators

Don’t Daemonize your Daemons

Yesterday, Mike Perham put together a short yet very useful post entitled “Don’t Daemonize Your Daemons!” It was written as a retort to Jake Gordon’s Daemonizing Ruby Processes post last week, highlighting the fact that most people, including Jake, make daemonizing processes overly difficult.
Don’t Daemonize your Daemons

Swift for Rubyists

If you’re interested in diving into Apple’s new Swift language, we highly recommend the video of JP Simard’s talk on Swift For Rubyists on the realm.io blog. iOS 8 is out now, so Swift applications are now allowed in the App Store.
Swift for Rubyists

Validating JSON Schemas with an RSpec Matcher

Posted 7 days back at GIANT ROBOTS SMASHING INTO OTHER GIANT ROBOTS - Home

At thoughtbot we’ve been experimenting with using JSON Schema, a widely-used specification for describing the structure of JSON objects, to improve workflows for documenting and validating JSON APIs.

Describing our JSON APIs using the JSON Schema standard allows us to automatically generate and update our HTTP clients using tools such as heroics for Ruby and Schematic for Go, saving loads of time for client developers who are depending on the API. It also allows us to improve test-driven development of our API.

If you’ve worked on a test-driven JSON API written in Ruby before, you’ve probably encountered a request spec that looks like this:

describe "Fetching the current user" do
  context "with valid auth token" do
    it "returns the current user" do
      user = create(:user)
      auth_header = { "Auth-Token" => user.auth_token }

      get v1_current_user_url, {}, auth_header

      current_user = response_body["user"]
      expect(response.status).to eq 200
      expect(current_user["auth_token"]).to eq user.auth_token
      expect(current_user["email"]).to eq user.email
      expect(current_user["first_name"]).to eq user.first_name
      expect(current_user["last_name"]).to eq user.last_name
      expect(current_user["id"]).to eq user.id
      expect(current_user["phone_number"]).to eq user.phone_number
    end
  end

  def response_body
    JSON.parse(response.body)
  end
end

Following the four-phase test pattern, the test above executes a request to the current user endpoint and makes some assertions about the structure and content of the expected response. While this approach has the benefit of ensuring the response object includes the expected values for the specified properties, it is also verbose and cumbersome to maintain.

Wouldn’t it be nice if the test could look more like this?

describe "Fetching the current user" do
  context "with valid auth token" do
    it "returns the current user" do
      user = create(:user)
      auth_header = { "Auth-Token" => user.auth_token }

      get v1_current_user_url, {}, auth_header

      expect(response.status).to eq 200
      expect(response).to match_response_schema("user")
    end
  end
end

Well, with a dash of RSpec and a pinch of JSON Schema, it can!

Leveraging the flexibility of RSpec and JSON Schema

An important feature of JSON Schema is instance validation. Given a JSON object, we want to be able to validate that its structure meets our requirements as defined in the schema. As providers of an HTTP JSON API, our most important JSON instances are in the response body of our HTTP requests.

RSpec provides a DSL for defining custom spec matchers. The json-schema gem’s raison d'être is to provide Ruby with an interface for validating JSON objects against a JSON schema.

Together these tools can be used to create a test-driven process in which changes to the structure of your JSON API drive the implementation of new features.

Creating the custom matcher

First we’ll add json-schema to our Gemfile:

Gemfile

group :test do
  gem "json-schema"
end

Next, we’ll define a custom RSpec matcher that validates the response object in our request spec against a specified JSON schema:

spec/support/api_schema_matcher.rb

RSpec::Matchers.define :match_response_schema do |schema|
  match do |response|
    schema_directory = "#{Dir.pwd}/spec/support/api/schemas"
    schema_path = "#{schema_directory}/#{schema}.json"
    JSON::Validator.validate!(schema_path, response.body, strict: true)
  end
end

We’re making a handful of decisions here: we’re designating spec/support/api/schemas as the directory for our JSON schemas, and we’re also implementing a naming convention for our schema files.

JSON::Validator.validate! is provided by the json-schema gem. Passing strict: true to the validator ensures that validation will fail when an object contains properties not defined in the schema.

Defining the user schema

Finally, we define the user schema using the JSON Schema specification:

spec/support/api/schemas/user.json

{
  "type": "object",
  "required": ["user"],
  "properties": {
    "user" : {
      "type" : "object",
      "required" : [
        "auth_token",
        "email",
        "first_name",
        "id",
        "last_name",
        "phone_number"
      ],
      "properties" : {
        "auth_token" : { "type" : "string" },
        "created_at" : { "type" : "string", "format": "date-time" },
        "email" : { "type" : "string" },
        "first_name" : { "type" : "string" },
        "id" : { "type" : "integer" },
        "last_name" : { "type" : "string" },
        "phone_number" : { "type" : "string" },
        "updated_at" : { "type" : "string", "format": "date-time" }
      }
    }
  }
}

TDD, now with schema validation

Let’s say we need to add a new property, neighborhood_id, to the user response object. The back end for our JSON API is a Rails application using ActiveModel::Serializers.

We start by adding neighborhood_id to the list of required properties in the user schema:

spec/support/api/schemas/user.json

{
  "type": "object",
  "required": ["user"],
  "properties":
    "user" : {
      "type" : "object",
      "required" : [
        "auth_token",
        "created_at",
        "email",
        "first_name",
        "id",
        "last_name",
        "neighborhood_id",
        "phone_number",
        "updated_at"
      ],
      "properties" : {
        "auth_token" : { "type" : "string" },
        "created_at" : { "type" : "string", "format": "date-time" },
        "email" : { "type" : "string" },
        "first_name" : { "type" : "string" },
        "id" : { "type" : "integer" },
        "last_name" : { "type" : "string" },
        "neighborhood_id": { "type": "integer" },
        "phone_number" : { "type" : "string" },
        "updated_at" : { "type" : "string", "format": "date-time" }
      }
    }
  }
}

Then we run our request spec to confirm that it fails as expected:

Failures:

  1) Fetching a user with valid auth token returns requested user
     Failure/Error: expect(response).to match_response_schema("user")
     JSON::Schema::ValidationError:
       The property '#/user' did not contain a required property of 'neighborhood_id' in schema
       file:///Users/laila/Source/thoughtbot/json-api/spec/support/api/schemas/user.json#

Finished in 0.34306 seconds (files took 3.09 seconds to load)
1 example, 1 failure

Failed examples:

rspec ./spec/requests/api/v1/users_spec.rb:6 # Fetching a user with valid auth token returns requested user

We make the test pass by adding a neighborhood_id attribute in our serializer:

class Api::V1::UserSerializer < ActiveModel::Serializer
  attributes(
    :auth_token,
    :created_at,
    :email,
    :first_name,
    :id,
    :last_name,
    :neighborhood_id,
    :phone_number,
    :updated_at
  )
end
.

Finished in 0.34071 seconds (files took 3.14 seconds to load)
1 example, 0 failures

Top 1 slowest examples (0.29838 seconds, 87.6% of total time):
  Fetching a user with valid auth token returns requested user
    0.29838 seconds ./spec/requests/api/v1/users_spec.rb:6

Hooray!

What’s next?

Tender is mobile friendly!

Posted 8 days back at entp hoth blog - Home

Starting today, if you access a Tender site on mobile, you will get a nice mobile view (at last!). If your site uses custom CSS, you will need to manually activate it: please read the KB article for details.

Let us know how you like it :)

Cheers!

Lighthouse integrates with Raygun.io!

Posted 8 days back at entp hoth blog - Home

Raygun.io (https://raygun.io/) is an error tracking service that helps you build better software, allowing your team to keep an eye on the health of your applications by notifying you of software bugs in real time. Raygun works with every major web and mobile programming language and platform.

raygun.io

They recently added support for Lighthouse and we wrote a KB article to get you started.

So check them out, and start tracking!

Meaningful Exceptions

Posted 8 days back at Luca Guidi - Home

Writing detailed API documentation helps to improve software design.

We already know that explaining a concept to someone leads us to a better grasp of it. This is true for our code too. This process of translation into a natural language forces us to think about a method from an outside perspective. We describe the intent, the input, the output, and how it reacts under unexpected conditions. Put it in black and white and you will find something to refine.

It happened to me recently.

I was reviewing some changes in lotus-utils when I asked myself: “What if we accidentally pass nil as an argument here?” The answer was easy: NoMethodError, because nil doesn’t respond to a specific method that the implementation invokes.

A minute later, there was already a unit test to cover that case and a new documentation detail to explain it. Solved.

Well, not really. Let’s take a step back first.

First solution

When we design a public API, we are deciding the way that client code should use our method and what to expect from it. Client code knows nothing about our implementation, and it shouldn’t be affected if we change it.

The technical reason why the code raises that exception is:

arg * 2

'/' * 2 # => "//"
nil * 2 # => NoMethodError

The first solution was to catch that error and to re-raise ArgumentError.

Improved solution

During the process of writing this article, I’ve recognized two problems with this proposal.

The first issue is about the implementation. What if we refactor the code in a way that NoMethodError is no longer raised?

2.times.map { arg }.join

2.times.map { '/' }.join # => "//"
2.times.map { nil }.join # => ""

Our new implementation has changed the behavior visible from the outside world. We have broken the software contract between our library and the client code.

The client code expected ArgumentError in the case of nil, but after that modification, this isn’t true anymore.

The other concern is about the semantics of the exception. According to RubyDoc:

“ArgumentError: Raised when the arguments are wrong and there isn’t a more specific Exception class.”

We have a more specific situation here: we expect a string, but we got nil. TypeError probably fits our case better.

Conclusion

Our test suite can be useful to check the correctness of a procedure under a deterministic scenario, but sometimes we write assertions from a narrow point of view.

Explaining the intent with API docs mitigates this problem and helps other people to understand our initial idea.

Check whether the semantics of the raised exceptions are coherent with that conceptualization.

To stay updated with the latest releases, receive code examples, implementation details and announcements, please consider subscribing to the Lotus mailing list.



A plan by any other name ...

Posted 11 days back at entp hoth blog - Home

… still gets you better, simpler, customer support!

We decided to change the names of our plans. If you are currently on the following plans, don’t fret! Nothing has changed other than the name. All your existing features are still there. If you were on a legacy plan, nothing changes for you at all.

  • Core => Starter
  • Extra => Standard
  • Ultimo => Pro

Let us know if you have any questions at help@tenderapp.com

Episode #497 - September 19th, 2014

Posted 11 days back at Ruby5

Start using Fourchette, roll-out features by the instance, read logs with a little help from your friends, run your own bitcoin node, and say hello to byebug!

Listen to this episode on Ruby5

Sponsored by New Relic

New Relic is _the_ all-in-one web performance analytics product. It lets you manage and monitor web application performance, from the browser down to the line of code. With Real User Monitoring, New Relic users can see browser response times by geographical location of the user, or by browser type.
This episode is sponsored by New Relic

Fourchette App

Deployable version of the Fourchette core
Fourchette App

helioth

Feature-flipping and rollout for your apps with ActiveRecord
helioth

hutils

A collection of command line utilities for working with logfmt
hutils

Toshi

An open source Bitcoin node built to power large scale web applications
Toshi

byebug

Byebug is a simple to use, feature rich debugger for Ruby 2
byebug