Episode #442 – February 21st, 2014

Posted 2 months back at Ruby5

We will miss you Jim Weirich.

Listen to this episode on Ruby5

Farewell, Jim.

Posted 2 months back at Phusion Corporate Blog

Today, the sad news reached us that Jim Weirich has passed away. We’re incredibly sad about this, as Jim was one of the nicest people we got to know in the Ruby/Rails community when we first started Phusion. To keep his memory alive, I’d like to reflect on a particular anecdote that made Jim especially awesome to us, and most likely to you as well. I’m sure many of you who were fortunate enough to know him can relate to his kindness.

Back in 2008, when Hongli, Tinco and I set out to go to RailsConf to give our very first talk abroad, we met Jim in the lobby of the conference venue. We had just attended a talk of his in which he went through a myriad of valuable do’s and don’ts to be aware of when giving a presentation. These tips proved incredibly valuable to us in the years that followed, and we hope Jim knows how grateful we are for this.

Our talk was scheduled for the following day, and after hearing Jim’s do’s and don’ts, we were suddenly confronted with how many embarrassing “don’ts” our own slides contained. As Jim told the audience that it’s generally a good idea to avoid cliches such as bulletpoint hell, stock images of “the world” and “business people shaking hands”, we felt more and more uncomfortable. Not only did we have a lot of bulletpoints, we even had an image of “business people shaking hands”… in front of “the world”. We basically tripped over every possible cliche in the book!

But hey, we still had 24 hours; surely we’d be able to fix this up, right? Luckily, Jim had the demeanor of a big, kind, cuddly bear, so we felt comfortable enough to walk up to him after his talk and ask for some help with our slides. Instead of brushing us off, Jim graciously sat down with us for about two hours, pointing out the things that could be improved in the delivery of our talk. And he understandably laughed out loud at our slide with the business people shaking hands in front of the world. ;)

The next day, after giving our talk, we had people walking up to us saying that we killed it. In reality, it was Jim’s tips and kindness in sharing these tips that “killed it”.

We will miss you, buddy.

Your friends, Tinco, Hongli and Ninh.

Low Power Custom Arduino Sensor Board

Posted 2 months back at GIANT ROBOTS SMASHING INTO OTHER GIANT ROBOTS - Home

Last month we looked at using Arduinos to monitor bathroom usage at thoughtbot so employees could check bathroom availability from their desks. Once we had a working prototype, however, its power consumption proved too high, yielding only about a day of use per charge. The prototype was also expensive to reproduce if we wanted to expand our sensor network.

Lower Power, Higher Savings

The biggest power hog was the XBee radio. It was always on, drawing about 50mA of current. The XBee Series 2 offers somewhat better power consumption when on, and has a more robust interface that allows us to put it into sleep mode. However, it still carries a steep price. The better option is the nRF24L01+ series from Nordic Semiconductor. These radios are very inexpensive on Amazon.com, have an SDK for the Arduino and a sleep mode, and give us automatic packet assembly, detection, validation, retransmission, and acknowledgement.

Using the nRF24L01+ board in conjunction with the Arduino Fio produced a large jump in power savings: current consumption for the entire sensor board dropped to around 5mA. That would yield about 80 hours of use on a 400mAh battery, or only about 3 days until it would need to be recharged again. Still not so great. The final chunk of power draw was caused by the power-on indicator LED found on every Arduino. Removing the LED, and potentially destroying an expensive board in the process, didn't sound appealing. The best option was to design a custom Arduino board to fit our use case instead of trying to mash together pre-made general-purpose boards.
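As a quick sanity check on those numbers, estimated runtime is simply battery capacity divided by average current draw; this simplified estimate ignores regulator losses and battery derating, so real-world life will be a bit shorter:

```ruby
# Simplified battery-life estimate: capacity (mAh) / average draw (mA).
battery_mah = 400.0  # LiPo battery capacity in mAh
current_ma  = 5.0    # measured average draw of the sensor board in mA

hours = battery_mah / current_ma  # => 80.0
days  = hours / 24.0              # roughly 3.3 days between charges
puts "#{hours} hours (~#{days.round(1)} days)"
```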

Introducing the thoughtbot Arduino Sensor Board

The custom Arduino board will have no LEDs for power savings. To keep it similar to other Arduino boards, it will use the ATMega328P 8-bit microcontroller and have the same pinouts on the connectors. It will have a connector for the nRF24 board for easily making it wireless. Finally, it will support LiPo rechargeable batteries and coin cell batteries.

Designing the Board

We used the free version of Eagle for schematic capture and PCB layout. You can access the Eagle files, Gerber files, and bill of materials in the repository.

Here is the schematic: Sensor Board Schematic

Here is the PCB layout: Sensor Board PCB

Assemble Your Own

We used OSHPark to fabricate the boards. They are inexpensive and relatively quick; we had our boards in less than 2 weeks. To make your own, upload the Gerber files to OSHPark. While you're waiting for the boards to arrive, place an order with DigiKey for the components. The list of components is also in the repository: BOM. You will also need a few other things to program the Arduino, which you can order while you wait. Read the Programming section for specifics.

Fabricated PCB

Soldering it Together

To solder together the board, you'll need a few tools:

Soldering Tools

Start with U1, the microcontroller. It will be the hardest part to solder, so make it easy on yourself and do it before other parts are in the way. Align the chip so pin 1 is in the correct spot and all the pins are centered on their pads. Use some tape to hold it in place. Next, apply flux to one side of the pins. Touch the solder to the hot iron just to get a little bit on the tip. Then place the tip of the iron on each pad; you will see the flux-soaked metal pull the solder over it. Move down the pins, touching each one until you see the solder flow over it, and touch the solder to the iron again if more is needed. Once you have one side of the microcontroller done, remove the tape and finish the other sides.

We used a multimeter to verify the pins were connected to their pads, checking that the resistance between a pin on the chip and a pad it connects to elsewhere on the board was 0. If you don't have a multimeter, a visual inspection will be sufficient. If two pins were accidentally bridged by too much solder, apply flux again and re-apply the iron. If that doesn't work, soak up some of the solder using copper braid: apply flux, put the braid on top of the solder, and push down on the braid with the soldering iron. Make sure to hold the plastic case of the braid, with enough braid between the case and the iron, or you could melt the case. You should see the braid absorb the solder and turn tin-colored. Remove the braid when you've soaked up enough and re-touch the pins with solder as needed.

Sensor Board with chip soldered

Next, solder the other tough components: U2 and JP9 (the USB connector). Move on to the small components, the capacitors and resistors, and take on the biggest parts, the connectors, last. When soldering the small components, tape can be impractical, so try putting solder on one pad first, then, using tweezers, slide the component in while heating the pad with the iron. Then solder the other side.

Take care when soldering the LED and the capacitor C1; both of these components must be soldered in a specific orientation. The C1 footprint has a white line closer to one pad, and there is a white line on one end of the part; make sure these lines are aligned. The LED has a line and a dot on the bottom of the part. The line should face toward power. The green line is somewhat visible in this image:

LED Direction

Once you're all done it should look similar to this:

Sensor Board completed

Programming

To use the board as an Arduino, you'll have to upload the Arduino bootloader onto the chip. You can do this with this programmer from Sparkfun. Plug it into J1 so that the cable goes over the chip. If you're not sure, check the pin mappings. It should look similar to this image:

Programmer Connection

Plug the USB cable from the programmer into your computer and burn the bootloader to the chip from the Arduino software. If you're having trouble getting your computer to recognize the programmer, look around the internet, as there is a lot of support out there. Open the Arduino software (the latest version at the time of writing is 1.0.5). In the menu, select Tools > Board > Arduino Uno, then Tools > Programmer > USBtinyISP, and finally Tools > Burn Bootloader. After a bit of time the board should be programmed with the Arduino bootloader.

Now you have a fully functional Arduino board. To save space and cost, we left the FTDI USB-to-serial chip off the board. The Arduino software uses USB-to-serial to program the device; you can purchase an FTDI board from Sparkfun or Amazon.com. The way the board was designed, you need to plug the FTDI board in upside down. If it is not plugged in correctly, the power pins will be misaligned and you'll risk damaging the processor.

FTDI Board Connection

To make sure everything works, plug the FTDI board into your computer and open the Blink example from the Arduino software. Make sure Arduino Uno is selected as your board and the correct serial port is selected in the Tools menu, then upload the program. Connect an LED in series with a resistor (330 ohms to 1K) between ground (the GND pin) and pin D13, with the cathode of the LED facing toward ground. The LED should blink at 1-second intervals.

Conclusion

Your Arduino is ready to go! Have fun creating anything your mind can imagine! Next time we'll look at rewriting the software for the bathroom occupancy detector and creating a sensor network using this custom Arduino board and the nRF24 transceiver.

iOS Code Review: Loose Guidelines

Posted 2 months back at GIANT ROBOTS SMASHING INTO OTHER GIANT ROBOTS - Home

From time to time I've been asked to do an independent code review, to determine the overall health of a client's code base. Unlike a code walkthrough, where someone familiar with the code shows the main components, this is a code review where an outside expert examines the project, takes copious amounts of notes, and reports back either in written form or in a meeting with the team, depending on what the client wants.

This is separate from doing testing or any other sort of QA on the applications themselves. The idea is that you might have an application that works great and passes all kinds of acceptance tests, but is built in such a way that future maintenance and enhancements will be difficult and/or costly. If we can identify problem areas in an application's source code, we can help set things on a better course; the sooner we discover potential problems, the better. The experiences and guidelines described here are centered around iOS programming practices, but many of them apply to other sorts of projects as well.

The last time I did this, the client's budget was limited, so there wasn't time for an in-depth examination of every source code file in each of the client's applications. I had a lot of territory to cover and not a whole lot of time, so I decided to work in two phases: first, I took an initial look at each project to form a quick (if superficial) opinion of its health. After that, I dove deeper into each project, paying extra attention to the projects that set off the most warning flags during the initial phase. This procedure worked pretty well, since I was able to report back with an approximate health level for each application, plus a lot of specifics for those that seemed to be in worse shape than the others.

The set of guidelines I'm outlining here is the kind of thing that anyone can apply to their own codebase as well. If you don't understand what some of these points are about, or why they're worth thinking about when it comes to your own apps, this could be a good time to improve your skills a bit by digging into some of these topics and looking for relevant discussion and debate (for example, on the internet).

Phase One: A Quick Look

To get a feel for the overall health of each app, do the following things:

  • Make sure all external resources required by the project (3rd-party code, etc) are fully contained within the app, or referenced through submodules, or (best of all) included via CocoaPods. If they're not managed in some way (e.g. CocoaPods) see if there is any documentation describing how to obtain and update these resources.
  • Open the project in Xcode and build it. Make sure the project builds cleanly, with no warnings and no errors.
  • Perform a static analysis in Xcode to see if any other problems show up.
  • Run oclint and see if this uncovers any other problems that Xcode's static analysis didn't reveal.
  • Examine the project structure. Do the various source code files seem to be placed in a reasonable hierarchy? The larger the project is, the more important it becomes to impose some kind of structure, in order to help outsiders find their way around.
  • Does the app have unit tests or integration tests? If so, run them and make sure they complete without any failures. Bonus points if tools are in use for measuring test coverage.

Doing all that should take several minutes for each app, regardless of the app's size, unless you encounter major problems somewhere along the line. Finding things that are not to your liking on one or two of those points doesn't necessarily mean that you've got a huge problem on your hands, but by considering your findings here you can start to get a sense of any project's overall "smell".

Phase Two: Diving Deep

After that, do a closer examination of each app, starting with the ones that set off the most warning flags in your head during the initial examination. You should look at every source code file (if you have the time to do so), reading through the code with all of these things in mind. You'll probably want to take notes as you go along, when you find things that need improving.

Objective-C

  • Are the latest Objective-C features from the past few years being used? This includes implicit accessor and instance variable synthesis, new syntactic shortcuts for creation of NSNumber/NSArray/NSDictionary, new syntax for array and dictionary indexing, etc.
  • Are instance variables and properties used in a consistent way? Does the code use accessors where appropriate, keeping direct instance variable access limited to init and dealloc methods?
  • Are properties declared with the correct storage semantics? (e.g. copy instead of strong for value types)
  • Are good names used for classes, methods, and variables?
  • Are there any classes that seem overly long? Maybe some functionality should be split into separate classes.
  • Within each class, are there many methods that seem too long, or are things split up nicely? Objective-C code is, by necessity, longer than the corresponding code would be in a language like Ruby, but generally shorter is better. Anything longer than ten or fifteen lines might be worth refactoring, and anything longer than 30 or 40 lines is almost definitely in need of refactoring.
  • Is the app compiled with ARC, MRR, or a mix? If not all ARC, why not?

Cocoa

  • Does the app make good use of common Cocoa patterns, such as MVC, notifications, KVO, lazy-loading, etc? Are there any efforts underway to adopt patterns that aren't backed by Apple but are gaining steam in the iOS world, such as ReactiveCocoa and MVVM?
  • Are there view-controllers that are overloaded with too much responsibility?
  • If discrete sections of the app need to communicate with each other, how do they do so? There are multiple ways of accomplishing this (KVO, notifications, a big pile of global variables, etc), each with their own pros and cons.

Model Layer

  • If the app is using Core Data, does the data model seem sufficiently normalized and sensible? Is the Core Data stack set up for the possibility of doing some work on a background thread? See Theo's guide to core data concurrency for more on this.
  • If not using Core Data, does the app store data using some other techniques, and if so, does it seem reasonable?
  • At the other end of the spectrum, does the app skip model classes to a large extent, and just deal with things as dictionaries?

GUI

  • Is the GUI created primarily using xibs, storyboards, or code?
  • Is GUI layout done with constraints, the old springs'n'struts, or hard-coded frame layout?
  • Does the running app have a reasonable look and feel?

Network Layer

  • Is all networking done using asynchronous techniques, allowing the app to remain responsive?
  • If a 3rd-party network framework is being used, is it a modern, supported framework, or something that's become a dead end?
  • If no 3rd-party network framework is in use, are Apple's underlying classes being used in a good way? (There are plenty of ways to get this wrong).
  • Does the app function in a fluid manner, without undesirable timeouts or obvious network lags affecting the user?

Other

  • Is the app localized for multiple languages? If so, is this done using standard iOS localization techniques?
  • If there are tricky/difficult things happening in the code, are these documented?

This is a pretty hefty set of things to consider. Depending on the code base, you won't necessarily be able to find a clear yes/no answer for all of these questions, and for certain types of apps, some of these points will be meaningless. Note that I'm not saying exactly what I think are the "right" answers for all of the questions listed here, although I certainly have my share of strong opinions about most of these (but those are topics for other blog posts). Even if you wouldn't agree with my answers, these are probably all points that are worth thinking about and discussing with co-workers and other collaborators to figure out just what seems right for your projects.

Episode #441 - February 18th, 2014

Posted 2 months back at Ruby5

In this episode we cover mruby 1.0, Hound CI, ActiveInteraction, Rails Flash Partials, Inch, Inheritable Aliases, and a big Rails for Zombies update. Put down your brains and your entrails.

Listen to this episode on Ruby5

Sponsored by TopRubyJobs

If you're looking for a top Ruby job or for top Ruby talent, then you should check out Top Ruby Jobs. Top Ruby Jobs is a website dedicated to the best jobs available in the Ruby community.
This episode is sponsored by Top Ruby Jobs

mruby 1.0 Released!

Earlier this month, mruby 1.0 was released. This is a lightweight implementation of the Ruby language which can be linked and embedded within an application. You can also compile Ruby programs into bytecode.
mruby 1.0 Released!

Hound CI

Scott Albertson from thoughtbot just released Hound CI, a service that reviews GitHub pull requests for style guide violations. It provides guidelines for things like git workflow, code formatting, naming, organization, and language-specific conventions for languages like Sass, Ruby, CoffeeScript, Objective-C, and Python. And it even includes some Rails development conventions for HTML, routing, background jobs, and testing.
Hound CI

Confidently Manage Business Logic with ActiveInteraction

OrgSync recently released version 1.0 of their gem ActiveInteraction, which helps manage application-specific business logic. It's a unique way to keep business logic out of your models and controllers.
Confidently Manage Business Logic with ActiveInteraction

Rails Flash Partials

Zack Siri from Codemy wrote to us about another screencast he’s created, this time it’s about Rails Flash Partials. Setting up flash messages in Rails is really simple, but it can become more complex as your application grows. Rails partials are great for keeping your code DRY, and flash messages are no exception.
Rails Flash Partials

Inch

There are lots of libraries to help you rate your code based on complexity, code coverage, and so on. But this week I found a library called Inch, by René Föhring, that grades how well your code is documented. Check it out next time you need to beef up the documentation on a project.
Inch

Inheritable Aliases in Ruby

Ruby’s method aliases are pretty handy, but if you alias_method in a class and then inherit from that class, overriding the original method won’t affect the alias. One way to solve this is by using the Forwardable module and its def_delegator method from the Ruby standard library. However, a better solution is outlined in Nate Smith’s blog post, in which he describes writing a custom inheritable_alias method.
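To see why a plain alias breaks under inheritance, here is a minimal sketch (hypothetical class names, not from Nate Smith's post): alias_method copies the implementation at the moment the alias is defined, while ordinary delegation re-dispatches on every call and so picks up subclass overrides:

```ruby
class Greeter
  def greet
    "hello"
  end

  # The alias captures Greeter#greet as it exists right now.
  alias_method :salute, :greet

  # Plain delegation re-dispatches on every call.
  def hail
    greet
  end
end

class LoudGreeter < Greeter
  def greet
    "HELLO"
  end
end

LoudGreeter.new.salute # => "hello" -- the alias still runs the parent's copy
LoudGreeter.new.hail   # => "HELLO" -- dispatch picks up the override
```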
Inheritable Aliases in Ruby

Rails for Zombies Updated!

Over on Code School we just updated the original Rails for Zombies to be compatible with Rails 4 and Ruby 2. We made massive improvements to the videos as well, so if you know anyone who needs to get started with Ruby on Rails, you know where to send them.
Rails for Zombies Updated!

Thank You for Listening to Ruby5

Ruby5 is released Tuesday and Friday mornings. To stay informed about and active with this podcast, we encourage you to do one of the following:

Thank You for Listening to Ruby5

Your Docker image might be broken without you knowing it

Posted 2 months back at Phusion Corporate Blog

Docker is an awesome new technology for the creation of lightweight containers. It has many purposes and serves as a good building block for PaaS, application deployments, continuous integration systems and more. No wonder it’s becoming more and more popular every day.

However, what a lot of people may not realize is that the operating system inside the container must be configured correctly, and that this is not easy to do. Unix has many strange corner cases that are hard to get right if you are not intimately familiar with the Unix system model, and these can cause a lot of strange problems. In other words, your Docker image might be broken without you knowing it.

To raise awareness of this problem, we’ve published a website which explains it in detail: what exactly could be broken, why it’s broken that way, and what can be done to fix it.

And to make it easier for other Docker users to get things right, we’ve published a preconfigured image — baseimage-docker — which does get everything right. This potentially saves you a lot of time.

Learn the right way to build your Dockerfile.

One of the gripes we have with random Docker images on the Docker registry is that they’re often very poorly documented and do not provide the original Dockerfile from which they were created. With baseimage-docker, we’re breaking that tradition by providing excellent documentation and by making the entire build reproducible. The Dockerfile and all other sources are available on GitHub for everyone to see and modify.

Happy Docking!

Using Arel to Compose SQL Queries

Posted 2 months back at GIANT ROBOTS SMASHING INTO OTHER GIANT ROBOTS - Home

Rails gives us a great DSL for constructing most queries. With its knowledge of the relationships between our tables, it's able to construct join clauses with nothing more than the name of a table. It even aliases your tables automatically!

The method we most often reach for when querying the database is the where method. 80% of the time, your query will only be checking equality, which is what where handles. where is smart. It handles the obvious case:

where(foo: 'bar') # => WHERE foo = 'bar'

It also handles nils:

where(foo: nil) # => WHERE foo IS NULL

It handles arrays:

where(foo: ['bar', 'baz']) # => WHERE foo IN ('bar', 'baz')

It even handles arrays containing nil!

where(foo: ['bar', 'baz', nil]) # => (WHERE foo IN ('bar', 'baz') OR foo IS NULL)

With Rails 4, we can also query for inequality by using where.not. However, where has its limitations: it can only combine statements using AND, and it doesn't provide a DSL for comparison operators other than = and <>.
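For completeness, where.not follows the same pattern as the examples above (illustrative, with the generated SQL shown as a comment):

```ruby
where.not(foo: 'bar') # => WHERE foo != 'bar'
where.not(foo: nil)   # => WHERE foo IS NOT NULL
```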

When faced with a query that requires an OR statement, or one that needs numeric comparisons such as <=, many Rails developers reach for a SQL string literal. However, there's a better way.

Arel

Arel is a library that was introduced in Rails 3 for use in constructing SQL queries. Every time you pass a hash to where, it goes through Arel eventually. Rails exposes this with a public API that we can hook into when we need to build a more complex query.

Let's look at an example:

class ProfileGroupMemberships < Struct.new(:user, :visitor)
  def groups
    @groups ||= user.groups.where("private = false OR id IN (?)", visitor.group_ids)
  end
end

When we decide which groups to display on a user's profile, we have the following restriction: the visitor may only see a group listed if the group is public, or if both users are members of it. Even for a minor query like this, there are several reasons to avoid using SQL string literals here.

  • Abstraction/Reuse
    • If we wanted to reuse any piece of this query, we would end up with a leaky abstraction at best involving string interpolation.
  • Readability
    • As complex SQL queries grow, they can quickly become difficult to reason about. Since they're so difficult to break apart, the reader often has to understand the entire query to understand any individual part.
  • Reliability
    • If we join to another table, our query will immediately break due to the ambiguity of the id column. Even if we qualify the columns with the table name, this will break as well if Rails decides to alias the table name.
  • Repetition
    • Oftentimes we end up rewriting code that we already have as a scope on the class, just to be able to use it with an OR statement.

Refactoring to use Arel

The method Rails provides to access the underlying Arel interface is called arel_table. If you're working with another class's table, the code may become more readable if you assign a local variable or create a method to access the table.

def table
  Group.arel_table
end

The Arel::Table object acts like a hash containing each column on the table. The columns given by Arel are a type of Node, which means they have several methods available for constructing queries. You can find a list of most of the methods available on nodes in the file predications.rb.
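For reference, here are a few of the predication methods in use (an illustrative sketch; the generated SQL fragments shown in the comments are approximate and depend on your table):

```ruby
table[:id].eq(1)            # "groups"."id" = 1
table[:id].not_eq(1)        # "groups"."id" != 1
table[:id].in([1, 2, 3])    # "groups"."id" IN (1, 2, 3)
table[:id].gteq(10)         # "groups"."id" >= 10
table[:name].matches('a%')  # "groups"."name" LIKE 'a%'
```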

When breaking apart a query to use Arel, I find a good rule of thumb is to extract a method anywhere the word AND or OR is used, or wherever something is wrapped in parentheses. Keeping this rule in mind, we end up with the following:

class ProfileGroupMemberships < Struct.new(:user, :visitor)
  def groups
    @groups ||= user.groups.where(public.or(shared_membership))
  end

  private

  def public
    table[:private].eq(false)
  end

  def shared_membership
    table[:id].in(visitor.group_ids)
  end

  def table
    Group.arel_table
  end
end

The resulting code is slightly more verbose due to Arel's interface, but we've given intention-revealing names to the underlying pieces and are able to compose them in a satisfying fashion. The body of our groups method now describes the business logic we want, as opposed to how it is implemented.

With more complex queries, this can go a long way towards being able to easily reason about what a query is accomplishing, as well as debugging individual pieces. It also becomes possible to reuse pieces of scopes with OR clauses, or in the body of JOIN ON statements.

What's next?

  • Learn more about composition over inheritance in Ruby with Ruby Science

Senior Ruby on Rails developer. £50k Award winning company, prestigious country location, Oxfordshire near Reading/High Wycombe

Posted 2 months back at Ruby on Rails, London - The Blog by Dynamic50

Great opportunity for an experienced developer specialising in Ruby on Rails.

Prestigious countryside town location, Oxfordshire near Reading/High Wycombe

Working in a close-knit team, directly with the CTO, on an award-winning web application.

Opportunity to really contribute to an active product roadmap in an award-winning small company. Really make a difference.

- 3+ years' experience developing commercial web applications using Ruby on Rails, Python, Java, .NET, PHP or similar

- Passion for programming a must, daily stand-ups, etc.

- Knowledge of relational database design, SQL.

- Extensive experience with JavaScript, AJAX, CSS and HTML

- Experience with Agile methodologies

- Working knowledge of version control systems, SVN, Git.

- Experience with test driven development

- Knowledge of Eclipse would be an advantage

- Knowledge of Linux would be an advantage

Salary £50k

Send your resume to contactus@dynamic50.com or give us a call on 02032862879

Entry Level Ruby on Rails developer. £25k+ Award winning company, prestigious country location, Oxfordshire near Reading/High Wycombe

Posted 2 months back at Ruby on Rails, London - The Blog by Dynamic50

Great opportunity to develop your skills as a junior Ruby on Rails developer.

Prestigious countryside town location, Oxfordshire near Reading/High Wycombe

Working in a close-knit team, directly with the CTO, on an award-winning web application.

Opportunity to really contribute to an active product roadmap in an award-winning small company. Really supportive development environment.
  • 2+ years' experience developing commercial web applications using Ruby on Rails, Python, Java, .NET, PHP or similar
  • Passion for programming a must, great opportunity to learn in a supportive environment.
  • Knowledge of relational databases i.e. SQL preferred
  • Extensive experience with JavaScript, AJAX, CSS and HTML
  • Experience with Agile methodologies
Salary £25k+ depending on experience

Send your resume to contactus@dynamic50.com or give us a call on 02032862879

Episode #440 – February 14th, 2014

Posted 2 months back at Ruby5

PostgreSQL! Such wow! Much gitsh! Ask Ruby, maybe not? R u an activity feed? Hakiri amaze on the Doge 5!

Listen to this episode on Ruby5

Sponsored by New Relic

New Relic is _the_ all-in-one web performance analytics product. It lets you manage and monitor web application performance, from the browser down to the line of code. With Real User Monitoring, New Relic users can see browser response times by geographical location of the user, or by browser type.
This episode is sponsored by New Relic

PostgreSQL Awesomeness

Dig a little deeper into that database you’re most likely running. Hubert Lepicki has a nice overview of what makes PostgreSQL so nice for Rails development.
PostgreSQL Awesomeness

gitsh

Thoughtbot brings you an interactive shell for git. Why did they do this? Why not! It’s a simple tool, but effective. Save some typing and get some nice features for interacting with git.
gitsh

Ask Ruby or maybe not?

Pat Shaughnessy has a great post up on being more functional with your Ruby code. Be sure to follow the link to Dave Thomas’ clarifying post as well.
Ask Ruby or maybe not?

Enumerate your activity feed

The GiveGab team gives a high-level description of how they implemented their activity feed. Follow the pointers for more details on this sticky problem.
Enumerate your activity feed

Hakiri Facets

Hakiri launched a free service this week that scans your Gemfile.lock and reports known CVE vulnerabilities.
Hakiri Facets

A Guide to Core Data Concurrency

Posted 2 months back at GIANT ROBOTS SMASHING INTO OTHER GIANT ROBOTS - Home

The iOS ethos of instant, responsive UI elements means putting as much work as possible on background threads and as little as possible on the main thread. In most cases we are fine using an NSOperationQueue or GCD, but getting concurrency to work in Core Data sometimes feels more like black magic than science. This post intends to demystify Core Data concurrency and offer two ways to go about it.

Setup 1: Private queue context and main queue context off of single persistent store coordinator

In this setup we will create two NSManagedObjectContext instances: one with concurrency type NSMainQueueConcurrencyType and the other with NSPrivateQueueConcurrencyType. We will observe NSManagedObjectContextDidSaveNotification to propagate saves between them.

We add two methods to our TBCoreDataStore.h file:

+ (NSManagedObjectContext *)mainQueueContext;
+ (NSManagedObjectContext *)privateQueueContext;

In our implementation file we add two private properties and lazy load them:

@interface TBCoreDataStore ()

@property (strong, nonatomic) NSPersistentStoreCoordinator *persistentStoreCoordinator;
@property (strong, nonatomic) NSManagedObjectModel *managedObjectModel;

@property (strong, nonatomic) NSManagedObjectContext *mainQueueContext;
@property (strong, nonatomic) NSManagedObjectContext *privateQueueContext;

@end

#pragma mark - Singleton Access

+ (NSManagedObjectContext *)mainQueueContext
{
    return [[self defaultStore] mainQueueContext];
}

+ (NSManagedObjectContext *)privateQueueContext
{
    return [[self defaultStore] privateQueueContext];
}

#pragma mark - Getters

- (NSManagedObjectContext *)mainQueueContext
{
    if (!_mainQueueContext) {
        _mainQueueContext = [[NSManagedObjectContext alloc] initWithConcurrencyType:NSMainQueueConcurrencyType];
        _mainQueueContext.persistentStoreCoordinator = self.persistentStoreCoordinator;
    }

    return _mainQueueContext;
}

- (NSManagedObjectContext *)privateQueueContext
{
    if (!_privateQueueContext) {
        _privateQueueContext = [[NSManagedObjectContext alloc] initWithConcurrencyType:NSPrivateQueueConcurrencyType];
        _privateQueueContext.persistentStoreCoordinator = self.persistentStoreCoordinator;
    }

    return _privateQueueContext;
}

Next we override the initializer to add our observing:

- (id)init
{
    self = [super init];
    if (self) {
        [[NSNotificationCenter defaultCenter] addObserver:self selector:@selector(contextDidSavePrivateQueueContext:) name:NSManagedObjectContextDidSaveNotification object:[self privateQueueContext]];
        [[NSNotificationCenter defaultCenter] addObserver:self selector:@selector(contextDidSaveMainQueueContext:) name:NSManagedObjectContextDidSaveNotification object:[self mainQueueContext]];
    }
    return self;
}

- (void)dealloc
{
    [[NSNotificationCenter defaultCenter] removeObserver:self];
}

- (void)contextDidSavePrivateQueueContext:(NSNotification *)notification
{
    @synchronized(self) {
        [self.mainQueueContext performBlock:^{
            [self.mainQueueContext mergeChangesFromContextDidSaveNotification:notification];
        }];
    }
}

- (void)contextDidSaveMainQueueContext:(NSNotification *)notification
{
    @synchronized(self) {
        [self.privateQueueContext performBlock:^{
            [self.privateQueueContext mergeChangesFromContextDidSaveNotification:notification];
        }];
    }
}

Now we have working private queue and main queue contexts, each of which is updated whenever the other saves. Here is an example usage:

[[TBCoreDataStore privateQueueContext] performBlock:^{
    NSFetchRequest *fetchRequest = [NSFetchRequest fetchRequestWithEntityName:@"MyEntity"];
    NSArray *results = [[TBCoreDataStore privateQueueContext] executeFetchRequest:fetchRequest error:nil];
}];

One of the great advantages of this type of Core Data stack is that it lets us make full use of NSFetchedResultsController. For example, we can parse JSON from a web service into a Core Data object as a background operation, then rely on the fetched results controller to tell us when that object has changed and update the UI accordingly.
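For instance, a background JSON import might look roughly like this (MyEntity, its name attribute, and the parsedJSONObjects array are hypothetical; an NSFetchedResultsController attached to the main queue context would pick up the merged changes):

```objc
[[TBCoreDataStore privateQueueContext] performBlock:^{
    NSManagedObjectContext *context = [TBCoreDataStore privateQueueContext];

    for (NSDictionary *json in parsedJSONObjects) {
        MyEntity *entity = [NSEntityDescription insertNewObjectForEntityForName:@"MyEntity"
                                                         inManagedObjectContext:context];
        entity.name = json[@"name"];
    }

    // Saving posts NSManagedObjectContextDidSaveNotification, which our
    // store merges into the main queue context, firing the fetched
    // results controller's delegate callbacks.
    [context save:NULL];
}];
```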

Setup 2: The throwaway main queue context backed by a private queue context

In this setup we have only one NSManagedObjectContext that stays with us for the lifetime of the app: a private queue context from which we create child main queue contexts. This lets us spend as much time as possible in the background, creating a new main queue context only when we need to do UI work.

Starting from our base core data setup we add the following to TBCoreDataStore.h:

+ (NSManagedObjectContext *)newMainQueueContext;
+ (NSManagedObjectContext *)defaultPrivateQueueContext;

In our implementation file we add a single property and lazy load it:

@interface TBCoreDataStore ()

@property (strong, nonatomic) NSPersistentStoreCoordinator *persistentStoreCoordinator;
@property (strong, nonatomic) NSManagedObjectModel *managedObjectModel;

@property (strong, nonatomic) NSManagedObjectContext *defaultPrivateQueueContext;

@end

#pragma mark - Singleton Access

+ (NSManagedObjectContext *)newMainQueueContext
{
    NSManagedObjectContext *context = [[NSManagedObjectContext alloc] initWithConcurrencyType:NSMainQueueConcurrencyType];
    context.parentContext = [self defaultPrivateQueueContext];
    
    return context;
}

+ (NSManagedObjectContext *)defaultPrivateQueueContext
{
    return [[self defaultStore] defaultPrivateQueueContext];
}

#pragma mark - Getters

- (NSManagedObjectContext *)defaultPrivateQueueContext
{
    if (!_defaultPrivateQueueContext) {
        _defaultPrivateQueueContext = [[NSManagedObjectContext alloc] initWithConcurrencyType:NSPrivateQueueConcurrencyType];
        _defaultPrivateQueueContext.persistentStoreCoordinator = self.persistentStoreCoordinator;
    }

    return _defaultPrivateQueueContext;
}

Here we have no need to observe save notifications, as any save on a created main queue context bubbles up to its parent, the defaultPrivateQueueContext. This approach is very robust and spends the least possible time on the main queue. The downside is that we cannot use NSFetchedResultsController out of the box, though we could cobble together our own version using the various notifications Core Data sends.

Let's say we have a really big database (20k+ objects) and we want to run a complex fetch. The best way to go about this is to first fetch the NSManagedObjectIDs on the background queue, then hop onto the main queue and call -objectWithID: on the results. This is how we should always pass managed objects between threads.

[[TBCoreDataStore defaultPrivateQueueContext] performBlock:^{

    NSFetchRequest *fetchRequest = [NSFetchRequest fetchRequestWithEntityName:@"MyEntity"];
    fetchRequest.resultType = NSManagedObjectIDResultType;

    NSArray *managedObjectIDs = [[TBCoreDataStore defaultPrivateQueueContext] executeFetchRequest:fetchRequest error:nil];

    NSManagedObjectContext *mainQueueContext = [TBCoreDataStore newMainQueueContext];
    [mainQueueContext performBlock:^{

        for (NSManagedObjectID *managedObjectID in managedObjectIDs) {
            MyEntity *myEntity = [mainQueueContext objectWithID:managedObjectID];
            // Update UI with myEntity
        }
    }];
}];

In this scenario we need to update our UI with a bunch of MyEntity managed objects. For efficiency's sake we perform the costly fetch in the background, setting the result type to NSManagedObjectIDResultType so that the fetch returns NSManagedObjectIDs. We then create a new main queue context and get each managed object from the cache using [mainQueueContext objectWithID:managedObjectID]. These objects are then safe to use on the main thread.

If the fetch is not too intensive, we can just perform it on the new main queue context. If you use this stack, I recommend turning the following into a snippet:

NSManagedObjectContext *mainQueueContext = [TBCoreDataStore newMainQueueContext];
[mainQueueContext performBlock:^{
    <#code#>
}];

Caveats

When we perform an extremely intensive fetch operation (10+ seconds) on the background thread and simultaneously need to perform operations on another thread, we will run into blocking. To prevent this, we should perform the intensive operation on an entirely new context linked to an entirely new persistent store coordinator. This ensures the operation stays in the background.
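A sketch of such an isolated stack (managedObjectModel and storeURL stand in for however your app locates its model and store file):

```objc
// An entirely separate PSC and context, so a long-running fetch here
// never blocks contexts attached to the main coordinator.
NSPersistentStoreCoordinator *isolatedPSC =
    [[NSPersistentStoreCoordinator alloc] initWithManagedObjectModel:managedObjectModel];
[isolatedPSC addPersistentStoreWithType:NSSQLiteStoreType
                          configuration:nil
                                    URL:storeURL
                                options:nil
                                  error:NULL];

NSManagedObjectContext *isolatedContext =
    [[NSManagedObjectContext alloc] initWithConcurrencyType:NSPrivateQueueConcurrencyType];
isolatedContext.persistentStoreCoordinator = isolatedPSC;

[isolatedContext performBlock:^{
    // Run the extremely intensive fetch here.
}];
```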

Useful utility methods

An extremely useful little one-liner turns an NSManagedObjectID into a string. We can use this to store the ID in the user defaults.

@implementation NSManagedObjectID (TBExtras)

- (NSString *)stringRepresentation
{
    return [[self URIRepresentation] absoluteString];
}

@end

The flip side of this is then to get an NSManagedObjectID out of such a string. Add this method to your CoreDataStore:

+ (NSManagedObjectID *)managedObjectIDFromString:(NSString *)managedObjectIDString
{
    return [[[self defaultStore] persistentStoreCoordinator] managedObjectIDForURIRepresentation:[NSURL URLWithString:managedObjectIDString]];
}

With these two methods we have an easy way to build a cache on disk by using a plist. This is useful for saving a list of managed objects which need to be updated or maybe deleted between app launches.
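As a sketch, persisting and recovering an ID across launches might look like this (the defaults key and myEntity are illustrative):

```objc
// Store the ID on the way out...
NSString *idString = [myEntity.objectID stringRepresentation];
[[NSUserDefaults standardUserDefaults] setObject:idString
                                          forKey:@"TBPendingObjectID"];

// ...and resolve it again on the next launch.
NSString *saved = [[NSUserDefaults standardUserDefaults]
                      stringForKey:@"TBPendingObjectID"];
NSManagedObjectID *objectID = [TBCoreDataStore managedObjectIDFromString:saved];
```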

Creating a managed object is a pain, so here is a little method which will make your life better:

@implementation NSManagedObject (TBAdditions)

+ (instancetype)createManagedObjectInContext:(NSManagedObjectContext *)context
{
    NSEntityDescription *entity = [NSEntityDescription entityForName:NSStringFromClass([self class]) inManagedObjectContext:context];
    return [[[self class] alloc] initWithEntity:entity insertIntoManagedObjectContext:context];
}

@end

Finally, while Apple does provide a method to get an NSManagedObject from an NSManagedObjectID, we often want to convert a whole array of IDs into objects. To do that, we can use the following:

@implementation NSManagedObjectContext (TBAdditions)

- (NSArray *)objectsWithObjectIDs:(NSArray *)objectIDs
{
    if (!objectIDs || objectIDs.count == 0) {
        return nil;
    }
    __block NSMutableArray *objects = [[NSMutableArray alloc] initWithCapacity:objectIDs.count];

    [self performBlockAndWait:^{
        for (NSManagedObjectID *objectID in objectIDs) {
            if ([objectID isKindOfClass:[NSNull class]]) {
                continue;
            }

            [objects addObject:[self objectWithID:objectID]];
        }
    }];

    return objects.copy;
}

@end

What's next?

I've placed the two core data stacks on GitHub.

Mark and Gordon talked about this on Build Phase episode 18.

Custom Ember Computed Properties

Posted 2 months back at GIANT ROBOTS SMASHING INTO OTHER GIANT ROBOTS - Home

EmberJS has a lot of features for helping you build a clean JavaScript interface. One of my favorites is the computed property; Ember can watch a property or set of properties and when any of those change, it recalculates a value that is currently displayed on a screen:

fullName: (->
  "#{@get('firstName')} #{@get('lastName')}"
).property('firstName', 'lastName')

Any time the firstName or lastName of the current object change, the fullName property will also be updated.

In my last project I needed to calculate the sum of a few properties in an array. I started with a computed property:

sumOfCost: (->
  @reduce ((previousValue, element) ->
    currentValue = element.get('cost')
    if isNaN(currentValue)
      previousValue
    else
      previousValue + currentValue
  ), 0
).property('@each.cost')

This works fine, but I need to use this same function for a number of different properties on this controller as well as on others. As such, I extracted a helper function:

# math_helpers.js.coffee
class App.mathHelpers
  @sumArrayForProperty: (array, propertyName) ->
    array.reduce ((previousValue, element) ->
      currentValue = element.get(propertyName)
      if isNaN(currentValue)
        previousValue
      else
        previousValue + currentValue
    ), 0


# array_controller.js.coffee
sumOfCost: (->
  App.mathHelpers.sumArrayForProperty(@, 'cost')
).property('@each.cost')

This removes a lot of duplication, but I still have the cost property name in the helper method call as well as in the property declaration. I also have the 'decoration' of setting up a computed property in general.

What I need is something that works like Ember.computed.alias('name') but allows me to transform the object instead of just aliasing a property:

# computed_properties.js.coffee
App.computed = {}

App.computed.sumByProperty = (propertyName) ->
  Ember.computed "@each.#{propertyName}", ->
    App.mathHelpers.sumArrayForProperty(@, propertyName)

# array_controller.js.coffee
sumOfCost: App.computed.sumByProperty('cost')

This allows me to define a 'sum' for a property without a lot of duplication. In this application I have a lot of similar functions for computing information about arrays. Having a single function for the calculation let me unit test it in isolation and feel confident that it would work on any other object. It also makes the model or controller much simpler for anyone viewing the class for the first time.

Prevent Spoofing with Paperclip

Posted 2 months back at GIANT ROBOTS SMASHING INTO OTHER GIANT ROBOTS - Home

Egor Homakov recently brought to my attention a slight problem with how Paperclip handles some content type validations. Namely, if an attacker puts an entire HTML page into the EXIF tag of a completely valid JPEG and names the file "gotcha.html", they could potentially trick users into an XSS vulnerability.

Now, this is kind of a convoluted means of attacking. It involves:

  • A server that's running Paperclip configured to not validate content types or filenames
  • A front-end HTTP server that will serve the assets with a content type based on their file name
  • The attacker must get the user to load the crafted image directly (injecting it in an img tag is not enough)

Even with this list of requirements, it's possible, and so we need to take it seriously.

Content Type Spoof Detection

To combat this, we've released Paperclip 4.0 (and then quickly released 4.1), which has a few new restrictions in order to improve out-of-the-box security. The change that handles this problem directly is an automatic validation that checks uploaded files for content type spoofing. That is, if you upload a JPEG and name it .html, it's not going to get through. This happens automatically during the upload process, and uses the file command in order to determine the actual content type of the file. If you don't have file already (for example, because you're on Windows), you can install the file command separately.
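You can see the same kind of check Paperclip relies on from the command line, since file inspects a file's bytes rather than its name (the filename here is illustrative):

```shell
# A valid JPEG header saved under a misleading .html name.
printf '\xff\xd8\xff\xe0\x00\x10JFIF\x00' > gotcha.html

# `file` reports the real content type, ignoring the extension.
file -b --mime-type gotcha.html   # → image/jpeg
```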

Required Content Type or Filename Validations

Next, we're also turning on a new requirement: you must have a content type or filename validation, or you must explicitly opt out of it.

class ActiveRecord::Base
  has_attached_file :avatar

  # Validate content type
  validates_attachment_content_type :avatar, :content_type => /\Aimage/

  # Validate filename
  validates_attachment_file_name :avatar, :matches => [/png\Z/, /jpe?g\Z/]

  # Explicitly do not validate
  do_not_validate_attachment_file_type :avatar
end

Note that older versions of Paperclip are susceptible to this attack if you don't have a content type validation. If you do have one, then you are protected against people crafting images to perform this type of attack.

The filename validation is new with 4.0.0. We know that some people don't store content types on their models but still need a way to validate uploads. Using the file name can help ensure you're only getting the kinds of files you expect, and all Paperclip attachments have one. This allows those users to upgrade without a possibly costly migration of content type data into their database.

Content Type Mapping

Immediately, some users reported problems with the spoof detection added in 4.0. In order to fix this, we released 4.1 that added an option called :content_type_mappings that will allow you to specify an extension that cannot otherwise be mapped. For example:

Paperclip.options[:content_type_mappings] = {
  :pem => "text/plain"
}

This will allow users to upload ".pem" files (public certificates for encryption), because file considers those files to be "text/plain". It tells Paperclip "I consider a .pem file that file calls 'text/plain' to be correct," and the upload will be accepted.

Handling console.log errors

Posted 2 months back at Web On Rails

https://twitter.com/bansalakhil/status/433950675613921280

Announcing Taco Tuesdays: A Product Design Talk Series at thoughtbot SF

Posted 2 months back at GIANT ROBOTS SMASHING INTO OTHER GIANT ROBOTS - Home

We're excited to introduce a new talk series focused on product design, hosted by thoughtbot in San Francisco.

On February 25th at 6:30pm, we will host the first Taco Tuesdays event at thoughtbot San Francisco (85 2nd St, Suite 700, 94105).

Our first two speakers are Adam Morse, product designer at Salesforce, and Wells Riley, product designer at KickSend and Hack Design. They will give talks on the topic: "What is a design problem you recently encountered, and how did you approach it?"

Food (in the form of tacos) and beverages will be provided.

RSVP at Eventbrite for free to join us. Hope to see you there!