ActiveTest::Redesign < ActiveTest::Examination

Posted over 7 years back at Wood for the Trees

The following is taken straight from the first two sections of the ActiveTest Redesign Draft. Feel free to comment, ask for features, steer me away from the unnecessary or just say you’re interested.

1.0: ADDRESSING A NEED

This section takes a cursory look at the evolution of ActiveTest.

1.1: The Original Premise

I originally started writing ActiveTest to address problems I had with Test::Unit. It lacked the following for my projects:

  • inheritance
  • modularity (adding)
  • flexibility (redefining)
  • granular verbosity
  • observing
  • BDD interface

The most annoying omissions were the first two. No matter how well my tests documented the code, I hated breaking the habit of not repeating myself. There are simply times when different parts of an application share a common interface or set of functions, and that is all the tests should need to say.

1.2: Problems with ActiveTest 0.1.0

The first beta of ActiveTest was a flop; it was too specific to Rails, especially controllers, and used too many metaprogramming techniques. Consequently, it created doubt about the failures in test cases—were they ActiveTest or real? It mildly discouraged TDD and, as seen in my small series of articles on the mistakes I’ve made, there was too much focus on the One-Liner™.

1.3: Current State of Need

From what I can tell, there are three related schools of thought about testing which have members in the Ruby community: Test Driven Development, Behaviour Driven Development and Story Driven Development. The Agile and XP camps of developers pick among these according to taste. ActiveTest can easily cater to all schools and groups.

Looking at Test::Unit and comparing it to other projects, it is apparent that Rubyists are being left high and dry. Concessions have come in the form of ZenTest for automation, RSpec for BDD, RFuzz for fuzzing, Mocha for mocking, and rcov for coverage. Each of them (except RSpec) is forced to extend an inherently closed model, Test::Unit::TestCase, and the arcane classes which drive it.

2.0: APPROACHING THE REDESIGN

This section explains the purpose and approach for the new ActiveTest framework.

2.1: Purpose

The new ActiveTest is to provide an advanced, backward-compatible replacement for Test::Unit. By this design, it is compatible with many projects already extending Test::Unit, allowing for quick uptake and usage. It is also to be at the forefront of testing theories and advanced testing techniques both within and outside the Ruby community. Ultimately, its purpose is to make developers in all other languages very, very jealous.

2.2: Summary

ActiveTest must change from being a Rails plugin to a gem. It must be compatible with other gems and plugins within the Ruby community or, if compatibility is too complicated, incorporate their ideas with due credit. It must take head-on the latest in BDD and SDD styles. It must reuse as much of its old code as possible to this end. Finally, it must draw on projects in other languages, most notably Java and Python, to make use of the latest developments.

2.3: Influential Ruby projects

2.4: Influential projects in other languages

2.5: Objectives

The following features are retained from Test::Unit:

  • assertions
  • setup & teardown
  • interfaces: Console, Fox, GTK, GTK2, TK
  • auto-running
  • filtering
  • backtrace
  • text report
  • unit tests

The following features from Ruby projects remain compatible:

  • mocking and stubbing (through Mocha)
  • fuzzing (through RFuzz)
  • coverage reports (through rcov)

The following features are incorporated from other projects:

  • data fixtures (from ActiveRecord)
  • automation (from ZenTest)
  • colorized output (from ZenTest & Testoob)
  • granular test selection (from Testoob)
  • different report formats (from Testoob)
  • test threading (from Testoob & GroboUtils)
  • Test Anything Protocol (from TAP, Test::Harness & Test::More)

The following features are specific to ActiveTest:

  • co-existence of TDD, BDD & SDD testing styles
  • inheritance
  • test type variety: unit, integration, system, performance
  • granular verbosity (backtrace, debug flags)
  • interface: Web/AJAX

ActiveTest: Examination, Part V: Redesigning Weak Areas

Posted over 7 years back at Wood for the Trees

There are three aspects of ActiveTest which really need to be cleaned up: the Subject abstract class, metaprogramming subjects, and the extension of Test::Unit.

The Subject Abstraction

It is pretty obvious now, without a real need for define_behavioral, that ActiveTest::Subject as an interface class is an unnecessary intermediary between Base and the subjects which inherit from it. So trim the fat and move all of Subject’s functionality to Base. The result is a cleaner hierarchy and less temptation to put something in that middle-man model.

class ActiveTest::Controller < ActiveTest::Base
end

This change in design clears up the association between subjects and the base. Base, which was originally a general abstract for any kind of test case, becomes a general abstract for subjects. The change is mostly conceptual. Functionally, it still provides the rudiments of an ActiveTest suite: inheritance and nested setup/teardown. The next logical step, then, is to start using those advantages with a concrete class: Controller, for example.

Subject Metaprogramming

The only concrete subject written so far is Controller and it is a harbinger of doom for the others. It is bloated with metaprogramming, namely succeeds_on and its peers, because of a misunderstanding of where to use macros. Do we need macros in tests? Not to define them—Ruby is clean enough to do it already. If macros clean up readability, then when does something become muddied in a test? Consider the following real situation:

  def test_should_add_quantity_for_product
    @cart = carts(:first)
    @item = @cart.line_items[0]
    assert_no_difference @cart.line_items, :count do
      assert_difference @item, :quantity, 1, :reload do
        @cart.add_product(@item.product.id)
      end
    end
  end

What’s going on? In this example from a simple test on shopping carts, you want to ensure that the quantity of an item in a cart is incremented when a duplicate item is added to the cart. The item and cart objects are associated through the line_items join model. Because the Cart model updates Item, the item instance needs to be reloaded to reflect changes in the database.

Essentially, we want to test changes being propagated down a hierarchy of Record associations. This situation is not so uncommon and we can easily think of more instances. Once we have a repetition of this pattern, we can write a macro to open up the hierarchy:

  expand_hierarchy :cart_to_items, :reload => true do |a|
    a.cart = carts(:first)
    a.line_items = a.cart.line_items
    a.item = a.line_items[0]
  end

  def test_should_add_item_quantity_for_product
    cart_to_items do |a|
      assert_difference a.item, :quantity do
        a.cart.add_product(a.item.product.id)
      end
    end
  end

  def test_should_increment_line_items_for_product
    cart_to_items do |a|
      assert_difference a.line_items, :count do
        a.cart.add_product(a.item.product.id)
      end
    end
  end

The expand_hierarchy macro here helps us remove any extra tests or setup from one test case as well as make it clearer what we are testing for: we want to find, within a hierarchy, the correct change. The easiest way to do that is to expand out the parameters. You can think of it as a ‘setup-on-demand’. Also, when any one instance is updated, the others are out of sync. The :reload flag tells the macro to offer up proxies of each object; when one changes, all of the other objects are refreshed automatically.

expand_hierarchy, then, does two things: it offers a standard interface for accessing hierarchical associations and it removes extra setup within a test case. Both make it easier to read what is happening, I’d say, but don’t affect the documentation quality. In fact, it seems to improve it. So rather than using macros to define the test cases themselves, a better use would be to package non-assertive elements of testing. These programmatic elements do not need to appear in each test case—they belong in a library, which is what ActiveTest can standardise.
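As a rough illustration, expand_hierarchy could be implemented as a class-level macro that defines a ‘setup-on-demand’ method yielding an OpenStruct. Everything below — the module name, the stand-in fixture data, and the omission of the :reload proxying — is my own sketch, not actual ActiveTest code:

```ruby
require 'ostruct'

module ExpandHierarchy
  # Define a named helper that builds a small object graph on demand
  # and yields it to the test body.
  def expand_hierarchy(name, _options = {}, &builder)
    define_method(name) do |&test_body|
      a = OpenStruct.new
      instance_exec(a, &builder)  # run the builder in the test's scope
      test_body.call(a)           # hand the expanded objects to the test
    end
  end
end

class CartTest
  extend ExpandHierarchy

  # Stand-in fixture data instead of carts(:first); the :reload proxies
  # from the article are omitted in this sketch.
  expand_hierarchy :cart_to_items do |a|
    a.cart  = { line_items: [{ quantity: 1 }] }
    a.items = a.cart[:line_items]
    a.item  = a.items[0]
  end
end
```

A test would then call `cart_to_items { |a| ... }` and receive the expanded hierarchy, exactly as in the examples above, without any per-test setup.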

Extending Test::Unit

The premise of ActiveTest began to sour when it became a tool to move test cases into stock meta-definitions. The idea of templating is fine, but the meta-programming aspect was a serious flaw. It hid all the guts in a DSL rather than giving a transparent way of writing tests faster and more easily. The reason it went in this direction is that it is the same direction taken by Test::Unit: linear testing.

So the question is: should ActiveTest extend or evolve? Looking at the problems with Test::Unit and the original design of ActiveTest, it may not be a bad idea to address the real issue which has been skirted around by many Ruby programmers for a long time now: Test::Unit isn’t flexible enough. I do not wish to tread on Nathaniel Talbott’s work, but rather to address problems that arise in more complicated environments, like Rails, and are not suited to Test::Unit.

The new version of ActiveTest, I think, should live up to its namesake and really be its own testing framework, written to be compatible with Test::Unit, but with a design that can be improved by others without lengthy review of its code. If the only change, in the worst case scenario, is to change Test::Unit::TestCase to ActiveTest::Base, then there is no issue of learning a new DSL. Instead, ActiveTest would use the same language as Test::Unit, but be completely different under the hood and consequently include much more.

Coming Up Next: ActiveTest::Redesign…

ActiveTest: Examination, Part IV: Salvaging Useful Ideas

Posted over 7 years back at Wood for the Trees

DRY and Abstraction: There is a difference between practices of DRY and Abstraction. When you are being DRY, you are either addressing a new repetition or continuing an earlier practice. The primary focus of being DRY is not to create higher level tools for future use. It is a practice of improving past and present code. It replaces, deletes, compresses, encapsulates—it is passive redesign. Abstraction, on the other hand, is active design. When you try to be DRY without a real model, you are actually abstracting. The process of abstraction is a wider programming activity which can lead quite naturally into over-design, because without real models and implementation, it often fails to address real issues.

With that short sermon, let’s make ActiveTest useful again. As I mentioned in the introduction to this mini-series, there are things to take away from the old version of ActiveTest. We’ll now look at them in full and show why they are still useful.

1. Test::Unit inheritance: filtering classes

One of the few patches in the old ActiveTest is to modify Test::Unit’s very well hidden collector filter. The idea of filtration is critical to making inheritance possible. If you have a situation where many ActiveRecord models share many attributes, for example in a Single Table Inheritance for a ‘Content’ model, and they all have a body, title, excerpt, owner, created_at, created_by, updated_at and updated_by, the tests for these attributes will be identical across all model tests. So why not create a ‘ContentTestCase’ suite which each model test inherits? With Test::Unit you can make a ‘StandardContentTests’ module and mix it in, but then you are looking for a place to put them and later looking around for those abstracted tests. Alternatively, if you want to use inheritance, those tests will be run immediately. You want to filter out the abstract class, but still be able to inherit from it. By modifying Test::Unit’s collector filter, it is possible to put anything in the ActiveTest namespace and it will not be run:

require 'test/unit/autorunner'
class Test::Unit::AutoRunner

  def initialize_with_scrub(standalone)
    initialize_without_scrub(standalone)
    @filters << proc { |t| /^ActiveTest::/ =~ t.class.name ? false : true }
  end

  alias_method :initialize_without_scrub, :initialize
  alias_method :initialize, :initialize_with_scrub
end

I won’t go on about the peculiarity of having the filter in AutoRunner, but the inaccessibility of the filter requires a monkey patch like the one above. Poking into initialize lets us add to the @filters instance variable on AutoRunner and tell it to ignore the ActiveTest namespace. This technique is mostly a matter of reading through the code and finding the right place to patch.
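The same alias-chain pattern can be exercised in miniature without Test::Unit at all. The Runner class below is a stand-in for AutoRunner, invented purely for illustration:

```ruby
# A stand-in for Test::Unit::AutoRunner, holding a list of filter procs.
class Runner
  def initialize
    @filters = []
  end
  attr_reader :filters
end

# Reopen the class and chain a new initialize onto the old one, exactly
# as the ActiveTest patch does with AutoRunner.
class Runner
  def initialize_with_scrub
    initialize_without_scrub
    # Reject anything whose class name sits in a hypothetical namespace.
    @filters << proc { |t| !(t.class.name =~ /^ActiveTest::/) }
  end
  alias_method :initialize_without_scrub, :initialize
  alias_method :initialize, :initialize_with_scrub
end
```

After the patch, every new Runner carries the extra filter, while any earlier additions to initialize still run first through the preserved alias.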

2. Metaprogramming techniques (not all of them)

Wrapping define_method has proven to be pretty pointless other than ensuring a unique method name, but developing a lower-level language to standardise class-level macros is a general idea which ActiveTest could keep with some minor adjustments. Ignoring what define_behavioral does, this is a very short macro definition (from ActiveTest::Controller):

  def assigns_records_on(action, options = {})
    define_behavioral(:assign_records_on, action, options) do
      send("call_#{action}_action", options)
      assert_assigned((options[:with] || plural_or_singular(action)).to_sym)
    end
  end

If define_behavioral were made to register behaviours for a better parsing method than method_missing, it would gain some meaning, but not really enough to keep it. The rest of the method definition, however, is quite clean for creating macros for Subjects, especially if it is compressed in the way mentioned in Part III. Making test macros in the way shown here, though, needs to be dropped, or at least used far more sparingly. We could keep the setup macro because it sets up useful instance variables and makes reasonable guesses about the test suite’s environment. So the macro idea should be kept for special cases, and the current implementation dropped entirely.

3. Nested, self-contained setup methods through sorcery method unbinding & stacking

I can’t help but be partial to the way I nested setup and teardown methods. It is my first actually useful innovation in Ruby—a trickery of method unbinding and stacking. Perhaps this bias makes me think it is still useful, but I honestly think it is a useful way to wrap Test::Unit. Let’s have a look at the way it is done in ActiveTest::Base:

  class_inheritable_array :setup_methods
  class_inheritable_array :teardown_methods

  self.setup_methods = []
  self.teardown_methods = []

  # Execute all defined setup methods beyond Test::Unit::TestCase.
  def setup_with_nesting
    self.setup_methods.each { |method| method.bind(self).call }
  end
  alias_method :setup, :setup_with_nesting

  # Execute all defined teardown methods beyond Test::Unit::TestCase.
  def teardown_with_nesting
    self.teardown_methods.each { |method| method.bind(self).call }
  end
  alias_method :teardown, :teardown_with_nesting

  # Suck in every setup and teardown defined, unbind it, remove it and
  # execute it on the child. From here on out, we nest setup/teardown.
  def self.method_added(symbol)
    unless self == ActiveTest::Base
      case symbol.to_s
      when 'setup'
        self.setup_methods = [instance_method(:setup)]
        remove_method(:setup)
      when 'teardown'
        self.teardown_methods = [instance_method(:teardown)]
        remove_method(:teardown)
      end
    end
  end

The first aspect which is useful is allowing setup and teardown nesting without affecting additions to Test::Unit made before or after loading ActiveTest. This is allowed by the code in method_added. Before ActiveTest is loaded, the methods and aliases already exist on Test::Unit; after ActiveTest is loaded, all setup or teardown methods are stacked in a class inherited array and subsequently undefined from the original class. When setup is finally called on a subclass of ActiveTest::Base, it is the ActiveTest::Base method called first. Here is an example stack:

  class Test::Unit::TestCase
    def setup; puts "a"; end;
  end
  class Test::Unit::TestCase
    def setup_with_fixtures; puts "b"; end
    alias_method_chain :setup, :fixtures
  end
  class ActiveTest::Sample < ActiveTest::Base
    def setup; puts "c"; end
  end
  class SampleTest < ActiveTest::Sample
    def setup; puts "d"; end
  end

Upon execution, it will output: d, c, b, then a. A caveat of this technique is that you must be absolutely certain that each unbound method setup_with_nesting executes is rebound to the original class or one of its descendants. Because of the way class inherited attributes work, this rule is not violated: SampleTest can run the setup method from ActiveTest::Sample because it is a descendant, but other classes inheriting from ActiveTest::Sample will not have methods from SampleTest.
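Stripped of Rails’ class_inheritable_array, the unbind-and-stack trick can be reproduced in plain Ruby. The sketch below only models the ActiveTest side of the stack (the d-then-c half of the example); the class-level accessor is a simplified stand-in for the inherited array, and the iteration is reversed so the most-derived setup runs first:

```ruby
class StackedBase
  # Simplified stand-in for class_inheritable_array: each class sees its
  # ancestors' entries plus its own.
  def self.setup_methods
    @setup_methods ||=
      (superclass.respond_to?(:setup_methods) ? superclass.setup_methods.dup : [])
  end

  # The stack runner; most-derived setup first, as in the article's example.
  def setup
    self.class.setup_methods.reverse_each { |m| m.bind(self).call }
  end

  # Capture each subclass's setup as an UnboundMethod and undefine it,
  # so only the stack runner above is ever dispatched.
  def self.method_added(symbol)
    return if self == StackedBase || symbol != :setup
    setup_methods << instance_method(:setup)
    remove_method(:setup)
  end
end

class Middle < StackedBase
  def setup; (@trace ||= []) << :middle; end
end

class Leaf < Middle
  def setup; (@trace ||= []) << :leaf; end
end
```

Calling setup on a Leaf instance records :leaf then :middle, the analogue of d then c. Rebinding is safe here because each UnboundMethod is only ever bound to an instance of its own class or a descendant.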

4. Specificity of breaking down things into Subjects

This is more of a design note than actual programming. Presently in Test::Unit, it is only possible to add to Test::Unit::TestCase, from which every test suite inherits. There is no way to say that, for example, assert_difference for an ActiveRecord test is slightly different from assert_difference for an ActionController test. The functionality may be the same (for example, refreshing the Record or refreshing the Controller request), but implemented differently for each. By creating subclasses of a test suite that provide only the methods needed by that kind of test, the developer can easily encapsulate test methods and not leak public methods across different kinds of tests. It’s just cleaner and will definitely form the backbone of test suites in the new ActiveTest framework.
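The idea can be sketched with minimal stand-in classes. The names below echo the article, but the assertion bodies are invented simplifications (a real Record version would also reload the model, as discussed earlier):

```ruby
module ActiveTestSketch
  class Base
    # Shared by every subject; nothing record- or controller-specific here.
    def assert(condition, message = "assertion failed")
      raise message unless condition
      true
    end
  end

  class Record < Base
    # A record-flavoured assert_difference; a real version would also
    # reload the model between the yield and the comparison.
    def assert_difference(object, method, delta = 1)
      before = object.send(method)
      yield
      assert(object.send(method) - before == delta,
             "expected #{method} to change by #{delta}")
    end
  end

  class Controller < Base
    # A controller-flavoured suite would carry request-refreshing helpers
    # instead; Record's reload-oriented helper never leaks in here.
  end
end
```

A Record suite gets its own assert_difference while a Controller suite never sees it, which is exactly the encapsulation being argued for.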

Coming Up Next: Redesigning Weak Areas…

ActiveTest: Examination, Part III: Code Bloat

Posted over 7 years back at Wood for the Trees

Another problem you can see from the final example in Part II is a hint of code bloat. I somehow managed to add far more lines of code just to define my original three-line test case. Along the way, I made it very difficult to document what any single method did, because each provided only a piece. If you want to find out what succeeds_on :index actually does, you’ll find yourself trawling through more than a handful of methods. This is the terror of too much premature extraction.

Each Subject received the Spartan method ‘define_behavioral’, whose sole function is to ensure a new method’s name is unique:

      def define_behavioral(prefix, core, specifics = {}, &block)
        # Build a test_should_* name from the prefix and core.
        proposed = "test_should_#{prefix.to_s.underscore}"
        proposed << "_#{core.to_s.underscore}" if core.respond_to?(:to_s)

        # If the name is already taken, append or increment a _variant_N suffix.
        while method_defined?(proposed)
          if proposed =~ /^(.*)(_variant_)(\d+)$/
            proposed = "#{$1}#{$2}#{$3.to_i + 1}"
          else
            proposed << "_variant_2"
          end
        end

        define_method proposed, &block
        proposed
      end
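The variant-suffix loop can be exercised on its own. Here it is extracted into a standalone function, with a plain array standing in for method_defined? and without Rails' underscore (a sketch for illustration, not ActiveTest code):

```ruby
# The _variant_N suffix loop from define_behavioral, isolated so the
# generated names are visible. `taken` stands in for method_defined?.
def next_unique_name(proposed, taken)
  while taken.include?(proposed)
    if proposed =~ /^(.*)(_variant_)(\d+)$/
      proposed = "#{$1}#{$2}#{$3.to_i + 1}"  # bump _variant_2 to _variant_3, etc.
    else
      proposed += "_variant_2"               # first clash gets _variant_2
    end
  end
  proposed
end

taken = []
3.times { taken << next_unique_name("test_should_succeed_on_index", taken) }
taken
# => ["test_should_succeed_on_index",
#     "test_should_succeed_on_index_variant_2",
#     "test_should_succeed_on_index_variant_3"]
```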

The bloat lies in determining these nondescript method names, and it stems from a design in which any dynamically generated method could clash with another. Ultimately, it is better if the developer names the method, because the name is what shows up in a rake agiledox call; after all, it is their documentation. It did not help that I glossed over this problem by writing my own agiledox, which loads in everything first and then uses ObjectSpace… another inelegant solution.

If we remove the unique naming, define_behavioral is useless; it just wraps define_method. And if we remove the dynamic call_#{action}_action and instead use an instance method on ActiveTest::Controller, we cut down on method_missing dispatch and drop a bunch of unnecessary methods:

class << self
  def succeeds_on(method_name, action, options = {})
    request_method = determine_request_method(action)
    define_method(method_name) do
      call_request_method request_method, action, options
      assert_response :success
      assert_template action
    end
  end
end
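The direct definition can be simulated without Rails by recording the request and assertion calls instead of executing them. A runnable sketch (all names here are illustrative stand-ins, not the real ActiveTest or Rails API):

```ruby
# Runnable sketch of the direct-definition approach above. Request and
# assertion calls are recorded, not executed, so no Rails is needed.
class MiniControllerTest
  def self.succeeds_on(method_name, action)
    define_method(method_name) do
      calls << [:get, action]                # stand-in for the HTTP call
      calls << [:assert_response, :success]  # stand-in for the assertions
      calls << [:assert_template, action]
    end
  end

  def calls
    @calls ||= []
  end

  succeeds_on :test_should_index, :index
end

t = MiniControllerTest.new
t.test_should_index
t.calls
# => [[:get, :index], [:assert_response, :success], [:assert_template, :index]]
```

Because the developer now names the method themselves, there is no clash-avoidance machinery and the generated method is trivially greppable.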

Using a direct definition removes the flexibility, but all of that code bloat existed to meet some unstated need for customisation. Until someone actually e-mails asking for it, why build it? Better solutions that apply to more than just ActiveTest::Controller may also emerge naturally. All in all, better not to be so flexible so prematurely.

What Was I Thinking?

I really wanted to write a flexible and easily customised macro language. I thought I could make a clean, elegant macro writer for macro writers, with which test cases were created flexibly and efficiently. The flexibility is what caused all the bloat, because I reached for the expedient of method_missing rather than designing a register_behavior type of declaration. I also thought it would be better to cater to more edge cases than pigeon-hole ActiveTest to only the most common cases (which could potentially be very few, making ActiveTest pretty useless). The mistake was in trying to cater to these two exclusive cases (common and edge) in their specifics (what they include) rather than how they operate (make a request and check a response).
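The register_behavior declaration was never written; one hypothetical shape it could take is an explicit registry instead of method_missing dispatch. This API is invented purely for illustration:

```ruby
# Hypothetical sketch of a register_behavior-style declaration: behaviours
# live in an explicit registry, so there is no method_missing guesswork.
# None of this was ever part of ActiveTest.
class BehaviorRegistry
  def self.behaviors
    @behaviors ||= {}
  end

  def self.register_behavior(name, &block)
    behaviors[name] = block
  end
end

BehaviorRegistry.register_behavior(:succeeds_on) do |action|
  # In a real suite this would issue the request and assert on it.
  [:get, action, :assert_success]
end

BehaviorRegistry.behaviors.key?(:succeeds_on)       # => true
BehaviorRegistry.behaviors[:succeeds_on].call(:new) # => [:get, :new, :assert_success]
```

An unknown behaviour then fails loudly at lookup time instead of vanishing into a method_missing chain.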

Coming Up Next: Salvaging Useful Ideas…

ActiveTest: Examination, Part II: Abstracting Without Basis

Posted over 7 years back at Wood for the Trees

My first mistake started the following cascade effect:

  • move all assert_x_success and assert_x_failure to class methods
  • begin defining any test case as a class method
  • extract the definition of an instance method to the superclass level (ActiveTest::Controller)
  • extract those superclass methods to abstract methods (ActiveTest::Subject)
  • fragment assert conditions to be passed up the hierarchy (define_behavioral)
  • extract fragments to the class level (assert_restful_get_success, method_missing, etc.)
  • rinse & repeat for each Subject

Rather than inheriting from a single class with all its assertions self-contained, I now inherited from ActiveTest::Base -> Subject -> Controller. Along the way there were sprinkled methods on the class and eigenclass level which helped define the test cases (ActiveTest::Subject.define_behavioral) or provide extra assertions. I had extracted so much that it was a nightmare to understand what was actually happening (a rare problem when writing in Ruby).

An Extraction Case Study from 0.1.0 Beta (abridged)

module ActiveTest
  class Controller < Subject

    class << self

      def succeeds_on(action, options = {})
        define_behavioral(:succeed_on, action, options) do
          send("call_#{action}_action", options)
          send("assert_#{action}_success")
        end
      end

    end

    ...

    def assert_restful_get_success(action, options = {})
      assert_response :success
      assert_template action.to_s
    end

    ...

    def method_missing(method, *args)
      options = args.last.is_a?(Hash) ? args.dup.pop : {}
      if method.to_s =~ /^assert_(.*)_(success|failure)$/
        action, sf = $1, $2
        if action !~ /get|post|put|delete/
          request_method = self.class.determine_request_method(action)
          send("assert_restful_#{request_method}_#{sf}", action, options)
        end
      elsif method.to_s =~ /^call_(.*)_action$/
        request_method = self.class.determine_request_method(action = $1)
        send("call_request_method", request_method, action, options)
      else
        super
      end
    end

    def call_request_method(request_method, action, options = {})
      options = expand_options(options)
      send(request_method, action, options[:parameters])
    end

  end

end

As you can see, I left out quite a few secondary methods that get called. The first problem here is in the design of defining a behaviour: it gives the user too many hooks for customisation. Since I was trying to address common cases, this should have been a red rag to a bull, but I was blind. The result was a ton of method_missing calls, because I extracted every call into a dynamic method, including the assertions in a dynamically defined test case (twice, in fact). Awful.

What Was I Thinking?

When I first began writing ActiveTest, I focused solely on making test cases a one-line business. ActiveTest contained just a simple class method on ActiveTest::Subject to define any kind of test case, plus a number of classes inheriting from it that defined common groups of assertions. Then I abstracted the entire process into defining ‘behaviours’, approaching each test case as a request-response behaviour pair (in BDD terminology, a context). I went through a number of phases looking for the cleanest way to implement this idea of behaviours (I aped RSpec, Test::Rails and others). None of them seemed appropriate, so I wrote my own metalanguage, which looked clean (but only when I did not provide parameters). I thought the metaprogramming of tests with a descriptive class method call would be self-explanatory enough.[1]

Take away this one maxim, by which you should live and die when you refactor: do not abstract without real-life need. If you don’t base your abstraction on something real, you are theorizing, and we all know the problem with theories: they have not been tested.

Coming Up Next: Code Bloat…

Footnotes

1 Recently I encountered a situation where parameters were impossible to pass. This unforeseen situation (where I needed RFuzz, http://rfuzz.rubyforge.org/) contributed to my realisation that ActiveTest’s current design is useless for anything moderately dynamic or complex.

ActiveTest: Examination, Part I: Removing Transparency

Posted over 7 years back at Wood for the Trees

The first abstraction I made was to change assert_get_success and its equivalents (get_failure, post_success, etc.) into an abstract idea of common test cases. Obviously every application is going to be treated differently, but to make a suite of tools to remove the most common tests seemed like a useful thing to extract—it’s DRY.

Consequently, I had the following in Test::Unit::TestCase:

  class << self
    def test_should_index_items
      define_method(:test_should_index_items) do
        assert_get_success :index
      end
    end
  end

I then abstracted it a little:

  class << self
    def test_should_get_with_success(action = :index, parameters = {})
      define_method("test_should_#{action}_item") do
        assert_get_success action, parameters
      end
    end
  end
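Stripped of the Rails plumbing, the macro above reduces to a class method wrapping define_method. A runnable sketch, with assert_get_success stubbed to return its arguments so the generated method is visible:

```ruby
# Runnable sketch of the class-macro pattern above. The assertion call is
# stubbed out, so this runs without Rails; the shape is what matters.
class MacroDemo
  def self.test_should_get_with_success(action = :index, parameters = {})
    define_method("test_should_#{action}_item") do
      # in the real version: assert_get_success action, parameters
      [action, parameters]
    end
  end

  test_should_get_with_success :show, :id => 1
end

MacroDemo.instance_methods(false)    # => [:test_should_show_item]
MacroDemo.new.test_should_show_item  # => [:show, {:id => 1}]
```

One class-method call per action, and Test::Unit picks up the generated test_* method like any hand-written one.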

This is basic DRYing up. However, I have at this point already lost sight of the simplest benefit of writing out test cases with Test::Unit: documentation. If someone else uses this, the only thing they know is that their GET succeeds… but in what way? I began documenting what each method does and extracting more and more complex cases which may be ‘common’. Documenting these methods hid the general fault, which was making the entire testing process opaque to the user. Tests started to look like this:

class TestSomething < ActiveTest::Base
  test_should_get_with_success :index
  test_should_get_with_success :show, :id => 1
  test_should_get_with_success :new
  test_should_get_with_success :edit, :id => 1
end

So my first mistake was removing transparency without thinking about what I was beginning to lose. This is quite project-specific: in most cases, hiding detail behind a useful method is a good thing, but ActiveTest needs to be flexible and transparent. It is hard to beat a series of asserts.[1]

This mistake is probably my least offensive, because some, including myself, may like this feature just to remove all those irritating test_should_index_items, especially when you are writing new controllers left and right.

What Was I Thinking?

I thought that the documenting aspect of testing was purely outlining what everything should do, rather than describing what everything should do. My original idea was to make it easier to document and test at the same time by outlining the cases in each suite. Just in terms of line ratios, I now had a 5-line method which let me write a test case in 1 line. Not bad. It quickly paid for itself because it catered to :index, :new, :show and :edit. However, I lost all the expressiveness of showing what is being tested.

Coming Up Next: Abstracting Without Basis…

Footnotes

1 It can be done. In fact, the next generation of Active Test, while rather different, frees up the tester to think completely about the behaviour of his code. I will trade in the brevity of test cases for the flexibility of defining them (for anything, whether Rails application, plugin, gem or 60-line file).

ActiveTest: Examination, Introduction: A Mistake and How To Fix It

Posted over 7 years back at Wood for the Trees

In trying to bring ActiveTest to a state resembling my original article about ActiveTest, I realised that… it’s a piece of crap. It just isn’t the kind of code that anyone but myself would find useful, and now even I don’t find it that helpful anymore. A total cowpat of a project, sadly.

To help both myself and the community, I will be analysing my mistake in full and mercilessly revealing my thinking process, development and design along the way. Because it may get a little… lengthy, I’ll break it down into a number of articles. The last article will be dedicated entirely to making Active Test useful (which itself is three parts: salvaging useful ideas, redesigning weak areas, and changing purpose).

Obituary to Active Test 0.1.0 Beta

In its death-throes, the Active Test Plugin can still be useful: as an example of what not to do. Before I start laying into my sad miscarriage of an idea, I’ll outline useful ideas which came out of it:

  • Finding a way to extend Test::Unit which is safe and allows inheritance
  • Learning different ways of metaprogramming (especially ways to wrap define_method)
  • Extracting a more granular set of things to test in an application
  • Learning techniques for self-testing a test library

As with many mistakes, more than half of it is about learning rather than providing something useful. That’s kind of the point, I guess! I didn’t receive a single support e-mail or comment saying it had helped anyone. Then I began to realise that what I was doing was wrong, and that I had managed to air my dirty laundry in the process.

An overview of the mistakes

With that small obituary to ActiveTest in its current incarnation, let’s look at the problems in the order in which I’ll be discussing them:

  • Removing test case transparency (an advantage of Test::Unit)
  • Extracting and abstracting without a real-world need or basis[1]
  • Excessive metaprogramming, which introduces ambiguity into tests (the cardinal sin)
  • Code bloat, plain and simple

The Idea: the original test case

I first came up with ActiveTest when I looked at this test case (from an old revision of one of my projects):

  def test_should_index_items
     assert_get_success :index
  end

At first glance, this looks almost identical to what ActiveTest currently does. I have a case where I want to ensure that a typical GET request receives an HTTP 200 response. I generalised the GET-plus-200 pair into a request-response ‘success’ condition. The method was given a parameter (an action to call) that was passed to get and assert_template. All perfectly fine, a little DRY, but not unreadable. However, this test case and a bunch of others exactly like it formed almost the entire basis of my initial version of Active Test.

Coming Up Next: My First Mistake…

Footnotes

1 For a milder case of this, called premature extraction, see this article

Digg Scares Me (403 Go Away!)

Posted over 7 years back at Ryan Tomayko's Writings

I just found out that this piece about IE testing with Parallels hit the digg.com front page briefly at some point on Tuesday. The comments over there are pretty much unanimously in favor of having me dragged out into the street and shot.

The funny thing about Digg is that it changes the way people read. The average Digger seems to assume that people write stuff solely for the purpose of making it to the Digg front page. My article was clearly a cheap ploy to capitalize on the recent buzz around Parallels combined with reports on battery explosions to drive traffic to my ad-ridden site. Or I'm being paid by Microsoft to spread negative press about the Mac. NO, I'm a Firefox fanboy that hates Microsoft, Apple, and Parallels. It’s crazy.

No one knows you there, so you have to write in a way that is completely devoid of who you are and what you’re about. That sucks. I’d rather just opt out of the popularity contest (and I did – see below).

Here are some examples of the presumptuous attitude you find in comments there:

This story should focus more on the actual problem: that is, that the power cord was faulty.

First, I'm not writing a “story” for Digg Corporation and, to be honest, I have very little interest in improving my writing to better serve the Digg “community”.

Second, if I was formally reporting a power cord issue, I would do it with AppleCare.

The point of the post was simply to share (with other developers) an anecdote about a common occurrence in software development: sometimes you take a step forward and two steps back. In this case, IE testing became a bit easier but seemed to cause a hardware failure. The pattern seems to repeat itself over and over in software development, this iteration was just more bizarre than most.

Put that on the Digg front page and somehow it becomes an attack on Apple, Microsoft, Parallels, Web Standards, World of Warcraft, and George W. Bush.

Not the internet’s fault this guy is an idiot. MY POWER CHORD MELTED? MUST BE MICROSOFT’S FAULT!

Okay, a “POWER CHORD” is something you strum on a guitar.

And when did I ever insinuate this could possibly be Microsoft’s fault? We know the power cord should not melt, and Apple makes the power cord. It would seem to be a hardware issue set off by running multiple instances of Parallels, but I really don’t know, because I haven’t done any extensive research on the issue and, more importantly, I really don’t care. I pretty much assume stuff will break like this every once in a while and just try to move past it as soon as possible.

Nice MacGyver (aka duct taping) skills.

Now that’s a proper comment! Using duct tape for something that requires electrical tape is asinine; it’s also a little funny.

403 Go Away!

Anyway, the whole experience has freaked me out to the point that I'd rather not have to deal with it so I'm blocking digg traffic. Here’s how it works.

In Rails, I modified the controller to send back a 403 Forbidden response when we detect someone coming from digg.com:

before_filter :only => :permalink do |controller|
  if controller.request.env['HTTP_REFERER'] =~ /^http:\/\/(www\.)?digg\.com/
    # Render the 403 page and halt the filter chain.
    controller.render :action => 'terrified', :status => 403, :layout => false
    false
  else
    true
  end
end
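The referer check is just the one regex. Isolated as a predicate (the helper name is mine, not part of the Rails code above), it behaves like this:

```ruby
# The digg.com referer test from the filter above, isolated. Only the
# regex comes from the original; the helper name is invented.
DIGG_REFERER = /^http:\/\/(www\.)?digg\.com/

def from_digg?(referer)
  !!(referer =~ DIGG_REFERER)
end

from_digg?("http://digg.com/some/story")  # => true
from_digg?("http://www.digg.com/")        # => true
from_digg?("http://example.com/")         # => false
from_digg?(nil)                           # => false (direct visits have no referer)
```

Anchoring with ^ means a URL that merely mentions digg.com elsewhere in its path is not blocked.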

The response looks something like this:

$ curl -e http://digg.com -si http://tomayko.com/weblog/2006/12/23/parallels-makes-ie-suck-less
HTTP/1.1 403 Forbidden
Content-Length: 363
Date: Sat, 30 Dec 2006 10:50:25 GMT
Status: 403
Cache-Control: no-cache
Server: Mongrel 0.3.13.3
Content-Type: text/html;charset=utf-8

<html>
  <head><title>Go Away!</title></head>
  <body>
  <h1>403 Go Away!</h1>
  <p>The server understood the request, but is refusing to fulfill it because 
  you're coming from <a href='http://digg.com'>digg.com</a> and the proprietor
  of this system is frankly terrified by you people. Authorization will not 
  help and the request SHOULD NOT be repeated.</p>
  </body>
</html>

Don’t believe me? You can test by following the link back here from this page on digg.com.

Until next year...

Posted over 7 years back at The Hobo Blog

So I’m off with the family for a couple of days with friends in the West Country. My understanding is that these particular friends have NO INTERNET CONNECTION. He’s an artist – go figure. So I’ll be off-line for the rest of the year :-).

Happy New Year to you all! If the few days since the launch of Hobo are anything to go by, 2007 should be an exciting year.