ActiveTest: Examination, Part IV: Salvaging Useful Ideas

Posted over 7 years back at Wood for the Trees

DRY and Abstraction: There is a difference between the practices of DRY and abstraction. When you are being DRY, you are either addressing a new repetition or continuing an earlier practice. The primary focus of being DRY is not to create higher-level tools for future use; it is a practice of improving past and present code. It replaces, deletes, compresses, encapsulates—it is passive redesign. Abstraction, on the other hand, is active design. When you try to be DRY without a real model, you are actually abstracting. Abstraction is a broader programming activity, and it can lead quite naturally into over-design, because without real models and implementations it often fails to address real issues.

With that short sermon, let’s make ActiveTest useful again. As I mentioned in the introduction to this mini-series, there are things to take away from the old version of ActiveTest. We’ll now look at them in full and show why they are still useful.

1. Test::Unit inheritance: filtering classes

One of the few patches in the old ActiveTest modifies Test::Unit’s very well hidden collector filter. The idea of filtering is critical to making inheritance possible. If many ActiveRecord models share many attributes (for example, in a Single Table Inheritance setup for a ‘Content’ model where every model has a body, title, excerpt, owner, created_at, created_by, updated_at and updated_by), the tests for these attributes will be identical across all model tests. So why not create a ‘ContentTestCase’ suite which each model test inherits? With Test::Unit you can make a ‘StandardContentTests’ module and mix it in, but then you need a place to put it, and later you find yourself hunting around for those abstracted tests. If you use inheritance instead, the inherited tests will be run immediately on the parent class as well. What you want is to filter out the abstract class while still being able to inherit from it. By modifying Test::Unit’s collector filter, anything placed in the ActiveTest namespace will simply not be run:

require 'test/unit/autorunner'

class Test::Unit::AutoRunner
  # Wrap initialize so we can append a collector filter of our own.
  def initialize_with_scrub(standalone)
    initialize_without_scrub(standalone)
    # Reject any test whose class lives in the ActiveTest namespace;
    # every other test passes through untouched.
    @filters << proc { |t| /^ActiveTest::/ =~ t.class.name ? false : true }
  end

  alias_method :initialize_without_scrub, :initialize
  alias_method :initialize, :initialize_with_scrub
end

I won’t go on about the peculiarity of having the filter live in AutoRunner, but its inaccessibility requires a monkey patch like the one above. Poking into initialize lets us append to the @filters instance variable on AutoRunner and tell it to ignore the ActiveTest namespace. This technique is less about cleverness and more about reading through the code and finding the right place to patch.
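
With the filter in place, an abstract suite can live in the ActiveTest namespace and be inherited freely; only concrete subclasses outside the namespace are collected and run. A minimal sketch (ContentTestCase, ArticleTest and Article are hypothetical names, not part of ActiveTest itself):

module ActiveTest
  # Never collected: the filter above rejects any test whose class
  # name starts with "ActiveTest::".
  class ContentTestCase < Test::Unit::TestCase
    def test_should_have_title
      assert @record.respond_to?(:title)
    end
  end
end

# Collected and run as usual, inheriting test_should_have_title.
class ArticleTest < ActiveTest::ContentTestCase
  def setup
    @record = Article.new
  end
end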

2. Metaprogramming techniques (not all of them)

Wrapping define_method has proven to be pretty pointless beyond ensuring a unique method name, but developing a lower-level language to standardise class-level macros is a general idea which ActiveTest could keep with some minor adjustments. Ignoring what define_behavioral does, this is a very short macro definition (from ActiveTest::Controller):

  def assigns_records_on(action, options = {})
    define_behavioral(:assign_records_on, action, options) do
      send("call_#{action}_action", options)
      assert_assigned((options[:with] || plural_or_singular(action)).to_sym)
    end
  end

If define_behavioral were made to register behaviours for a parsing method better than method_missing (a sketch follows below), it would gain some meaning, but not really enough to keep it. The rest of the method definition, however, is quite clean for creating macros for Subjects, especially if it is compressed in the way mentioned in Part III. Making test macros the way they are used here, however, needs to be dropped, or at least done far more sparingly. We could keep the setup macro because it sets up useful instance variables and makes reasonable guesses about the test suite’s environment. So: the macro idea should be held back for special cases, and the current implementation dropped entirely.
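
A rough sketch of that registration idea (register_behavior and the behaviors registry are hypothetical, not part of the old ActiveTest):

  class << self
    # Hypothetical registry: behaviour name => implementation block,
    # so later tooling can enumerate behaviours instead of guessing
    # at method names through method_missing.
    def behaviors
      @behaviors ||= {}
    end

    def register_behavior(name, &block)
      behaviors[name] = block
    end

    def define_behavioral(name, action, options = {}, &block)
      register_behavior("#{name}_#{action}".to_sym, &block)
      define_method("test_should_#{name}_#{action}", &block)
    end
  end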

3. Nested, self-contained setup methods through sorcery: method unbinding & stacking

I can’t help but be partial to the way I nested setup and teardown methods. It was my first genuinely useful innovation in Ruby—a trick of method unbinding and stacking. Perhaps that bias colours my judgement, but I honestly believe it is a worthwhile way to wrap Test::Unit. Let’s have a look at the way it is done in ActiveTest::Base:

  class_inheritable_array :setup_methods
  class_inheritable_array :teardown_methods

  self.setup_methods = []
  self.teardown_methods = []

  # Execute all defined setup methods beyond Test::Unit::TestCase.
  def setup_with_nesting
    self.setup_methods.each { |method| method.bind(self).call }
  end
  alias_method :setup, :setup_with_nesting

  # Execute all defined teardown methods beyond Test::Unit::TestCase.
  def teardown_with_nesting
    self.teardown_methods.each { |method| method.bind(self).call }
  end
  alias_method :teardown, :teardown_with_nesting

  # Suck in every setup and teardown defined after this class, unbind
  # it, remove it and stack it for execution on the child at run time.
  # Note: with class_inheritable_array, assignment *appends* to the
  # inherited array rather than replacing it.
  def self.method_added(symbol)
    unless self == ActiveTest::Base
      case symbol.to_s
      when 'setup'
        self.setup_methods = [instance_method(:setup)]
        remove_method(:setup)
      when 'teardown'
        self.teardown_methods = [instance_method(:teardown)]
        remove_method(:teardown)
      end
    end
  end

The first useful aspect is that it allows setup and teardown nesting without affecting additions to Test::Unit made before or after ActiveTest is loaded. This is what the code in method_added enables. Methods and aliases defined on Test::Unit before ActiveTest is loaded are left untouched; after ActiveTest is loaded, every setup or teardown method is stacked in a class-inheritable array and then undefined from the class that declared it. When setup is finally called on a subclass of ActiveTest::Base, it is the ActiveTest::Base method that is called first. Here is an example stack:

  class Test::Unit::TestCase
    def setup; puts "a"; end
  end
  class Test::Unit::TestCase
    def setup_with_fixtures; puts "b"; end
    alias_method_chain :setup, :fixtures
  end
  class ActiveTest::Sample < ActiveTest::Base
    def setup; puts "c"; end
  end
  class SampleTest < ActiveTest::Sample
    def setup; puts "d"; end
  end

Upon execution, it will output: d, c, b, then a. A caveat of this technique is that you must be absolutely certain that each unbound method setup_with_nesting executes is rebound to the original class or one of its descendants. Because of the way class-inheritable attributes work, this rule is not violated: SampleTest can run the setup method from ActiveTest::Sample because it is a descendant, but other classes inheriting from ActiveTest::Sample will not see methods from SampleTest.
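
The constraint is Ruby’s own: an UnboundMethod may only be rebound to an instance of the class it came from or one of that class’s descendants. A quick illustration:

  class Parent
    def greet; puts "hello"; end
  end
  class Child < Parent; end

  m = Parent.instance_method(:greet)
  m.bind(Child.new).call   # fine: Child descends from Parent
  m.bind(Object.new).call  # TypeError: bind argument must be an
                           # instance of Parent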

4. Specificity of breaking down things into Subjects

This is more of a design note than actual programming. Presently in Test::Unit, it is only possible to add to Test::Unit::TestCase, from which every test suite inherits. There is no way to say that, for example, assert_difference for an ActiveRecord test is slightly different from assert_difference for an ActionController test. The functionality may be the same (for example, refreshing the record or refreshing the controller request), but implemented differently for each. By creating subclasses of a test suite that provide only the methods needed by that kind of test, the developer can encapsulate test methods cleanly and avoid leaking public methods across different kinds of tests. It’s just cleaner, and it will form the backbone of test suites in the new ActiveTest framework.
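
As a hypothetical sketch of that layout (these subject classes and their assert_difference variants are illustrative, not the actual ActiveTest API):

  module ActiveTest
    class Model < Base
      # Only model suites see the record-oriented version, which
      # reloads the record before comparing.
      def assert_difference(record, attribute, by = 1)
        before = record.send(attribute)
        yield
        assert_equal before + by, record.reload.send(attribute)
      end
    end

    class Controller < Base
      # Only controller suites see the request-oriented version; the
      # block is expected to perform a request (get, post, etc.).
      def assert_difference(model, by = 1)
        before = model.count
        yield
        assert_equal before + by, model.count
      end
    end
  end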

Coming Up Next: Redesigning Weak Areas…

ActiveTest: Examination, Part III: Code Bloat

Posted over 7 years back at Wood for the Trees

Another problem you can see in the final example from Part II is a hint of code bloat. I somehow managed to add far more lines of code just to define my original three-line test case. Along the way, I made it very difficult to document what any single method did, because each one provided only a piece. If you want to find out what succeeds_on :index actually does, you’ll find yourself trawling through more than a handful of methods. This is the terror of too much premature extraction.

Each Subject received the Spartan method ‘define_behavioral’, whose sole function is to ensure that a new method’s name is unique:

  def define_behavioral(prefix, core, specifics = {}, &block)
    proposed = "test_should_#{prefix.to_s.underscore}"
    proposed << "_#{core.to_s.underscore}" if core.respond_to?(:to_s)

    # Keep bumping a _variant_N suffix until the name is unique.
    while method_defined?(proposed)
      if proposed =~ /^(.*)(_variant_)(\d+)$/
        proposed = "#{$1}#{$2}#{$3.to_i + 1}"
      else
        proposed << "_variant_2"
      end
    end

    define_method proposed, &block
    proposed
  end

The bloat lies in determining these nondescript method names, and it is the result of a design so poor that any dynamically generated method could clash with another. Ultimately, it is better if the developer chooses the method’s name, because that name is what appears in a rake agiledox call; after all, it is their documentation. It did not help matters that I glossed over this problem by writing my own agiledox, which loads everything first and then walks ObjectSpace… another hideously inelegant solution.

If we remove the unique naming, define_behavioral is useless; it just wraps define_method. And if we replace the dynamic call_#{action}_action with an instance method on ActiveTest::Controller, we reduce the reliance on method_missing and drop a bunch of unnecessary methods:

class << self
  # The developer supplies method_name, so the test documents itself.
  def succeeds_on(method_name, action, options = {})
    request_method = determine_request_method(action)
    define_method(method_name) do
      call_request_method request_method, action, options
      assert_response :success
      assert_template action
    end
  end
end

Using a direct definition removes some flexibility, but all of that code bloat existed to meet an unstated need for customisation. Until someone actually e-mails asking for it, why build it? Better solutions that apply to more than just ActiveTest::Controller may also appear naturally. All in all, better not to be so flexible so prematurely.
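
Used in a suite, the direct version reads like this (PostsControllerTest and its actions are hypothetical):

class PostsControllerTest < ActiveTest::Controller
  # The developer chooses each name, so it documents itself and
  # appears verbatim in agiledox output.
  succeeds_on :test_should_get_index, :index
  succeeds_on :test_should_show_post, :show, :id => 1
end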

What Was I Thinking?

I really wanted to write a flexible and easily customised macro language. I thought I could make a clean, elegant macro writer for macro writers, with which test cases were created flexibly and efficiently. The flexibility is what caused all the bloat because I decided to use the expedient of method_missing rather than design a register_behavior type of declaration. Also, I thought it would be better to cater to more edge cases than pigeon-hole ActiveTest to only the most common cases (which could potentially be very few, making ActiveTest pretty useless). The mistake was in trying to cater to these two exclusive cases (common and edge) in their specifics (what they include) rather than how they operate (make a request and check a response).

Coming Up Next: Salvaging Useful Ideas…

ActiveTest: Examination, Part II: Abstracting Without Basis

Posted over 7 years back at Wood for the Trees

My first mistake started the following cascade effect:

  • move all assert_x_success and assert_x_failure to class methods
  • begin defining any test case as a class method
  • extract the definition of an instance method to the superclass level (ActiveTest::Controller)
  • extract those superclass methods to abstract methods (ActiveTest::Subject)
  • fragment assert conditions to be passed up the hierarchy (define_behavioral)
  • extract fragments to the class level (assert_restful_get_success, method_missing, etc.)
  • rinse & repeat for each Subject

Rather than inheriting from a single class with all its assertions self-contained, I now inherited through ActiveTest::Base -> Subject -> Controller. Along the way, methods were sprinkled at the class and eigenclass level to help define the test cases (ActiveTest::Subject.define_behavioral) or provide extra assertions. I had extracted so much that it was a nightmare to understand what was actually happening (a rare problem when writing Ruby).

An Extraction Case Study from 0.1.0 Beta (abridged)

module ActiveTest
  class Controller < Subject

    class << self

      def succeeds_on(action, options = {})
        define_behavioral(:succeed_on, action, options) do
          send("call_#{action}_action", options)
          send("assert_#{action}_success")
        end
      end

    end

    ...

    def assert_restful_get_success(action, options = {})
      assert_response :success
      assert_template action.to_s
    end

    ...

    def method_missing(method, *args)
      options = args.last.is_a?(Hash) ? args.dup.pop : {}
      if method.to_s =~ /^assert_(.*)_(success|failure)$/
        action, sf = $1, $2
        if action !~ /get|post|put|delete/
          request_method = self.class.determine_request_method(action)
          send("assert_restful_#{request_method}_#{sf}", action, options)
        end
      elsif method.to_s =~ /^call_(.*)_action$/
        request_method = self.class.determine_request_method(action = $1)
        send("call_request_method", request_method, action, options)
      else
        super
      end
    end

    def call_request_method(request_method, action, options = {})
      options = expand_options(options)
      send(request_method, action, options[:parameters])
    end

  end

end

As you can see, I have left out quite a few secondary methods that get called along the way. The first problem here is in the design of defining a behaviour: it gives the user too many hooks for customisation. Since I was trying to address common cases, this should have been a red flag to a bull—but I was blind. The result was a ton of method_missing calls, because I had extracted every call into a dynamic method, including the assertions in a dynamically defined test case (twice, in fact). Awful.

What Was I Thinking?

When I first began writing ActiveTest, I focused solely on making test cases a one-line affair. ActiveTest contained just a simple class method on ActiveTest::Subject to define any kind of test case, plus a number of classes inheriting from it that defined common groups of assertions. Then I abstracted the entire process into defining ‘behaviours’, approaching each test case as a behavioural request-response pair (in fact, a context, if you use BDD terminology). I went through a number of phases looking for the cleanest way to implement this idea of behaviours (I aped RSpec, Test::Rails and others). None of them seemed appropriate, so I wrote my own metalanguage, which looked clean (but only when I did not provide parameters). I thought the metaprogramming of tests with a descriptive class-method call would be self-explanatory enough.[1]

Take away from this one single maxim, by which you should live and die when you refactor: do not abstract without real-life need. If you don’t base your abstraction on something real, you are theorizing—and we all know that the problem with theories is that they have not been tested.

Coming Up Next: Code Bloat…

Footnotes

1 Recently I encountered a situation where parameters were impossible to pass. This unforeseen situation (where I needed RFuzz, http://rfuzz.rubyforge.org/) contributed to my realisation that ActiveTest’s current design is useless for anything moderately dynamic or complex.

ActiveTest: Examination, Part I: Removing Transparency

Posted over 7 years back at Wood for the Trees

The first abstraction I made was to change assert_get_success and its equivalents (assert_get_failure, assert_post_success, etc.) into an abstract idea of common test cases. Obviously every application is going to be treated differently, but a suite of tools that removes the need to write the most common tests seemed like a useful thing to extract—it’s DRY.

Consequently, I had the following in Test::Unit::TestCase:

  class << self
    def test_should_index_items
      define_method(:test_should_index_items) do
        assert_get_success :index
      end
    end
  end

I then abstracted it a little:

  class << self
    def test_should_get_with_success(action = :index, parameters = {})
      define_method("test_should_#{action}_item") do
        assert_get_success action, parameters
      end
    end
  end

This is basic DRYing up. However, at this point I had already lost sight of the simplest benefit of writing out test cases with Test::Unit: documentation. If someone else uses this, the only thing they know is that their GET succeeds… but in what way? I began documenting what each method did and extracting more and more complex cases which might be ‘common’. Documenting these methods hid the general fault, which was making the entire testing process opaque to the user. Tests started to look like this:

class TestSomething < ActiveTest::Base
  test_should_get_with_success :index
  test_should_get_with_success :show, :id => 1
  test_should_get_with_success :new
  test_should_get_with_success :edit, :id => 1
end

So my first mistake was removing transparency without thinking about what I was losing. This is quite project-specific: in most cases, making methods opaque and useful is a good thing, but ActiveTest in particular needs to be flexible and transparent. It is hard to beat a series of asserts.[1]
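
For comparison, here is the transparent version of one of those lines, written as a plain series of asserts (assuming the standard Rails functional-test helpers):

class TestSomething < ActiveTest::Base
  def test_should_show_item
    get :show, :id => 1        # make the request
    assert_response :success   # expect HTTP 200
    assert_template 'show'     # expect the show template rendered
  end
end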

This mistake is probably my least offensive, because some, including myself, may like this feature just to remove all those irritating test_should_index_items, especially when you are writing new controllers left and right.

What Was I Thinking?

I thought that the documenting aspect of testing was purely about outlining what everything should do, rather than describing what everything should do. My original idea was to make it easier to document and test at the same time by outlining the cases in each suite. Just in terms of line ratios, I now had a 5-line method which let me write a test case in 1 line. Not bad. It quickly paid for itself because it catered to :index, :new, :show and :edit. However, I lost all the expressiveness of showing what is being tested.

Coming Up Next: Abstracting Without Basis…

Footnotes

1 It can be done. In fact, the next generation of Active Test, while rather different, frees up the tester to think completely about the behaviour of his code. I will trade in the brevity of test cases for the flexibility of defining them (for anything, whether Rails application, plugin, gem or 60-line file).

ActiveTest: Examination, Introduction: A Mistake and How To Fix It

Posted over 7 years back at Wood for the Trees

In trying to bring ActiveTest to a state resembling my original article about ActiveTest, I realised that … it’s a piece of crap. It just isn’t the kind of code that anyone but me would find useful, and now even I don’t find it that helpful anymore. A total cowpat of a project, sadly.

To help both myself and the community, I will be analysing my mistake in full and mercilessly revealing my thinking process, development and design along the way. Because it may get a little… lengthy, I’ll break it down into a number of articles. The last article will be dedicated entirely to making Active Test useful (which itself is three parts: salvaging useful ideas, redesigning weak areas, and changing purpose).

Obituary to Active Test 0.1.0 Beta

In its death-throes, the Active Test Plugin can still be useful: as an example of what not to do. Before I start laying into my sad miscarriage of an idea, I’ll outline useful ideas which came out of it:

  • Finding a way to extend Test::Unit which is safe and allows inheritance
  • Learning different ways of metaprogramming (especially ways to wrap define_method)
  • Extracting a more granular set of things to test in an application
  • Learning techniques for self-testing a test library

As with many mistakes, more than half of the value is in the learning rather than in providing something useful. That’s kind of the point, I guess! I didn’t receive a single support e-mail or comment saying that it had helped anyone. Then I began to realise that what I was doing was wrong, and that I had managed to air my dirty laundry in the process.

An overview of the mistakes

With that small obituary to ActiveTest in its current incarnation, let’s look at the problems in the order in which I’ll be discussing them:

  • Removing test case transparency (an advantage of Test::Unit)
  • Extracting and abstracting without a real-world need or basis[1]
  • Excessive metaprogramming, which introduces ambiguity into tests (the cardinal sin)
  • Code bloat, plain and simple

The Idea: the original test case

I first came up with ActiveTest when I looked at this test case (from an old revision of one of my projects):

  def test_should_index_items
    assert_get_success :index
  end

At first glance, this looks almost identical to what ActiveTest currently does. I have a case where I want to ensure that a typical GET request receives an HTTP 200 response. I extrapolated the HTTP GET/HTTP 200 pair into a request-response ‘success’ condition. The method took a single parameter (an action to call) that was passed to get and assert_template. All perfectly fine, a little DRY, but not unreadable. However, this test case, and a bunch of others exactly like it, formed almost the entire basis of my initial version of Active Test.
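
A minimal sketch of what that assert_get_success helper might have looked like, assuming the standard Rails functional-test helpers (the original is not shown in this post):

  def assert_get_success(action, parameters = {})
    get action, parameters        # issue the GET request
    assert_response :success      # expect HTTP 200
    assert_template action.to_s   # expect the action's template
  end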

Coming Up Next: My First Mistake…

Footnotes

1 For a milder case of this, called premature extraction, see this article

Digg Scares Me (403 Go Away!)

Posted over 7 years back at Ryan Tomayko's Writings

I just found out that this piece about IE testing with Parallels hit the digg.com front page briefly at some point on Tuesday. The comments over there are pretty much unanimously in favor of having me dragged out into the street and shot.

The funny thing about Digg is that it changes the way people read. The average Digger seems to assume that people write solely to make the Digg front page. My article was clearly a cheap ploy to capitalize on the recent buzz around Parallels, combined with reports of battery explosions, to drive traffic to my ad-ridden site. Or I'm being paid by Microsoft to spread negative press about the Mac. NO, I'm a Firefox fanboy who hates Microsoft, Apple, and Parallels. It’s crazy.

No one knows you there, so you have to write in a way that is completely devoid of who you are and what you’re about. That sucks. I'd rather just opt out of the popularity contest (and I did – see below).

Here are some examples of the presumptuous attitude you find in the comments there:

This story should focus more on the actual problem: that is, that the power cord was faulty.

First, I'm not writing a “story” for Digg Corporation and, to be honest, I have very little interest in improving my writing to better serve the Digg “community”.

Second, if I was formally reporting a power cord issue, I would do it with AppleCare.

The point of the post was simply to share (with other developers) an anecdote about a common occurrence in software development: sometimes you take a step forward and two steps back. In this case, IE testing became a bit easier but seemed to cause a hardware failure. The pattern seems to repeat itself over and over in software development, this iteration was just more bizarre than most.

Put that on the Digg front page and somehow it becomes an attack on Apple, Microsoft, Parallels, Web Standards, World of Warcraft, and George W. Bush.

Not the internet’s fault this guy is an idiot. MY POWER CHORD MELTED? MUST BE MICROSOFT’S FAULT!

Okay, a “POWER CHORD” is something you strum on a guitar.

And when did I ever insinuate this could possibly be Microsoft’s fault? We know the power cord should not melt, and Apple makes the power cord. It would seem to be a hardware issue set off by running multiple instances of Parallels, but I really don’t know because I haven’t done any extensive research on the issue and, more importantly, I really don’t care. I pretty much assume stuff will break like this every once in a while and just try to move past it as soon as possible.

Nice MacGyver (aka duct taping) skills.

Now that’s a proper comment! Using duct tape for something that requires electrical tape is asinine; it’s also a little funny.

403 Go Away!

Anyway, the whole experience has freaked me out to the point that I'd rather not have to deal with it, so I'm blocking Digg traffic. Here’s how it works.

In Rails, I modified the controller to send back a 403 Forbidden response when we detect someone coming from digg.com:

# Send back 403 Forbidden when the visitor arrives from digg.com.
before_filter :only => :permalink do |controller|
  if controller.request.env['HTTP_REFERER'] =~ /^http:\/\/(www\.)?digg\.com/
    controller.render :action => 'terrified', :status => 403, :layout => false
    false # halt the filter chain
  else
    true
  end
end

The response looks something like this:

$ curl -e http://digg.com -si http://tomayko.com/weblog/2006/12/23/parallels-makes-ie-suck-less
HTTP/1.1 403 Forbidden
Content-Length: 363
Date: Sat, 30 Dec 2006 10:50:25 GMT
Status: 403
Cache-Control: no-cache
Server: Mongrel 0.3.13.3
Content-Type: text/html;charset=utf-8

<html>
  <head><title>Go Away!</title></head>
  <body>
  <h1>403 Go Away!</h1>
  <p>The server understood the request, but is refusing to fulfill it because 
  you're coming from <a href='http://digg.com'>digg.com</a> and the proprietor
  of this system is frankly terrified by you people. Authorization will not 
  help and the request SHOULD NOT be repeated.</p>
  </body>
</html>

Don’t believe me? You can test by following the link back here from this page on digg.com.

Until next year...

Posted over 7 years back at The Hobo Blog

So I’m off with the family for a couple of days with friends in the West Country. My understanding is that these particular friends have NO INTERNET CONNECTION. He’s an artist – go figure. So I’ll be off-line for the rest of the year :-).

Happy New Year to you all! If the few days since the launch of Hobo are anything to go by, 2007 should be an exciting year.

The Pending Ruby/Java Co-op

Posted over 7 years back at Ryan Tomayko's Writings

What follows started out as a single bullet point from a list of predictions for 2007. It somehow went awry. As such, you would do well to consider this piece with roughly the same amount of seriousness as would be given to any other tech prediction article.

A couple of years ago I was excited about dynamic languages on the JVM and what it could mean for web developers. Then I wasn’t. Now I am again. I've been completely clean of Java for almost two years now but I have a feeling that might change in 2007 (probably late 2007).

Java will Eventually Not Suck

There are basically five things that need to occur before Java gets to go back in the toolbox:

  1. The mainline JVM must be available – the way I want it. The hardest part of this was done when Java went GPL, which makes everything that follows possible.

  2. The quantity of Java packages must go up. I want a huge inventory of libraries packaged on all popular Linux and BSD packaging systems. I'm mostly interested in FreeBSD, which already includes 148 Java related ports in the main repository, but I need to know packages exist on all major Linux distributions as well.

  3. The quality of Java packages must go up. By quality I mean packages should be individually well maintained, patched appropriately for the intended system, and updated with little lag behind upstream. Quality is a function of eyeballs. #1 and #2 will result in an increase of eyeballs and thus an eventual increase in quality. QED

  4. Better JVM startup time. This may be fixed already, I'm not sure. If it isn’t, it will be once #1, #2, and #3 are in effect.

  5. JRuby running at rough performance and feature parity with CRuby.

Once these things are settled, Java becomes a serious option for me and people like me, whether we know it or not.

Java: A Better C

All Java has to do is be better than C at some of the things C does now. Java will, in effect, become the new C with a small but critical difference. Before we get into what that difference is, let’s take a look at a few of the most important things C does for web developers today:

  • The HTTP/web server. Must be fast, fast, fast – silly fast. Right now this is httpd, lighttpd, nginx, and Mongrel. All C.

  • The language interpreter: python, ruby, perl, php, etc. (I suppose java should be in that list as well :)

  • Lower-level support libraries: libxml2, openssl, ImageMagick, cairo, glib, yaml, RDBMS client libraries (psql, mysql), gzip, bzip, termios, etc.

  • Situations that require better performance than is easily had in a higher level language: extension libraries, basically.

You didn’t know how much you loved C, did you?

C is portable, fast, has lots of quality libraries, good tool support, and is Freely available. Java is portable in theory, fast enough, has lots of quality libraries, excellent tool support, and is about to become truly available.

At this point, the pragmatic developer should be asking, what makes Java so much better than C that I should consider using it instead when I'm satisfied with what I have today? The answer is quite simple: malloc(3) and free(3), and more specifically, Java’s lack thereof.

You can take Java’s crippled OO foundation, enterprise scalability, vendor supported JEE containers and just throw it all out the window: I don’t care. Memory management is the reason to consider using Java over C right now.

Ruby: Just Because

Ruby is but one of many extremely productive and well designed high level languages: Python, Smalltalk, (Good) Perl, Scheme, Common Lisp, Groovy, Nice, etc. All amazing languages when you care about expressing logic to humans and, incidentally, to machines*.

If classes have a class, I’ll take it over Java. So why Ruby?

  • It hasn’t failed yet. I mean, Jython’s been around for a long time, seems to have lost steam, and the original author was hired by Microsoft for his work on IronPython. While I wouldn’t go so far as to call this failure, the momentum needed to build a strong community around the JVM seems to have waned.

  • Many other languages have already been smeared as kiddy scripting environments by Sun and the tech press.

  • The Perl guys are busy on their own VM.

  • There’s already interest in improving Ruby’s running environment in the Ruby community. The C based Ruby interpreter doesn’t seem to have the same level of prestige as, say, CPython or perl. It’s widely considered one of the slower dynamic language interpreters. Work on a new VM was recently merged to Ruby’s trunk but is not yet stable and won’t be for some time.

  • PHP doesn’t need the JVM: as far as I can tell, PHP is pretty much tied to C for all eternity. I'm not saying that’s a bad thing, but the C underpinnings are so clearly visible in the higher level language that putting the thing on Java almost seems like an insult.

  • Ruby is cool right now. I don’t pretend to understand the mechanics of cool but Ruby is it – for the time being, anyway. Whether that makes sense or not is irrelevant; with hype comes real and tangible benefits.

Now, as much as I like Ruby, I don’t think there’s anything insanely spectacular about the language that makes it better suited for a coup d'état on the JVM than many other languages. It just seems to be in the right place at the right time.

The other reason I'm focusing on Ruby is that I'm biased, lazy, selfish, and trying to create a bit of hype here. I've been using Ruby heavily for almost a year now and, as much as I hate to admit it, I'm beginning to hit a performance wall in some very specific cases. In addition, a few Java libraries have caught my eye whose Ruby or (C with Ruby bindings) analogs are considerably less developed.

I want to be clear that performance is not a problem across the board for me in Ruby. Quite the contrary, in fact. Performance has been good enough in all but a few corners of development. I'm also not talking about the ability to scale or anything that can be fixed with more hardware. I'm talking about stuff like image manipulation (graphing specifically), PDF generation, and flat file parsing. In these places, I could really use orders of magnitude increases in raw performance over what I can reasonably expect from Ruby today.

Solving the performance problem has meant dropping down into C, which isn’t all that bad, but I'd personally be more comfortable dropping down into Java.
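
With JRuby, that drop is nearly frictionless, because any class on the JVM classpath is callable directly from Ruby. A small sketch, assuming JRuby (java.util.zip.CRC32 stands in here for a heavier library like an imaging or PDF toolkit):

require 'java'

# Instantiate and drive a plain Java class from Ruby code.
crc = java.util.zip.CRC32.new
crc.update("some large payload".to_java_bytes)
puts crc.value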

How this plays out in 2007

In Java land, people start taking Ruby seriously as an alternative to The Java Language on the JVM in a way that Jython, Groovy, and other dynlangs never accomplished. Not as a “scripting” or “glue” language but as a first class, general purpose programming language. A comfortable majority of components will still be written in Java for performance and broad reuse but Ruby will come in from the top and begin to steal away much of the application level logic on new projects – basically anything where readability and maintainability are more important than performance and reuse.

Ruby’s impact on the Java community will be almost entirely due to JRuby reaching increasing levels of bad-assedness. That is to say, Ruby’s adoption by Java heads on Java projects won’t have as much to do with the GPL and better packaging support as it does with the virtues of the Ruby language and the JRuby interpreter.

Sun’s increased financial support for, and clueful positioning of, the Ruby language in the Java ecosystem will also be critical. There are all kinds of problems with this part, of course. Mentioning “Sun” and “clueful” in the same sentence generally requires a negation, but there’s cause to believe that might be changing. A boy can dream, anyway.

While all this is playing out with the Java folks, the JVM will begin to be taken seriously as an alternative to CRuby in the Ruby community. Unlike in the Java world, this has less to do with JRuby and more to do with Java going GPL and the ready availability of the mainline JVM on Linux and BSD. You will begin to hear reports of Rails running on the JVM in both development and production environments with very little disruption to existing code bases. This will be followed by blog post after blog post extolling the virtues of being able to drop down into Java for performance critical pieces of code and how nifty JFreeChart is.

By the end of 2007, much of the Ruby web development community will consider the JVM to be a viable runtime environment. In Java land, Ruby on the JVM will be a growing sensation but it will take another year (2008) before Ruby is considered a peer to The Java Language.

On the other hand…

Things could be horribly different. Sun could marginalize Ruby the same way it has marginalized every other language that’s tried to run on the JVM in the past. JRuby could poop out before reaching a satisfactory level of stability and performance (see comments). The Ruby community could reject Java because of its historically poor reputation among hackers. The larger Java community could reject Ruby as a kiddy language.

One thing’s for sure: we’ll be hearing a lot about Ruby and Java in 2007.

Wot no Tests?

Posted over 7 years back at The Hobo Blog

I was just looking through my referers and found this post. Quote:

I’ll start writing some tests for it when I get back (looks like it’s in need of some)

*Blush* :-) Actually, I have 128 tests with 298 assertions (which is better than nothing, but not nearly enough for something on the scale of Hobo). But they’re sitting in the app that gave birth to Hobo. It wasn’t until recently that I discovered how to set up a full Rails testing environment within a plugin. To be honest, the app and tests are kind of a mess. I really want to extract the tests into the plugin’s test directory and clean everything up a bit in the process.

Meanwhile if anyone really wants access to the tests you might be able to twist my arm :-)