IBM Poopheads: "LAMP Users Need to Grow Up"

Posted almost 9 years back at Ryan Tomayko's Writings

IBM says LAMP users need to grow up.

Let’s do it:

According to Daniel Sabbah, general manager of IBM’s Rational division, LAMP — the popular Web development stack — works well for basic applications but lacks the ability to scale.

Nope. We call bullshit. After wasting years of our lives trying to implement physical three tier architectures that “scale” and failing miserably time after time, we’re going with something that actually works.

If you look at the history of LAMP development, they’re really primitive tools … the so-called good enough model. The type of businesses being created around those particular business models are essentially going to have to grow up at some point.

No. The LAMP stack is a properly constructed piece of software. Features are added when an actual person has an actual need that arises in the actual field, not when some group of highly qualified architecture astronauts and marketing splash-seekers get together to compete for who can come up with the most grown-up piece of useless new crap to throw in the product.

The LAMP model works because it was built to work for and by people building real stuff. The big vendor / big tools model failed because it was built to work for Gartner, Forrester, and Upper Management whose idea of “work” turned out to be completely wrong.

Now you’re saying that the primitive yet successful LAMP model should adopt the traits of the sophisticated yet failing big vendor model.

“I believe that in the same way that some of those simple solutions are good enough to start with, eventually, they are going to have to come up against scalability,” Sabbah said during a press conference at the IBM Rational User Conference in Las Vegas.

We can’t scale? Really? Are you insane?

Alright, that last jab may have been a bit unfair. I think what Sabbah is really talking about is PHP. I can’t be sure, but none of Yahoo!, Amazon, eBay, or Google seems to be using PHP widely on their public sites. But then again, they aren’t using Websphere/J2EE, .NET, or other scalable physical three tier architectures either.

UPDATE: See comments for interesting notes on PHP usage at Yahoo!.

While we’re talking about architectures, I'd like to jump into a brief commentary on what’s really at the root of the debate here.

There were two widely accepted but competing general web systems architectures: the Physical Three Tier Architecture and the Logical Three Tier Architecture. IBM (and all the other big tool vendors) have been championing one of them and LAMP is a good framework for the other (although you’ll rarely hear anyone admit that LAMP provides an overall architecture).

The Physical Three Tier Architecture

Many large enterprise web applications tried really hard to implement a Physical Three Tier Architecture, or they did in the beginning. The idea is that you have a physical presentation tier (usually JSP, ASP, or some other *SP) that talks to a physical app tier via some form of remote method invocation (usually EJB/RMI, CORBA, DCOM) that talks to a physical database tier (usually Oracle, DB2, MS-SQL Server). The proposed benefit of this approach is that you can scale out (i.e. add more boxes) at any of the physical tiers as needed.

Great, right? Well, no. It turns out this is a horrible, horrible, horrible way of building large applications and no one has ever actually implemented it successfully. If anyone has implemented it successfully, they immediately shat their pants when they realized how much surface area and how many moving parts they would then be keeping an eye on.

The main problem with this architecture is the physical app box in the middle. We call it the remote object circle of hell. This is where the tool vendors solve all kinds of interesting “what if” type problems using extremely sophisticated techniques, which introduce one thousand actual real world problems, which the tool vendors happily solve, which introduces one thousand more real problems, ad infinitum…

It’s hard to develop, deploy, test, maintain, evolve; it eats souls, kills kittens, and hates freedom and democracy.

Over the past two years, every enterprise developer on the planet has been scurrying to move away from this architecture. This can be witnessed most clearly in the Java community by observing the absolute failure of EJB and the rise of lightweight frameworks like Hibernate, Spring, WebWork, Struts, etc. This has been a bottom-up movement by pissed off developers in retaliation against the crap that was pushed on them by the sophisticated tool vendors earlier in the decade.

Which brings us nicely to an architecture that actually works sometimes and loves freedom.

The Logical Three Tier Architecture

More specifically, the Shared Nothing variant of the Logical Three Tier Architecture says that the simplest and best way to build large web based systems that scale (and this includes enterprise systems goddamit) is to first bring the presentation and app code together into a single physical tier. This avoids remote object hell because the presentation code and the business logic / domain code are close to each other.

But the most important aspect of this approach is that you want to push all state down to the database. Each request that comes into the presentation + app tier results in loading state for a set of objects from the database, operating on them, pushing their state back down into the database (if needed), writing the response, and then getting the hell out of there (i.e. releasing all references to objects loaded for this request, leaving them for gc).

That’s the rule.

So the physical database tier and the physical presentation + app tier make up our logical three tier architecture but I'd like to talk about one other latch-on piece of this setup because it’s interesting to contrast it with how the Physical Three Tier purists deal with the same problem.

Fine Grained Caching

Some mechanism for caching becomes really important when you decide that you are spending too much money on hardware (note that both of these architectures will scale up and out, on each physical tier independently, for as far and wide as you can pay for hardware). Adding some form of caching reduces the amount of hardware needed dramatically because you've reduced utilization somewhere.

In the physical three tier architecture, there are generally a lot of sophisticated mechanisms for caching and sharing object state at a very granular level in the app tier to reduce utilization on the database and improve response time. This is cool and all but it increases utilization on the app tier dramatically because so much time is now spent managing this new state.

The introduction of state (even just a little state for caching objects) forces the app tier to take on a lot of the traits of the database. You have to worry about object consistency and be fairly aware of transactions. When that’s not fast enough what ends up happening is that more fine grained caching is added at the presentation tier to reduce round trips with the app tier.

Now you have three places that are maintaining pretty much the same state and that means you have three manageability problems. But this is, you know, cool because it’s really complex and sophisticated and the whiteboard looks interesting and there’s lots of arm waving now.
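To make the manageability problem concrete, here's a toy app-tier object cache sketched in Python (all names are mine, not any vendor's API). The moment a single write path forgets to keep the cache in sync, readers see stale state; now multiply that by every tier holding its own copy:

```python
# A dict stands in for the database; the app tier keeps per-object copies.
class AppTierCache:
    def __init__(self, db):
        self.db = db                  # the real, authoritative state
        self.objects = {}             # fine grained cache in the app tier

    def load(self, oid):
        if oid not in self.objects:   # cache miss: fetch from the database
            self.objects[oid] = self.db[oid]
        return self.objects[oid]

    def save(self, oid, value):
        self.db[oid] = value
        self.objects[oid] = value     # forget this line on any write path
                                      # and every reader sees stale state
```

Any writer that talks straight to the database (another app box, a batch job, a DBA at a console) silently breaks this cache, which is exactly the consistency work the database was already doing for you.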

Screw Fine Grained Caching

Shared Nothing says, screw that – the database is the only thing managing fine grained state because that’s its job, and then throws up caching HTTP proxy server(s) in a separate (and optional) top physical tier. Cached state is maintained on a much simpler, coarse grained level with relatively simple rules for invalidation and other issues.

When the Shared Nothing cache hits, it provides unmatched performance because the response is ready to go immediately without having to consult the lower tiers at all. When it misses, it misses worse than the fine grained approach because chances are good you’ll be going all the way to the database and back. But it turns out that it usually doesn’t matter. My experience says that you get as good or better performance with the coarse grained approach as you do with the fine grained approach for much less cost, although it’s hard to measure because the savings are distributed in very different ways.
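The whole coarse grained idea fits in a dozen lines. A sketch in Python (not how any particular caching proxy is implemented; `render_page` stands in for the full round trip through the app and database tiers):

```python
class CoarseCache:
    """Toy caching proxy: whole responses keyed by URL, invalidated wholesale."""

    def __init__(self, render_page):
        self.render_page = render_page   # the expensive lower-tier round trip
        self.store = {}                  # url -> complete response body

    def get(self, url):
        if url in self.store:            # hit: the response is ready to go,
            return self.store[url]       # no lower tiers consulted at all
        body = self.render_page(url)     # miss: all the way to the database
        self.store[url] = body
        return body

    def invalidate(self, url):
        self.store.pop(url, None)        # coarse rule: just throw the page out
```

There's no object-level consistency to manage here; the worst an invalidation costs you is one full re-render on the next request.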

The Shared Nothing + Caching Proxy setup scales like mad and I don’t just mean that it scales to really massive user populations. It scales low too. It’s easy to work with when you’re developing and testing on a single machine. It’s easy to have a simple prototype app done in a day. It’s easy to teach someone enough that they can go play and figure stuff out as they go. It’s easy to write tests because the entire system is bounded by the request and there’s no weird magic going on in the background.

The big vendor / big tool architectures sacrificed simplicity and the ability to scale low because they decided that every application was going to have one million users and require five 9’s from the first day of development.

As I write this, Bill de hÓra postulates: All successful large systems were successful small systems. I believe him and what that means to us right now in this article is that it is exceedingly hard to build large systems with the big vendor / big tool approach because it is exceedingly hard to build small systems with the same.

Let’s get back to the woodshed

While Sabbah was critical of LAMP’s capabilities, he said IBM is going to ensure companies which started with that model will be able to “grow and change with the rest of the world”.

He believes most businesses want technology that is stable, evolutionary, historical and has support.

L A M P = (S)table (E)volutionary (H)istorical (S)upport

“What we are trying to do is make sure businesses who start there [with LAMP] have a model, to not only start there but evolve into more complex situations in order to actually be able to grow,” he said.

This is where I really wanted to jump in because I think this mentality is holding back adoption of very simple yet extremely capable technology based purely on poor reasoning. This view of systems design says that complexity is required if growth is desirable and that complex situations can only be the result of complex systems.

There’s a guy who just spent 50 years or something locked in a room writing a 1200 page book proving that this is just wrong. It would appear that there is very little relationship between the complexity of a program and the complexity of the situation it produces.

The complexity-for-complexity's-sake mindset is the bane of a few potentially great technologies right now:

  • Static vs. Dynamic Languages
  • J2EE vs. LAMP
  • WS-* vs. HTTP

I like to complain when someone calls Python a scripting language because the connotation is that it is simple. But it is simple, right? So there shouldn’t be any complaining. I'm not objecting to someone calling Python simple; I'm objecting to then saying that because it is simple, it must only be capable of simple things.

The Need For Complex Systems

“You've seen us do a lot with PHP and Zend and you’ll see us do more. I can’t say more. It [PHP] needs to integrate with enterprise assets but it needs to remember where it came from and where its value was. It can’t start getting too complex as it will lose its audience,” Sabbah said.

The need for complex systems in the enterprise was and still is greatly overestimated. The trick isn’t to make PHP more complex, it’s to make the enterprise less complex. You need to equate complex requirements with complex systems less and start asking “do we really need this?” more.

The funny thing about all this is that my opinion on this matter has formed largely based on concepts that you guys told me, so I'm sure you’ll pull through on this one.


Posted almost 9 years back at Ryan Tomayko's Writings

The site is back up after two days of downtime but if my theory is correct then no one noticed. Our records indicate that 95% of the (20) people reading this site do it via a proper RSS reader, so no one should have noticed a difference between the site being down and me just going on a quiet spell for a few days. The other traffic comes from search, and seeing as how my pages look almost flawless coming from google’s cache, I don’t see why I have to care about downtime all that much. Which is good because this site runs on an old P300 GNU/Linux box off my cable connection and has been since 2003 (talk about the ‘ilities!).

Anyway, cha-changes… I usually don’t get very personal in posts — tending more toward commentary on larger issues — but there’s so much going on right now that I have to write some of this down if just for the archives.

One of the things I never talk about here is my day job. This isn’t because it isn’t related (most of my posts on technology are at least relevant to my day job) but because my employer had no formal policy on public discourse. That combined with the fact that I really enjoy being able to say whatever the fuck I want here led me to keeping work completely out of the picture.

But today, while waiting on The Exit Interview With The Human Resources Department, I was reading over my non-disclosure agreement when I noticed that it was fairly sane, forbidding only the disclosure of confidential company information. Me being employed there isn’t confidential so I guess it’s okay to say that I was employed by Sterling Commerce from March 2000 until about five hours ago.

Sterling has been around a loooong time for a tech company. This PR blurb does a surprisingly good job of providing some actual information on the company’s background:

A pioneer of electronic data interchange (EDI) and secure file transfer technology, it has provided business process automation solutions to Fortune 500 companies and the world’s largest banks for 30 years. Formerly a division of Sterling Software, Sterling Commerce became an independent corporation in March 1996, through an initial public offering. The company quickly grew to become one of the world’s largest independent providers of multi-enterprise collaboration solutions. SBC Communications acquired Sterling Commerce in March 2000.

They’re one of the few NASDAQ tech companies that didn’t pop with the bubble. I attribute this mainly to the fact that they actually did something: boring old valuable-as-hell EDI and other tough business integration tasks. Most of their current initiatives involve doing the same boring old valuable-as-hell EDI stuff but this time over internet pipes instead of bisync modems.

I won’t say much more about Sterling because talking about it is still gray to me and I don’t want to piss anyone off (at Sterling). I really don’t even know if I can talk about talking about it.

I would like to mention one product I worked on there because it was a kind of epiphany to me. The product was Webforms (rebranded Web Forms it seems <sigh>). It’s just a web app that let smaller Mom-and-Pop shops talk EDI to all the big boys running them thar' expensive mainframes and bisync modems.

Now this was an important situation for a couple of reasons. First, Webforms was an extremely functional web app for the time (~2000). The web was all about animated GIFs, Java applets, <blink> and <marquee> tags, and was just very oh-we-can-make-money-by-just-looking-really-cool-ish in general. And these guys show me this plain old ugly white pages with big forms all over the place web-app. Maybe you'd see a small company logo here and there, maybe not. Fuck it, your company logo isn’t helping anyone with their orders.

It was the plainest, most vanilla piece of pure gold I'd ever seen. Clay Shirky figured out the internet in 1996, I figured it out in 2000 while at Sterling Commerce. The internet is here to do the exact same shit we did before the internet was here but cheaper, broader, and with less fuss. ROCK ON! Let’s do it. I'm pumped.

Unfortunately that leads me to the second reason Webforms was an enlightening experience. It was developed in a company that really hated the web. We owned a massive EDI VAN and we had desktop Windows and UNIX products… that let you talk to the VAN. This is what 2000 people at Sterling knew and understood. Combine that with the following snippet from a definition of VAN:

… Before the arrival of the World Wide Web, some companies hired value-added networks to move data from their company to other companies. With the arrival of the World Wide Web, many companies found it more cost-efficient to move their data over the Internet instead of paying the minimum monthly fees and per-character charges found in typical VAN contracts.

And then these Webforms guys go and plop a web app down in the middle of all this – it wasn’t pretty. Needless to say, excitement about the product never really made it out of development for the years I worked on it but seems to finally be picking up a little now.

So the other big thing “I got” about the internet was that providing real value would usually require being extremely disruptive, which would require pissing a lot of people off, which I will now dub Tomayko’s Law of The Internet:

To provide value on the internet, you must piss someone off.


  • Napster / RIAA
  • Wikipedia / Britannica
  • RSS / Journalism
  • Skype / The Entire Phone Industry (holy shit!)
  • Free and Open Source Software / Microsoft (the whole industry is being flipped on its head in case no one noticed)
  • BitTorrent / Television (yes, bittorrent will replace your television stations whether you like it or not).
  • Google / Yahoo <snark>
  • Cory Doctorow / Disney
  • And the list goes on…

All provide value by pissing someone off.

Notable exceptions to this list are AOL, WS-*, Semantic Web, Java Applets, RealAudio streaming, Microsoft Passport, top posting, and DRM. Each of these technologies pisses everyone off; this somehow lessens the value dramatically. So the trick is to find a group that’s roughly the size of, let’s say, an industry and piss them off as much as possible.

That’s an insanely brief description of what I've learned at Sterling and this one is already running long so more on the ch-changes later.

Who are they?

Posted almost 9 years back at Ryan Tomayko's Writings

Bill Moyers lashes out against the Corporation for Public Broadcasting to a large audience at the National Conference on Media Reform:

Who are they? I mean the people obsessed with control using the government to threaten and intimidate; I mean the people who are hollowing out middle class security even as they enlist the sons and daughters of the working class to make sure Ahmad Chalabi winds up controlling Iraq’s oil; I mean the people who turn faith-based initiatives into Karl Rove’s slush fund; who encourage the pious to look heavenward and pray so as not to see the long arm of privilege and power picking their pockets; I mean the people who squelch free speech in an effort to obliterate dissent and consolidate their orthodoxy into the official view of reality from which any deviation becomes unpatriotic heresy. That’s who I mean. And if that’s editorializing, so be it. A free press is one where it’s okay to state the conclusion you’re led to by the evidence.

Holy crap, there’s audio too.

Why RedMonk Must Succeed

Posted almost 9 years back at Ryan Tomayko's Writings

If you’re wondering what tech analyst firms will look like in the future, take a peek at RedMonk. That’s a link to their company page but let’s face it, faceless corporate pages suck so here’s where you should really be going:

And here’s why:

We support our subscribers by tailoring our analysis to their needs, helping them understand a world that is changing ludicrously fast, with insights, contexts, and narratives, rather than trying to sell them a place on a quadrant or a pay for play research note, or a library of research they will never read.

Anyone who has worked at a company whose strategy is driven by Gartner magic quadrants instead of customer demand will understand why RedMonk is important.

So how is RedMonk doing financially? I don’t know. What I do know is that my aspirations for providing competent and honest technical innovation to solve real problems rely heavily on their model of providing competent and honest analysis being successful somehow.

Roxio is Apple's Bitch

Posted almost 9 years back at Ryan Tomayko's Writings

Okay, today’s piece of complete bullshit comes from Roxio, makers of the popular CD/DVD burner, Toast:

“Following discussions with Apple, this version will no longer allow customers to create audio CDs, audio DVDs, or export audio to their hard drive using purchased iTunes music store content,” notes Roxio in the version 6.1 release notes.

What is Apple trying to pull? And why would Roxio ever agree to completely screw their paying customers by placing unwanted restrictions on purchased (read: you own it) music? Just to oblige Apple? Are they getting into bed together or is Roxio just bent on losing customers?

I'm beginning to wonder whether there is any real difference between Apple’s iTunes Music Store and the loathsome subscription services like Rhapsody, Napster2, and now Yahoo!’s Music Unlimited.

OS X Network Location Support From The Command Line

Posted almost 9 years back at Ryan Tomayko's Writings

I move between three different network configurations with my powerbook in an average day. Two of these configurations have proxy servers and one does not. Mac OS X has really excellent network location support that lets me configure this stuff once so that switching locations is as simple as invoking QuickSilver, typing the first few letters of the network location and BAM.

Most applications automatically pick up the new proxy configuration but some do not, like Firefox (which is one of three big reasons I still use Safari). I do a lot of work from the command line with network based tools such as curl, wget, port, http_ping, links, svn, etc. None of these use the system proxy settings but most support specifying a proxy server via the http_proxy environment variable.

I've searched high and low for a mechanism that would handle setting the http_proxy variable based on my current network location but have come up with nothing, so I rigged up my own.

First, you need to create a file /etc/http_proxy that specifies the proxy servers for each Network Location you have set up in your Network Preferences (if anyone can figure out how to get the proxy information directly, please let me know; I can get the current network location but not information about it). The file might look something like this:

Work = http://proxy.example.com:8080
Library = http://cache.example.org:3128

The keys are the names of your network locations and the values are in http://proxy-host:proxy-port form.

Next, you’ll need to put the following script somewhere along your $PATH named proxy-config and give it a chmod +x too.


 # source this into your current shell to have the proxy 
 # environment variables needed by many command line 
 # apps setup correctly.

 # get the current network location name
 netloc=$(/usr/sbin/scselect 2>&1 | egrep '^ \* ' | \
          sed 's:.*(\(.*\)):\1:')

 # find the proxy in /etc/http_proxy based on the 
 # current location
 http_proxy=$(egrep "$netloc[ \t]*=" /etc/http_proxy | \
              sed 's/.*=[ \t]*\(.*\)/\1/')

 # export the proxy variables if we found a value,
 # otherwise make sure they are unset
 if [ -n "$http_proxy" ]; then
   export http_proxy
   export HTTP_PROXY="$http_proxy"
 else
   unset http_proxy
   unset HTTP_PROXY
 fi

 # the rest of this is used for symlinked commands
 bn=$(basename $0)
 if [ "$bn" != "proxy-config" ]; then
     dn=$(cd $(dirname $0); pwd)
     for p in $(echo $PATH | sed 's/:/ /g'); do
       [ "$p" != "$dn" ] && [ -x "$p/$bn" ] && exec "$p/$bn" "$@"
     done
     echo "$bn: command not found" 1>&2
     exit 127
 fi

This script has two usage scenarios. You can source this into your current shell to have the http_proxy set correctly based on your current network location:

$ . proxy-config
$ echo $http_proxy

Alternatively, you can create symlinks to the proxy-config script using the names of commands that require http_proxy and the script will automatically set the variable and exec the real command.

Got that? No? Okay, let’s move on.

Pretend you have /usr/bin/curl and you put the script from above at /usr/local/bin/proxy-config. You can get curl to use the appropriate proxy settings by doing something like this:

# mkdir /usr/local/proxybin
# cd /usr/local/proxybin
# ln -s ../bin/proxy-config curl
# ls -l
total 4
lrwxr-xr-x  1 root wheel 19 May 11 13:30 curl -> ../bin/proxy-config

As long as /usr/local/proxybin is on your $PATH before /usr/bin, executing curl will actually call proxy-config. proxy-config will then set up the proxy settings and exec /usr/bin/curl.

Now create a symlink just like the one made for curl for anything else that requires proxy settings and enjoy network location support from the command line.

The Winer Decoder Ring

Posted almost 9 years back at Ryan Tomayko's Writings

Dave Winer invites us to get out the secret decoder ring:

Thankfully there hasn’t been much format-bashing going on in the last year or so, but it looks like it might be starting again, so let’s do something before it becomes a problem.

I've been reading a certain mail list, and observing that a certain person is now posting again, after a long hiatus that allowed us to find this relatively peaceful status.

Let’s make it clear that we like the way things are now, and won’t stand by and say nothing if the level of discourse starts slipping. Thanks for listening.

Eek! That last paragraph reads like some kind of threat, Dave, wtf?

My guess is that the mystery antagonist is Mark Pilgrim and the mailing list is that of the IETF Atom Syntax WG and maybe one of these is what has Dave giving out threats or mandates or whatever?

Well, I for one would love to see Mark back and stirring up the pot in full force. Somewhere deep down inside Dave must feel the same way because it’s painfully obvious that he’s provoking the exact behavior he is claiming to oppose. Mark is sometimes.. errmm.. harsh/pointed/blunt but he’s also right most of the time.

Dave: you don’t have a monopoly on being an asshole. Mark does a great job too. We like both of you for the assholes you are.

Mark: piss on Dave, man… Start posting again.

P.S. How’s that for “Canons of Conduct”?

Turn HTML off completely in Mail.app

Posted almost 9 years back at Ryan Tomayko's Writings

I hate HTML mail. Apple’s Mail has a preference that allows you to turn on plain text emails for sending but there’s nothing obvious that lets you specify that you always want to read mail using plain text. There’s a hidden preference that can be set by dropping the following into a shell:

defaults write com.apple.mail PreferPlainText -bool TRUE

You should now get all mail as boring old readable plain text.

My last experience with Amazon.com

Posted almost 9 years back at Ryan Tomayko's Writings

I put Tiger on order from Amazon.com on April 17. They had a pretty good deal at $94.99 (suggested retail is $129) and promised to ship on April 28.

At work on Friday, somebody mentioned that microcenter had Tiger boxes on the shelves for $79.99. I was tempted to cancel my amazon order but figured it had already shipped—it was the day after the promised ship date.

Logging on I found that it had not shipped but was Shipping Soon:

We are preparing these items for shipment and this portion of your order cannot be canceled or changed.

So not only had it not shipped the day after promised but I'm now unable to cancel the order. It looked like the earliest I could expect Tiger would be today (May 2nd). I was mildly pissed off but shrugged it off as I was going to be away from the computer all weekend.

This morning I jump on my aggregator to see what I missed over the weekend. Reviews, tips, tricks, hints, guides, and everything else Tiger dominated it. This jogs my memory and so I go to amazon to check the status of my order. Imagine my chagrin when the following advert popped up on the same page telling me that my order had still not shipped.

OS X Tiger

I'm no longer an Amazon.com customer.

Such precision

Posted almost 9 years back at Ryan Tomayko's Writings

Bill de hÓra’s recent piece makes an interesting connection between two waffling technologies – Semantic Web and Web Services:

The case of the DL and ontology world coming to the Semantic Web and worrying over queries that will blow up in the engines is much like the case of the enterprise world coming to the Web worrying over type systems and discovery languages. The likeness is not fleeting – both the Semantic Web and Web Services advocates have been busy building competing technology stacks in the last decade. They have valid points and good technology but the need or demand for such precision in the Web context has been overestimated.

This reminded me of an excellent yet rarely cited piece by Cory Doctorow published by O'Reilly in late 2001 entitled The Carpet Baggers Go Home:

The Internet is unpredictable. It’s non-goddamned-deterministic.

The Internet is full of fantastically useful and frustratingly unavailable services, from the elegant simplicity of’s XML-RPC interface that accepts a URL and a link-title and shoves ‘em on top of the stack of recently updated sites, to the unaffiliated public CVS servers that pock the Internet like so much acne. They work well enough, on average, and if they were all to fail suddenly and at once, the Internet would kind of suck until they came back online. But there are enough of these little tools, enough ways of finding and manipulating information, that users can interpret unreliability as damage and route around it, finding alternate means of communicating and being communicated at.

The next generation of Internet entrepreneurs will be people who understand this. They’ll be working to provide unreliable services that work in concert with other unreliable services to provide a service that works on average, but not predictably at any given moment. They’ll challenge the received wisdom that customers are hothouse flowers, expensive to acquire and prone to wilting at the first sign of trouble. These entrepreneurs will build services that are so compelling that they’ll be indispensable, worth using even if the service flakes out when you want it the most.

And Clay Shirky:

Much of the proposed value of the Semantic Web is coming, but it is not coming because of the Semantic Web. The amount of meta-data we generate is increasing dramatically, and it is being exposed for consumption by machines as well as, or instead of, people. But it is being designed a bit at a time, out of self-interest and without regard for global ontology. It is also being adopted piecemeal, and it will bring with it all the incompatibilities and complexities that implies. There are significant disadvantages to this process relative to the shining vision of the Semantic Web, but the big advantage of this bottom-up design and adoption is that it is actually working now.

And even Richard P. Gabriel:

From 1984 until 1994 I had a Lisp company called “Lucid, Inc.” In 1989 it was clear that the Lisp business was not going well, partly because the AI companies were floundering and partly because those AI companies were starting to blame Lisp and its implementations for the failures of AI. One day in Spring 1989, I was sitting out on the Lucid porch with some of the hackers, and someone asked me why I thought people believed C and Unix were better than Lisp. I jokingly answered, “because, well, worse is better.” We laughed over it for a while as I tried to make up an argument for why something clearly lousy could be good.

Why I love Sean McGrath

Posted almost 9 years back at Ryan Tomayko's Writings


Mr. McGrath is looking for a few good docheads (oxymoron, <ducks>) for an upcoming project. Experience in working with massive amounts of content a must, yadda yadda yadda.. But here’s why I love Sean: he doesn’t hire people (yes, docheads are people too) without the following qualification:

If you cannot think of 3 good reasons why dynamically typed programming languages have a role to play in this universe, you don’t want the job.

On HTTP Abuse

Posted almost 9 years back at Ryan Tomayko's Writings

There’s been a lot of good discussion around Udell’s End HTTP abuse article. We need to get this figured out because it’s almost embarrassing to be an advocate of standard approaches to building web applications when something as fundamental as the correct use of HTTP GET is butchered so often. Unfortunately, misuse of HTTP GET is just the tip of the iceberg. There’s a whole slew of HTTP abuse going on out there (often in the form of neglect) that can be laid at the doorstep of two parties: frameworks and evangelists. The frameworks suck and the evangelists aren’t trying hard enough (I consider myself in both camps, btw).

The small community that is coalescing around REST/HTTP should prioritize the following objectives above anything else at this point:

  1. Raise awareness of what HTTP is capable of.
  2. Fix the tools.

HTTP Kicks Ass

We need to raise awareness of what HTTP is really about. If you’re reading and understanding this, then you've likely had the Ah-ha moment based on the realization that HTTP isn’t just a beat up old protocol for transferring files from a web server to browsers (if you haven’t, read this); but this understanding is not common. The large majority of smart technical people believe that HTTP is legacy technology: an old protocol, maybe a step above gopher, that has somehow hung around through the years. Something to be dealt with, not taken advantage of. We need to show how limiting this mind-set is.

Many of us were first introduced to the true capabilities of HTTP via the REST architectural style. If you were like me, Mark or Paul (or both at the same time) forcefully induced the REST epiphany on you against your will and then you went and re-read RFC 2616 while slapping yourself on the forehead repeatedly. The correlation between the architectural concepts described in Roy’s thesis and the implementation semantics described in the RFC was clear as a bell. HTTP was no longer a simple mechanism for exposing a directory of files and executable processes to a browser, it was the essence of web architecture.

Here’s the thing though: most people don’t read RFCs! They hate RFCs. Reading RFCs ranks close to doing taxes for most people. The only thing worse than reading RFCs is reading PhD theses.

The evangelist needs to reach these people somehow and I really don’t think it’s going to be through describing the architectural concepts of REST so much as it will be through describing the here’s-what-you-can-do-right-now-in-the-real-world benefits of HTTP. It’s a fine line, I know, but I think it’s important.

How do we give people the Ah-ha! without requiring that they read a thesis and an RFC?

A Three Legged Dog

I look at what Zeldman, Meyer, and others are accomplishing with the Designing with Web Standards movement and it seems a good model. They emphasize the correct use of standard CSS, (X)HTML, and DOM scripting (the three legged stool) to achieve enormous benefits over proprietary web design techniques. They have books and a cluster of weblogs that show designers how to reap the rewards of this system. It’s been a smashing success and is only gaining traction.

Three Legged Dog

Can we take a page from their book? In my opinion, we’re just as entitled to the phrase Designing with Web Standards as they are. We have a three legged stool too: HTTP, URIs, and XML. Except our stool is more like a three legged dog – you can get around but it is not quite optimal. Why is this?

My feeling is that we haven’t done a good enough job of showing examples of what the correct use of HTTP, URIs, and XML looks like in the real world. Joe Gregorio’s excellent column aside, there just is not a lot of here’s-how-you-get-shit-done-with-http type content available on the web, let alone books or magazine articles. We’re amazingly talented at pointing out when something is being done wrong (e.g. WS-*, non-safe/idempotent GET, incorrect charset, etc.) but we suck at showing how to do it right in the first place.

(Here’s some more pictures of three legged dogs because three legged dogs are the shit.)

To Hell With Bad Web Frameworks

Why are we having such a hard time showing correct use of HTTP and URIs? Because our tools suck. How are we supposed to show how to use HTTP and URIs properly when the tools and frameworks actively discourage practicing standards? Our incessant ranting on correct use of web technology is filed by many into the not living in the real world drawer. We’re assholes.

In many ways this is the same battle that the Designing with Web Standards crowd has been fighting with their tools – the browsers. How can you preach standard use of (X)HTML, DOM, and CSS when the browsers have such poor support for them? Those guys drew a line and said To Hell With Bad Browsers and it’s paying off. I think we need to take on a similar attitude and start expecting more from our web frameworks.

Every web framework I've ever worked with (Apache, CGI, Java Servlets, Quixote, Webware, Ruby on Rails, PHP, ASP.NET, CherryPy) has been extremely limited in its support for the full set of capabilities provided by HTTP.

For instance, which frameworks …

  • Help implement content negotiation properly?

  • Provide facilities for implementing a smart caching strategy for dynamic content? (proper use of If-Modified-Since, Expires, Cache-Control, etc.)

  • Make dealing with media types easy?

  • Make dealing with character encodings easy?

  • Encourage the use of standard HTTP authentication schemes?

  • Have a sane mechanism for attaching different behavior to different verbs for a single resource?

  • Help ensure that URIs stay cool?

  • Make dealing with transfer encodings (gzip, compress, etc.) easy?

  • Help you use response status codes properly? (e.g. Nearly all dynamic content returns either a 200 or 500).
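
To make the caching and status-code bullets concrete, here is a minimal sketch of what a framework could make easy: a GET handler that honors If-Modified-Since by answering 304 Not Modified when the client’s copy is current. The names (handle_get, RESOURCE_BODY, LAST_MODIFIED) are invented for illustration; this is not the API of any framework named above.

```python
RESOURCE_BODY = b"hello, world"
LAST_MODIFIED = "Sat, 01 Jan 2005 00:00:00 GMT"

def handle_get(request_headers):
    """Return (status, response_headers, body), honoring If-Modified-Since.

    An exact string match against the Last-Modified validator is the
    simple, legal way to validate a conditional GET.
    """
    ims = request_headers.get("If-Modified-Since")
    if ims == LAST_MODIFIED:
        # The client's cached copy is current: send no body, just 304.
        return ("304 Not Modified", {"Last-Modified": LAST_MODIFIED}, b"")
    return ("200 OK",
            {"Last-Modified": LAST_MODIFIED,
             "Cache-Control": "max-age=3600",
             "Content-Type": "text/plain"},
            RESOURCE_BODY)
```

Two dozen lines, and yet most dynamic content at the time shipped with no validator at all, which is exactly the kind of thing a framework should do for you by default.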

Sure, some frameworks have tricks for portions of the list but there should be documented, well-thought-out mechanisms for implementing these facets of HTTP. Let’s face it, if you want to do something outside of exposing well-known static representation types from disk for GET, or processing application/x-www-form-urlencoded data via POST, you’re off the radar for most web frameworks. I'm not saying that other things aren’t possible, I'm saying they aren’t supported well.

To sum up, we need a good implementation of HTTP/1.1 that provides a real framework for building standards-based web applications. We then need to advocate and illustrate the correct use of HTTP/URIs/XML as a killer technology that has been hiding right under our noses by showing the benefits of using the system correctly. Until we get this stuff straightened out, expecting people to use GET properly is unrealistic.
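
As one sketch of the sane-verb-dispatch item from the list above, here is roughly what a framework might offer: one resource object, one handler per verb, and an automatic 405 with a correct Allow header for anything else. The Resource and Greeting classes are hypothetical, invented for this post, not any real framework’s API.

```python
class Resource:
    """Dispatch an HTTP method to the matching handler method, or answer 405."""

    def dispatch(self, method, *args):
        handler = getattr(self, method.lower(), None)
        if handler is None:
            # Advertise the verbs this resource actually implements.
            allowed = ", ".join(m.upper()
                                for m in ("get", "put", "post", "delete")
                                if hasattr(self, m))
            return ("405 Method Not Allowed", {"Allow": allowed}, b"")
        return handler(*args)

class Greeting(Resource):
    """A single resource that supports GET and PUT on one URI."""

    def __init__(self):
        self.text = b"hello"

    def get(self):
        return ("200 OK", {"Content-Type": "text/plain"}, self.text)

    def put(self, body):
        self.text = body
        return ("204 No Content", {}, b"")
```

The point isn’t this particular design; it’s that attaching different behavior to different verbs on a single resource should be this boring, instead of the fight it is in most frameworks.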

Sidebar: I just noticed that Leigh Dodds beat me to it:

[Udell] then continues by exploring the ease of using GET versus POST on the client-side. I think the fault actually lies on the server-side. Specifically, with existing web applications frameworks.


Not to bring up an old topic but..

Posted about 9 years back at Ryan Tomayko's Writings

… I was running through the archives and found an interesting entry that could have been written about the Google Auto-Link fiasco. The title was Who Owns Your Browser? and it was about per-site user style sheets. I had forgotten that Simon Willison, Adrian Holovaty, myself and many others hashed through a lot of this stuff almost a year and a half before Google’s auto-link even hit the street and the issues are pretty much the same.

The discussion came out just as fractured then as it has this time around with the A-listers. Adrian’s friends thought people using per-site user stylesheets to modify content would be a serious issue and that content providers would eventually sue, Simon was open to the idea that there might be some questions around ethics but didn’t want to hurt innovation, and I said screw the content producers it’s my goddam browser and I’ll do whatever I please, thank you. :)

Maybe next Halloween we can dress up like Winer, Scoble, and Doctorow and yell a lot? :)

Python and Peak Oil

Posted about 9 years back at Ryan Tomayko's Writings

The blogosphere is truly weird and amazing. I found out that there’s a book I have to read that goes into the peak oil situation, expanding on this article. What’s interesting is to follow the chain of events that will lead me to purchasing this book:

  1. I wrote about the potential for a Ruby on Railsish stack for Python.

  2. People link to this entry quite a bit placing it first in Google’s results for the query python+“ruby on rails”.

  3. Alec was looking for info on Python and Rails earlier today and finds my ramblings.

  4. Alec sees a totally unrelated link on my site to The Long Emergency, an article on peak oil that I read and bookmarked a couple of days ago.

  5. Alec had been interested in peak oil for a while and jots down his thoughts on The Long Emergency article.

  6. My technorati watch list notifies me that Alec linked to me.

  7. I read Alec’s piece and decide I need to purchase the book version of the article.

  8. I purchase book.

Who could have predicted that web programming and the energy problem could possibly be related? The only real link between these two topics is interest on the part of a few individuals.

IMO, it’s these types of serendipitous connections that make blogging a really interesting and unprecedented communications medium.

Insects and Entropy

Posted about 9 years back at Ryan Tomayko's Writings

Jon was a Computer Science major at Ohio State University taking a course in artificial intelligence. The professor had set up an interesting group project where each student was responsible for writing an insect program that would be matched against all the other students’ insect programs in a really cool network based insect war simulation environment thing that rocked.

The insect programs had certain constraints set by the professor. Size, shape, speed, and other traits were selected by each student but there were rules such that you couldn’t just turn all the dials to full.

Once the basic properties of the insect were fleshed out, code was written to specify how the insect should act. There was an API for determining where your insect was located on the grid, approximating positions of other insects, moving your insect, attacking, rotating, etc. Pretty standard stuff.

The professor decided to pair students up: one smart kid with one dumb kid; the students who were having a hard time in the class would be able to work closely with a student who was excelling. Each student was responsible for their own insect but they were to debate their designs with each other.

Jon was one of the smart kids and was paired with a kid that wanted to change majors. Jon’s dumb kid rarely attended class and seemed to dislike CS in general. He wasn’t even in class the day the assignment was handed out and so Jon set out on his own to build the coolest and most advanced insect program ever created.

Over the course of a few weeks he burned through code until his insect was capable of responding intelligently to a myriad of changes in environment. It knew to run when outmatched by judging the relative strengths and weaknesses of an opposing insect. It would attempt to strafe and stay behind other insects. It would stay close to corners to reduce the potential attack positions of other insects. It was The Coolest Insect Ever.

The day before the competition, Jon’s dumb kid decided to come to class. Jon asked him if he had finished his insect, to which the dumb kid replied he hadn’t even started but would finish it that day, in class. Jon grinned smugly and tried to explain to the poor fool that he himself had spent all week working on his insect and that it still was not yet complete. The dumb kid shrugged and started in coding something that would get him the damn credit for the project.

Right before the class ended the dumb kid asked Jon to take a look at his insect. Jon had to fight the urge to laugh out loud when he saw that the entire insect was a mere 25 lines of code that barely made it through the compiler, with some lines having no chance of ever being executed. The dumb kid had not even configured his insect’s basic set of traits but had left them at the professor-provided defaults.

Looking more closely, Jon found that the insect was programmed to do the same thing every time it had a turn to move:

  1. Rotate 90 degrees.
  2. Attack.

Turn and then attack. That’s it? Jon asked, to which the dumb kid replied, Do you think I’ll pass?

Jon tried to give the dumb kid some ideas on making his insect more advanced but the dumb kid wasn’t interested. Jon decided that the dumb kid would most assuredly not pass.

The next day the competition was on. The professor loaded up the simulation program and everyone hooked their insects into the system. The dumb kid was late and then couldn’t figure out how to get his insect loaded up. Jon helped him out while mumbling something about futility…

Finally the simulation began and Jon was excited to see his insect perform well through the first full round. In the second round, Jon’s insect would get stuck in one of the corners, enter an infinite loop, and be forcefully removed by the professor. One by one all other insects would be killed by other insects or removed by the professor due to logic problems – that is, all but the dumb kid’s insect.

As he sat watching the dumb kid’s lonely insect turn-and-attack, turn-and-attack, turn-and-attack, as if to mock the whole class, Jon was forced to re-evaluate his definition of cool in relation to computer programs.

This story was told to me by Jon Miller (UNIX sysadmin) in the first person. It has stuck with me as an excellent illustration of the power of simplicity and the devil that is the human tendency toward complexity.