Sometimes in order to succeed, you have to give up

We went camping at Wallowa Lake, OR this summer.  What a beautiful place!  Of course, it took forever to get there from Bend, but it was well worth the trip.

As is typical when we go camping, Beth and I slept in our sleeping bag atop an air mattress. This particular air mattress was itself a replacement: several years ago, its predecessor started to leak right around the base of the circular baffles that join the top and the bottom of the mattress together.

Well, apparently it was time for this air mattress to suffer the same structural defect.  I awoke one morning to find my hip touching the ground and realized I would need to fix this situation if I wanted a good night’s sleep going forward.

In the past, I had been able to super-inflate the mattress and use soapy water to spot the leak.  However, this leak was not cooperating — I would have to get more serious if I wanted to sleep well for the rest of the trip.  So, I took the air mattress to the lake, figuring that if I submerged the mattress, it would yield its leak(s) in the form of air bubbles.  After twenty minutes of pushing, prodding, and otherwise cajoling the mattress into giving up the secret location of the escaping air, I was defeated.  I knew it was there, but I couldn’t find the leak.

I was resigned to several more nights of subpar sleep.  I gave up.

What do you do when you’re standing in a beautiful lake in the afternoon, an air mattress in front of you and the sun heading toward the horizon, much like your summer is sunsetting into autumn? You jump on your air mattress and soak up some of summer’s final rays — that’s what you do!

It was a split-second after I landed on the air mattress that the telltale WHOOSH from the leak exposed its location.  I mapped the breach immediately by counting the x and y baffle “coordinates,” took the mattress back to camp, let it dry, and finally fixed the leak.

I slept just fine that night.

Occam versus House

When I’m troubleshooting a problem, the first tool I reach for is Occam’s Razor. 99% of the time, I find a simple solution to a problem, possibly write a unit test (assuming we’re not talking about a bad ground in my stupid old car), and move on.

Some days, though, I endure my own personal episode of House.

So it was when I went to track down a particularly unsettling problem: my model observer — which tested out just fine over and over again — simply failed to fire in a staging environment AFTER it had worked a few times first. After slashing at the problem with Occam’s Razor to the point where there was nothing left but very tiny bits of flesh (ew), I put on House’s cap (or picked up his cane, as it were).

What if the code was, literally, disappearing?

Well, it turned out that was exactly what was happening. For unrelated reasons that remain unresolved, I was instantiating my observer class using ObserverClass.instance instead of adding the class to config.active_record.observers in environment.rb. That, coupled with an unfortunate configuration mistake that kept the site running in “development” mode (which reloads all of Creation before every new request), caused the observer simply to fall off into oblivion as if it had never existed.

And why did it work a couple of times before failing? nginx -> 3 mongrel servers. It succeeded once on each server before the observer went completely bye-bye.

Although the true solution lies in reworking my application-environment-specific configuration, the short-term solution was simply to add the observer to config.active_record.observers after I had instantiated it.
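
For the curious, here’s a minimal sketch of the two patterns side by side. The names are hypothetical (OrderObserver is mine, not the real class), and it assumes a Rails 2.x-era environment.rb:

# The fragile pattern: instantiate the observer singleton directly.
# This works until “development” mode reloads every class before the
# next request, at which point the observer silently evaporates.
OrderObserver.instance

# The workaround: register the observer in environment.rb so Rails
# re-attaches it after every reload.
Rails::Initializer.run do |config|
  config.active_record.observers = :order_observer
end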

Finding :last

Rails makes it easy to find the first row of a query:

Order.find(:first)

But what if you want the last one? It’d be great to be able to go:

Order.find(:last)

… especially if you could pass in conditions, etc.

Well, this isn’t general purpose, but it tends to get you the last row that was created. It can be useful in testing circumstances:

last_order = Order.find(:first, :conditions => 'id = (select max(id) from orders)')
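
Another sketch that tends to work just as well, assuming id is an auto-incrementing primary key (which it almost always is), is to flip the sort order and grab the first row; this one composes nicely with :conditions:

# Newest order overall.
last_order = Order.find(:first, :order => 'id DESC')

# Newest order matching a condition (status is a hypothetical column).
last_shipped = Order.find(:first, :conditions => "status = 'shipped'", :order => 'id DESC')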

created_at and updated_at in fixtures

I’ve been working on some code that gets a list of “old” orders based on the created_at value for an order. Of course, I wrote a test using a fixture that gets the list of old orders and makes assertions about it. I was not as rigorous as I should have been and simply checked to make sure the correct count of orders was returned and went along my merry way.

Today the test broke when I made some other changes. During my investigation, I printed out the contents of the orders and discovered the created_at and updated_at values were being set to all zeros. OK, that would explain why they’re considered “old” if they’re from “the beginning of time.” But I still wanted to be able to put values in for created_at that would cause an order to be too new to be included in the query. So, I figured I’d do something like this:

created_at: <%= Time.now %>

No workie; still zeros. Well, thanks to this blog entry, I was able to get it right:

created_at: <%= Time.now.to_s(:db) %>
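
Putting it together, here’s a sketch of what the fixture file might look like. The fixture names and timestamps are hypothetical; it relies on the fact that Rails runs fixtures through ERB before parsing the YAML:

# test/fixtures/orders.yml
old_order:
  id: 1
  created_at: <%= 2.years.ago.to_s(:db) %>
  updated_at: <%= 2.years.ago.to_s(:db) %>

new_order:
  id: 2
  created_at: <%= 1.hour.ago.to_s(:db) %>
  updated_at: <%= 1.hour.ago.to_s(:db) %>

With that in place, old_order should show up in the “old” list and new_order should not, so the test can assert on both sides of the cutoff.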

Rails REST testing using XML

One of the things that has been bugging me about my REST interfaces is that, although they are thoroughly tested in functional tests with all the GETs and POSTs and PUTs (and occasionally DELETEs), it just isn’t quite the same as literally POSTing the XML.

So, this morning I took the time to figure out a way to do this. It turns out that with an integration test, it’s quite easy. It’s also probably the Right Place™ to do this.

Witness:

class RestXmlTest < ActionController::IntegrationTest
  fixtures :model_fixture

  # Test creating a new resource by actually POSTing the XML.
  def test_create_resource
    # A raw XML body plus an XML content type, just like a real client sends.
    post "/path_to_resource.xml",
      "<resource><attribute_1>attribute value</attribute_1>...</resource>",
      {:content_type => "application/xml"}

    # 201 Created is what a RESTful create should answer with.
    assert_response 201
  end
end
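
The same trick works for the other verbs. Here’s a hedged companion sketch, reusing the placeholder resource names from above, that exercises an update by literally PUTting the XML (it would live in the same integration test class):

  # Test updating an existing resource by actually PUTting the XML.
  def test_update_resource
    put "/path_to_resource/1.xml",
      "<resource><attribute_1>new attribute value</attribute_1></resource>",
      {:content_type => "application/xml"}

    assert_response :success
  end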

If it’s broke, fix it!

As I reach the end of the day at the end of the week, I run my tests one more time before checking in my changes, and … BANG! Failure!

Nuts. I wanted to check stuff in and call it a day. What do I do: check it in anyway, or take the time to fix it before checking it in?

Of course, if you’re even here, you know the answer: take the time to fix it. Here are two reasons why:

1) If I leave it unfixed, it is more likely than not that something will prevent me from taking the time on Monday to fix it. Then Monday will pass by, and so will Tuesday… and by the end of the week a simple testing error becomes a burden to fix. Consequently, I no longer get the warm fuzzies and confidence of a clean test run at will. So, maybe I’ll just let the whole test suite decay since it’s obviously now broken. Not good — I’ve just given up the value of my entire test suite because I wouldn’t take the time to track down and fix an error in a test case that used to pass. And trust me — the longer you wait to fix something like this, the harder it becomes to get yourself psyched up to deal with it.

2) It took me so long to write #1 that I’ve forgotten #2. No matter — I did go ahead and fix the test case and I ended up with a better test anyway.

Test First, Code Later

I just wrote the following missive to some of my colleagues. Then I realized it would make a good blog entry, so here ya go…

The seasons are turning here [in Central Oregon] and it’s a little gray outside. I actually had to turn on the heater in my office! So I’m dragging a bit as a result, and I’m looking to “shake things up” in my little office world in order to keep making progress. I am listing what I need to do on my current project in priority order:

1) Write test client for normal life cycle

Wow, two years ago, that would’ve been at the bottom of my list for a project (and thus would have fallen off). Then my mind speeds off to how different that thinking is from the way I was taught (albeit that was 20+ years ago, though the teaching probably went unchanged until relatively recently). And I thought: if I were teaching an introduction to programming class (or intro to Ruby or Java or whatever), the very first topic of discussion and the very first assignment would be:

Writing tests.

I can’t think of any other fundamental change during the life of my career, other than the advent of object-oriented programming, that has had such a positive impact on my code quality and productivity. Writing tests FIRST and then writing code that satisfies those tests is a lot like having a simple coloring book with the picture already drawn: you just need to color inside the lines. Sometimes it’s “too much” to think about everything that comprises a programming problem, and I’ve found I can get traction simply by writing a test and then making it work. I also discover TONS about how I want the code to look; often, writing the tests shows me I don’t like the design I had in mind, and I can fix it right then. Finally, you end up with a PILE of test code as a happy side effect, which you can run at any time to feel confident you haven’t fucked anything up along the way.

The net effect of Test Driven Development is:

– better quality code, by far

– faster resolution of problems

– ultimately faster time to completion

Let me underline that last point: faster time to completion. Typically, when we’re “under the gun,” we think it will be faster to dive in and write something “quick and efficient,” skipping the test code. Although that is true some of the time, I’d argue it is not true the majority of the time.

I hope that my relentless preaching about testing eventually wears you all down enough that you start doing it, if you haven’t already. There are very few things in my life that I am zealous about — TDD is one of them.