I recently finished playing an excellent video game, Mass Effect 2 (just before Mass Effect 3 comes out), on the advice of one of my favorite writers, Tom Bissell. His video game writing on Grantland is extraordinary and his book Extra Lives is a great exploration of video games as an art form. In the book he describes how storytelling in video games has become "formally compelling" (even if the stories being told are overly baroque or poorly written) and uses Mass Effect as an example.
I totally agree with Bissell. There's a pivotal scene halfway through the game where virtually your entire crew gets captured. You're given a choice: do you want to go on a high-risk rescue mission, for which the team you lead is unprepared, or do you want to continue to prepare so you're ready for the mission? If you go on the rescue mission at that moment you're also potentially leaving a bunch of side missions undone.
While some characters were beseeching me to move quickly "while there's still time to save our crew", I had other characters cautioning me to wait, telling me that proceeding would be suicidal. I thought this was the standard sort of fake-dramatic choice offered up by video games. Why would I want to rush and get creamed by the Collectors when I had more fun stuff to do?
So I prudently put off the rescue mission, but had an eerie feeling walking through my ship. Once, it had been populated with chattering NPCs, but now was silent. I went about completing all of the side missions and got my rescue team fully dialed-in: plussed-up attributes, awesome weapons, useful upgrades, and so on. Only then did I order the rescue attempt.
My team prevailed, and the mission turned out to be relatively easy given how prepared we were (I should say how prepared "we" were, since we're talking about virtual characters - yet they felt real enough to me). But here's the thing: when we discovered where our crew was being held, there was only one survivor left. And I saw something horrible happen to the second-to-last survivor just as we showed up. The sole survivor then lashed out at me (at my character), protesting how long it had taken us to arrive. She said things like, "I'm sorry, it was so hard watching them all die." (All this hit home because the voice acting in Mass Effect is superb, something Tom Bissell also wrote about.)
It was then that I realized: had I chosen to immediately go on the rescue mission, Mass Effect 2 was programmed to let more crew members survive. That choice I made unconsciously really mattered! Maybe I could have saved all of them! A glitch in the rendering underlined this for me, somehow more poignant for being erroneously disclosed. In the background of the frames with the surviving, distraught crewmember castigating me, I could see another crew member, whom I had already been told I did not save. So obviously there was code in there to control how many survivors I got to rescue based on how long I took to get there, yet this small bug caused one of the dead crew members to show up anyway.
The scene gave me chills. It made me feel horror and sadness and regret. I can't think of any other storytelling medium that could create a similar effect, because the game implicated me in these events. I made a careless choice, even if it was the right choice, and pretend people suffered horribly. I wish I could go back and start the game over again, knowing that my decisions in this game, unlike so many others, would bear such weight. I'm also really curious to see how hard the rescue mission would have been had I listened to my gung-ho virtual colleagues, and how many more crew members I could have saved.
This is what Bissell means when he says these games have become formally compelling: a good story-driven video game can grab hold of you; can involve you; can entertain you in a deep, rich way that no other medium can. I predict that just as we are seeing with recent television, some of the best writers will start to gravitate towards this medium because it offers such rich narrative possibilities.
Monday, February 20, 2012
Friday, February 3, 2012
Speaking at RubyNation and Moderating Are There Angels Among Us?
I'll be giving a talk at RubyNation in Reston, VA on March 23rd or 24th, tentatively titled "Coding for Uncertainty: How to Design Maintainable, Flexible Code for a Startup Application". I plan to discuss lessons learned from building OtherInbox and subsequent projects, and how I try to hit my maximum sustainable development speed.
I will also be moderating a Think Big Baltimore event called Are There Angels Among Us? on 2/16 here in Baltimore, all about angel investment in the mid-Atlantic region. I have a couple of free tickets to share if anyone reading this blog would like to check it out. Email me if interested.
Hope to see you there!
Thursday, February 2, 2012
The one best way I know of to write software tests
Early in 2011 I had a prophetic conversation with fellow Baltimore hacker Nick Gauthier that radically changed the way I think about testing web applications. He described a methodology where you almost exclusively write high-level acceptance or integration tests that exercise several parts of your code at once (vs. the approach I had previously used, writing unit tests for every component backed up by a few integration tests). For a Ruby app this means using something like Capybara (depicted below), Cucumber, or Selenium to test the entire stack, the way a user would interact with your site.
These tests aren't meant to be exhaustive - you don't test every possible corner case of every codepath. Instead you use them to design and verify the overall behavior of the system. For example, you might write a test to make sure your system can cope with invalid user input:
describe "Adding cues" do
  let(:user) { Factory(:user) }
  before { login_as_user(user) }

  it "handles invalid data" do
    visit "/cues/new"
    select "Generic", from: "Type"
    # Submitting the form with missing fields should not create a record
    expect { click_button "Create Cue" }.not_to change(Cue, :count)
    should_load "/cues"
  end
end
Usually with this technique you would not write a separate test for each type of invalid data, since tests like these are fairly expensive. Instead, you combine the test above with a series of unit tests that examine the components involved in the above behavior in an isolated fashion. Typically these tests run much more quickly because they don't involve the overhead of setting up a request, hitting the database, etc.
In the above example we could cover all of the invalid cases with a model unit test that looks like this:
describe Cue do
  it { should validate_presence_of :name }
  it { should validate_presence_of :zip_code }
end
What you end up with is a small number of integration tests which thoroughly exercise the behavior of your code combined with a small number of extremely granular tests that run quickly and cover the edge cases.
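One practical way to keep that split useful is to run the fast unit tests by default and pull in the slow integration tests only when you want them. Here's a minimal sketch using RSpec's tag filtering; the :integration tag name and the INTEGRATION environment variable are my own convention, not something from the post:

```ruby
# spec/spec_helper.rb -- a sketch; tag name and env var are hypothetical conventions
RSpec.configure do |config|
  # Skip any spec tagged :integration unless explicitly requested,
  # so a plain `rspec` run executes only the fast unit tests.
  config.filter_run_excluding :integration unless ENV["INTEGRATION"]
end

# An integration spec opts in by carrying the tag:
#   describe "Adding cues", :integration do
#     ...
#   end
```

With that in place, `rspec` gives you a quick feedback loop during development, and `INTEGRATION=1 rspec` runs the whole suite before you push.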
One Criticism of This Approach
This idea has been working wonderfully for me. I feel like it gives me excellent code coverage without creating a massively long-running test suite. But I did notice Nick Evans critiquing this style of testing a while ago on Twitter:
"lots of integration tests + very few unit tests => a system that mostly works, but probably has ill-defined domain models and internal APIs."

The fact that it got retweeted and favorited a number of times makes me think he's onto something, though I haven't run into this problem yet, and I'm rigorous about keeping domain models and APIs clean. I have no problem refactoring in order to keep my average pace of development high. In my experience, adhering to a strict behavior-driven development approach has kept me from running into the problem he describes, but that might not hold if I were part of a team. Time will tell.